\section{Introduction}
Cancers in the thoracic region are among the most common cancers worldwide~\cite{sung2021global}, and a significant proportion of patients are diagnosed at late stages with lymph node (LN) metastasis. The treatment protocol is a sophisticated combination of surgical resection and chemotherapy and/or radiotherapy~\cite{hirsch2017lung}. Assessment of the involved LNs~\cite{zhu2020lymph,chao2020lymph} and accurate labeling of their corresponding stations are essential for treatment selection and planning. For example, in radiation therapy, the delineation accuracy of the gross tumor volume (GTV) and the clinical target volume (CTV) are the two most critical factors impacting patient outcome. For CTV delineation, areas containing metastasized \acp{LN} should be included to sufficiently cover the sub-clinical disease regions~\cite{chapet2005ct}. One strategy to outline the sub-clinical disease region is to include the \ac{LNS} that contains the metastasized \acp{LN}~\cite{pignon1992meta,yuan2019lymph}. Thoracic LNS is determined according to the text definitions of the \ac{IASLC}~\cite{rusch2009iaslc}. The delineation of \ac{LNS} in the current clinical workflow is predominantly a manual process using \ac{CT} images. Visual assessment and manual delineation are challenging and time-consuming tasks even for experienced physicians, since converting the text definitions of the \ac{IASLC} into precise 3D voxel-wise annotations can be error prone, leading to large intra- and inter-user variability~\cite{chapet2005ct}.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{figure/Fig_demo_v4.eps}
\caption{An illustration of \ac{LNS} and key referencing organs. The top row illustrates the auto-searched top-6 key referencing organs; the bottom row depicts the 12 \acp{LNS}.} \label{Fig:LNS_demo}
\end{figure}
Deep \acp{CNN} have made remarkable progress in segmenting organs and tumors in medical imaging~\cite{tang2019clinically,zhang2020robust,jin2019accurate,jin2019deep,guo2020organ,jin2020deeptarget}. Only a handful of non-deep-learning studies have tackled automated LNS segmentation~\cite{feuerstein2012mediastinal,matsumoto2014automatic,sarrut2014learning,liu2016mediastinal}. An LNS atlas was established using deformable registration~\cite{feuerstein2012mediastinal}. Predefined margins from manually selected organs, such as the aorta, trachea, and vessels, were applied to infer \acp{LNS}~\cite{liu2016mediastinal}, but this approach cannot accurately adapt to individual subjects. Other methods~\cite{matsumoto2014automatic,sarrut2014learning} built fuzzy models to directly parse the LNS or to learn the relative positions between LNS and several referencing organs. Average location errors ranging from $6.9$mm to $34.2$mm were reported on 22 test cases in~\cite{matsumoto2014automatic}, while an average Dice score (DSC) of $66.0\%$ for $10$ LNSs in 5 patients was observed in~\cite{sarrut2014learning}.
In this work, we propose DeepStationing -- an anatomical context encoded deep \ac{LNS} parsing framework with key organ auto-search. We first segment a comprehensive set of 22 chest organs related to the description of LNS according to the \ac{IASLC} guideline. Inspired by~\cite{guo2020organ}, the 22 organs are stratified into anchor and non-anchor categories. The predictions of the former category are exploited to guide and boost the segmentation performance of the latter category. Next, the \ac{CT} image and the referencing organ predictions are combined as different input channels to the \ac{LNS} parsing module. The 22 referencing organs are identified by human experts. However, related to but different from the human process, a \ac{CNN} may require a particular set of referencing organs (key organs) to achieve optimal performance. Therefore, we automatically search for the key organs by applying a channel-weighting to the input organ prediction channels based on differentiable neural search~\cite{liu2018darts}. The auto-searched top-6 key organs, i.e., esophagus, aortic arch, ascending aorta, heart, spine and sternum (shown in Fig.~\ref{Fig:LNS_demo}), enable our DeepStationing method to achieve high LNS parsing accuracy. We adopt the 3D nnU-Net~\cite{isensee2020nnu} as our segmentation and parsing backbone. Extensive 4-fold cross-validation is conducted using a dataset of $98$ \ac{CT} images, each with $12$ \ac{LNS} and $22$ organ labels, as \textit{the first of its kind} to date. Experimental results demonstrate that a deep model encoded with the spatial context of auto-searched key organs significantly improves the LNS parsing performance, resulting in an average \ac{DS} of $81.1\%\pm6.1\%$, which is $5.0\%$ and $19.2\%$ higher than the pure CT-based deep model and the most recent relevant work~\cite{liu2016mediastinal} (from our re-implementations), respectively.
\section{Method}
Fig.~\ref{Fig:LNS_overall_pipeline} depicts the overview of our DeepStationing framework, consisting of two major modularized components: (1) stratified chest organ segmentation; (2) context encoded \ac{LNS} parsing with key organ auto-search.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{figure/Fig_pipeline_v3.eps}
\caption{Overall workflow of our DeepStationing, which consists of stratified chest organ segmentation and anatomical context encoded \ac{LNS} parsing with key organ auto-search. }\label{Fig:LNS_overall_pipeline}
\end{figure}
\subsection{Stratified Chest Organ Segmentation}\label{sec:prior_seg}
To provide the spatial context for LNS parsing, we first segment a comprehensive set of 22 chest organs related to the description of LNS. Simultaneously segmenting a large number of organs increases the optimization difficulty, leading to sub-optimal performance. Motivated by~\cite{guo2020organ}, we stratify the 22 chest organs into anchor and non-anchor categories. Anchor organs have high contrast; hence, it is relatively easy and robust to segment them directly using deep appearance features. Anchor organs are segmented first, and their results serve as ideal candidates to support the segmentation of the more difficult non-anchor organs. We use two CNN branches for the stratified anchor and non-anchor organ segmentation. With the predicted anchor organs as additional input, the non-anchor organs are segmented. Assuming $N$ data instances, we denote the training data as $\mathbb{S}=\left\{ X_n, Y_n^{\mathrm{A}}, Y_n^{\mathrm{\neg A}}, Y_n^{\mathrm{L}} \right\} _{n=1}^{N}$, where $X_n$, $Y_n^{\mathrm{A}}$, $Y_n^{\mathrm{\neg A}}$ and $Y_n^{\mathrm{L}}$ denote the input \ac{CT} and the ground-truth masks for the anchor organs, non-anchor organs and \ac{LNS}, respectively. Assuming there are $C_{\mathrm{A}}$ and $C_{\mathrm{\neg A}}$ classes for anchor and non-anchor organs, and dropping $n$ for clarity, our organ segmentation module generates the anchor and non-anchor organ predictions at every voxel location, $j$, and every output class, $c$:
\begin{align}
\hat{Y}^{\mathrm{A}}_c(j) = p^{\mathrm{A}}\left( Y^{\mathrm{A}}(j) = c\, |\, X ; \mathbf{W}^{\mathrm{A}}\right) \mathrm{,} & \quad \hat{\mathbf{Y}}^{\mathrm{A}}=\left[ \hat{Y}^{\mathrm{A}}_1\ldots\hat{Y}^{\mathrm{A}}_{C_{\mathrm{A}}} \right] \label{eq:anchor} \mathrm{,} \\
\hat{Y}^{\mathrm{\neg A}}_c(j) = p^{\mathrm{\neg A}}\left( Y^{\mathrm{\neg A}}(j) = c\, |\, X, \hat{\mathbf{Y}}^{\mathrm{A}}; \mathbf{W}^{\mathrm{\neg A}}\right) \mathrm{,} & \quad
\hat{\mathbf{Y}}^{\mathrm{\neg A}}=\left[ \hat{Y}^{\mathrm{\neg A}}_1\ldots\hat{Y}^{\mathrm{\neg A}}_{C_{\mathrm{\neg A}}} \right] \label{eq:non-anchor} \mathrm{,}
\end{align}
where $p^{(\ast)}(\cdot)$ denotes the \ac{CNN} functions and $\hat{Y}^{(\ast)}_c$ the output segmentation maps. Here, we combine both anchor and non-anchor organ predictions into an overall prediction map $\hat{\mathbf{Y}}^{\mathfrak{A}}=\hat{\mathbf{Y}}^{\mathrm{A}} \cup \hat{\mathbf{Y}}^{\mathrm{\neg A}}$. Predictions are vector-valued 3D masks, as they provide a pseudo-probability for every class. $\mathbf{W}^{(\ast)}$ represents the corresponding \ac{CNN} parameters.
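For illustration, a minimal PyTorch-style sketch of this stratified two-branch scheme is given below. This is not the authors' implementation: \texttt{unet\_cls} stands in for the 3D nnU-Net backbone used in the paper, and the channel counts are placeholders.
\begin{verbatim}
# Illustrative sketch (not the paper's code): two-branch stratified segmenter.
# `unet_cls` is any 3D segmentation backbone constructor; channel counts are
# placeholders.
import torch
import torch.nn as nn

class StratifiedOrganSegmenter(nn.Module):
    def __init__(self, unet_cls, n_anchor, n_non_anchor):
        super().__init__()
        # Anchor branch sees only the CT volume (1 input channel).
        self.anchor_net = unet_cls(in_channels=1, out_channels=n_anchor)
        # Non-anchor branch sees CT plus the soft anchor predictions.
        self.non_anchor_net = unet_cls(in_channels=1 + n_anchor,
                                       out_channels=n_non_anchor)

    def forward(self, ct):                                   # ct: (B, 1, D, H, W)
        y_anchor = torch.softmax(self.anchor_net(ct), dim=1)        # eq:anchor
        x_aug = torch.cat([ct, y_anchor], dim=1)
        y_non_anchor = torch.softmax(self.non_anchor_net(x_aug), dim=1)  # eq:non-anchor
        # Overall prediction map: concatenation of both categories.
        return torch.cat([y_anchor, y_non_anchor], dim=1)
\end{verbatim}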
\subsection{Anatomical Context Encoded LNS Parsing}\label{Sec:LNS_parse}
Segmenting \ac{LNS} from CT appearance alone can be error prone, since LNS delineation relies heavily on the spatial context of adjacent anatomical structures. Emulating the clinical practice of the \ac{IASLC} guidelines, we incorporate the referencing organs into the training process of \ac{LNS} parsing. Given $C_{\mathrm{L}}$ classes of \acp{LNS}, as illustrated in Fig.~\ref{Fig:LNS_overall_pipeline}, we combine the above organ predictions with the \ac{CT} image to create a multi-channel input $\left[ X, \,\, \hat{\mathbf{Y}}^{\mathfrak{A}} \right]$:
\begin{equation}
\hat{Y}^{\mathrm{L}}_c(j) = p^{\mathrm{L}}\left( Y^{\mathrm{L}}(j) = c \, | \, X, \hat{\mathbf{Y}}^{\mathfrak{A}}; \mathbf{W}^{\mathrm{L}}\right) \mathrm{,} \quad \hat{\mathbf{Y}}^{\mathrm{L}} = \left[ \hat{Y}^{\mathrm{L}}_1\ldots\hat{Y}^{\mathrm{L}}_{C_{\mathrm{L}}} \right] \mathrm{.}
\end{equation}
Thereupon, the LNS parsing module leverages both the CT appearance and the predicted anatomical structures, implicitly encoding the spatial distributions of referencing organs during training. Similar to Eq.~\eqref{eq:anchor}, we have the \ac{LNS} prediction in its vector-valued form as $\hat{\mathbf{Y}}^{\mathrm{L}}$.
\subsubsection{Key Organ Auto-search}\label{Sec:organ_search}
The 22 referencing organs were initially selected according to the IASLC guideline. Nevertheless, for deep-learning-based LNS model training, those manually selected organs might not lead to optimal performance. Considering the potential variations in organ location and size distributions, and the differences in automated organ segmentation accuracy, we hypothesize that the deep \ac{LNS} parsing model would benefit from an automated referencing organ selection process that is tailored to this purpose. Hence, we use differentiable neural search~\cite{liu2018darts} to search for the key organs by applying a channel-weighting strategy to the input organ masks. We make the search space continuous by relaxing the selection of the referencing organs to a Softmax function over the channel weights of the one-hot organ predictions $\hat{\mathbf{Y}}^{\mathfrak{A}}$. For $C_{\mathrm{L}}$ channels, we define a set of $C_{\mathrm{L}}$ learnable logits, one for each channel, denoted as $\alpha_c, \forall c \in\left[1\cdots C_{\mathrm{L}}\right]$. The channel weight $\phi_c$ for a referencing organ is defined as:
\begin{align}
\phi_c = \dfrac{\text{exp}\left( \alpha_{c} \right)}{\sum_{m=1}^{C_{\mathrm{L}}}\text{exp}\left( \alpha_{m} \right)} & \mathrm{,} \quad \Phi = \left[\phi_1 \cdots \phi_{C_{\mathrm{L}}} \right] \mathrm{,} \\
F(\hat{Y}^{\mathfrak{A}}_c, \phi_c) = \phi_c \cdot \hat{Y}^{\mathfrak{A}}_c & \mathrm{,} \quad F (\hat{\mathbf{Y}}^{\mathfrak{A}}, \Phi) = \left[F(\hat{Y}^{\mathfrak{A}}_1, \phi_1) \cdots F(\hat{Y}^{\mathfrak{A}}_{C_{\mathrm{L}}}, \phi_{C_{\mathrm{L}}}) \right]
\end{align}
where $\Phi$ denotes the set of channel weights and $F(\hat{Y}^{\mathfrak{A}}_c, \phi_c)$ denotes the channel-wise multiplication between the scalar $\phi_c$ and the organ prediction $\hat{Y}^{\mathfrak{A}}_c$. The input of the \ac{LNS} parsing model becomes $\left[ X, \,\, F (\hat{\mathbf{Y}}^{\mathfrak{A}}, \Phi) \right]$. As the result of the key organ auto-search, we select the organs with the top-$n$ weights as the $n$ key organs. In this paper, we heuristically set $n=6$ based on the experimental results. Last, we train the \ac{LNS} parsing model using the combination of the original \ac{CT} images and the segmentation predictions of the auto-selected top-$6$ key organs.
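As a concrete illustration of this channel-weighting relaxation, a short PyTorch-style sketch is given below; it is a simplified rendering of the idea rather than the exact implementation, and the variable names are placeholders.
\begin{verbatim}
# Sketch of the softmax channel weighting used for key-organ auto-search.
import torch
import torch.nn as nn

class OrganChannelGate(nn.Module):
    """Learnable logits alpha_c; phi = softmax(alpha) scales each organ channel."""
    def __init__(self, n_organ_channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(n_organ_channels))

    def forward(self, organ_pred):                  # organ_pred: (B, C, D, H, W)
        phi = torch.softmax(self.alpha, dim=0)      # channel weights Phi
        return organ_pred * phi.view(1, -1, 1, 1, 1)   # F(Y, Phi)

# The gated organ channels are concatenated with the CT volume before the
# LNS parsing network; after training, the organs with the top-n weights in
# softmax(alpha) are kept as the searched key organs.
\end{verbatim}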
\section{Experimental Results}
\noindent{\bf Dataset.} We collected $98$ contrast-enhanced venous-phase \ac{CT} images of patients with esophageal cancer who underwent surgery and/or radiotherapy. A board-certified radiation oncologist with 15 years of experience annotated each patient with 3D masks of $12$ \acp{LNS}, the involved \acp{LN} (if any), and the $22$ referencing organs related to LNS according to the \ac{IASLC} guideline. The 12 annotated \ac{LN} stations are: S1 \textit{(left + right)}, S2 \textit{(left + right)}, S3 \textit{(anterior + posterior)}, S4 \textit{(left + right)}, S5, S6, S7, and S8. The average \ac{CT} image size is $512 \times 512 \times 80$ voxels with an average resolution of $0.7 \times 0.7 \times 5.0$mm. Extensive four-fold cross-validation (CV), separated at the patient level, was conducted. We report the segmentation performance using \ac{DS} in percentage, and \ac{HD} and \ac{ASD} in mm.
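For reference, the per-structure Dice score reported below can be computed as in the following sketch; HD and ASD are surface-distance metrics that would require, e.g., SciPy or SimpleITK utilities and are omitted here.
\begin{verbatim}
# Sketch of the per-structure Dice score (DSC, in percent).
import numpy as np

def dice_percent(pred_mask, gt_mask, eps=1e-8):
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + gt.sum() + eps)
\end{verbatim}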
\begin{table}[!ht]
\caption{Mean DSCs, HDs, and ASDs, and their standard deviations of LNS parsing performance using: (1) only CT appearance; (2) CT$+$all 22 referencing organ ground-truth masks; (3) CT$+$all 22 referencing organ predicted masks; (4) CT$+$auto-searched 6 referencing organ predicted masks. The best performance scores are shown in {\bf bold}.} \label{tab: quant}
\centering
\scalebox{.85}{
\setlength{\tabcolsep}{4.5mm}{
\begin{tabular}{|l|r|r|r|r|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\
\multicolumn{1}{|c|}{\multirow{-2}{*}{LNS}} & \multicolumn{1}{c|}{\multirow{-2}{*}{CT Only}} & \multicolumn{1}{c|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}+22 \\ Organ GT\end{tabular}}} & \multicolumn{1}{c|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}+22 \\ Organ Pred\end{tabular}}} & \multicolumn{1}{c|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}+6 Searched \\ Organ Pred\end{tabular}}} \\ \hline
\multicolumn{5}{|c|}{\cellcolor[HTML]{EFEFEF}DSC} \\ \hline
S1 Left & 78.1 $\pm$ 6.8 & 84.3 $\pm$ 4.5 & 82.3 $\pm$ 4.6 & \textbf{85.1 $\pm$ 4.0} \\
S1 Right & 76.8 $\pm$ 5.0 & 84.3 $\pm$ 3.4 & 82.2 $\pm$ 3.4 & \textbf{85.0 $\pm$ 4.1} \\
S2 Left & 66.9 $\pm$ 11.4 & 75.8 $\pm$ 9.0 & 73.7 $\pm$ 8.9 & \textbf{76.1 $\pm$ 8.2} \\
S2 Right & 70.7 $\pm$ 8.5 & 74.8 $\pm$ 7.6 & 72.8 $\pm$ 7.6 & \textbf{77.5 $\pm$ 6.4} \\
S3 Anterior & 77.4 $\pm$ 4.9 & 79.8 $\pm$ 5.6 & 79.7 $\pm$ 5.6 & \textbf{81.5 $\pm$ 4.9} \\
S3 Posterior & 84.6 $\pm$ 3.1 & 87.9 $\pm$ 2.8 & 87.8 $\pm$ 2.9 & \textbf{88.6 $\pm$ 2.7} \\
S4 Left & 74.1 $\pm$ 8.2 & 77.0 $\pm$ 8.9 & 76.9 $\pm$ 8.9 & \textbf{77.9 $\pm$ 9.4} \\
S4 Right & 73.8 $\pm$ 8.9 & 74.9 $\pm$ 9.3 & 74.9 $\pm$ 9.4 & \textbf{76.7 $\pm$ 8.3} \\
S5 & 72.6 $\pm$ 6.7 & 73.2 $\pm$ 7.4 & 73.2 $\pm$ 7.4 & \textbf{77.9 $\pm$ 8.0} \\
S6 & 72.4 $\pm$ 5.7 & 74.9 $\pm$ 4.4 & 74.8 $\pm$ 4.5 & \textbf{75.7 $\pm$ 4.3} \\
S7 & 85.0 $\pm$ 5.1 & 86.6 $\pm$ 5.8 & 86.6 $\pm$ 5.8 & \textbf{88.0 $\pm$ 6.1} \\
S8 & 80.9 $\pm$ 6.1 & 84.0 $\pm$ 5.9 & 82.0 $\pm$ 5.9 & \textbf{84.3 $\pm$ 6.3} \\ \hdashline
Average & 76.1 $\pm$ 6.7 & 79.8 $\pm$ 6.2 & 78.9 $\pm$ 6.3 & \textbf{81.1 $\pm$ 6.1} \\ \hline
\multicolumn{5}{|c|}{\cellcolor[HTML]{EFEFEF}HD} \\ \hline
S1 Left & 11.9 $\pm$ 3.2 & 12.3 $\pm$ 6.0 & 27.6 $\pm$ 38.8 & \textbf{10.3 $\pm$ 4.1} \\
S1 Right & 18.0 $\pm$ 29.3 & 10.6 $\pm$ 2.6 & 61.1 $\pm$ 97.6 & \textbf{9.7 $\pm$ 1.8} \\
S2 Left & 13.3 $\pm$ 9.2 & 9.7 $\pm$ 3.1 & 35.6 $\pm$ 76.9 & \textbf{9.2 $\pm$ 3.1} \\
S2 Right & 36.3 $\pm$ 61.7 & 10.8 $\pm$ 3.0 & 10.8 $\pm$ 3.0 & \textbf{9.5 $\pm$ 3.2} \\
S3 Anterior & 41.7 $\pm$ 62.4 & 13.5 $\pm$ 4.9 & 50.4 $\pm$ 79.1 & \textbf{12.2 $\pm$ 4.3} \\
S3 Posterior & 9.1 $\pm$ 3.3 & 8.0 $\pm$ 2.0 & 18.0 $\pm$ 30.9 & \textbf{7.6 $\pm$ 1.9} \\
S4 Left & 11.5 $\pm$ 4.9 & 14.7 $\pm$ 22.2 & 14.5 $\pm$ 22.2 & \textbf{9.8 $\pm$ 3.8} \\
S4 Right & 32.8 $\pm$ 69.7 & \textbf{9.8 $\pm$ 3.5} & 16.2 $\pm$ 21.5 & \textbf{9.8 $\pm$ 3.6} \\
S5 & 36.4 $\pm$ 56.4 & 20.5 $\pm$ 35.2 & 38.1 $\pm$ 60.3 & \textbf{10.9 $\pm$ 4.0} \\
S6 & 19.2 $\pm$ 30.6 & 8.6 $\pm$ 2.5 & 52.5 $\pm$ 85.3 & \textbf{8.5 $\pm$ 2.7} \\
S7 & 26.3 $\pm$ 42.6 & 9.6 $\pm$ 3.7 & 9.6 $\pm$ 3.7 & \textbf{9.5 $\pm$ 3.5} \\
S8 & 14.5 $\pm$ 6.0 & 13.6 $\pm$ 5.7 & 13.1 $\pm$ 5.8 & \textbf{12.2 $\pm$ 6.2} \\ \hdashline
Average & 22.6 $\pm$ 31.6 & 11.8 $\pm$ 7.9 & 28.9 $\pm$ 43.8 & \textbf{9.9 $\pm$ 3.5} \\ \hline
\multicolumn{5}{|c|}{\cellcolor[HTML]{EFEFEF}ASD} \\ \hline
S1 Left & 1.6 $\pm$ 0.8 & 1.3 $\pm$ 0.6 & 1.4 $\pm$ 1.0 & \textbf{0.9 $\pm$ 0.5} \\
S1 Right & 1.8 $\pm$ 0.8 & 1.2 $\pm$ 0.5 & 1.6 $\pm$ 1.1 & \textbf{0.9 $\pm$ 0.5} \\
S2 Left & 1.4 $\pm$ 0.8 & 1.0 $\pm$ 0.6 & 1.3 $\pm$ 0.8 & \textbf{0.8 $\pm$ 0.6} \\
S2 Right & 1.5 $\pm$ 0.8 & 1.3 $\pm$ 0.7 & 1.3 $\pm$ 0.7 & \textbf{1.0 $\pm$ 0.7} \\
S3 Anterior & 1.0 $\pm$ 0.8 & 0.7 $\pm$ 0.4 & 0.9 $\pm$ 0.9 & \textbf{0.6 $\pm$ 0.4} \\
S3 Posterior & 0.9 $\pm$ 0.5 & \textbf{0.6 $\pm$ 0.3} & 0.8 $\pm$ 1.1 & \textbf{0.6 $\pm$ 0.4} \\
S4 Left & 1.0 $\pm$ 0.6 & 1.4 $\pm$ 2.7 & 1.2 $\pm$ 1.6 & \textbf{0.8 $\pm$ 0.6} \\
S4 Right & 1.5 $\pm$ 1.0 & 1.4 $\pm$ 1.0 & 1.5 $\pm$ 1.0 & \textbf{1.3 $\pm$ 1.0} \\
S5 & 1.3 $\pm$ 0.6 & 1.9 $\pm$ 3.4 & 1.6 $\pm$ 1.8 & \textbf{1.0 $\pm$ 0.5} \\
S6 & 0.8 $\pm$ 0.4 & 0.7 $\pm$ 0.3 & 1.0 $\pm$ 1.1 & \textbf{0.6 $\pm$ 0.3} \\
S7 & 0.9 $\pm$ 0.7 & 0.8 $\pm$ 0.6 & 0.8 $\pm$ 0.6 & \textbf{0.7 $\pm$ 0.6} \\
S8 & 1.7 $\pm$ 1.2 & 1.6 $\pm$ 1.1 & 1.6 $\pm$ 1.1 & \textbf{1.3 $\pm$ 1.3} \\ \hdashline
Average & 1.3 $\pm$ 0.7 & 1.1 $\pm$ 1.0 & 1.3 $\pm$ 1.1 & \textbf{0.9 $\pm$ 0.6} \\ \hline
\end{tabular}
}}
\end{table}
\noindent{\bf Implementation details.} We adopt the nnU-Net~\cite{isensee2020nnu} with DSC+CE losses as the backbone for all experiments due to its high accuracy on many medical image segmentation tasks. nnU-Net automatically adapts its preprocessing strategies (i.e., the training image patch size, resolution, and learning rate) to a given 3D medical imaging dataset. We use the default nnU-Net settings for our model training. The total number of training epochs is 1000. For the organ auto-search parameters $\alpha_c$, we first fix $\alpha_c$ for $200$ epochs and then alternately update $\alpha_c$ and the network weights for another $800$ epochs. The remaining settings are the same as the default nnU-Net setup. We implemented our DeepStationing method in PyTorch, and an NVIDIA Quadro RTX 8000 was used for training. The average training time is 2.5 GPU days and the average inference time is 3 minutes.
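The alternating schedule can be sketched as below. This is a schematic simplification (single optimiser step per phase, no DARTS-style validation split), with placeholder names, not the exact nnU-Net training loop.
\begin{verbatim}
# Schematic training loop: freeze the channel logits alpha for the first
# 200 epochs, then alternate between updating alpha and the network weights.
import torch

def train(model, gate, loader, loss_fn, opt_w, opt_alpha, total_epochs=1000):
    for epoch in range(total_epochs):
        update_alpha = (epoch >= 200) and (epoch % 2 == 0)   # alternate phases
        opt = opt_alpha if update_alpha else opt_w
        for ct, organ_pred, lns_gt in loader:
            x = torch.cat([ct, gate(organ_pred)], dim=1)     # CT + gated organs
            loss = loss_fn(model(x), lns_gt)                 # Dice + cross-entropy
            opt.zero_grad()
            loss.backward()
            opt.step()
\end{verbatim}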
\subsubsection*{Quantitative Results.}\label{Sec:Eva}
We first evaluate the performance of our stratified referencing organ segmentation. The average DSC, HD and ASD for anchor and non-anchor organs are $90.0\pm4.3\%$, $16.0\pm18.0$mm, $1.2\pm1.1$mm, and $82.1\pm6.0\%$, $19.4\pm15.0$mm, $1.2\pm1.4$mm, respectively. We also train a model that segments all organs using a single nnU-Net. The average DSCs of the anchor, non-anchor, and all organs are $86.4\pm5.1\%$, $72.7\pm8.7\%$, and $80.8\pm7.06\%$, which are $3.6\%$, $9.4\%$, and $5.7\%$ lower than the stratified version, respectively. The stratified organ segmentation demonstrates high accuracy, which provides robust organ predictions for the subsequent LNS parsing model.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{figure/Fig_quali_v5.eps}
\caption{(a) Examples of \ac{LNS} parsing results using different setups. For better comparison, red arrows are used to depict visual improvements. (b) The bottom charts demonstrate the performance using different numbers of searched referencing organs.} \label{Fig:quali}
\end{figure}
Table~\ref{tab: quant} outlines the quantitative comparisons of different deep \ac{LNS} parsing setups. Columns 1 to 3 show the results using: 1) only \ac{CT} images, 2) \ac{CT} $+$ all $22$ ground-truth organ masks, and 3) \ac{CT} $+$ all $22$ predicted organ masks. Using only \ac{CT} images, \ac{LNS} parsing exhibits the lowest performance, with an average \ac{DS} of $76.1\%$ and \ac{HD} of $22.6$mm. For example, distant false predictions are observed in the first image of the $2^{nd}$ row of~Fig.~\ref{Fig:quali}, where a false-positive S3 posterior is predicted (in pink) between S1 and S2. When adding the $22$ ground-truth organ masks as spatial context, both \ac{DS} and \ac{HD} show marked improvements: from $76.1\%$ to $79.8\%$ in \ac{DS} and from $22.6$mm to $11.8$mm in \ac{HD}. This verifies the importance and effectiveness of referencing organs in inferring LNS boundaries. However, when the predicted masks of the 22 organs are used (the real testing condition), the HD increases significantly from $11.8$mm to $28.9$mm as compared to using ground-truth organ masks. This shows the necessity of selecting the key organs suited for the deep parsing model. Finally, using the top-6 auto-searched referencing organs, our DeepStationing model achieves the best performance, reaching {\bf 81.1 $\pm$ 6.1\%} DSC, {\bf 9.9 $\pm$ 3.5mm} HD and {\bf 0.9 $\pm$ 0.6mm} ASD. Qualitative examples illustrating these performance improvements are shown in~Fig.~\ref{Fig:quali}.
We auto-search for the organs that are tailored to optimize the \ac{LNS} parsing performance. Using an interval of 3, we train 7 additional \ac{LNS} parsing models, including from the top-3 up to the top-21 organs. The auto-searched ranking of the 22 organs is as follows: \textit{esophagus, aortic arch, ascending aorta, heart, spine, sternum, V.BCV (R+L), V.pulmonary, descending aorta, V.IJV (R+L), A.CCA (R+L), V.SVC, A.pulmonary, V.azygos, bronchus (R+L), lung (R+L), trachea}, where \textit{`A'} and \textit{`V'} denote \textit{artery} and \textit{vein}. The quantitative \ac{LNS} parsing results when selecting the top-$n$ organs are illustrated in the bottom charts of Fig.~\ref{Fig:quali}. As more organs are gradually included, the \ac{DS} first improves, then slightly drops after more than the top-6 organs are included. The performance later witnesses a sharp drop after including more than the top-9 organs, and becomes steady when more than the top-15 organs are included. This demonstrates that the deep LNS parsing model does not need a complete set of referencing organs to capture the LNS boundaries. We choose the top-6 as our final key organs based on the experimental results. We notice that the trachea, lungs, and bronchus are, surprisingly, ranked in the bottom-5 of the auto-search, although previous works~\cite{lu2011automatic,liu2016mediastinal} manually selected them for LNS parsing. A possible reason is that those organs are usually filled with air and have clear boundaries, while \ac{LNS} does not include air or air-filled organs. With the help of the other found key organs, it is relatively straightforward for the \ac{LNS} parsing \ac{CNN} to distinguish them and reject false positives located in those air-filled organs. We further include 6 ablation studies and segment LNS using: (1) 6 randomly selected organs; (2) the top-6 organs with the best organ segmentation accuracy; (3) the anchor organs; (4) 6 organs recommended by senior oncologists; (5) the searched 6 organs' predictions from the less accurate non-stratified organ segmentor; (6) the searched 6 organs' ground-truth masks. The 6 randomly selected organs are: \textit{V.BCV (L)}, \textit{V.pulmonary}, \textit{V.IJV (R)}, \textit{heart}, \textit{spine}, and \textit{trachea}; the 6 organs with the best segmentation accuracy are: \textit{lungs (R+L)}, \textit{descending aorta}, \textit{heart}, \textit{trachea}, and \textit{spine}; the 6 oncologist-recommended organs are: \textit{trachea}, \textit{aortic arch}, \textit{spine}, \textit{lungs (R+L)}, and \textit{descending aorta}. The DSCs for setups (1-6) are 77.2\%, 78.2\%, 78.6\%, 79.0\%, 80.2\%, and 81.7\%; the HDs are 19.3mm, 11.8mm, 12.4mm, 11.0mm, 10.1mm, and 8.6mm, respectively. In comparison to the LNS predictions using only CT images, the ablation studies demonstrate that using referencing organs for LNS segmentation is the key contributor to the performance gain, and that the selection and the quality of the supporting organs are the main factors for the performance boost; e.g., our main results and setups (5) and (6) show that better searched-organ delineation can yield superior LNS segmentation performance.
\noindent{\bf Comparison to previous work.} We compare DeepStationing to the most relevant previous approach~\cite{liu2016mediastinal}, which exploits heuristically pre-defined spatial margins for \ac{LNS} inference. DeepStationing outperforms~\cite{liu2016mediastinal} by $19.2\%$ in \ac{DS}, $30.2$mm in \ac{HD}, and $5.2$mm in \ac{ASD}. For ease of comparison, similar to~\cite{liu2016mediastinal}, we also merge our \acp{LNS} into four \ac{LN} zones, i.e., the \textit{supraclavicular} (S1), \textit{superior} (S2, S3, and S4), \textit{aortic} (S5 and S6) and \textit{inferior} (S7 and S8) zones, and calculate the accuracy of \ac{LN} instances that are correctly located in the predicted zones. DeepStationing achieves an average accuracy of $96.5\%$, which is $13.3\%$ (absolute) higher than \cite{liu2016mediastinal} in \ac{LN} instance counting accuracy. We additionally tested 2 backbone networks: 3D PHNN (a 3D UNet with a light-weighted decoding path) and 2D UNet. The DSCs of 3D PHNN and 2D UNet are 79.5\% and 78.8\%, respectively. The assumed reason for the performance drop is the loss of boundary precision and of 3D information, respectively.
\section{Conclusion}
In this paper, we propose DeepStationing, a novel framework that performs key organ auto-search based \ac{LNS} parsing on contrast-enhanced \ac{CT} images. Emulating clinical practice, we segment the referencing organs in the thoracic region and use the segmentation results to guide \ac{LNS} parsing. Instead of directly employing the key organs suggested by oncologists, we search for the key organs automatically by formulating the selection as a neural architecture search problem that can opt for optimal performance. Evaluated on the most comprehensive \ac{LNS} dataset to date, DeepStationing outperforms the most relevant previous approach by a significant quantitative margin of $19.2\%$ in \ac{DS}, and its behavior is coherent with clinical explanations. This work is an important step towards reliable and automated \ac{LNS} segmentation.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:intro}
Observations within sensing applications result from the convolution between the latent signal and the sensor's transfer function; therefore, a desired property of the sensor is to have a transfer function that is close to a Dirac delta function so that the latent signal can be recovered from the observations. We will model this convolution in a discrete manner to give rise to the representation of a sensing application described in fig. \ref{fig:diagram}, where we model the observations as a (noisy) mixture of (again noisy) measurements and aim to recover the latent signal from the observations. Mixing of the latent signal's values stems from the inability of the sensor to measure the latent signal at the required resolution; this is due to the low quality of the sensors, which \textit{colour} the observations, and these then have to be \textit{whitened} in order to recover the latent process. Observations composed of mixtures of measurements are commonplace in sensing applications in different areas: in robot localization using radars or sonars \cite{adams_laser,Moravec85}, in astronomical applications \cite{starck2002deconvolution}, and in super-resolution image recovery \cite{baboulaz2009}, to name but a few.
A workaround to the problem of recovering a latent process from observations composed of mixtures of measurements is to define a set of sensing locations (i.e., a grid) and model the observations as a system of linear equations---recall that the measurements are combined in a linear fashion. However, this approach is rather restrictive, since it constrains the measurements to be collected at fixed locations, which is rarely the case in real-world applications, and it leads to heavily underdetermined linear systems. In our view, the key to this problem is to assume spatial correlations in the latent signal so that observations of overlapping regions reveal structure in the signal; we do so in a probabilistic manner by placing a Gaussian process (GP) prior \cite{rasmussen06} on the signal, which is then updated into the posterior density of the latent signal conditional on such observations. However, GPs in their standard form are not suited to deal with observations comprising a mixture of measurements.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{img/sensors.pdf}
\caption{Sensing application: A latent signal (blue) is measured
up to sensor noise (green), these measurements are linearly mixed to yield the mixture-of-observations process (black) which is in turn corrupted by noise (red).}
\label{fig:diagram}
\end{figure}
In this letter we propose a GP-based mixture-of-measurements model, illustrate how to train it and perform Bayesian inference on the latent signal, establish a connection between the proposed model and the linear-system approach to sensing applications, and validate our model on real-world and synthetic data.
\subsection{Background: Gaussian processes and related work}
Gaussian processes (GPs) \cite{rasmussen06} are a nonparametric prior distribution on functions $f:X\mapsto\mathbb{R}$, for any set $X$ where a covariance function can be defined (e.g., a metric space), such that for any finite collection of inputs $\mathbf{x}=[x_1,x_2,\ldots,x_n]^\top \in X^n$, the corresponding outputs $f(x_1),f(x_2),\ldots,f(x_n)\in \mathbb{R}$ are jointly Gaussian, that is,
\begin{equation}
[f(x_1),f(x_2),\ldots,f(x_n)]^\top\sim \mathcal{N}(\mu(\mathbf{x}),K(\mathbf{x},\mathbf{x}))
\end{equation}
where the mean and covariance {functions}, $\mu(\cdot)$ and $K(\cdot,\cdot)$ respectively, are parametric forms that determine the spatial properties of samples drawn from the GP prior. Fitting the GP to observed data involves finding the parameters of the mean and covariance functions; then, the \textit{posterior} distribution of the entire function $f$ conditional on a set of observations is Gaussian.
The GP framework is well suited to the sensing setting in fig. \ref{fig:diagram}, since modelling the latent signal as a GP results in the posterior distribution of the latent process (given the mixture of measurements) being Gaussian as well. Previous GP-based models for convolution processes \cite{nips15,npr_15b,boyle,Higdon2002} model signals as a convolution between a continuous-time filter and a white-noise process, which is unsuitable for representing the latent process in the sensing application, where the spatial correlation of the process is fundamental. Conversely, \cite{NIPS2008_3553} allows GPs as latent functions but addresses the multi-output case, where the aim is to perform inference on the outputs rather than the latent processes. Furthermore, these methods consider continuous-time convolution filters, which is computationally demanding and requires, e.g., variational approximations \cite{titsias09,chatzis15}. Consequently, closed-form and computationally-efficient Bayesian reconstruction of the latent process is still an open problem in sensing applications.
\section{The Gaussian Process Mixture of Measurements (GPMM)}
Consider a sensing application where each observation $y_i$ is a noisy mixture of, again noisy and hidden, measurements $m_{i,j}$ of a latent process $f(\cdot)$ measured at locations $x_{i,j}$, that is,
\begin{align}
m_{i,j} &= f(x_{i,j}) + \epsilon_{i,j} \label{eq:mix1a}\\
y_i&= \sum_{j=1}^{M}w_{i,j}m_{i,j}+\eta_{i} \label{eq:mix1b}
\end{align}
where for the $i^{\text{th}}$ observation $y_i$, we use the following notation:
\begin{itemize}
\item $f(x_{i,j})$ is the value of the latent process at location $x_{i,j}$,
\item $m_{i,j}$ is the measurement of $f(x_{i,j})$ acquired by the sensor,
\item $w_{i,j}$ is the weight of the measurement $m_{i,j}$,
\item $M$ is the number of measurements in the observation $y_i$,
\item $\epsilon_{i,j}$ is the measurement Gaussian noise, and
\item $\eta_i$ is the observation Gaussian noise.
\end{itemize}
We use the terms \textit{measurement} and \textit{observation} defined here consistently in the rest of the paper---see fig. \ref{fig:diagram}. Note that in general, different observations, i.e. $y_{i}, y_{i'}\ i\neq i'$, correspond to different regions and both the locations $x_{i,j},x_{i',j'}, \forall i,j,j'$ and the weights $w_{i,j},w_{i',j'}, \forall i,j,j'$ might be different due to the sensing procedure.
For $N$ observations, we express eq. \eqref{eq:mix1b} as a system of linear equations in block matrix notation:
\begin{equation}
\underbrace{\left[\begin{smallmatrix} y_{1} \\ \vdots \\ y_{N} \end{smallmatrix}\right]}_{\mathbf{y}}=
{\underbrace{\left[\begin{smallmatrix} \mathbf{w}_{1} & & \\
& \ddots & \\
& & \mathbf{w}_{N}\end{smallmatrix}\right]}_{\mathbf{W}}}^T
\underbrace{\left[\begin{smallmatrix} \mathbf{m}_{1} \\
\vdots \\
\mathbf{m}_{N}\end{smallmatrix}\right]}_{\mathbf{m}}
+
\underbrace{\left[\begin{smallmatrix} \eta_{1} \\ \vdots \\ \eta_{N} \end{smallmatrix}\right]}_{\pmb{\eta}}\label{eq:matrix_system}
\end{equation}
where $\mathbf{w}_{i} = [w_{i,1},\ldots,w_{i,M}]^T$, $\mathbf{m}_{i} = [m_{i,1},\ldots,m_{i,M}]^T$, and the matrix $\mathbf{W}$ is block diagonal with blocks of size $M$. Column vectors are denoted in bold lowercase font and matrices in bold uppercase font.
\subsection{Solving the linear system}
Our approach will consider the weights $w_{i,j}$ to be unknown and learn them from the observations; however, let us assume in this section that they are known, in order to address the problem as a linear system. For ease of notation, we assume there are no common locations across the measurements (i.e., $(i,j)\neq (i',j') \Rightarrow x_{i,j}\neq x_{i',j'}$). With these assumptions, eq. \eqref{eq:matrix_system} is an underdetermined linear system: there are $L = MN$ unknowns and $N$ equations, where $N\ll L$; in fact, the general solution to such a system has $L-N$ free parameters or degrees of freedom (neglecting the inconsistent case).
This underdetermined system has infinitely many solutions, with the minimum-norm solution given by \mbox{$\hat{\mathbf{m}} = \mathbf{W}^+\mathbf{y}$}, where $\mathbf{W}^+$ is the Moore-Penrose pseudoinverse of $\mathbf{W}$ \cite{Moore1920}. Using this solution to recover the latent signal has a number of drawbacks: (i) it requires the weights $w_{i,j}$ to be known, (ii) it only recovers the process at the measured locations without providing any insight about regions not measured, and (iii) it does not provide a measure of uncertainty for the estimates, e.g., in the form of error bars.
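As a concrete illustration (under the assumption that the weights are known), the minimum-norm solution of the system $\mathbf{y}=\mathbf{W}^\top\mathbf{m}$ can be computed as in the following sketch.
\begin{verbatim}
# Sketch: minimum-norm recovery of the measurements from the underdetermined
# system above, assuming the (N*M x N) block-diagonal weight matrix W is known.
import numpy as np

def minimum_norm_solution(W, y):
    """W: (N*M, N) so that y = W.T @ m; y: (N,). Returns m_hat of length N*M."""
    return np.linalg.pinv(W.T) @ y   # Moore-Penrose pseudoinverse of W^T
\end{verbatim}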
\subsection{A novel generative model for the mixture of measurements}
As the system in eq. \eqref{eq:matrix_system} has infinitely many solutions, a constraint (or regularisation criterion) has to be imposed to reduce the number of solutions or, critically, to find a single solution. The Moore-Penrose pseudoinverse imposes the minimum-norm criterion; we instead assume a probabilistic condition (a prior) on the spatial structure of the solution. Specifically, we (i) place a GP prior on the latent signal and then (ii) find the posterior distribution of the latent signal conditional on the mixture of measurements, even at locations that were not measured. A key property of this approach is that these two steps are performed analytically, since the latent signal and the mixture of measurements are jointly Gaussian. We next present a formal description of the proposed generative model.
We model the latent process $f(\cdot)$ in \eqref{eq:mix1a} as a GP over the set of locations $X$ given by
\begin{equation}
\label{eq:latent}
f\sim\ \mathcal{GP}(\mu_f,K_f)
\end{equation}
where $\mu_f:X\mapsto\mathbb{R}$ and $K_f:X\times X\mapsto\mathbb{R}$ are the GP mean and covariance functions respectively. As the linear combination of jointly-Gaussian random variables (RVs) is Gaussian, the observations in eq.~\eqref{eq:mix1b} are Gaussian RVs indexed by
$\mathbf{x}_i=[x_{i,1},\ldots,x_{i,M}]^\top\in X^{M}$
with mean and covariance respectively given by
\begin{align}
\mu_y(\mathbf{x}_i) &=
\label{eq:mean_mix}
\expectation{y_i} = \sum_{j=1}^{M} \weightElem{i,j} \mu_f(x_{i,j})\\
\label{eq:K_mix}
K_{y}(\mathbf{x}_i, \mathbf{x}_{i'}) &= \expectation{(y_i-\mu_y(\mathbf{x}_i))(y_{i'}-\mu_y (\mathbf{x}_{i'})) } \\
& \hspace{-4.5em}= \sum_{j,j'=1}^{M} \weightElem{i,j}\weightElem{i',j'}\left(K_f(x_{i,j},x_{i',j'}) +\sigma^2_\epsilon\delta(x_{i,j}-x_{i',j'})\right) + \delta_{i-i'}\sigma^2_\eta \nonumber
\end{align}
where $\sigma^2_\epsilon$ and $\sigma^2_\eta$ are the variances of the measurement and observation noises, respectively. Additionally, note that if measurements are always taken at different locations (which is the case if the set of locations is continuous) we have $\delta(x_{i,j}-x_{i',j'})=0\ \forall i , i', j ,j'$ and
\begin{align}
K_{y}(\mathbf{x}_i, \mathbf{x}_{i'}) &= \sum_{j=1}^{M}\sum_{j'=1}^{M} \weightElem{i,j}\weightElem{i',j'} K_f(x_{i,j},x_{i',j'}) + \delta_{i-i'}\sigma^2_\eta\\
&=\mathbf{w}_i^\top{K_f}(\mathbf{x}_i,\mathbf{x}_{i'}) \mathbf{w}_{i'} + \delta_{i-i'}\sigma^2_\eta\label{eq:Ky}
\end{align}
where ${K_f}(\mathbf{x}_i,\mathbf{x}_{i'})\in\mathbb{R}^{M\times M}$ is the Gram matrix evaluated on $\mathbf{x}_i $ and $ \mathbf{x}_{i'}$, $\mathbf{x}_i=[x_{i,1},\ldots,x_{i,M}]^\top$ and $\mathbf{w}_i=[w_{i,1},\ldots,w_{i,M}]^\top$.
The covariance of the observations is a mixture of evaluations of the covariance function of the process $f(\cdot)$ at the measured locations. This implies that a single common entry between the measurement locations $\mathbf{x}_i$ and $\mathbf{x}_{i'}$ is enough for the observations $y_i$ and $y_{i'}$ to be correlated. This mixture-of-kernel structure resembles additive GPs \cite{Duvenaud11} and multikernel learning \cite{gonen11,tobar14, Yukawa12}; however, these methods combine different kernels (evaluated on a common input) for expressive kernel design, whereas the proposed model combines evaluations of a single kernel on different locations to find the spatial structure of the latent signal. We refer to the presented model as the Gaussian process mixture of measurements (GPMM) and give a graphical model illustration in fig. \ref{fig:LCOGPGraphicalModel}.
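For concreteness, the observation covariance in eq.~\eqref{eq:Ky} (distinct measurement locations) can be assembled as in the following sketch, where the SE kernel and the variable names are illustrative choices.
\begin{verbatim}
# Sketch: observation covariance K_y for distinct measurement locations,
# K_y[i, i'] = w_i^T K_f(x_i, x_i') w_i' + delta(i - i') * sigma_eta^2.
import numpy as np

def se_kernel(x, xp, sigma_f=1.0, ell=1.0):
    x, xp = np.asarray(x), np.asarray(xp)
    return sigma_f**2 * np.exp(-0.5 * (x[:, None] - xp[None, :])**2 / ell**2)

def observation_covariance(X, Wgt, sigma_eta, kernel=se_kernel):
    """X[i], Wgt[i]: locations and weights of the i-th observation (length M)."""
    N = len(X)
    Ky = np.zeros((N, N))
    for i in range(N):
        for ip in range(N):
            Ky[i, ip] = Wgt[i] @ kernel(X[i], X[ip]) @ Wgt[ip]
    return Ky + sigma_eta**2 * np.eye(N)
\end{verbatim}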
\begin{figure}[t]
\centering
\includegraphics{img/LCOGP_GraphicalModel}
\caption{Graphical model of GPMM. Hidden variables are left blank whereas observed ones are shaded. The inner plate represents the $(i,j)^\text{th}$ measurement, the outer one the $i^\text{th}$ observation, and $x_*$ and $f_*$ are a test location and its value respectively. The thick bar indicates that the latent GP $f$ is completely interconnected.}
\label{fig:LCOGPGraphicalModel}
\end{figure}
\section{Inference for GPMM and Relationship to the Moore-Penrose Pseudoinverse}
Fitting the model to the observations $\mathcal{D}=\{(\mathbf{x}_i, y_i), i=1,\ldots,N\}$ involves finding appropriate hyperparameters for GPMM, $\theta_{\text{GPMM}}$, that is, the hyperparameters of the latent signal in eq. \eqref{eq:latent} and the weights $w_{i,j}$, which are now hyperparameters. This can be achieved by minimising the negative log-likelihood $-\log p(\mathbf{y})$:
\begin{align*}
\theta_{\text{GPMM}}=\arg\min\frac{1}{2}\mathbf{y}^\top K_y^{-1}\mathbf{y} + \frac{1}{2}\log{\vert K_y \vert} + \frac{N}{2}\log{2\pi}
\end{align*}
where $K_y$ is given in \eqref{eq:K_mix}, $\mathbf{y}=[y_1,\ldots,y_N]^\top$, and the optimisation can be performed by, e.g., gradient descent. Notice that the costs of computing $K_y$ in eq. \eqref{eq:Ky} and of inverting it are respectively $\mathcal{O}(N^2M^2)$ and $\mathcal{O}(N^3)$, meaning that if $M$ is dominated by $\mathcal{O}(N^{1/2})$ the cost of training GPMM is $\mathcal{O}(N^3)$, as for standard GPs. With the optimal hyperparameters, we are now ready to calculate the posterior of $f(\cdot)$.
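A minimal sketch of this fitting step is given below; it evaluates the negative log-likelihood through a Cholesky factorisation and can be passed to any off-the-shelf optimiser (a derivative-free method is shown for brevity, whereas gradient descent would follow the same pattern). The packing of hyperparameters into \texttt{theta} and the \texttt{build\_Ky} helper are assumptions of the sketch.
\begin{verbatim}
# Sketch: negative log-likelihood of the observations for GPMM.
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(theta, X, y, build_Ky):
    Ky = build_Ky(theta, X)                       # observation covariance K_y
    L = np.linalg.cholesky(Ky + 1e-9 * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(y) * np.log(2.0 * np.pi))

# Example usage (hypothetical initial guess theta0):
# theta_opt = minimize(negative_log_likelihood, theta0,
#                      args=(X, y, build_Ky), method="Nelder-Mead").x
\end{verbatim}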
\subsection{The posterior of the latent process}
\label{sec_posterior}
The posterior density of $f$ given the observations $\mathcal{D}$, $p(f \vert \mathcal{D})$, is Gaussian and determined by (i) the prior mean---assumed to be zero in this case, (ii) the autocovariance of the mixture-of-measurements process $y$---given in eq.~\eqref{eq:K_mix}, and (iii) the covariance between the latent process $f(x)$ and the observation at $\mathbf{x}_i=[x_{i,1},\ldots,x_{i,M}]^\top$, given by $y(\mathbf{x}_i)=y_i=\sum_{j=1}^{M}w_{i,j}m_{i,j}$; this covariance is
\begin{align}
\label{eq:Kfc}
K_{fy}(x, \mathbf{x}_i) &= \expectation{(f(x)-\mu_f(x))(y_i-\mu_{y}(\mathbf{x}_i)) } \\
&= \sum_{j=1}^{M} w_{i,j} K_f(x,x_{i,j})\qquad \nonumber \\
&={ \mathbf{w}_i^\top K_f(x,\mathbf{x}_i) \qquad}\nonumber
\end{align}
Denoting $\mathbf{y}=[y(\mathbf{x}_1),\ldots,y(\mathbf{x}_N)]^\top$, the predictive posterior is:
\begin{align}
\label{eqn:responsePredictiveMean}
\expectation{f(x)|\mathbf{y}} &= K_{fy} K_y^{-1}\mathbf{y}\\
\label{eqn:responsePredictiveCov}
\mathrm{Var}(f(x)|\mathbf{y}) &= K_f(x,x)- K_{fy} K_y^{-1}K_{yf}
\end{align}
thus, the reconstruction of the latent process $f$ can be computed in closed-form (we used the notation $K_{fy}^\top=K_{yf}$).
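These two expressions translate directly into code; the sketch below re-uses the \texttt{observation\_covariance} helper from the earlier sketch and is only illustrative.
\begin{verbatim}
# Sketch: closed-form GPMM posterior at test locations x_star.
import numpy as np

def gpmm_posterior(x_star, X, Wgt, y, kernel, sigma_eta):
    Ky = observation_covariance(X, Wgt, sigma_eta, kernel)         # (N, N)
    # K_fy[:, i] = K_f(x_star, x_i) w_i, as in the covariance above
    Kfy = np.stack([kernel(x_star, X[i]) @ Wgt[i] for i in range(len(X))],
                   axis=1)                                          # (n*, N)
    mean = Kfy @ np.linalg.solve(Ky, y)
    cov = kernel(x_star, x_star) - Kfy @ np.linalg.solve(Ky, Kfy.T)
    return mean, cov
\end{verbatim}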
\subsection{The Moore-Penrose solution is a particular case of GPMM}
Without loss of generality, let us consider that the observations $y_i$ were taken at a grid $\mathbf{x}_\text{grid}=[x_1,\ldots,x_M]$, where only a few weights are nonzero per observation. Furthermore, without evidence for spatial correlation of the latent process $f(\cdot)$, its covariance matrix is the identity multiplied by $\sigma_f^2$ (signal power)---i.e., $\mathbf{K}(\mathbf{x}_\text{grid},\mathbf{x}_\text{grid})=\sigma_f^2\mathbf{I}_M$. Consequently, denoting the mixing weights by $\mathbf{W}=[\mathbf{w}_1,\ldots,\mathbf{w}_N]$ and by $\mathbf{x}$ the input of the observed process, from eqs. \eqref{eq:Ky}-\eqref{eq:Kfc} the covariances are given by
\begin{align}
K_{y}(\mathbf{x},\mathbf{x}) &= (\sigma_\epsilon^2+\sigma_f^2)\mathbf{W}^\top \mathbf{W} +\sigma^2_\eta\mathbf{I}_M\\
K_{fy}(x, \mathbf{x}) &= \sigma_f^2 \mathbf{W}^\top.
\end{align}
Finally, introducing the above two expressions in the posterior mean in eq.~\eqref{eqn:responsePredictiveMean} gives the solution to the linear system
\begin{align}
\hat{f}(\mathbf{x}) &= \frac{\sigma_f^2}{\sigma_\epsilon^2+\sigma_f^2} \mathbf{W}^\top \left( \mathbf{W}^\top \mathbf{W} +\frac{\sigma^2_\eta}{\sigma_\epsilon^2+\sigma_f^2}\mathbf{I}_M\right)^{-1} \mathbf{y}.
\label{eq:GP_linearsol}
\end{align}
The connection between solutions to linear systems and GP models is therefore established: when the noise variances are negligible w.r.t. the signal power $\left(\sigma_f^2\gg\sigma_\epsilon^2,\sigma_\eta^2\right)$ the ratios in eq.~\eqref{eq:GP_linearsol} ${\sigma_f^2}/{(\sigma_\epsilon^2+\sigma_f^2)}\rightarrow 1$ and ${\sigma_\eta^2}/{(\sigma_\epsilon^2+\sigma_f^2)}\rightarrow 0$, and the Moore-Penrose inverse is obtained. On the contrary, when the noise variance $\sigma_\epsilon^2$ is large the estimate decays to zero, or \textit{reverts to the prior}, since the measurements are not reliable. Furthermore, when the observation noise $\sigma^2_\eta=0$, eq.~\eqref{eq:GP_linearsol} is equivalent to the ordinary least squares and when $\sigma^2_\eta>0$ to the regularised least squares (ridge regression).
Finally, we emphasise that, unlike the Moore-Penrose method, the proposed GPMM jointly infers the mixing weights and the complete latent process, while also providing a measure of uncertainty given by the variance in eq. \eqref{eqn:responsePredictiveCov}.
\section{Simulations}
The proposed GPMM model was validated using three datasets: a heart-rate time series, a smooth function generated by a GP with a squared exponential (SE) covariance kernel, and the Heaviside step function. All three experiments consisted of recovering the latent process from noisy mixtures of measurements by fitting GPMM to the observations and then computing the predictive posterior as described in Section \ref{sec_posterior}, where the learnt weights were constrained to have unit $L_1$-norm, positive entries and to be symmetric, in order to avoid redundant solutions. The proposed GPMM was compared to the standard GP, which treats each observation as a single measurement at a single location, as is common practice in sensing applications.
\subsection{Smooth synthetic signal}
This first toy example considered a draw from a GP with SE covariance as the latent function, and 120 observations with 7 measurements per observation. Fig. \ref{fig:smooth} shows the GP estimates and their mean square error (MSE) for both the standard GP with SE kernel (termed GP-SE) and the proposed GPMM with SE kernel (termed GPMM-SE). Notice how GP-SE fails to recover all the extrema of the latent process and adjusts very tightly to the observations; this is because the convolution performed by the sensor smooths out the extrema in the observations, which are then trusted as true values by GP-SE. Conversely, GPMM-SE was able to recover all the extrema, place appropriate error bars on the latent function and report a lower estimation error.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{img/smooth_signal_fig}
\caption{Recovering a synthetic signal (dashed) from mixture of measurements (red): standard GP-SE (top, blue) and the proposed GPMM-SE (bottom, blue).}
\label{fig:smooth}
\end{figure}
\subsection{Heart-rate signal}
Instantaneous-frequency estimation is performed by averaging over a time window, thus motivating the use of the proposed GPMM. We used a heart-rate time series from the MIT-BIH Database \cite{Goldberger} (\href{http://ecg.mit.edu/time-series/}{\texttt{ecg.mit.edu/time-series}}) and constructed a lower-resolution version of it composed of a mixture of measurements. Fig. \ref{fig:hr} shows the recovery of the heart-rate signal from such observations for both GP-SE and the proposed GPMM-SE, where GPMM-SE again outperforms GP-SE in terms of estimation MSE and uncertainty representation. We used 240 observations with 7 measurements each.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{img/heart-rate_signal_fig}
\caption{Recovering a heart-rate signal (dashed) from mixture of measurements (red): standard GP-SE (top, blue) and the proposed GPMM-SE (bottom, blue).}
\label{fig:hr}
\end{figure}
\subsection{Heaviside step function}
Motivated by edge-detection applications, we then considered the Heaviside step function. To cater for discontinuous signals, we used the neural network (NN) kernel \cite{will_inf} and implemented both the standard GP-NN and the proposed GPMM-NN to recover the latent step function. Observe in fig. \ref{fig:heaviside} how GPMM-NN successfully recovered the discontinuity with low variance. We used 120 observations with 7 measurements each.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{img/Heaviside_step_function_fig}
\caption{Recovering a step function (dashed) from mixture of measurements (red): standard GP-NN (top, blue) and the proposed GPMM-NN (bottom, blue).}
\label{fig:heaviside}
\end{figure}
\subsection{Comparing recovered signals in spectral terms}
Our aim is to recover the spatial structure of the latent process. In this regard, Fig. \ref{fig:spectra} shows the power spectral densities (PSDs) for the smooth, heart-rate, and step-function signals from top to bottom. Observe how GPMM (blue) was able to better recover the spectrum in all the experiments considered, unlike the standard GP (red), which failed to identify the spectral content of the latent process that was removed by the convolution in the sensor.
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{img/smooth_signal_spectrum}\\
\includegraphics[width=.49\textwidth]{img/heart-rate_signal_spectrum}\\
\includegraphics[width=.49\textwidth]{img/Heaviside_step_function_spectrum}
\caption{Power spectral density of true latent signal (black), proposed GPMM (blue) and standard GP (red) for all datasets considered: smooth signal (top), heart-rate signal (middle) and step function (bottom).}
\label{fig:spectra}
\end{figure}
\section{Conclusions}
We have proposed the Gaussian process mixture of measurements (GPMM) to address the problem of recovering a latent signal from a noisy mixture of measurements of that signal, a common setting in sensing applications. Our contributions are (i) modelling the latent process and the mixture-of-measurements process as jointly Gaussian, (ii) fitting GPMM, including the mixing weights, and deriving the posterior distribution of the latent function in closed form, (iii) interpreting the solution of the underdetermined linear system generated by the sensing application as a particular case of GPMM, and (iv) validating GPMM against the standard GP on synthetic and real-world signals, where the reconstruction accuracy of GPMM was evidenced both in the time and frequency domains.
\bibliographystyle{IEEEbib}
\section{Introduction}
Toroidal Alfvén eigenmodes (TAEs) are discrete-frequency MHD waves that exist in the toroidicity-induced gap of the Alfvén continuum spectrum in toroidal magnetized plasmas. TAEs are typically excited by an ensemble of energetic ions (e.g.\ coming from auxiliary heating or from fusion reactions) with an inverted energy distribution along the characteristic curves of wave--particle interaction in momentum space. If these waves are excited to large amplitudes, they might eject a large fraction of the energetic ions from the plasma before the ions transfer their energy to the bulk plasma, causing a significant reduction of the heating efficiency of fast ions~\cite{hei,won}. It is therefore of great importance to understand the significance of TAEs in future devices, such as ITER. Accurate modeling is required that can resolve the nonlinear evolution of the wave--particle interactions.
Many hybrid MHD--kinetic Monte Carlo codes developed for this purpose are orbit following, requiring temporal resolutions well below bounce time scales of the resonant energetic particles. These are many orders of magnitude shorter than the time scales for the relevant dynamics of long-lived eigenmodes in large tokamaks, such as ITER. Relatively simple equations of motion that can resolve the relevant time scales for wave--particle interaction more efficiently can be acquired by using action--angle coordinates~\cite{kau} of the equilibrium system for the phase space of energetic particles. Particles in the unperturbed equilibrium system then follow straight lines in configuration space at constant velocities, and their canonical momenta are constants of motion. This coordinate system has the advantage that particles exactly remain on their guiding center orbits in the unperturbed system, independently of the time step length.
In order to get satisfactory convergence of the Monte Carlo codes describing nonlinear wave--particle interactions, $\delta f$ methods are often used. Although the method is computationally advantageous, it makes the code more difficult to use in conjunction with existing Monte Carlo codes that use full-$f$ methods or that use a different background distribution.
FOXTAIL (``\underline{FO}urier series e\underline{X}pansion of fas\underline{T} particle--\underline{A}lfvén eigenmode \underline{I}nteraction''-mode\underline{L}) is a new hybrid magnetohydrodynamic--kinetic model that both uses action--angle coordinates for particle phase space and a full-$f$ Monte Carlo method to represent the resonant energetic particle distribution. It is based on a model by Berk \emph{et al.}~\cite{bb1}, which is derived from a Lagrangian formulation of the wave--particle interaction. The use of action--angle variables can give scenarios where the shortest time scale needed to be resolved in FOXTAIL is on the order of $\omega_\mathrm{p}^{-1}$, where $\omega_\mathrm{p}$ is the precession frequency of the energetic particles interacting strongly with the eigenmodes.
A simplification used in FOXTAIL is that the spatial structures of the eigenmode wave fields are taken to be constant in time. This limitation means that FOXTAIL is unable to model, e.g., energetic particle modes. There also exist scenarios where the time evolution of TAE eigenfunctions is of importance (see e.g.\ Ref.~\cite{wan}). Such scenarios can be modeled by other existing codes, not having the limitation of static eigenfunctions, such as the hybrid MHD--gyrokinetic code, HMGC~\cite{hm1,hm2,hm3}, and the gyrokinetic toroidal code, GTC~\cite{lin,zha,xia}.
The structure of this paper is the following: Section~\ref{sec:mod} presents the mathematical, the physical and the technical background of FOXTAIL, including derivations of the equations used in the different parts of the code and the overall structure of the code. Section~\ref{sec:coi} describes how some of the resulting equations are numerically implemented. Section~\ref{sec:num} presents some of the possible applications of FOXTAIL, including quantitative comparisons with the corresponding one-dimensional bump-on-tail approximation of the system and numerical studies of the Chirikov criterion in scenarios with two eigenmodes. Section~\ref{sec:con} summarizes the paper.
\section{Model description}\label{sec:mod}
\subsection{Physics overview}
The equations used in FOXTAIL are based on a Lagrangian formulation of the wave--particle system \cite{bb1}. Each simulation is formulated as an initial value problem, starting from an energetic particle distribution in a background equilibrium plasma and a set of eigenmodes with a static spatial structure and a dynamical amplitude and phase. Eigenmodes are treated as weak perturbations of the equilibrium, excluding direct mode--mode interaction. The nonlinear coupling of eigenmodes is taken into account only by the energy and momentum exchange between the modes via the energetic particles.
The physical process of the considered wave--particle interactions is essentially absorption and stimulated emission, and the interactions are energy and momentum conserving. The total energy of the eigenmode is proportional to the amplitude squared. Energy conservation is ensured by the equation
\begin{equation}\label{eq:enc}
\sum_\mathrm{particles}\dot{W}_\mathrm{wave} + \sum_\mathrm{eigenmodes}C\re(A \dot{A}^*) = 0,
\end{equation}
where $\dot{W}_\mathrm{wave}$ is the time derivative of the kinetic particle energy, as accelerated by the wave field, $A$ is the complex amplitude of the eigenmode, and $C$ is the ratio between the eigenmode energy and $|A|^2/2$. A Lagrangian formulation of the wave--particle interaction is presented in section~\ref{sec:cse}, which is shown to be consistent with eq.~\eqref{eq:enc}.
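In a time-discretised simulation, this balance can be monitored directly; the following sketch (with placeholder variable names) returns the residual of eq.~\eqref{eq:enc} integrated over one step, which should vanish up to discretisation error.
\begin{verbatim}
# Sketch: discrete-time check of the energy balance above.
import numpy as np

def energy_balance_residual(W_old, W_new, A_old, A_new, C):
    """W_*: particle kinetic energies; A_*: complex mode amplitudes; C: mode constants."""
    dW_particles = np.sum(W_new - W_old)
    dW_modes = np.sum(0.5 * C * (np.abs(A_new)**2 - np.abs(A_old)**2))
    return dW_particles + dW_modes     # ~0 if energy is conserved
\end{verbatim}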
The acceleration of a particle by an Alfvén eigenmode convects the momentum of the particles along curves in the phase space $(W, P_\phi, \mu)$ according to
\begin{equation}\label{eq:cha}
\frac{\mathrm{d} W}{\omega} = \frac{\mathrm{d} P_\phi}{n},~\mathrm{d}\mu = 0,
\end{equation}
where $W$ is the particle kinetic energy, $P_\phi$ is the toroidal canonical momentum, $\mu$ is the magnetic moment, $n$ is the toroidal mode number of the eigenmode and $\omega$ is the eigenmode frequency. The magnetic moment is unperturbed, since the Alfvén eigenmodes are low-frequency waves ($\omega \ll \omega_\mathrm{c}$). The curves in $W,P_\phi$-space specified by eq.~\eqref{eq:cha} are referred to as the characteristic curves of wave--particle interaction. Within certain parameter limits, FOXTAIL is equivalent to a one-dimensional bump-on-tail model describing the wave--particle interaction (cf.\ Ref.~\cite{bb2}). Different characteristic curves are indistinguishable in this limit, and the momentum of the 1D model then corresponds to a variable indexing the location of the particle along the characteristic curves. From the theory of the 1D bump-on-tail model, it is apparent that an inverted energy distribution along the characteristic curves at the wave--particle resonance can excite eigenmodes via a process analogous to Landau damping.
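As a small illustration (not part of the FOXTAIL equations themselves), a wave-induced energy change $\mathrm{d}W$ moves a marker along these characteristic curves as in the sketch below.
\begin{verbatim}
# Sketch: convect (W, P_phi, mu) along the characteristic curves of eq. (cha).
def apply_wave_kick(W, P_phi, mu, dW, n, omega):
    """Energy kick dW from a mode with toroidal mode number n and frequency omega."""
    return W + dW, P_phi + (n / omega) * dW, mu   # mu is unperturbed
\end{verbatim}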
\begin{figure}[t!]
\centering
\includegraphics[width=75mm]{Fig1.pdf}
\caption{Flowchart of the FOXTAIL code. As input, FOX takes the mass and charge of the energetic particle species and a grid in the space spanned by the adiabatic invariants of the equilibrium motion. On this grid, the orbits are solved using the input equilibrium configuration. External routines are used to calculate spatial wave field structures and frequencies of a chosen set of Alfvén eigenmodes. Eigenmode data and orbit data is sent to a routine that integrates all guiding center orbits along the wave fields of the eigenmodes in order to obtain a set of interaction coefficients characterizing the particle response with respect to the wave. A set of particle, eigenmode and wave--particle interaction data is collected and sent to the TAIL code, along with an initial distribution of markers and initial complex amplitudes of the eigenmodes. TAIL then solves the nonlinear time evolution of the markers and the considered eigenmodes.}\label{fig:fch}
\end{figure}
\subsection{Overview of the FOXTAIL code}
The FOXTAIL code is essentially split into two parts, ``FOX'' and ``TAIL'', as illustrated in Fig.~\ref{fig:fch}. TAIL is the numerical dynamics solver of the eigenmodes and the energetic particles, solving the wave--particle system as an initial value problem. FOX can be viewed as a preprocessor of TAIL, calculating orbital, eigenmode and interaction related data for a given set of eigenmodes and energetic particle species in a defined equilibrium configuration. The distribution of energetic particles is represented using a finite set of markers with predefined weights.
\subsection{Action--angle coordinates}\label{sec:aac}
The phase space of markers in TAIL is described using action--angle coordinates of the equilibrium system \cite{kau}. The equations of motion of the system become particularly simple in this choice of coordinates, which contributes to the computational efficiency of the solver. In the absence of wave field perturbations, the ``action'' coordinates (i.e.\ the momentum coordinates of the canonical action--angle coordinate system) are constants of motion, whereas the ``angle'' coordinates (configuration space coordinates) evolve with constant velocities. The presence of eigenmodes perturbs this simple dynamics, and the adiabatic invariants are convected according to eq.~\eqref{eq:cha}.
The angle coordinates of the system are given by
\begin{equation}\label{eq:ang}\left\{\begin{array}{*3{>{\displaystyle\vspace{1mm}}l}}
\tilde{\alpha} = \alpha + \int_0^\theta\mathrm{d}\theta'\frac{\bar{\omega}_\mathrm{c}(\bm{P}) - \omega_\mathrm{c}(\bm{P}, \theta')}{\dot{\theta}(\bm{P}, \theta')}, \\
\tilde{\theta} = \int_0^\theta\mathrm{d}\theta'\frac{\omega_\mathrm{B}(\bm{P})}{\dot{\theta}(\bm{P}, \theta')}, \\
\tilde{\phi} = \phi + \int_0^{\theta}\mathrm{d}\theta'\frac{\omega_\mathrm{p}(\bm{P}) - \dot{\phi}(\bm{P}, \theta')}{\dot{\theta}(\bm{P}, \theta')},
\end{array}\right.\end{equation}
where $\alpha$ is the gyro-angle, $\theta$ and $\phi$ are the poloidal and toroidal angles, respectively, $\bm{P} \equiv (P_\alpha, P_\theta, P_\phi)$ are the action coordinates, canonical to the angles $(\tilde{\alpha}, \tilde{\theta}, \tilde{\phi})$. Furthermore, $\omega_\mathrm{c}$ ($\bar{\omega}_\mathrm{c}$) is the (time averaged) gyro-frequency and $\omega_\mathrm{B}$ and $\omega_\mathrm{p}$ are the bounce and precession frequencies, respectively. The integrals of eq.~\eqref{eq:ang} are evaluated on the poloidal coordinates along the guiding center orbits.
FOXTAIL uses the adiabatic invariants $\bm{J} \equiv (\mu, \Lambda, P_\phi)$ as momentum coordinates ($\Lambda = \mu B_0/W$ is the normalized magnetic moment, where $B_0$ is the on-axis magnetic field strength). These momentum coordinates can be expressed as functions of $\bm{P}$. The angle coordinates $(\tilde{\alpha}, \tilde{\theta}, \tilde{\phi})$ are referred to as the \emph{transformed} gyro-angle, poloidal angle and toroidal angle, respectively. In the equilibrium system, where the momentum coordinates $\bm{J}$ are constant, the transformed angles evolve at a constant velocity $(\bar{\omega}_\mathrm{c}, \omega_\mathrm{B}, \omega_\mathrm{p})$. The dynamics in the FOXTAIL model is averaged over gyration time scales, and consequently $\tilde{\alpha}$ is an ignorable coordinate of the system.
The transformed poloidal angle, $\tilde{\theta}$, can be viewed as an index of the location of the particle in the guiding center orbit, where $\tilde{\theta}: 0 \to 2\pi$ is a complete period of the guiding center orbit in the poloidal plane ($\tilde{\theta} = 0$ is defined as the point where the outer leg of the orbit intersects with the equatorial plane). In section~\ref{sec:wav}, it is shown that a Fourier expansion of the instant acceleration of the particle in the wave field is a convenient representation of the wave--particle interactions.
Complications arise for the $\tilde{\theta}$ coordinate close to the boundary $\omega_\mathrm{B} = 0$, where the particle asymptotically approaches one of the turning points. In this limit, all points along the orbit besides the turning points are represented by infinitely narrow intervals in $\tilde{\theta}$, and the representation of the wave--particle interaction using Fourier series expansions in $\tilde{\theta}$-space becomes invalid. These complications can potentially be resolved either by ad hoc boundary conditions or by more sophisticated coordinate transformations close to this boundary. However, all of the numerical studies presented in this paper consider scenarios where the simulated energetic particle distributions are sufficiently far from the boundary.
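Since $\mathrm{d}\tilde{\theta} = \omega_\mathrm{B}\,\mathrm{d}\theta'/\dot{\theta} = \omega_\mathrm{B}\,\mathrm{d} t$ along an unperturbed orbit, the transformed poloidal angle follows directly from the time coordinates of the orbit. The following Python sketch (our own illustration with a fictitious sampling, not FOXTAIL source code) maps time samples over one poloidal transit to $\tilde{\theta}$:
\begin{verbatim}
import numpy as np

def transformed_poloidal_angle(t_orbit):
    """theta_tilde = omega_B * t along one poloidal transit, with
    theta_tilde = 0 at the outer equatorial crossing (t_orbit[0] = 0)
    and theta_tilde = 2*pi after one full period (t_orbit[-1] = tau_B)."""
    tau_B = t_orbit[-1]
    omega_B = 2.0 * np.pi / tau_B
    return omega_B * np.asarray(t_orbit), omega_B

# Fictitious, non-uniformly sampled transit with a 0.1 ms bounce period:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1e-4, 200))
t[0], t[-1] = 0.0, 1e-4
theta_tilde, omega_B = transformed_poloidal_angle(t)
\end{verbatim}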
\subsection{Orbit solver}
FOX contains subroutines that solve the 3D motion of particles in a given equilibrium field configuration on a grid in $\bm{J}$-space. The equilibrium configuration and the particle orbits are described in $\psi,\theta,\phi$-space, where $\psi$ is the poloidal magnetic flux per radian. For each orbit on the $\bm{J}$-grid, the time evolution of $\psi$, $\theta$ and $\phi_\mathrm{s}$ is calculated, where $\phi_\mathrm{s} \equiv \phi - \tilde{\phi} + \omega_\mathrm{p}\tilde{\theta}/\omega_\mathrm{B}$ is the shifted toroidal coordinate ($\phi_\mathrm{s}$ is $\phi$ shifted such that $\phi_\mathrm{s} = 0$ coincides with $\tilde{\theta} = 0$). The equilibrium configuration is taken to be axisymmetric. Assuming MHD force balance and nested magnetic flux surfaces, the equilibrium magnetic field can be expressed as
\begin{equation}
\bm{B} = F(\psi)\nabla\phi + \nabla\phi\times\nabla\psi.
\end{equation}
The guiding center motion consists of a parallel motion and a drift motion, where the drift is given by the combined $\bm{E}\times\bm{B}$, $\nabla B$ and curvature drifts according to
\begin{equation}
\bm{v}_\mathrm{d} = \frac{\bm{E}\times\bm{B}}{B^2} + \frac{\mu(2 B_0 - \Lambda B)}{Z e \Lambda B^3}\bm{B}\times\nabla B.
\end{equation}
By combining $W = m v_\parallel^2/2 + \mu B$ with $P_\phi = m v_\parallel F/B + m v_{\mathrm{d},\phi} - Z e\psi$ and eliminating $v_\parallel$, it can be shown that the coordinates of the guiding center orbits in the poloidal plane follow the equation
\begin{equation}\label{eq:gco}
f(\psi, \theta, \bm{J}) = 0,
\end{equation}
where
\ba
f(\psi, \theta, \bm{J}) &\equiv 1 - \frac{\Lambda B(\psi, \theta)}{B_0} - \frac{\Lambda B^2(\psi, \theta)}{2 m \mu B_0 F^2(\psi)} \nonumber\\
&\quad\, \times [P_\phi + Z e\psi - m v_{\mathrm{d}, \phi}(\psi, \theta, \mu, \Lambda)]^2,
\ea
\ba
v_{\mathrm{d}, \phi}(\psi, \theta, \mu, \Lambda) &= \frac{E^\psi(\psi, \theta)}{B^2(\psi, \theta)} - \frac{\mu[2 B_0 - \Lambda B(\psi, \theta)]}{Z e \Lambda B^3(\psi, \theta)}\nonumber\\
&\quad\, \times[\nabla B(\psi, \theta)]^\psi
\ea
is the covariant toroidal component of the drift velocity, $E^\psi$ and $[\nabla B]^\psi$ are the contravariant $\psi$-components of $\bm{E}$ and $\nabla B$, respectively, and $m$ is the particle mass.
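A generic way of solving eq.~\eqref{eq:gco} numerically is to scan $\psi$ for sign changes of $f$ at each poloidal angle and refine the roots by bracketing. The sketch below is an illustration only (FOX itself locates the orbit by bilinear interpolation of $f$ on the equilibrium $\psi,\theta$-grid, as described in section~\ref{sec:coi}); it assumes a user-supplied callable implementing $f(\psi, \theta)$ for the fixed $\bm{J}$ of interest:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def solve_orbit_contour(f, theta_grid, psi_min, psi_max, n_scan=200):
    """Find psi(theta) on the orbit defined by f(psi, theta) = 0 (eq. gco).

    f            : callable f(psi, theta) for the fixed J of interest
    theta_grid   : poloidal angles at which the orbit is sought
    psi_min/max  : radial search window (e.g. axis to last closed surface)

    Angles where no sign change of f is found (the orbit does not reach
    that theta) are simply skipped.
    """
    theta_pts, psi_pts = [], []
    psi_scan = np.linspace(psi_min, psi_max, n_scan)
    for theta in theta_grid:
        vals = np.array([f(p, theta) for p in psi_scan])
        crossings = np.where(np.diff(np.sign(vals)) != 0)[0]
        for j in crossings:                      # an orbit may cross twice
            root = brentq(f, psi_scan[j], psi_scan[j + 1], args=(theta,))
            theta_pts.append(theta)
            psi_pts.append(root)
    return np.array(theta_pts), np.array(psi_pts)
\end{verbatim}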
Once the projections of the orbits on the poloidal plane are solved on the chosen $\bm{J}$-grid using eq.~\eqref{eq:gco}, the corresponding time coordinates are calculated according to
\begin{equation}\label{eq:tps}
t(\psi) = \int_{\psi_0}^\psi\frac{\mathrm{d}\psi}{\dot{\psi}},
\end{equation}
where
\begin{equation}\label{eq:dps}
\dot{\psi} = \frac{J F E_\theta - g_{\theta\theta}E_\phi}{J^2 B^2} - \frac{\mu F(2 B_0 - \Lambda B)}{Z e \Lambda J B^3}\dd{B}{\theta},
\end{equation}\begin{equation}
J \equiv \left(\dd{\bm{r}}{\psi}\times\dd{\bm{r}}{\theta}\right)\cdot\dd{\bm{r}}{\phi},
\end{equation}
and $\psi(t = 0) = \psi_0$ is defined as the point where the outer leg of the guiding center orbit intersects the equatorial plane ($\tilde{\theta} = 0$). Similarly, the $\phi_\mathrm{s}$ coordinates are calculated from the toroidal velocity according to
\begin{equation}\label{eq:tph}
\phi_\mathrm{s}(t) = \int_0^t\mathrm{d} t'\:\dot{\phi}(\bm{J}, t'),
\end{equation}
where
\begin{equation}\label{eq:dph}
\dot{\phi}(\bm{J}, t) = \frac{P_\phi + Z e\psi(\bm{J}, t)}{m R^2(\bm{J}, t)}.
\end{equation}
When calculating the wave--particle interaction coefficients, the poloidal velocity is required at each point of the orbit (see eq.~\eqref{eq:vig}). It is given by
\ba\label{eq:dth}
\dot{\theta} &= \frac{P_\phi + Z e\psi - m v_{\mathrm{d},\phi}}{m J F} - \frac{J F E_\psi + g_{\psi\theta}E_\phi}{J^2 B^2} \nonumber\\
&\quad\, + \frac{\mu F(2 B_0 - \Lambda B)}{Z e \Lambda J B^3}\dd{B}{\psi}.
\ea
\subsection{Wave field description}\label{sec:wav}
The present version of FOXTAIL describes the dynamics of low frequency, \emph{shear} eigenmodes, such as the toroidal Alfvén eigenmodes (TAEs), but the model can be extended to describe eigenmodes with, e.g., compressible components as well.
Neglecting plasma resistivity, the general electric wave field can be represented by the two scalar potentials $\Phi$ and $\Psi$ according to
\begin{equation}\label{eq:esc}
\delta\bm{E} = -\nabla_\perp\Phi + \frac{\bm{B}\times\nabla\Psi}{B},
\end{equation}
where the first term is associated with magnetic shear, and the second term is associated with magnetic compression. When parallel gradients in the plasma are negligible in comparison to perpendicular gradients, the excitation of the two scalar potentials $\Phi$ and $\Psi$ is almost decoupled, and it is sufficient to describe the shear Alfvén wave using the first term \cite{bb1}.
In an axisymmetric toroidal plasma, the scalar potential of each eigenmode (indexed by $i$) can be written in the form
\ba
\Phi_i(\bm{r}, t) &= \re\sum_m C_i(t)\mathrm{e}^{\mathrm{i}\chi_i(t)}\Phi_{i, m}(\psi) \nonumber\\
&\quad\, \times\mathrm{e}^{\mathrm{i}(n_i \phi - m\theta - \omega_i t)}, \label{eq:phi}
\ea
where $C_i$ and $\chi_i$ are the slowly varying amplitude and phase of the eigenmode, respectively ($\dot{C}_i/C_i \ll \omega_i$, $\dot{\chi}_i \ll \omega_i$), $n_i$ is the toroidal mode number and $\omega_i$ is the eigenmode frequency. The electric wave field is therefore given by
\begin{equation}
\delta\bm{E} = \re\sum_i C_i\mathrm{e}^{\mathrm{i}\chi_i}\tilde{\bm{E}}_i\mathrm{e}^{\mathrm{i}(n_i\phi - \omega_i t)},
\end{equation}
where
\ba
\tilde{\bm{E}}_i &= \sum_m\mathrm{e}^{-\mathrm{i} m\theta}\bigg(\left[\mathrm{i}\Phi_{i,m}g_{\psi\theta}G_{i,m} - \dr{\Phi_{i, m}}{\psi}\right]\nabla\psi + \mathrm{i}\Phi_{i,m} \nonumber\\
&\quad\,\times\Big[(g_{\theta\theta}G_{i,m} + m)\nabla\theta + (J F G_{i,m} - n_i)\nabla\phi\Big]\bigg), \label{eq:tie}
\ea
and $G_{i,m} \equiv (n_i J F - m R^2)/(J^2 R^2 B^2)$.
\subsection{Fourier series expansion of fast particle--Alfvén eigenmode interaction}
The acceleration of the energetic particle in the wave field is described by the equation
\begin{equation}
\dot{W} = Z e \bm{v}\cdot\delta\bm{E} \approx Z e\langle\bm{v}\cdot\delta\bm{E}\rangle_\mathrm{g},
\end{equation}
where $\langle\cdot\rangle_\mathrm{g}$ averages over the gyro-motion. Initial versions of FOXTAIL only consider the lowest order averaging over the gyro-motion, but gyro-kinetic corrections to the averaging may be included in later versions. The averaged $\bm{v}\cdot\delta\bm{E}$ can be written as
\begin{equation}
\langle\bm{v}\cdot\delta\bm{E}\rangle_\mathrm{g} = \langle\bm{v}\rangle_\mathrm{g}\cdot\langle\delta\bm{E}\rangle_\mathrm{g} + \frac{\mu}{Z e}\dd{\langle\delta B_\parallel\rangle_{S_\mathrm{g}}}{t},
\end{equation}
where $\langle\delta B_\parallel\rangle_{S_\mathrm{g}}$ averages the parallel magnetic wave field over the surface $S_\mathrm{g}$ enclosed by the gyro-ring (generated by the span of the gyro-angle). For shear waves, the $\delta B_\parallel$ term can be neglected, and we are left with
\begin{equation}
\dot{W} = Z e\langle\bm{v}\rangle_\mathrm{g}\cdot\langle\delta\bm{E}\rangle_\mathrm{g} = \re\sum_i C_i\mathrm{e}^{\mathrm{i}\chi_i} V_i \mathrm{e}^{\mathrm{i}(n_i\phi - \omega_i t)},
\end{equation}
where
\begin{equation}\label{eq:vig}
V_i(\bm{J}, t) = Z e(\dot{\psi}\tilde{E}_\psi + \dot{\theta}\tilde{E}_\theta + \dot{\phi}\tilde{E}_\phi),
\end{equation}
$\dot{\psi}(\bm{J}, t)$, $\dot{\theta}(\bm{J}, t)$ and $\dot{\phi}(\bm{J}, t)$ are the guiding center velocities, given by eqs.~\eqref{eq:dps}, \eqref{eq:dth} and \eqref{eq:dph}, respectively, and $\tilde{E}_\psi(\psi(t), \theta(t))$, $\tilde{E}_\theta(\psi(t), \theta(t))$ and $\tilde{E}_\phi(\psi(t), \theta(t))$ are given by eq.~\eqref{eq:tie}.
All guiding center coordinates $(\psi, \theta, \phi_\mathrm{s})$ and their velocities can be written as functions of $\bm{J}$ and $\tilde{\theta}$. It is then possible to write $\dot{W}$ in a very compact form using action--angle coordinates:
\begin{equation}\label{eq:dwi}
\dot{W} = \re\sum_{i,\ell}C_i\mathrm{e}^{\mathrm{i}\chi_i}V_{i,\ell}\mathrm{e}^{\mathrm{i}(\ell\tilde{\theta} + n_i\tilde{\phi} - \omega_i t)},
\end{equation}
where
\ba
V_{i,\ell}(\bm{J}) &= \frac{1}{2\pi}\int_0^{2\pi}\mathrm{d}\tilde{\theta}\:V_i(\tilde{\theta}, \bm{J}) \nonumber\\
&\quad\, \times\exp\bigg[\mathrm{i}\bigg(n_i\left[\phi_\mathrm{s}(\tilde{\theta}, \bm{J}) - \frac{\omega_\mathrm{p}(\bm{J})}{\omega_\mathrm{B}(\bm{J})}\tilde{\theta}\right] - \ell\tilde{\theta}\bigg)\bigg].\label{eq:vil}
\ea
The coefficients $V_{i, \ell}$ are the Fourier series expansion of the wave--particle interaction from which the FOX code takes its name. It can be seen in eq.~\eqref{eq:dwi} that a wave--particle resonance is characterized by the condition $\ell\omega_\mathrm{B} + n_i\omega_\mathrm{p} \approx \omega_i$. The model is very efficient in the sense that one can select the relevant modes $i$ and coefficients $\ell$ that are close to resonant with an ensemble of energetic particles and neglect all other modes and non-resonant Fourier coefficients. If the relevant coefficients only have $\ell = 0$, $\tilde{\theta}$ becomes an ignorable coordinate of the system, and the shortest time scales that need to be resolved in simulations using TAIL are the precession time scales, $\omega_\mathrm{p}^{-1}$.
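Numerically, eq.~\eqref{eq:vil} amounts to a periodic quadrature over one poloidal transit. A minimal Python sketch is given below (an illustration with assumed array inputs, not the FOX implementation); it assumes that $V_i$, $\phi_\mathrm{s}$ and $\tilde{\theta}$ have been tabulated along the orbit, with $\tilde{\theta}$ starting at zero, and uses the trapezoidal rule mentioned in section~\ref{sec:coi}:
\begin{verbatim}
import numpy as np

def interaction_coefficient(theta_tilde, V_i, phi_s,
                            omega_p, omega_B, n_tor, ell):
    """Fourier coefficient V_{i,l} of eq. (vil) by trapezoidal quadrature.

    theta_tilde : transformed poloidal angle samples on [0, 2*pi), from 0
    V_i         : V_i(theta_tilde, J) tabulated along the orbit
    phi_s       : shifted toroidal coordinate along the orbit
    omega_p/B   : precession and bounce frequencies at this J
    n_tor, ell  : toroidal mode number and Fourier index
    """
    phase = (n_tor * (phi_s - omega_p / omega_B * theta_tilde)
             - ell * theta_tilde)
    integrand = V_i * np.exp(1j * phase)
    # The integrand is 2*pi-periodic in theta_tilde; close the period.
    th = np.append(theta_tilde, 2.0 * np.pi)
    ig = np.append(integrand, integrand[0])
    return np.sum(0.5 * (ig[1:] + ig[:-1]) * np.diff(th)) / (2.0 * np.pi)
\end{verbatim}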
As was mentioned in section~\ref{sec:aac}, complications arise for the choice of action--angle coordinates when describing the wave--particle interaction in regions where $\omega_\mathrm{B} \approx 0$. This is explicitly seen in eq.~\eqref{eq:vil}, where $\omega_\mathrm{p}/\omega_\mathrm{B} \to \infty$, and the integrand oscillates infinitely fast in $\tilde{\theta}$. $V_i(\tilde{\theta}, \bm{J})$ is bounded, and constant for all $\tilde{\theta}$ in this limit except for the infinitely narrow intervals that do not represent the turning points. For this reason, it can be understood that $V_{i,\ell}$ tends to zero towards the $\omega_\mathrm{B} = 0$ boundary surface. Note that the sum of eq.~\eqref{eq:dwi} over all integers $\ell \in \mathbb{Z}$ should remain finite, given that particles can be accelerated across these surfaces by wave fields.
\subsection{Lagrangian formulation of wave--particle interaction}\label{sec:cse}
A model for describing the dynamics of the momentum variables $(\mu, \Lambda, P_\phi)$ and the amplitudes and phases of the eigenmodes remains to be found. Such a model can be derived from a Lagrangian formulation of the wave--particle system. We consider additions to the equilibrium Lagrangian as a power series in the eigenmode amplitudes $C_i$.
Expressed in action--angle variables, the zeroth order Lagrangian reads \cite{whi}
\begin{equation}\label{eq:la0}
\mathcal{L}_0 = \sum_k \left[P_{\alpha,k}\dot{\tilde{\alpha}}_k + P_{\theta,k}\dot{\tilde{\theta}}_k + P_{\phi, k}\dot{\tilde{\phi}}_k - \mathcal{H}_0(\bm{P}_k)\right],
\end{equation}
where $k$ is the particle index, and $\mathcal{H}_0$, satisfying
\begin{equation}
\dd{\mathcal{H}_0}{\bm{P}} = (\bar{\omega}_\mathrm{c}, \omega_\mathrm{B}, \omega_\mathrm{p}),
\end{equation}
is the equilibrium Hamiltonian. A first order Lagrangian consistent with eq.~\eqref{eq:dwi} is
\begin{equation}\label{eq:la1}
\mathcal{L}_1 = \im\sum_{i,k,\ell}\frac{C_i}{\omega_i}\mathrm{e}^{\mathrm{i}\chi_i}V_{i,\ell}(\bm{J}_k)\mathrm{e}^{\mathrm{i}(\ell\tilde{\theta}_k + n_i\tilde{\phi}_k - \omega_i t)}.
\end{equation}
In a low-$\beta$ plasma, the second order Lagrangian for shear Alfvén eigenmodes can be expressed in the form \cite{bb1}
\begin{equation}\label{eq:l20}
\mathcal{L}_2 = -\sum_i\frac{\dot{\chi}_i C_i^2}{2\mu_0\omega_i}\int\mathrm{d} V\:\frac{|\tilde{\bm{E}}_i(\bm{r})|^2}{v_\mathrm{A}^2(\bm{r})}
\end{equation}
when neglecting rapidly oscillating terms (on time scales $\omega_i^{-1}$), where $\mu_0$ is the vacuum permeability.
Note that the system is invariant under the transformation $C_i\mathrm{e}^{\mathrm{i}\chi_i} \to \kappa_i C_i\mathrm{e}^{\mathrm{i}\chi_i}$, $\Phi_{i,m} \to \Phi_{i,m}/\kappa_i$ for any constant $\kappa_i$. A normalization can be imposed by the condition
\begin{equation}
\int\mathrm{d} V\:\frac{|\tilde{\bm{E}}_i(\bm{r})|^2}{\mu_0 v_\mathrm{A}^2(\bm{r})} = 1
\end{equation}
for each mode $i$. Then the second order Lagrangian reduces to
\begin{equation}\label{eq:la2}
\mathcal{L}_2 = -\sum_i\frac{\dot{\chi}_i C_i^2}{2\omega_i}.
\end{equation}
Assuming that the amplitudes $C_i$ are small enough, such that $|\mathcal{L}_1| \ll |\mathcal{H}_0|$, the corresponding Hamiltonian for the wave--particle system is $\mathcal{H} = \mathcal{H}_0 - \mathcal{L}_1$, with the canonically conjugate pairs $(\tilde{\alpha}_k\,; P_{\alpha, k})$, $(\tilde{\theta}_k\,; P_{\theta, k})$, $(\tilde{\phi}_k\,; P_{\phi, k})$ and $(-\chi_i\,; C_i^2/2\omega_i)$. From this, one can derive all the remaining equations of motion of the wave--particle system self-consistently.
Defining the complex amplitude $A_i \equiv C_i\mathrm{e}^{\mathrm{i}\chi_i}$ and using that $\mu = Z e P_\alpha / m$, the equations of motion of the wave--particle system are
\begin{equation}\label{eq:ta1}\left\{\begin{array}{*3{>{\displaystyle\vspace{1mm}}l}}
\dot{\mu}_k = 0, & \dot{\tilde{\theta}}_k = \omega_\mathrm{B}(\bm{J}_k), \\
\dot{\Lambda}_k = -\frac{\Lambda_k^2}{\mu_k B_0}\re\sum_i A_i U_{i,k}, & \dot{\tilde{\phi}}_k = \omega_\mathrm{p}(\bm{J}_k), \\
\dot{P}_{\phi,k} = \re\sum_i\frac{n_i}{\omega_i}A_i U_{i,k}, & \dot{A}_i = -\sum_k U_{i,k}^*,
\end{array}\right.\end{equation}
where
\begin{equation}
U_{i,k} \equiv \sum_\ell V_{i,\ell}(\bm{J}_k)\mathrm{e}^{\mathrm{i}(\ell\tilde{\theta}_k + n_i\tilde{\phi}_k - \omega_i t)}.
\end{equation}
\subsection{Additional operators}\label{sec:col}
The effects of particle collisions are not covered by the theory presented in the preceding sections. These effects can be added explicitly to the system, e.g.\ by using a diffusion operator in momentum space \cite{bb3} combined with a momentum drag \cite{lil} derived from the Fokker--Planck operator, which act directly on the energetic particle distribution. Adding diffusion to the system transforms the system of ordinary differential equations in eq.~\eqref{eq:ta1} to a system of stochastic differential equations. The general drag--diffusion operator can be written in It\=o form according to
\begin{equation}\label{eq:col}\left\{\begin{array}{*3{>{\displaystyle\vspace{1mm}}l}}
\mathrm{d}\mu_\mathrm{c} = a_\mu\mathrm{d} t + \bm{b}_\mu\cdot\mathrm{d}\bm{W}_t, \\
\mathrm{d}\Lambda_\mathrm{c} = a_\Lambda\mathrm{d} t + \bm{b}_\Lambda\cdot\mathrm{d}\bm{W}_t, \\
\mathrm{d} P_{\phi,\mathrm{c}} = a_{P_\phi}\mathrm{d} t + \bm{b}_{P_\phi}\cdot\mathrm{d}\bm{W}_t,
\end{array}\right.\end{equation}
where $a_{\bm{J}}$ is associated with drag, $\bm{b}_{\bm{J}}$ is associated with diffusion and the components of $\bm{W}_t$ are independent Wiener processes in time. Both $a_{\bm{J}}$ and $\bm{b}_{\bm{J}}$ are functions of $\bm{J}$ and $\tilde{\theta}$ in general, but orbit averaged (i.e.\ $\tilde{\theta}$ averaged) versions of the operators may be used for simplicity (see e.g.\ Ref.~\cite{eri}).
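As an illustration, one explicit Euler--Maruyama step of the It\=o operator in eq.~\eqref{eq:col} can be written as follows (a sketch with assumed array shapes; the drag and diffusion coefficients $a_{\bm{J}}$ and $\bm{b}_{\bm{J}}$ are supplied by the user):
\begin{verbatim}
import numpy as np

def collision_step(J, a, b, dt, rng):
    """One Euler--Maruyama step of dJ_c = a dt + b . dW_t, cf. eq. (col).

    J : (N, 3) marker momenta (mu, Lambda, P_phi)
    a : callable a(J) -> (N, 3) drag coefficients
    b : callable b(J) -> (N, 3, d) diffusion coefficients,
        d = number of independent Wiener processes
    """
    drift, diff = a(J), b(J)
    dW = rng.normal(0.0, np.sqrt(dt), size=(J.shape[0], diff.shape[-1]))
    return J + drift * dt + np.einsum("nij,nj->ni", diff, dW)
\end{verbatim}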
There are other processes external to the energetic particle--Alfvén eigenmode system which may be included. Depending on the source of the energetic particle distribution (e.g.\ neutral beam injection, cyclotron resonance heating or nuclear fusion), operators can be added that continuously supply particles and/or energy to the system. Particle sources are typically modeled by dynamical weights and statistical redistribution of markers. Losses from Bremsstrahlung and cyclotron radiation can be modeled by additions to the $a_{\bm{J}}$ terms in eq.~\eqref{eq:col}.
An additional $-\gamma_{\mathrm{d},i} A_i$ term can be added to the amplitude equation ($\dot{A}_i$ in eq.~\eqref{eq:ta1}) in order to model various damping mechanisms on the eigenmode amplitudes, where $\gamma_{\mathrm{d}, i}$ is the damping rate of the $i$:th mode. This damping can, e.g., come from Landau damping in the interaction with thermal particles or damping due to mode conversion. The $-\gamma_{\mathrm{d},i} A_i$ term of the amplitude equation is here referred to as \emph{explicit} wave damping, unlike the Landau damping coming from the interaction with the energetic particle distribution, which arises implicitly from the model equations.
\subsection{Lowest order corrections to the angle perturbation}\label{sec:loc}
When going from the Lagrangians of eqs.~\eqref{eq:la0}, \eqref{eq:la1} and \eqref{eq:la2} to eq.~\eqref{eq:ta1}, it was assumed that $|\mathcal{L}_1| \ll |\mathcal{H}_0|$, which allowed one to neglect corrections of the canonical momenta coming from the implicit dependence of $\mathcal{L}_1$ with respect to $\dot{\tilde{\theta}}$ and $\dot{\tilde{\phi}}$. For a small enough $\mathcal{L}_1$, the final equations for $\dot{\tilde{\theta}}$ and $\dot{\tilde{\phi}}$ are given only by the derivatives of the equilibrium Hamiltonian $\mathcal{H}_0$ with respect to $P_\theta$ and $P_\phi$, respectively. However, the above assumptions might not be valid for large amplitude eigenmodes and for processes affecting the energetic particles or the eigenmodes that are not direct wave--particle interaction, and one may have to consider the lowest order correction to $\dot{\tilde{\theta}}$ and $\dot{\tilde{\phi}}$ due to momentum perturbation. The correction can be understood from the fact that $\tilde{\theta}$ maps differently to locations on the guiding center orbit for different $\bm{J}$. Without the correction, one pushes the particle forwards or backwards along the orbit due to different mapping of $\tilde{\theta}$.
Assuming a minimal perturbation of the guiding center position in $R,Z$-space while perturbing $\bm{J}$ by the amount $\mathrm{d}\bm{J}$, it can be shown that $\tilde{\theta}$ should be corrected by the amount $\partial\tilde{\theta}/\partial\bm{J}\cdot\mathrm{d}\bm{J}$, where
\begin{equation}\label{eq:dtj}
\dd{\tilde{\theta}}{\bm{J}} = -\frac{\omega_\mathrm{B}}{\dot{R}^2 + \dot{Z}^2}\left(\dot{R}\dd{R}{\bm{J}} + \dot{Z}\dd{Z}{\bm{J}}\right),
\end{equation}
$R$ is the distance from the symmetry axis and $Z$ is the vertical guiding center position. Using that $\tilde{\phi} = \phi - \phi_\mathrm{s} + \omega_\mathrm{p}\tilde{\theta}/\omega_\mathrm{B}$, it can be shown that
\ba\label{eq:dpj}
\dd{\tilde{\phi}}{\bm{J}} &= \frac{\dot{\phi} - \omega_\mathrm{p}}{\dot{R}^2 + \dot{Z}^2}\left(\dot{R}\dd{R}{\bm{J}} + \dot{Z}\dd{Z}{\bm{J}}\right) \nonumber\\
&\quad\, + \dd{}{\bm{J}}\left(\frac{\omega_\mathrm{p}}{\omega_\mathrm{B}}\tilde{\theta} - \phi_\mathrm{s}\right).
\ea
All the derivatives with respect to $\bm{J}$ on the right hand sides of eqs.~\eqref{eq:dtj} and \eqref{eq:dpj} are evaluated while keeping $\tilde{\theta}$ constant.
When the $\bm{J}$ perturbations of any process are stochastic ($\bm{b}_{\bm{J}}$ of eq.~\eqref{eq:col} is nonzero), the angle coordinates become stochastic as well when including the $\partial\tilde{\theta}/\partial\bm{J}$ and $\partial\tilde{\phi}/\partial\bm{J}$ corrections. This is typically how phase decorrelation is introduced to the system. Such an operator was analyzed for the one-dimensional bump-on-tail model in Ref.~\cite{tho}.
\subsection{Summary of the TAIL model equations}\label{sec:sum}
To summarize, the model equations used by TAIL are
\begin{equation}\label{eq:ta2}\left\{\begin{array}{*3{>{\displaystyle\vspace{1mm}}l}}
\mathrm{d}\tilde{\theta}_k = \omega_\mathrm{B}(\bm{J}_k)\mathrm{d} t + \dd{\tilde{\theta}(\tilde{\theta}_k, \bm{J}_k)}{\bm{J}}\cdot\mathrm{d}\bm{J}_k, \\
\mathrm{d}\tilde{\phi}_k = \omega_\mathrm{p}(\bm{J}_k)\mathrm{d} t + \dd{\tilde{\phi}(\tilde{\theta}_k, \bm{J}_k)}{\bm{J}}\cdot\mathrm{d}\bm{J}_k, \\
\mathrm{d} \mu_k = \mathrm{d}\mu_\mathrm{c}(\bm{J}_k), \\
\mathrm{d}\Lambda_k = -\frac{\Lambda_k^2}{\mu_k B_0}\re\sum_i A_i U_{i,k}\mathrm{d} t + \mathrm{d}\Lambda_\mathrm{c}(\bm{J}_k), \\
\mathrm{d} P_{\phi,k} = \re\sum_i\frac{n_i}{\omega_i}A_i U_{i,k}\,\mathrm{d} t + \mathrm{d} P_{\phi,\mathrm{c}}(\bm{J}_k), \\
\dot{A}_i = -\sum_k w_k U_{i,k}^* - \gamma_{\mathrm{d}, i} A_i,
\end{array}\right.\end{equation}
where $k$ is now the \emph{marker} index with weight $w_k$, and $(\mathrm{d}\mu_\mathrm{c}, \mathrm{d}\Lambda_\mathrm{c}, \mathrm{d} P_{\phi,\mathrm{c}})$ represents additional differential operators acting on the momentum space of markers. The total wave--particle energy of the system can be defined as
\ba
W_\mathrm{tot} &\equiv \sum_k w_k W_k + \sum_i\frac{|A_i|^2}{2} \nonumber\\
&= \sum_k w_k \frac{\mu_k B_0}{\Lambda_k} + \sum_i\frac{|A_i|^2}{2}.
\ea
In the absence of explicit wave damping and particle sources and sinks, it can easily be shown that
\begin{equation}
\dot{W}_\mathrm{tot} = -\sum_k\frac{w_k \mu_k B_0}{\Lambda_k^2}\dot{\Lambda}_k + \re\sum_i A_i\dot{A}_i^* = 0,
\end{equation}
which is consistent with the condition for energy conservation in eq.~\eqref{eq:enc}.
\section{Code implementation}\label{sec:coi}
As input, FOX takes a file that contains all information characterizing the equilibrium configuration on a format compatible with Integrated Tokamak Modelling standards \cite{imb}. All scalar fields, such as $B$, $J$ and $F$, are specified on a grid in $\psi,\theta$-space. The user defines an equidistant grid in $\bm{J}$-space, where all guiding center orbits are to be solved in the given equilibrium. Equation~\eqref{eq:gco} is then solved for each $\bm{J}$ on the grid by bilinear interpolation of $f$ in $\psi,\theta$-space. An example of such a solution is shown in Fig.~\ref{fig:orb}. The solution method is optimal for wide orbits, whereas thinner orbits require a large enough $\psi$ resolution of the equilibrium.
Once the guiding center orbit points are found in the poloidal plane, the time dependence of the orbit is calculated using eq.~\eqref{eq:tps}. Numerically, the integration is performed by assuming $\ddot{\psi}$ to be constant between adjacent points of the guiding center orbit. This assumption generates the approximation
\begin{equation}\label{eq:tpa}
t_j = \int_{\psi_0}^{\psi_j}\frac{\mathrm{d}\psi}{\dot{\psi}} \approx 2\sum_{k = 1}^j\frac{\psi_k - \psi_{k - 1}}{\dot{\psi}_k + \dot{\psi}_{k - 1}},
\end{equation}
where $(\psi_j, \theta_j)$ is the $j$:th $\psi,\theta$-point along the orbit and $\dot{\psi}_j$ is $\dot{\psi}$ evaluated at $(\psi_j, \theta_j)$. Equation~\eqref{eq:tpa} becomes singular at the points where $\dot{\psi}_j = -\dot{\psi}_{j - 1}$. Close to such a singularity, or when the $\psi,\theta$-grid is coarse, large or even non-monotonic time coordinates may result. When these events are identified,\footnote{``Large'' time steps are identified by FOX using the condition $|\dot{\psi}_j| > C|\psi_{j + 1} - \psi_j|/|t_{j + 1} - t_j|$, where the constant $C = 4$.} FOX successively eliminates points along the guiding center orbit until all time steps are small and monotonic. The $\phi_\mathrm{s}$ coordinates of the orbit are then solved simply by using the trapezoidal method on eq.~\eqref{eq:tph}.
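The following Python sketch illustrates eq.~\eqref{eq:tpa} together with the point-elimination rule of the footnote (using $C = 4$ as stated; the exact elimination strategy in FOX may differ in detail):
\begin{verbatim}
import numpy as np

def orbit_time_coordinates(psi, psi_dot, C=4.0, max_iter=100):
    """Time coordinates along an orbit from eq. (tpa), with successive
    elimination of points that yield anomalously large or non-monotonic
    time steps (criterion of the footnote, C = 4).

    psi, psi_dot : psi and d(psi)/dt sampled along the orbit, starting
                   at the outer equatorial crossing (t = 0).
    """
    psi, psi_dot = np.asarray(psi, float), np.asarray(psi_dot, float)
    for _ in range(max_iter):
        dt = 2.0 * np.diff(psi) / (psi_dot[1:] + psi_dot[:-1])
        t = np.concatenate(([0.0], np.cumsum(dt)))
        # A step is "large" if the local velocity greatly exceeds the
        # average velocity over the step, or if time runs backwards.
        bad = (np.abs(psi_dot[:-1]) * np.abs(dt)
               > C * np.abs(np.diff(psi))) | (dt <= 0.0)
        if not bad.any():
            return psi, psi_dot, t
        keep = np.ones(len(psi), dtype=bool)
        keep[np.where(bad)[0][0] + 1] = False    # drop one offending point
        psi, psi_dot = psi[keep], psi_dot[keep]
    raise RuntimeError("time coordinates did not become monotonic")
\end{verbatim}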
\begin{figure}[t]
\centering
\includegraphics[width=58mm]{Fig2.pdf}
\caption{Guiding center orbit in the poloidal plane solved by FOX in an ITER equilibrium, using a deuterium ion with $\mu = \SI{4}{MeV/T}$, $\Lambda = 0.85$ and $P_\phi = \SI{5}{eVs}$ (\SI{1}{eV} = \SI{1.6022e-19}{J}).}\label{fig:orb}
\end{figure}
The eigenfunctions of the wave field, $\Phi_{i,m}(\psi)$, and the corresponding mode frequencies $\omega_i$ are obtained using the analytical approximations presented in Ref.~\cite{can}. Future versions of FOXTAIL intend to use the MHD eigenmode analyzer code MISHKA~\cite{mi1,mi2,mi3,mi4} to solve the TAEs numerically. Besides ideal MHD effects, MISHKA also considers effects from finite ion Larmor radii, ion and electron drift, neoclassical ion viscosity and bootstrap current, indirect energetic ion effects and the collisionless skin effect. The interaction coefficients $V_{i,\ell}$ are calculated from eq.~\eqref{eq:vil} at each point of the $\bm{J}$-grid, also using the trapezoidal method.
The user specifies which mode--Fourier coefficient pairs are to be calculated by FOX. All bounce frequencies, precession frequencies, interaction coefficients, mode frequencies and toroidal mode numbers are then collected in a single output file. A separate output file is generated that contains all initial conditions used in a specific TAIL simulation, including the initial energetic particle distribution, flags on the mode--Fourier coefficient pairs to be active in the simulation, initial mode amplitudes, etc.
In the absence of collisions, the TAIL model equations of eq.~\eqref{eq:ta1} define a system of ordinary differential equations, which is solved numerically using the standard 4$^\mathrm{th}$ order Runge--Kutta method. When momentum diffusion is present, the model equations instead become a system of stochastic differential equations, which can be modeled numerically, e.g.\ by using an It\=o--Taylor numerical scheme~\cite{klo}.
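For reference, a single step of the classical 4$^\mathrm{th}$ order Runge--Kutta method, applied to a state vector $y$ that packs all marker coordinates and the real and imaginary parts of the mode amplitudes, reads (a generic sketch, not the TAIL source):
\begin{verbatim}
def rk4_step(f, t, y, dt):
    """One classical 4th order Runge--Kutta step for dy/dt = f(t, y),
    where y packs all marker coordinates and the real and imaginary
    parts of the mode amplitudes into a single array."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
\end{verbatim}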
\section{Numerical studies}\label{sec:num}
\subsection{Comparison with the 1D bump-on-tail model}\label{sec:com}
One of the possible applications of FOXTAIL is to determine whether a one-dimensional bump-on-tail approximation of the system is sufficient to capture the essential wave--particle dynamics, such as growth rate and saturation amplitude. A higher computational efficiency typically follows from the lower dimensionality of the approximation. However, the 1D bump-on-tail model is only valid within certain parameter regimes. Section~\ref{sec:com} presents a quantitative numerical study of these regimes.
\subsubsection{Theoretical comparison}
A simple bump-on-tail model, neglecting collisions, explicit wave damping and particle sources and sinks, can be written in the form~\cite{tho}
\begin{equation}\label{eq:bot}
\dr{\xi_k}{\tau} = u_k,~\dr{u_k}{\tau} = \re(\tilde{A}\mathrm{e}^{\mathrm{i}\xi_k}),~\dr{\tilde{A}}{\tau} = -\sum_k\mathrm{e}^{-\mathrm{i}\xi_k},
\end{equation}
where $\tau$ is the time coordinate, $\xi$ is the particle phase, $u$ is the particle momentum and $\tilde{A}$ is the complex eigenmode amplitude.
Now, consider the FOXTAIL model, only including a single mode $i$ and a single Fourier coefficient $\ell$. We define the particle phase as
\begin{equation}
\xi_k \equiv \ell\tilde{\theta}_k + n_i\tilde{\phi}_k - \omega_i t,
\end{equation}
and the relative wave--particle frequency as
\begin{equation}
\Omega(\bm{J}_k) \equiv \dot{\xi}_k = \ell\omega_\mathrm{B}(\bm{J}_k) + n_i\omega_\mathrm{p}(\bm{J}_k) - \omega_i.
\end{equation}
Next, we define a new momentum coordinate system $\bm{K} \equiv (\mu, S, W)$, where both $\mu$ and
\begin{equation}
S \equiv W - \frac{\omega_i}{n_i} P_\phi
\end{equation}
are constants of motion as particles move along the wave--particle characteristic curves of mode $i$.
Assuming that the energetic particle distribution is located in a neighborhood around the wave--particle resonance $\bm{K}_\mathrm{res}$ (such that $\Omega(\bm{K}_\mathrm{res}) = 0$) where $\Omega' \equiv \partial\Omega/\partial W$ and the interaction coefficient $V_{i,\ell}$ are approximately constant in $\bm{K}$, the coordinate substitution
\begin{equation}\label{eq:f2b}\left\{\begin{array}{*3{>{\displaystyle\vspace{1mm}}l}}
\tau \equiv t/\tilde{t}, & \tilde{A} \equiv \tilde{t}^2\Omega'(\bm{K}_\mathrm{res}) V_{i,\ell}(\bm{K}_\mathrm{res}) A_i, \\
u_k \equiv \tilde{t}\Omega(\bm{K}_k), & \tilde{t} \equiv \left(\Omega'|V_{i,\ell}|^2\right)_{\bm{K}_\mathrm{res}}^{-1/3},
\end{array}\right.\end{equation}
transforms the FOXTAIL model exactly to the 1D bump-on-tail model of eq.~\eqref{eq:bot}. Note that the substitutions of eq.~\eqref{eq:f2b} can be made for any FOXTAIL scenario, as long as one chooses a relevant eigenmode--Fourier coefficient pair and a resonant point $\bm{K}_\mathrm{res}$.
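The substitution of eq.~\eqref{eq:f2b} is straightforward to apply in post-processing. A minimal Python sketch (with assumed scalar inputs evaluated at the reference point) is:
\begin{verbatim}
import numpy as np

def to_bump_on_tail(t, A_i, Omega_k, Omega_prime_res, V_res):
    """Map FOXTAIL quantities to the 1D bump-on-tail variables of eq. (f2b).

    t               : time
    A_i             : complex amplitude of the chosen mode
    Omega_k         : relative wave--particle frequencies of the markers
    Omega_prime_res : d(Omega)/dW at the resonant reference point
    V_res           : interaction coefficient V_{i,l} at the reference point
    """
    t_tilde = (Omega_prime_res * np.abs(V_res) ** 2) ** (-1.0 / 3.0)
    tau = t / t_tilde
    A_tilde = t_tilde**2 * Omega_prime_res * V_res * A_i
    u = t_tilde * np.asarray(Omega_k)
    return tau, A_tilde, u, t_tilde
\end{verbatim}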
According to the theory of the 1D bump-on-tail model, the linear growth rate of the mode ($|\tilde{A}(\tau)| \approx \tilde{A}_0\mathrm{e}^{\gamma_\mathrm{L}\tau}$) is~\cite{tho}
\begin{equation}\label{eq:gam}
\gamma_\mathrm{L} = \frac{\pi}{2}\left.\dr{\tilde{F}_0}{u}\right|_{u = 0},
\end{equation}
where $\tilde{F}_0$ is the initial distribution function in $u$. The growth rate in time $t$ can be approximated by
\begin{equation}\label{eq:gaf}
\gamma_\mathrm{L} = \frac{\pi}{2}\frac{|V_{i,\ell}(\bm{K}_\mathrm{res})|^2}{\Omega'(\bm{K}_\mathrm{res})}\left.\dr{F_0}{W}\right|_{W_\mathrm{res}},
\end{equation}
where $F_0$ is the $W$ distribution of energetic particles along the characteristic curves of wave--particle interaction ($F_0(W) = \int \mathrm{d}\mu\,\mathrm{d} S\,f_0(\bm{K})$). In the 1D bump-on-tail model, particles deeply trapped by the wave field oscillate in $\xi,u$-space around the resonance with the frequency\footnote{The frequency of eq.~\eqref{eq:omb} is in real time, $t$. For the frequency in the normalized time, $\tau$, the expression should be multiplied by $\tilde{t}$.}
\begin{equation}\label{eq:omb}
\omega_\mathrm{b} \approx \frac{\sqrt{|\tilde{A}|}}{\tilde{t}} = \sqrt{\left|\Omega'(\bm{K}_\mathrm{res})V_{i,\ell}(\bm{K}_\mathrm{res}) A_i\right|}.
\end{equation}
By comparing the numerical growth rates and other system properties of the 1D bump-on-tail model and FOXTAIL, one can make estimations of the effects of having non-constant $\Omega'$ and interaction coefficients, and of having multiple modes and/or Fourier coefficients.
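The reference quantities used in such comparisons follow directly from eqs.~\eqref{eq:gaf} and \eqref{eq:omb}; a minimal sketch (assuming consistent units for the inputs) is:
\begin{verbatim}
import numpy as np

def linear_growth_rate(V_res, Omega_prime_res, dF0_dW_res):
    """Analytical linear growth rate in real time, eq. (gaf)."""
    return 0.5 * np.pi * np.abs(V_res) ** 2 / Omega_prime_res * dF0_dW_res

def deep_trapping_frequency(A_i, V_res, Omega_prime_res):
    """Bounce frequency of particles deeply trapped in the wave, eq. (omb)."""
    return np.sqrt(np.abs(Omega_prime_res * V_res * A_i))
\end{verbatim}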
\subsubsection{Simulation parameters}\label{sec:sim}
In this study, we consider an ITER equilibrium configuration with an energetic particle distribution consisting of $^3$He$^{2+}$ ions. For simplicity, we neglect collisions, explicit wave damping and particle sources and sinks. In such scenarios, the amplitudes of the modes are expected to grow exponentially until saturation at an amplitude proportional to the growth rate squared \cite{lev}, assuming that the modes interact more or less independently with the energetic particle distribution. Since the magnetic moment is not a dynamical variable in this system, the dimensionality of the problem can be reduced by considering an energetic particle distribution on a $\mu = \mathrm{const.}$ surface. An ad hoc energetic particle distribution function is constructed, with $\mu = \SI{0.5}{MeV/T}$ (\SI{1}{eV} = \SI{1.6022e-19}{J}). The distributions in $W$ and $S$ are chosen to be Gaussians localized near the two resonances defined by $(n_i, \ell) = (5, 1)$ and $(n_i, \ell) = (6, 1)$.
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig3.pdf}
\caption{The orbit categories, the bounce and the precession frequencies of trapped $^3$He$^{2+}$ ions on a $\Lambda,P_\phi$-grid with $\mu = \SI{0.5}{MeV/T}$. Category 0 orbits cross the last flux surface. For other categories, the convention of Ref.~\cite{hed} is used.}\label{fig:fbp}
\end{figure}
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig4.pdf}
\caption{The interaction coefficient $|V_{i,\ell}|$ plotted in $\bm{J}$-space, with $\mu = \SI{0.5}{MeV/T}$. The bold purple curves are the resonant curves $\ell\omega_\mathrm{B} + n_i\omega_\mathrm{p} = \omega_i$ for $\ell = 1$ and $i = 1$ (Fig.~\ref{fig:vil}.a), $i = 2$ (Fig.~\ref{fig:vil}.b). The thin black curves are the same resonant curves, but for $\ell = \{-2, -1, 0, 2\}$. The dashed curves are the wave--particle characteristic curves for the corresponding mode.}\label{fig:vil}
\end{figure}
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig5.pdf}
\caption{The energetic particle distributions of this study are placed close to the $\ell = 1$ resonant curves. Figure~\ref{fig:dis}.a shows the 3$\sigma$ curves for the Gaussian distributions of the energetic particles (97\,\% of the particles are contained within the 3$\sigma$ curves). The solid black curve is the ``narrow'' initial distribution, and the dashed black curve is the ``wide'' initial distribution. The solid purple curve is the $(i, \ell) = (1, 1)$ resonance, and the dashed purple curve is the $(i, \ell) = (2, 1)$ resonance. The reference point (blue dot) used for transforming to the 1D bump-on-tail coordinate system is placed on the $(1, 1)$ resonance. Both initial distributions have the same corresponding 1D bump-on-tail distribution, shown in Fig.~\ref{fig:dis}.b, where $N_\mathrm{EP}$ is the total number of energetic particles. The Gaussian bump-on-tail distribution is centered at $W - W_\mathrm{res} = \SI{3}{keV}$, and it has a width of $\sigma = \SI{3}{keV}$.}\label{fig:dis}
\end{figure}
The chosen $\bm{J}$-grid to calculate the guiding center orbits is in the region of trapped particles (category V- and VII-orbits in Fig.~\ref{fig:fbp}.a) on the $\mu = \SI{0.5}{MeV/T}$ surface. The corresponding bounce and precession frequencies, presented in Figs.~\ref{fig:fbp}.b and \ref{fig:fbp}.c, respectively, are found from the time dependence of the guiding center orbits, which are solved using eqs.~\eqref{eq:tps} and \eqref{eq:tph}.
Two TAEs are chosen for this study: the first mode with a toroidal mode number $n = 5$, and the second one with $n = 6$. The energetic particle distribution is placed close to the resonance $(i, \ell) = (1, 1)$, i.e., the surface defined by $\omega_\mathrm{B} + 5\omega_\mathrm{p} = \omega_1$, where $\omega_1$ is the frequency of the first TAE. Figure~\ref{fig:vil} shows the calculated interaction coefficients $V_{i,\ell}(\bm{J})$ for $(i, \ell) = (1, 1)$, Fig.~\ref{fig:vil}.a, and $(i, \ell) = (2, 1)$, Fig.~\ref{fig:vil}.b. The frequencies of the two modes are \SI{38.2}{kHz} and \SI{40.2}{kHz}, respectively. Since the precession frequency is small compared to the bounce frequency at the $(1, 1)$ resonance, the $(1, 1)$ and the $(2, 1)$ resonances are almost the same in $\Lambda,P_\phi$-space.
A reference point is chosen on the $(1, 1)$ resonance, as shown in Fig.~\ref{fig:dis}.a. Around this point, the bounce and precession frequencies are approximated to first order in $W,P_\phi$-space, and $V_{1,1}$ is approximated to zeroth order, when transforming from the FOXTAIL to the 1D bump-on-tail coordinate system. Two different initial distribution functions are used in these studies: one localized around the reference point and one that is more spread along the resonance. The two distributions transform to the same distribution in the 1D bump-on-tail model, shown in Fig.~\ref{fig:dis}.b. The initial distributions are chosen such that there is a positive derivative of the energy distribution at the resonance, giving a positive linear growth rate according to eq.~\eqref{eq:gam}.
An energetic particle distribution consisting of $2.5\cdot 10^{16}$ $^3$He$^{2+}$ ions is distributed over $2.5\cdot 10^5$ markers. The markers are spread out in phase space using quasi-random low-discrepancy sequences (a Sobol' sequence~\cite{sob} combined with the Matou\v{s}ek scrambling method~\cite{mat}). They are placed as a Gaussian in $W$-space \emph{centered} around the resonance, and then the marker weights are set such that they represent the \emph{shifted} Gaussian as in Fig.~\ref{fig:dis}.b. This is done in order to improve the statistics of markers around the resonance as compared to a scenario with equal weights.
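A one-dimensional sketch of this marker placement and reweighting is given below (using SciPy's Sobol' generator as an assumed stand-in for the quasi-random sequence described above; the actual FOXTAIL set-up also spreads the markers along the resonance and uses Matou\v{s}ek scrambling):
\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

def place_markers(n_markers, W_res, sigma_W, shift, N_EP, seed=0):
    """Quasi-randomly place markers in W along a characteristic curve and
    reweight them to represent a Gaussian shifted from the resonance.

    Markers are sampled from N(W_res, sigma_W); weights are proportional
    to the ratio between the target (shifted) and the sampling Gaussians,
    normalized so that the weights sum to the physical particle count."""
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random(n_markers).ravel()
    u = np.clip(u, 1e-12, 1.0 - 1e-12)        # keep ppf away from 0 and 1
    W = W_res + sigma_W * norm.ppf(u)
    w = norm.pdf(W, W_res + shift, sigma_W) / norm.pdf(W, W_res, sigma_W)
    return W, w * N_EP / w.sum()
\end{verbatim}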
\begin{table*}\centering
\begin{tabular}{cccccccc}
\toprule
Case & Eigenmodes & Fourier coefficients & $\sigma_S$ [keV] & $\sigma_W$ [keV] & $V_{2,1}$ factor & $\Delta\omega/2\pi$ [kHz] & $N_\mathrm{part}$ [$10^{16}$] \\
\midrule
\#1 & $i = \{1,2\}$ & $\ell = \{-2, \ldots, 2\}$ & 5.7 & 3.0 & 1 & 2.0 & 2.5 \\
\#2 & $i = 1$ & $\ell = 1$ & 5.7 & 3.0 & -- & -- & 2.5 \\
\#3 & $i = \{1,2\}$ & $\ell = \{-2, \ldots, 2\}$ & 22.7 & 3.0 & 1 & 2.0 & 2.5 \\
\#4 & $i = 1$ & $\ell = 1$ & 22.7 & 3.0 & -- & -- & 2.5 \\
\#5 & $i = \{1, 2\}$ & $\ell = 1$ & 5.7 & 3.0 & 10.1 & 2.0 & 2.5 \\
\#6 & $i = 2$ & $\ell = 1$ & 5.7 & 3.0 & 10.1 & -- & 2.5 \\
\#7 & $i = 1$ & $\ell = 1$ & 5.7 & 4.0 & -- & -- & 3.0 \\
\#8 & $i = \{1, 2\}$ & $\ell = 1$ & 5.7 & 4.0 & 13.2 & 6.1 & 3.0 \\
\#9 & $i = 2$ & $\ell = 1$ & 5.7 & 4.0 & 13.2 & -- & 3.0 \\\bottomrule
\end{tabular}\caption{Summary of the initial parameters used in the FOXTAIL simulations presented in Figs.~\ref{fig:bot} -- \ref{fig:na2}, where $\sigma_S$ and $\sigma_W$ are the energy widths of the Gaussian energetic particle distribution along the resonance curve and along the characteristic curves for wave--particle interaction, respectively, $\Delta\omega$ is the frequency separation between the two modes and $N_\mathrm{part}$ is the amount of energetic particles that the markers represent.}\label{tab:par}
\end{table*}
\subsubsection{Numerical comparison with the 1D bump-on-tail model}\label{sec:bot}
Here, different FOXTAIL scenarios are compared with the corresponding 1D bump-on-tail scenario by varying initial parameters such as the width of the initial energetic particle distribution along the resonances and the number of eigenmodes and Fourier coefficients to include in the simulations. Besides the 1D scenario, four FOXTAIL scenarios are presented. In the first two scenarios (referred to as cases \#1 and \#2), a narrow initial distribution is used (see Fig.~\ref{fig:dis}.a), whereas the two latter scenarios (cases \#3 and \#4) use the wide initial distribution. Furthermore, cases \#1 and \#3 include both of the TAEs presented in section~\ref{sec:sim} and a range of Fourier coefficients $-2 \leq \ell \leq 2$ for both eigenmodes. Cases \#2 and \#4 only include the first eigenmode ($n = 5$) and a single Fourier coefficient $\ell = 1$. For a complete list of initial parameter values used in the FOXTAIL simulations presented in this paper, see Table~\ref{tab:par}.
Figure~\ref{fig:bot} shows the amplitude evolutions of the first eigenmode for the bump-on-tail simulation and FOXTAIL simulations \#1 -- 4. The amplitude of the second mode never grows larger than $\approx$ 0.8\,\% of the amplitude of the first mode after saturation in case \#1, and up to 5\,\% in case \#3. It can immediately be seen that the two FOXTAIL scenarios with a narrow initial distribution (cases \#1 and \#2) agree well with the corresponding 1D bump-on-tail scenario, both in growth rate and in saturation amplitude. On the other hand, the wider distribution (cases \#3 and \#4) gives a different growth rate and saturation level of the amplitude. This is presumably due to the fact that the wide distribution spans over regions where the interaction coefficient $V_{1,1}$ is considerably lower than at the reference point, giving a lower growth rate on average.
Including multiple eigenmodes and Fourier coefficients in the simulations seems to have negligible effects on the system in the considered scenarios, both for the narrow and the wide initial distributions. The second mode does not influence the system considerably because its expected growth rate is approximately 100 times lower than that of the first mode, which is due to the comparably lower values of the interaction coefficient $V_{2,1}$ in the region of the initial distribution function (recall that the growth rate scales as $|V_{i,\ell}|^2$, see eq.~\eqref{eq:gaf}).
When comparing growth rates of the different scenarios, it should be noted that $\gamma_\mathrm{L}$ of the 1D bump-on-tail scenario is approximately 81\,\% of the analytical growth rate of eq.~\eqref{eq:gam}, whereas the growth rates of cases \#1 and \#2 are 77\,\% of the analytical growth rate, and 68\,\% for cases \#3 and \#4. The growth rate of the 1D bump-on-tail scenario is considerably lower than the analytical one primarily because of the finite extension of the energetic particle distribution along the characteristic curves (this issue was analyzed in more detail in Ref.~\cite{tho}).
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig6.pdf}
\caption{Amplitude evolution of the first mode $(n = 5)$. $\omega_\mathrm{b} \propto \sqrt{|A_i|}$ is the bounce frequency of particles deeply trapped by the wave field, and $\gamma_\mathrm{L}$ is the analytical linear growth rate of the mode, calculated from the value of $|V_{i,\ell}|$ and $\partial\Omega/\partial W$ in the reference point. The solid black curve uses the 1D bump-on-tail model, which is analogous to FOXTAIL using a single mode--Fourier coefficient pair (in this case $(i, \ell) = (1, 1)$) and constant $V_{i,\ell}$ and $\Omega'$ in $\Lambda,P_\phi$-space. Cases \#1 and \#2 use a narrow initial energetic distribution function along the $(i, \ell) = (1, 1)$ resonance curve in $\Lambda,P_\phi$-space, whereas cases \#3 and \#4 use a wide initial distribution. Cases \#1 and \#3 include both TAEs and Fourier coefficients $-2 \leq \ell \leq 2$, whereas cases \#2 and \#4 only include the $(i, \ell) = (1, 1)$ mode--Fourier coefficient pair. See Table~\ref{tab:par} for a list of parameter values used in all FOXTAIL simulations.}\label{fig:bot}
\end{figure}
\subsection{Numerical multi-mode studies}
The presence of multiple eigenmodes proved to have negligible effect on the system in the scenarios presented in section~\ref{sec:bot} due to the low interaction coefficient of the second mode in the considered part of $\Lambda,P_\phi$-space, compared to the interaction coefficient of the first mode. Scenarios with significant multimode dynamics can be constructed by adding an ad hoc scaling factor to the interaction coefficient of the second mode. Multiplying $V_{2,1}$ by a factor of 10.1 gives approximately the same linear growth rate of the two modes. This has been done for cases \#5 and \#6, presented in Fig.~\ref{fig:nar}, along with the previous case \#2. The three scenarios are the same, except that in cases \#2 and \#6 the eigenmodes are simulated individually, whereas in case \#5 both modes are included to the system. Such a comparison allows one to isolate the nonlinear effects of mode interaction via the energetic particle distribution.
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig7.pdf}
\caption{Figure~\ref{fig:nar}.a shows the mode amplitude evolution for a set of FOXTAIL simulations using the narrow initial distribution function. Case \#4 includes both TAEs, and the Fourier coefficient $\ell = 1$ for each mode. Only the evolution of the first mode is presented. Case \#5 is the same as \#4, but the interaction coefficient $V_{2,1}$ is scaled up by a factor of 10.1, such that the linear growth rates of the two modes approximately match. Case \#6 is the same as \#5, but the first mode is deactivated in the simulation. See Table~\ref{tab:par} for a list of parameter values used in all FOXTAIL simulations. Figure~\ref{fig:nar}.b shows the corresponding 1D bump-on-tail distribution of the above simulations (the final distribution is at $t = \SI{10}{ms}$). Figure \ref{fig:nar}.c tests the Chirikov criterion for case \#5 by dividing the average resonance width in $W$-space by the average energy separation between the resonances along the two characteristic curves.}\label{fig:nar}
\end{figure}
\begin{figure}[t!]\centering
\includegraphics[width=88mm]{Fig8.pdf}
\caption{The same as Fig.~\ref{fig:nar}, but for slightly different scenarios. The frequency separation between the two modes is increased by a factor of 3, the width of the Gaussian energetic particle distribution along the characteristic curve of the first mode is increased from $\sigma = \SI{3}{keV}$ to $\sigma = \SI{4}{keV}$ and the number of particles is increased from $2.5\times 10^{16}$ to $3.0\times 10^{16}$. The scaling factor of the interaction coefficient $V_{2,1}$ is increased from 10.1 to 13.2 in order to match the linear growth rates of the two modes. The amplitude evolutions of the second mode ($A_2$) in Fig.~\ref{fig:na2}.a are smoothed in order to remove high frequency amplitude oscillations coming from the interactions with off-resonant particles and from statistical fluctuations. $\Delta F(W)$ of Fig.~\ref{fig:na2}.b is the difference between the final and the initial corresponding bump-on-tail distributions.}\label{fig:na2}
\end{figure}
Comparing the multimode scenario with the scenarios where the modes are included individually, it can be seen that the indirect interaction between the modes via the energetic particles has significant macroscopic effects on the system. This can partly be understood as a consequence of stochastization of particle trajectories in phase space due to resonance overlap of the two eigenmodes. Stochastization of trajectories causes a locally enhanced transport of energetic particles around the resonances, allowing the eigenmodes to exhaust more energy from the energetic particle distribution (see e.g.\ stochastization from resonance overlap, \cite{bb4,bb5}, and from phase decorrelation, \cite{tho}). This results in a wider portion of the energetic particle distribution being flattened around the resonances, compared to the individual eigenmode cases, as seen in Fig.~\ref{fig:nar}.b.
The resonance width of an eigenmode can be estimated as the separatrix width of the unperturbed mode (i.e.\ in the absence of other modes) along the characteristic curve in $W$-space, using the 1D bump-on-tail approximation. When the resonance widths of the two modes are comparable, the resonance-overlap parameter~\cite{chi} (English translation: Ref.~\cite{ch2}), estimated as the average resonance width of the two modes divided by their distance in phase space, is an approximate measure of the level of stochastization of particle trajectories. The Chirikov criterion for stochastization is satisfied when the resonance-overlap parameter is larger than unity. The full separatrix width in $W$-space, $W_\mathrm{sep}$, is $4\omega_\mathrm{b}/\Omega'(\bm{K}_\mathrm{res})$, with $\omega_\mathrm{b}$ given by eq.~\eqref{eq:omb}. As seen in Fig.~\ref{fig:nar}.c, the Chirikov criterion is well satisfied for case \#5 after $t \gtrsim \SI{1.3}{ms}$.
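The overlap parameter used in Figs.~\ref{fig:nar}.c and \ref{fig:na2}.c can thus be estimated as in the following sketch (assuming both resonances lie on a common characteristic curve and that the inputs are evaluated at the respective resonances):
\begin{verbatim}
import numpy as np

def chirikov_overlap(A1, V1, Op1, A2, V2, Op2, dW_res):
    """Resonance-overlap (Chirikov) parameter of two modes on a common
    characteristic curve: mean full separatrix width in W divided by the
    energy separation dW_res between the two resonances."""
    def w_sep(A, V, Op):
        omega_b = np.sqrt(np.abs(Op * V * A))   # eq. (omb)
        return 4.0 * omega_b / Op               # full separatrix width in W
    return 0.5 * (w_sep(A1, V1, Op1) + w_sep(A2, V2, Op2)) / abs(dW_res)
\end{verbatim}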
Slightly different scenarios are tested in the simulations presented in Fig.~\ref{fig:na2}. Cases \#7, \#8 and \#9 are the same as cases \#2, \#5 and \#6, respectively, except that the frequency separation between the two eigenmodes is artificially increased by a factor of 3, and the width of the energetic distribution function along the characteristic curves of the first mode is increased to $\sigma = \SI{4}{keV}$ instead of \SI{3}{keV}. The scaling factor of the interaction coefficient is also adjusted such that the linear growth rates of the two modes approximately match. As can be seen in Fig.~\ref{fig:na2}.c, the Chirikov criterion is never satisfied for case \#8, although the resonance-overlap parameter is of the order of unity. Comparing the amplitude evolutions in Figs.~\ref{fig:nar}.a and \ref{fig:na2}.a, the two modes of case \#8 evolve more similarly to the corresponding individual mode scenarios than case \#5 does. This is especially seen in Fig.~\ref{fig:na2}.b, where the modes of case \#8 flatten two separate regions of the energetic distribution function, matching the flattening regions of the individual mode scenarios.
\section{Summary}\label{sec:con}
This paper presents the theoretical framework of the FOXTAIL code, which is used to describe the nonlinear interaction between Alfvén eigenmodes and energetic particles in toroidal geometries. FOXTAIL is a hybrid magnetohydrodynamic--kinetic model based on a model developed by Berk \emph{et al.}~\cite{bb1}, where each simulation is formulated as an initial value problem. Eigenmodes are treated as perturbations of the equilibrium system, with temporally constant eigenfunctions and dynamic complex amplitudes that vary on time scales longer than the inverse mode frequency. The energetic particle distribution is modeled by a finite set of markers in an action--angle phase space of the unperturbed system. The use of action--angle coordinates rather than conventional toroidal coordinates simplifies the equations of motion of the individual markers, and it allows for efficient resolution of time scales relevant for resonant eigenmode--particle interaction in numerical simulations.
The particle response with respect to the wave field is quantified by a Fourier series expansion of the kinetic energy change $q \bm{v}\cdot\delta\bm{E}$ along the transit period of the unperturbed guiding center orbit, where $q$ is the particle charge, $\bm{v}$ is the guiding center velocity and $\delta\bm{E}$ is the electric wave field at the guiding center position. A Lagrangian formulation of the wave--particle system, consistent with the derived particle response with respect to the wave, is used to derive equations for the eigenmode amplitudes and phases. The resulting system of equations describing direct wave--particle interaction has a phase space with four particle dimensions and two eigenmode dimensions (amplitude and phase). When including mechanisms that perturb the magnetic moment of energetic particles, the particle phase space extends to five dimensions.
When splitting the interaction in the Fourier terms along the transit period, each term contributes to resonant nonlinear interaction mainly in a narrow region around surfaces in the adiabatic invariant space, referred to as resonant surfaces. These surfaces are given by $\ell\omega_\mathrm{B} + n_\phi\omega_\mathrm{p} = \omega_\mathrm{mode}$, where $\ell$ is the Fourier index of interaction, $\omega_\mathrm{B}$ is the bounce frequency, $n_\phi$ is the toroidal mode number of the eigenmode, $\omega_\mathrm{p}$ is the precession frequency and $\omega_\mathrm{mode}$ is the eigenmode frequency. The width of the relevant region around the resonant surfaces depends on the amplitude of the eigenmode, the strength of wave--particle interaction at the resonant surfaces (quantified by the Fourier coefficients of interaction) and the variation of bounce and precession frequencies of particles along the characteristic curves of wave--particle interaction (the curves in adiabatic invariant space along which a given eigenmode accelerates particles).
The presented multi-dimensional model can be approximated with a conventional 1D bump-on-tail model. For the 1D approximation to be valid, three approximate criteria must be met:
\begin{itemize}
\item No more than one eigenmode--Fourier coefficient pair interacts significantly with the energetic particle distribution.
\item The complex Fourier coefficient of interaction is approximately constant in adiabatic invariant space throughout the resonant part of the energetic particle distribution.
\item The bounce and precession frequencies of the energetic particles vary approximately linearly in kinetic energy--toroidal canonical momentum space across the region of the resonance where the energetic particle distribution is located.
\end{itemize}
All these conditions can be quantitatively evaluated with FOXTAIL.
Effects of the fulfillment of the Chirikov criterion in scenarios with two active eigenmodes have been studied numerically using FOXTAIL. It has been verified that eigenmodes can be treated independently in scenarios where the criterion is not satisfied. On the other hand, when the resonance-overlap parameter becomes larger than unity, indirect mode--mode interaction via the energetic particle distribution becomes significant, and a larger portion of the inverted energetic particle distribution becomes flattened in energy space due to stochastization of particle trajectories in phase space.
\section*{Acknowledgments}
This work was supported by the Swedish research council (VR) contract 621-2011-5387.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:introduction}
As the production of data is expanding at an astonishing rate and the era of big data is coming, organizing data by assigning items to groups is inevitable. Data clustering algorithms try to find clusters of objects in such a way that the objects in the same cluster are more similar to each other than to those in other clusters. Nowadays, clustering algorithms are widely used in data mining tasks.\par
There are different categorizations of clustering algorithms; e.g., these algorithms can be categorized into density-based, centroid-based and distribution-based methods. In centroid-based clustering methods, each cluster is represented by a central object. This object, which can be a member of the dataset, denotes a prototype of the whole cluster. When these algorithms are asked to find \textit{K} clusters, they usually find \textit{K} central objects and assign each element to the nearest centroid. As they proceed, they attempt to decrease the energy and total error of the clusters by finding better central elements. \textit{K-medoids} and \textit{K-means} are the two most popular centroid-based algorithms. Although they both partition the data into groups such that the sum of the squared distances of the data points to the nearest center is minimized, they make different assumptions about the centroids. Indeed, the k-medoids algorithm chooses the centroids only from the data points, so these centroids are members of the dataset, while the k-means algorithm can select the centroids from the whole input space. \par
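As background, a minimal (non-active) k-medoids iteration on a precomputed distance matrix can be sketched as follows; this is only the standard baseline outlined above, not the active algorithm proposed in this paper, and the implementation details are our own illustration:
\begin{verbatim}
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Alternating (Voronoi-iteration) k-medoids on a precomputed n-by-n
    distance matrix D: assign each point to its nearest medoid, then set
    each medoid to the cluster member with the smallest total distance."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
\end{verbatim}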
In~\cite{albez2013kmd}, some usages and applications of the k-medoids algorithm are discussed. According to this study, in resource allocation problems, when a company wants to open some branches in a city such that the average distance from each residential block to the closest branch is minimized, the k-medoids algorithm is a proper option. Additionally, in the mobile computing context, devices may need to choose super-nodes among each other that have minimum average distance to all devices in order to save communication cost, and k-medoids can solve this problem. Furthermore, as reported by~\cite{albez2013kmd}, medoid queries also arise in sensor networks and many other fields. \par
Active learning is a machine learning field that has received special attention in the last decade. Until now, many active methods for supervised learning that intend to select more informative samples to be labeled have been proposed. Active unsupervised methods have also received attention recently. In an unsupervised learning setting, finding the similarities or distances of samples from each other may be difficult or infeasible. For example, the sequence similarity of proteins \cite{konstantin2012} or the similarity of face images \cite{arijit2014}, which must be obtained from a human acting as an oracle, may be costly to answer. Active versions of some of the well-known clustering algorithms have recently been presented in \cite{Mai2013actdbscan,wang2010}. \par
In this paper, we propose an active k-medoids algorithm that inquires a subset of pairwise distances to find the clustering of data. We use a bottom-up approach to find a more informative subset of the distances to be inquired. Extensive experiments on several datasets show that our algorithm usually needs only a small percentage of the pairwise distances to cluster data properly.\par
In the rest of this paper, we first discuss the work that has been done in the field of active clustering in Section \ref{sec:related}. In Section \ref{sec:method}, we introduce our algorithm. The results of experiments on different datasets are presented in Section \ref{sec:experiments}. Finally, we discuss some aspects of our algorithm and conclude the paper in Section \ref{sec:conclusion}.
\section{Related Work}
\label{sec:related}
Active learning is a machine learning paradigm that endeavors to learn by asking the labels of a small number of samples which are more important for the final result of learning. Indeed, most supervised learning algorithms need a large number of labeled samples, and gathering these labeled samples may require an unreasonable amount of time and effort. Thus, active learning tries to ask labels for more important samples, where important samples may be interpreted as the most informative ones, the most uncertain ones, or the ones that have a large effect on the results \cite{settles2010active}.
The active clustering problem has recently received much attention. Until now, active versions of some well-known clustering methods have been proposed in \cite{Mai2013actdbscan,wang2010}.
In the active clustering problem, a query is a pair of data points whose
similarity must be determined. The purpose of the active learning approach is to reduce the number of required queries via active selection of them
instead of random selection \cite{tong2001}. \par
The existing active clustering methods can be categorized into constraint-based and distance-based ones \cite{viet-vu2012}.
In most of the constraint-based methods, the queries are must-link and cannot-link constraints on pairs of data points, indicating whether the pair must be in the same cluster or in different clusters.
Some constraint-based methods for active clustering have been proposed in \cite{viet-vu2012,xiong2014,grira2008,wang2010,arijit2014,basu2004,wagstaff2001}.
In distance-based methods, the response to a query on a pair of data points is the distance
of that pair according to an objective function.
Distance-based methods for active clustering have been recently attended in \cite{Mai2013actdbscan,eriksson2011,krishnamurthy2012,Shamir2011,Wauthier2012,konstantin2012}. \par
In \cite{Mai2013actdbscan}, an algorithm for active DBSCAN clustering is presented. In this algorithm, the distances that have not been queried are estimated with a lower bound. A score indicating the amount and the probability of changes in the estimated distances caused by asking a query is used to select queries. Moreover, an updating technique is introduced in \cite{Mai2013actdbscan} that updates the clustering after each query. \par
In \cite{Shamir2011,Wauthier2012}, distance-based algorithms are presented for active spectral clustering in which a perturbation theory approach is used to select queries.
A constraint-based algorithm has also been presented in \cite{wang2010} for active spectral clustering that uses an approach based on maximum expected error reduction to select queries. \par
An active clustering method for k-median clustering has also been proposed in \cite{konstantin2012}. This method selects some points as landmarks and asks the distances between these landmarks and all the other data points as queries. Finally, k-median clustering is done using these distances.
\section{Proposed Method}
\label{sec:method}
\newcommand{\ABS}[1]{\left | #1 \right |}
In this section, we introduce the proposed active k-medoids clustering. We assume that our algorithm intends to partition $n$ samples into $k$ different clusters. As mentioned above, many clustering algorithms such as K-medoids, PAM \cite{Kaufman1990PAM}, and some other distance-based methods, first calculate an \(n\times n\) distance matrix and perform the clustering algorithm on this matrix. We denote the distance matrix by $D$, where
$d_{ij}$ denotes the distance between the $i$th sample and the $j$th one. \par
We introduce a method to estimate unknown distances during an active clustering process. In a metric space, a convenient upper-bound estimate for any distance metric can be obtained from the triangle inequality.
For example, when we know the exact distances $d_{ax}$, $d_{xy}$ and $d_{yb}$, we can determine an upper-bound estimate for $d_{ab}$ as:
\begin{equation}
d_{ab} \leqslant d_{ax} + d_{xy} + d_{yb}
\end{equation}
We find an upper-bound estimate of the distances using the triangle inequality and the known distances already asked from an oracle.
Therefore, we have
\begin{equation}
\forall i,j, 1\leqslant i,j \leqslant n : D(i,j)\leqslant D_{e}(i,j)
\end{equation} \par
where $D_{e}$ shows the estimated distances.
Initially, the upper-bound estimates of all distances are infinity. We update them by asking some of the distances and deriving better estimates for the remaining unknown distances using the triangle inequality and the new distances obtained from the oracle. The update is done by substituting the exact values for the asked distances and by taking the minimum of the old and the new upper-bound estimates for the unknown distances. By asking the landmark distances, we intend to obtain a better estimate of the distances required for the k-medoids algorithm.
Consider some data points which are partitioned into $m$ groups, where the distances within each group are known or estimated but the distances between data points from different groups are unknown. Our goal is to estimate these unknown distances instead of asking them. In such a situation, we can choose $t$ points from each group, ask the distances between these $mt$ points across different groups, and estimate the other distances using the asked ones. The number of these distances is ${m \choose 2}t^{2}$.
Figure \ref{fig:estimation} gives an intuition about this distance estimation method.
We want to estimate the distance between $a$ and $b$; the distances between the points that are connected by solid lines and dotted lines are known.
\begin{figure}[ht!]
\includegraphics[width=120mm]{./estimation.jpg}
\caption{Upper-bound distance estimation between $a,b$. Superscript $e$ determines that the distance is an estimation.\label{fig:estimation}}
\end{figure}
Such an estimation can be done in $\BigO{n^2t^2}$, where $n$ is the number of data points. If $t \ll n$, the time complexity of the algorithm will be $\BigO{n^2}$.
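A minimal NumPy sketch of this estimation step is given below. It assumes a symmetric matrix \texttt{De} holding exact or upper-bound distances (infinite where nothing is known yet), and that the distances between the chosen points of the two groups have already been queried; all names are illustrative and not part of any existing library.
\begin{verbatim}
import numpy as np

def estimate_cross_group(De, group_a, group_b, chosen_a, chosen_b):
    """Tighten upper bounds on distances between two groups.

    De        : (n, n) symmetric matrix of exact/upper-bound distances
    group_a/b : index lists of the two groups
    chosen_a/b: the t chosen points of each group; De[x, y] must be exact
                for x in chosen_a, y in chosen_b.
    Applies d_ab <= d_ax + d_xy + d_yb over all chosen pairs (x, y)."""
    for a in group_a:
        for b in group_b:
            best = De[a, b]
            for x in chosen_a:
                for y in chosen_b:
                    bound = De[a, x] + De[x, y] + De[y, b]
                    if bound < best:
                        best = bound
            De[a, b] = De[b, a] = best
    return De
\end{verbatim}
The four nested loops make the $\BigO{n^2t^2}$ cost of the estimation explicit.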
Based on the estimation method used for unknown distances, we present an active k-medoids algorithm.
The approach is based on partitioning the data points into groups in a hierarchical manner.
In other words, we partition the data points into $b$ groups, partition each of these groups into $b$ groups, and so on, until we reach a threshold $t_h$ on the number of data points in each partition.
At this level, we ask all the pairwise distances within each group, choose $t$ points from each group (whose distances are then asked), and apply the estimation algorithm explained above in a bottom-up manner until we reach the highest level.
After that, we cluster the data using the k-medoids algorithm on the estimated distances.
According to this explanation, choosing the $t$ points in each group is a critical step, and choosing a bad point can lead to unfavourable estimates.
For this purpose, consider a group of data points $G$ whose inner distances are known or estimated.
In order to choose the $t$ points, we perform the k-medoids algorithm on $G$ and find clusters and medoids for this group.
Then, we choose the medoids and $s$ random points from each cluster as the points whose distances need to be asked.
Therefore, the number of the chosen data points in $G$ will be $t = k(s+1)$.
It is obvious that a greater $s$ will lead to more accurate estimations. \par
Algorithm \ref{alg:red} presents the pseudo code of the proposed method.
This function clusters $n$ data points into $k$ different categories and returns the clusters of data.
Here, $b$ denotes the branching factor, which is used to partition the data points into $b$ groups of the same size.
The partitioning algorithm is performed recursively for each group.
There is also a threshold $t_h$ which specifies the minimum size of a group of data points. Clearly, $t_h \geq k(s+1)$ since we need to choose at least $k(s+1)$ points in each group. It is also noteworthy that if $n \leq 2t_h$, we need to ask all distance pairs since these data points cannot be partitioned. \par
\begin{algorithm}
\caption{Active k-medoids}\label{alg:red}
\begin{algorithmic}[1]
\INPUT $D_e,n,k,b,t_h$ \Comment{distance estimation, \#data, \#clusters, branch factor, threshold}
\OUTPUT $C_1,\ldots,C_k$
\Procedure{ActiveKmedoids}{$D_e,n,k,b,t_h$}
\If{ $n \leq 2t_h$ }
\State Update $D_e$ by querying all distances
\State $C_1,\ldots,C_k \gets kmedoids(D_e,k)$ \Comment{kmedoids function is a regular k-medoids}
\State \textbf{return}
\EndIf
\State Partition data to $b$ equal size groups, like $G_1,\ldots,G_b$
\For{\texttt{$i$ from $1$ to $b$}}
\State $T_1,\ldots,T_k \gets ActiveKmedoids(D_e(G_i),|G_i|,k,b,t_h)$ \Comment{$D_e(G_i)$ is the part of the estimated distance matrix corresponding to $G_i$}
\State $G^c_i \gets$ medoids of $T_1,\ldots,T_k$ and $s$ random points from each of them.
\EndFor
\State Update $D_e$ by querying distances between all those pairs that one of them is in $G^c_i$ and the other is in $G^c_j$.
\State Update $D_e$ by the triangle inequality and new inquired distances.
\State $C_1,\ldots,C_k \gets kmedoids(D_e,k)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
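The following Python sketch mirrors Algorithm~\ref{alg:red}. It is an illustration only: it reuses \texttt{estimate\_cross\_group} from the previous sketch, uses a plain k-medoids (Voronoi iteration) on a distance submatrix, splits the groups by index order as in Figure~\ref{fig:flowTree}, and represents the oracle by a callback \texttt{oracle(i, j)}; all function and variable names are our own assumptions, not an existing API.
\begin{verbatim}
import numpy as np

def kmedoids(D, k, iters=100, seed=0):
    """Plain k-medoids (Voronoi iteration) on a full distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return np.argmin(D[:, medoids], axis=1), medoids

def active_kmedoids(oracle, idx, De, k, b, th, s=1, rng=None):
    """Cluster the points listed in idx; De (upper-bound estimates) is updated in place."""
    rng = np.random.default_rng(0) if rng is None else rng
    idx = np.asarray(idx)
    if len(idx) <= 2 * th:                      # leaf: query all pairs
        for p, i in enumerate(idx):
            for j in idx[p + 1:]:
                De[i, j] = De[j, i] = oracle(i, j)
        return kmedoids(De[np.ix_(idx, idx)], k)[0]
    groups = np.array_split(idx, b)
    chosen = []
    for g in groups:                            # recurse, then keep medoids + s random points
        labels = active_kmedoids(oracle, g, De, k, b, th, s, rng)
        pts = []
        for c in range(k):
            members = g[labels == c]
            if members.size == 0:
                continue
            sub = De[np.ix_(members, members)]
            pts.append(members[np.argmin(sub.sum(axis=1))])          # cluster medoid
            pts.extend(rng.choice(members, size=min(s, members.size), replace=False))
        chosen.append(np.unique(pts))
    for gi in range(b):                         # query chosen-to-chosen distances across groups
        for gj in range(gi + 1, b):
            for i in chosen[gi]:
                for j in chosen[gj]:
                    De[i, j] = De[j, i] = oracle(i, j)
            estimate_cross_group(De, groups[gi], groups[gj], chosen[gi], chosen[gj])
    return kmedoids(De[np.ix_(idx, idx)], k)[0]
\end{verbatim}
A call could look like the following: initialize \texttt{De} to infinity with a zero diagonal, then run \texttt{active\_kmedoids(lambda i, j: np.linalg.norm(X[i] - X[j]), np.arange(n), De, k, b=2, th=k*(s+1), s=s)}.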
Figure~\ref{fig:flowTree} shows an example workflow for Algorithm~\ref{alg:red} for $1600$ data points with branching factor $2$ and threshold $400$. \par
\begin{figure}[h]
\tikzset{edge from parent/.style={draw, edge from parent path=
{(\tikzparentnode) -- (\tikzchildnode)}}
,level distance={1.5in},sibling distance={.2in}}
\begin{tikzpicture}[every tree node/.style={draw,rectangle,minimum width=1.1in,
minimum height=.65in,align=center},scale=0.9]
\Tree [.\node (7) {$n=1600$\\$data(1:1600)$\\$Query(list(3),list(6))$\\$Estimate D$};
\edge node [auto=right] {$data(1:800)$};
[.\node (3) {$n=800$\\$Query(list(1),list(2))$\\$Estimate D$};
\edge node [auto=right] {$data(1:400)$};
[.\node (1) {$n=400$\\$Query(all,all)$}; ]
\edge node [auto=left] {$data(401:800)$};
[.\node (2) {$n=400$\\$Query(all,all)$}; ]]
\edge node [auto=left] {$data(801:1600)$};
[.\node (6) {$n=800$\\$Query(list(4),list(5))$\\$Estimate D$};
\edge node [auto=right] {$data(801:1200)$};
[.\node (4) {$n=400$\\$Query(all,all)$}; ]
\edge node [auto=left] {$data(1201:1600)$};
[.\node (5) {$n=400$\\$Query(all,all)$}; ]]
]
\tikzset{every node/.style={draw,rectangle,fill=white}}
\foreach \x in {1,...,7}
{
\node at (\x.north east) {\x};
};
\end{tikzpicture}
\caption{ActiveKmedoids workflow for 1600 points($b=2$,$t_h=400$)}
\label{fig:flowTree}
\end{figure}
Now we calculate the complexity of our Algorithm~\ref{alg:red}.
It makes a tree with the branching factor $b$ and the threshold $t_h$ for the number of the data points in the leaves of the tree. Therefore, the height of this tree is $\ceil{\log_{b}{(n/t_h)}}$.
The number of nodes in the $i$th level of the tree is $b^i$ and each node of the $i$th level has $n/b^i$ data points.
According to~\cite{kmd2011singh}, the time complexity of the k-medoids algorithm is $\BigO{kn^2}$ for $n$ data points and $k$ clusters in each iteration.
Consider $p$ as the maximum number of iterations used in the k-medoids algorithm; then the time complexity of each node in the $i$th level of the tree is $\BigO{(n^2/b^{2i})kp}$.
Therefore, the overall complexity of Algorithm~\ref{alg:red} is
\begin{equation}
\BigO{\sum_{i=1}^{\floor{\log_{b}{(n/t_h)}}+1} (\frac{n^2}{b^{2i}}kp)b^i} = \BigO{n^2kp}.
\end{equation}
A major factor that measures the quality of an active clustering algorithm is the number of distances that the algorithm demands from the oracle. The number of the asked queries in the internal nodes of the tree is $\BigO{b^2(k(s+1))^2}$ and in the leaves is $\BigO{nt_h}$. Since, in the proposed method, there are $n/t_h$ leaves and each of them has $t_h$ data points, the ratio of the asked distances to all of the distances is almost equal to
\begin{equation}
(b^2(k(s+1))^2\frac{b^{\floor{\log_{b}{(n/t_h)}}}-1}{b-1} + nt_h) / {n \choose 2}
\end{equation}
where $(b^{\floor{\log_{b}{(n/t_h)}}}-1)/(b-1)$ shows the approximate number of the internal nodes in the tree.
In order to improve the accuracy of the estimated distances, we can increase the value of $s$. However, this also increases the running time of computing the upper-bound estimates and the number of queries that should be asked.
To have a good baseline for comparison, we introduce an algorithm based on random selection of the distance queries, called \textit{Random-Rival}, and compare the results of our algorithm with those of this baseline. In Section \ref{sec:experiments}, we show that asking queries using our method is much better than asking them randomly, which is the aim of any active clustering algorithm. \par
The Random-Rival (RR) algorithm considers the data points as the vertices of a weighted graph in which the weight of each edge shows the distance between its endpoints. First, RR asks some distances randomly and then estimates all unknown distances using the triangle inequality and the \textit{Floyd-Warshall}~\cite[p. 693]{clrs} algorithm. In other words, we find the shortest-path distances of all pairs using the available distances in the aforementioned graph. Since our distance function is a distance metric, the lengths of these shortest paths are upper-bound estimates of the true distances. Finally, RR clusters the data by running the k-medoids algorithm on the estimated distances. Since the worst-case time complexity of Floyd-Warshall is $\BigO{n^3}$~\cite[p. 695]{clrs}, we can consider the runtime complexity of RR to be $\BigO{n^3}$ in the worst case. Algorithm~\ref{alg:RR} shows the pseudo code of the RR algorithm. \par
\begin{algorithm}
\caption{Random Rival}\label{alg:RR}
\begin{algorithmic}[1]
\INPUT $n,k,B$ \Comment{\#data, \#clusters, budget}
\OUTPUT $C_1,\ldots,C_k$
\Procedure{RandomRival}{$n,k,B$}
\State $D \gets n\times n$ infinity matrix
\State $(x_1,y_1),\ldots,(x_B,y_B) \gets $ random pairs such that $1 \leqslant x_{i} < y_{i} \leq n$
\State $\forall i, 1 \leqslant i \leqslant B:$ update $D(x_i,y_i)$ by querying distances
\State $D_e \gets FloydWarshall(D)$
\State $C_1,\ldots,C_k \gets kmedoids(D_e,k)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
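A compact sketch of RR is shown below. It relies on \texttt{scipy.sparse.csgraph.floyd\_warshall} for the all-pairs shortest paths and on the \texttt{kmedoids} helper sketched earlier; the \texttt{oracle} callback again stands in for the distance queries, and enumerating all pairs is only meant for moderate $n$.
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import floyd_warshall

def random_rival(oracle, n, k, budget, seed=0):
    """Ask `budget` random pairwise distances, complete the matrix by shortest
    paths (triangle-inequality upper bounds), then run k-medoids."""
    rng = np.random.default_rng(seed)
    D = np.full((n, n), np.inf)             # inf marks a non-edge for csgraph
    np.fill_diagonal(D, 0.0)
    all_pairs = list(combinations(range(n), 2))
    for p in rng.choice(len(all_pairs), size=budget, replace=False):
        i, j = all_pairs[p]
        D[i, j] = D[j, i] = oracle(i, j)
    De = floyd_warshall(D, directed=False)  # upper-bound estimates of all distances
    labels, _ = kmedoids(De, k)
    return labels
\end{verbatim}
If the budget is too small, the queried graph may be disconnected and some estimates remain infinite.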
\section{Empirical results}
\label{sec:experiments}
In this section, we show the results of our algorithm on some synthesized and real world datasets.
General information about these datasets is presented in Table \ref{table:geninf}.
Most of them are real-world datasets, but some are synthesized and are marked with the letter s in Table \ref{table:geninf}. NORM-10 \cite{kmeanspp} contains $10000$ data points having $20$ features. This dataset has been generated by choosing $10$ real centers uniformly at random from the hypercube of side length $50$; then, for each of the real centers, $1000$ points are generated from a Gaussian distribution of variance one centered at the corresponding point. We converted the samples in the NEC\_animal \cite{NEC2009deep} and ALOI200 \cite{aloi} datasets into $32\times 32$ grayscale images. ALOI \cite{aloi} (Object Viewpoint version) has $1000$ classes, but we use only $200$ of them. \par
\begin{table}[]
\centering
\caption{General information about datasets}
\label{table:geninf}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Data & \#Samples & \#Features & \#Classes & \#Distances & Ref. \\ \hline
vary-density(s) & 150 & 2 & 3 & 11175 & \cite{ELKI} \\ \hline
seeds & 210 & 7 & 3 & 21945 & \cite{UCIrvine} \\ \hline
mouse(s) & 500 & 2 & 4 & 124750 & \cite{ELKI} \\ \hline
fisheriris & 150 & 4 & 3 & 11175 & \cite{UCIrvine} \\ \hline
data\_2000(s) & 2000 & 2 & 5 & 1999000 & \cite{data2000} \\ \hline
Trace & 200 & 275 & 4 & 19900 & \cite{UCR} \\\hline
multi-features & 2000 & 649 & 10 & 1999000 & \cite{UCIrvine} \\ \hline
TwoDiamonds(s) & 800 & 2 & 2 & 319600 & \cite{FCPS} \\ \hline
EngyTime(s) & 4096 & 2 & 2 & 8386560 & \cite{FCPS} \\ \hline
COIL100 & 7200 & 1024 & 100 & 25916400 & \cite{columbia100} \\ \hline
NORM10(s) & 10000 & 20 & 10 & 49995000 & \cite{kmeanspp} \\ \hline
NEC\_animal & 4371 & 1024 & 60 & 9550635 & \cite{NEC2009deep} \\ \hline
ALOI200 & 14400 & 1024 & 200 & 103672800 & \cite{aloi} \\ \hline
\end{tabular}
\end{table}
Although active versions of some clustering algorithms like DBSCAN and spectral clustering have been introduced in \cite{Mai2013actdbscan,wang2010}, these clustering algorithms are substantially different from the k-medoids algorithm. For example, DBSCAN and spectral clustering methods can find clusters of different shapes while k-medoids cannot.
Thus, we cannot compare the results of our active k-medoids with those of active DBSCAN and active spectral clustering methods.
One way to evaluate an active clustering method that asks distances is to compare it with a clustering method that asks a random subset of distances.
Therefore, we compare our method with the Random-Rival algorithm introduced in Section \ref{sec:method}, which tries to use a random subset of distances to estimate the whole distance matrix (using shortest paths on the graph of data points and the triangle inequality). It must be mentioned that in both the proposed active k-medoids and the Random-Rival algorithm, the clustering algorithm that is run on the obtained distance matrix is k-medoids.
One of the most common measures for comparing clustering algorithms is the \textit{normalized mutual information} (NMI) \cite{Nguyen2009}. This measure shows the agreement of two assignments, ignoring permutations. In other words, the NMI of a clustering obtained by an algorithm shows the agreement between the grouping obtained by this algorithm and the ground-truth grouping of the data. \par
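For reference, the NMI between a predicted clustering and the ground-truth labels can be computed with scikit-learn (assuming it is available); the toy labels below are only illustrative:
\begin{verbatim}
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]   # ground-truth classes
labels_pred = [1, 1, 0, 0, 2, 2]   # cluster ids; permutations do not matter
print(normalized_mutual_info_score(labels_true, labels_pred))  # prints 1.0
\end{verbatim}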
We run our algorithm with $s=1,3$ and branching factor $b=2$ on all the datasets. The threshold $t_h$ is set to the minimum possible value, which is equal to the number of classes for each dataset. Greater values of $s$, the branching factor $b$, or the threshold $t_h$ can improve the NMI score for some datasets; however, they would also increase the number of queries, which is usually quite unsatisfactory. We also run Random-Rival on these datasets with a specified proportion of distances ranging from $5\%$ to $100\%$ (in steps of $5$ percent). \par
The results of our method with the parameters mentioned in the previous paragraph are shown in Table \ref{table:resmeth}. For each algorithm and each dataset, the NMI score and the ratio of the asked distances are presented in the table cells. For the Random-Rival algorithm, we report the results at the smallest proportion of distances (between $5\%$ and $100\%$) for which the number of inquired distances is greater than or equal to the number inquired by our method. \par
\begin{table}[]
\centering
\caption{NMI results of the methods. Numbers in parenthesis show percent of the asked distances.}
\label{table:resmeth}
\begin{tabular}{|l|c|c|c|}
\hline
Data & RR & s = 1 & s = 3 \\ \hline
vary-density & 70.8 (10.0\%) & 95.0 (9.7\%) & 96.6 (10.5\%) \\ \hline
seeds & 55.7 (10.0\%) & 90.3 (7.1\%) & 89.5 (7.3\%) \\ \hline
mouse & 58.1 (5.0\%) & 75.5 (4.1\%) & 73.6 (4.3\%) \\ \hline
fisheriris & 65.3 (10.0\%) & 85.6 (9.6\%) & 89.3 (10.2\%) \\ \hline
data\_2000 & 88.5 (5.0\%) & 77.1 (1.9\%) & 78.3 (1.9\%) \\ \hline
Trace & 45.7 (10.0\%) & 51.2 (9.6\%) & 52.4 (10.4\%) \\ \hline
multi-features & 46.9 (5.0\%) & 78.7 (2.8\%) & 77.6 (2.8\%) \\ \hline
TwoDiamonds & 97.5 (5.0\%) & 100.0 (1.8\%) & 100.0 (1.8\%) \\ \hline
EngyTime & 71.2 (5.0\%) & 99.9 (1.6\%) & 99.6 (1.6\%) \\ \hline
COIL100 & 42.3 (10.0\%) & 75.6 (7.6 \%) & 75.6 (8.0\%) \\ \hline
NORM10 & 95.5 (5.0\%) & 94.3 (1.6\%) & 94.6 (1.6\%) \\ \hline
NEC\_animal & 23.6 (10.0\%) & 66.6 (7.6\%) & 67.3 (7.9\%) \\ \hline
ALOI200 & 48.0 (10.0\%) & 79.6 (7.7\%) & 79.7 (8.0\%) \\ \hline
\end{tabular}
\end{table}
Moreover, the results of the Random-Rival algorithm when it uses from $0$ percent up to $100\%$ of the distances are presented for some datasets in Figure~\ref{fig:RR_Table}. According to Figure~\ref{fig:RR_Table}, RR shows an ascending trend as extra distances are asked, and it gets close to the maximum value after asking about $20\%$ of the distances. Although this seems to be a good algorithm, the proposed active k-medoids algorithm achieves better NMI values while asking fewer distances. Moreover, the time complexity of our active k-medoids is also better. These results demonstrate the power of our algorithm, which finds accurate clusters by asking only a small subset of the distances. \par
\begin{figure}[ht!]
\centering
\includegraphics[width=120mm]{./RR_table.jpg}
\caption{Random-Rival over four datasets.
\label{fig:RR_Table}}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduce a novel active distance-based clustering method. Its goal is to cluster $n$ points of a metric dataset into $k$ clusters using the lowest possible number of distances. We design a recursive procedure that builds a tree and splits the data with a branching factor $b$ until the number of objects in a group falls below a threshold $t_h$. The method then actively selects and asks some pairwise distances from an oracle. After that, it derives upper-bound estimates for the unknown distances using the triangle inequality. Eventually, it clusters the data with a simple k-medoids algorithm. We run our algorithm on several synthesized and real-world datasets. In order to show the advantage of our method and to compare the results, we also introduce an algorithm which randomly selects pairwise distances and estimates the unknown ones using the Floyd-Warshall algorithm. \par
\small
\baselineskip=.85\baselineskip
\bibliographystyle{abbrv}
\section{Introduction}
Computing (or estimating) data similarities is a fundamental task in numerous practical applications. The popular method of random projections provides a potentially effective strategy for estimating data similarities (correlation or Euclidian distance) in massive high-dimensional datasets, in a memory-efficient manner. Approximate near neighbor search is a typical example of those applications.
The task of {near neighbor search} is to identify a set of data points which are ``most similar'' (in some measure of similarity) to a query data point. Efficient algorithms for near neighbor search have numerous applications in search, databases, machine learning, recommender systems, computer vision, etc. Developing efficient algorithms for finding near neighbors has been an active research topic since the early days of modern computing~\cite{Article:Friedman_75}. Near neighbor search with extremely high-dimensional data (e.g., texts or images) is still a challenging task and an active research problem.
In the specific setting of the World Wide Web, the use of hashing and random projections for applications such as detection of near-duplicate Web pages dates back to (e.g.,)~\cite{Proc:Broder_WWW97,Proc:Charikar}. The work in this area has naturally continued, improved, and expanded; see, for example, ~\cite{Proc:Casey_ISMIR06,Proc:Henzinger_SIGIR06,Proc:Bayardo_WWW07,Proc:Hajishirzi_SIGIR10,Proc:Duan_ISWC12,Proc:Kong_SIGIR12,Proc:Kulis_ICCV09,Proc:Li_Konig_WWW10,Proc:Leng_SIGIR14,Proc:Mitzenmacher_WWW14} for research papers with newer results on the theoretical frameworks, performance, and applications for such methods. In particular, such techniques have moved beyond near-duplicate detection and retrieval to detection and retrieval for more complex data types, including images and videos. Our work continues on this path; specifically, we seek to obtain accurate similarity scores using very small-memory random projections, for applications where the goal is to determine similar objects, or equivalently nearest neighbors in a well-defined space.\\
\vspace{-0.11in}
\subsection{Data Correlation}
Among many types of similarity measures, the (squared) Euclidian distance (denoted by $d$) and the correlation (denoted by $\rho$) are most commonly used. Without loss of generality, consider two high-dimensional data vectors $u, v\in\mathbb{R}^D$. The squared Euclidean distance and correlation are defined as follows:
\begin{align}\notag
d = \sum_{i=1}^D |u_i - v_i|^2, \hspace{0.2in} \rho = \frac{\sum_{i=1}^Du_iv_i}{\sqrt{\sum_{i=1}^D u_i^2} \sqrt{\sum_{i=1}^D v_i^2} }
\end{align}
The correlation $\rho$ is nicely normalized between -1 and 1. For convenience, this study will assume that the marginal $l_2$ norms $\sum_{i=1}^D |u_i|^2$ and $\sum_{i=1}^D |v_i|^2$ are known. This is often a reasonable assumption~\cite{Proc:Li_Hastie_Church_COLT06}, as computing the marginal $l_2$ norms only requires scanning the data once, which is anyway needed during the data collection process. In machine learning practice, it is common to first normalize the data before feeding the data to classification (e.g., SVM) or clustering (e.g., K-means) algorithms. Therefore, for convenience, throughout this paper, we assume unit $l_2$ norms:
\begin{align}\notag
&\rho = \sum_{i=1}^D u_iv_i,\hspace{0.3in} \text{where } \ \ \sum_{i=1}^D u_i^2 = \sum_{i=1}^D v_i^2 = 1
\end{align}
\vspace{-0.13in}
\subsection{Random Projections and Quantization}
As an effective tool for dimensionality reduction, the idea of random projections is to multiply the data, e.g., $u, v\in\mathbb{R}^D$, with a random normal projection matrix $\mathbf{R}\in\mathbb{R}^{D\times k}$, to generate:
\begin{align}\notag
&x = u\times \mathbf{R} \in\mathbb{R}^k,\hspace{0.2in} y = v\times \mathbf{R} \in\mathbb{R}^k, \\\notag
&\mathbf{R} = \{r_{ij}\}{_{i=1}^D}{_{j=1}^k}, \hspace{0.2in} r_{ij} \sim N(0,1) \text{ i.i.d. }
\end{align}
This method has become popular for large-scale machine learning applications such as classification, regression, matrix factorization, singular value decomposition, near neighbor search, bio-informatics, etc.~\cite{Proc:Papadimitriou_PODS98,Proc:Dasgupta_FOCS99,Proc:Bingham_KDD01,Article:Buher_Tompa,Proc:Fradkin_KDD03,Book:Vempala,Proc:Dasgupta_UAI00,Article:JL84,Proc:Wang_Li_SDM10}.
The projected data ($x_j = \sum_{i=1}^D u_i r_{ij}$, $y_j = \sum_{i=1}^D v_i r_{ij}$) are real-valued. For many applications, however, it is crucial to quantize them into integers. The quantization step is in fact mandatory if the projected data are used for the purpose of indexing and/or sublinear time near neighbor search (e.g.,) in the framework of {\em locality sensitive hashing (LSH)}~\cite{Proc:Indyk_STOC98}.
Another strong motivation for quantization is for reducing memory consumption. If only a few (e.g., 2) bits suffice for producing accurate estimate of the similarity, then we do not need to store the entire (e.g., 32 or 64 bits) real-valued projection data. This would be a very significant cost-saving in storage as well as computation.
\vspace{0.08in}
In this paper, we focus on 2-bit coding and estimation for multiple reasons. As analyzed in Section~\ref{sec_LSH}, the 2-bit coding appears to provide an overall good scheme for building hash tables in near neighbor search. The focus of this paper is on developing accurate nonlinear estimators, which are typically computationally quite expensive. Fortunately, for 2-bit coding, it is still feasible to find the numerical solution fairly easily, for example, by tabulation.
\section{2-Bit Random Projections}
Given two (high-dimensional) data vectors $u, v\in\mathbb{R}^D$, we generate two projected values $x$ and $y$ as follows:
\begin{align}\notag
x = \sum_{i=1}^D u_i r_i,\hspace{0.2in} y = \sum_{i=1}^D v_i r_i,\hspace{0.2in} r_i \sim N(0,1)\hspace{0.2in} i.i.d.
\end{align}
Assuming that the original data $u$, $v$ are normalized to unit $l_2$ norm, the projected data $(x,y)$ follow a bivariate normal distribution:
\begin{align}
\left[
\begin{array}{c}
x\\
y
\end{array}
\right] \sim N\left(
\left[
\begin{array}{c}
0\\
0
\end{array}
\right],\ \ \Sigma =
\left[
\begin{array}{cc}
1 &\rho\\
\rho &1
\end{array}
\right]
\right)
\end{align}
Note that when using random projections in practice, we will need (e.g.,) $k=200\sim 2000$ independent projections, depending on applications; and we will use $x_j$, $y_j$, $j=1$ to $k$, to denote them.
As the projected data $(x,y)$ are real-valued, we will have to quantize them either for indexing or for achieving compact storage. Figure~\ref{fig_16region} pictures the 2-bit coding scheme after random projections. Basically, a random projection value $x$ is mapped to an integer $\in \{0, 1, 2, 3\}$ according to a threshold $w$ (and $-w$).
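A minimal sketch of this coding step is given below (names and the example data are illustrative only). Each of the $k$ projections of a data vector is mapped to one of the four codes, and the joint codes of two vectors can then be accumulated into the cell counts over the 16 regions:
\begin{verbatim}
import numpy as np

def two_bit_code(x, w):
    """Map projected values x to codes {0, 1, 2, 3} using cut points -w, 0, w."""
    return np.digitize(x, bins=[-w, 0.0, w])

rng = np.random.default_rng(0)
D, k, w = 1000, 200, 0.75
R = rng.standard_normal((D, k))                 # random normal projection matrix
u = rng.standard_normal(D); u /= np.linalg.norm(u)
v = u + 0.5 * rng.standard_normal(D); v /= np.linalg.norm(v)
cx, cy = two_bit_code(u @ R, w), two_bit_code(v @ R, w)
counts = np.zeros((4, 4), dtype=int)            # the k_{i,j} cell counts
np.add.at(counts, (cx, cy), 1)
\end{verbatim}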
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 3.5in]{fig/16region.eps}
\end{center}
\vspace{-0.3in}
\caption{2-bit random projections.}\label{fig_16region}
\end{figure}
As shown in Figure~\ref{fig_16region}, the space is divided into 16 regions according to the pre-determined threshold $w$. To fully exploit the information, we need to jointly analyze the probabilities in all 16 regions. We will see that the analysis is quite involved.
\vspace{0.08in}
The first step of the analysis is to compute the probability of each region. Fortunately, due to symmetry (and asymmetry), we just need to conduct the computations for three regions:
\begin{align}\notag
&P_{2,2}(\rho,w) = \mathbf{Pr}\left\{\text{Region (2,2)}\right\}= \int_0^w \int_0^w f(x,y) dx dy,\\\notag
&P_{2,3}(\rho,w) = \mathbf{Pr}\left\{\text{Region (2,3)}\right\}= \int_0^w \int_w^\infty f(x,y) dx
dy,\\\notag
&P_{3,3}(\rho,w) = \mathbf{Pr}\left\{\text{Region (3,3)}\right\}= \int_w^\infty \int_w^\infty f(x,y) dx
dy.
\end{align}
Due to symmetry, the probabilities of other regions are simply
\begin{align}\notag
&P_{3,2}(\rho,w) = P_{0,1}(\rho,w) = P_{1,0}(\rho,w) = P_{2,3}(\rho,w),\\\notag
&P_{2,0}(\rho,w) = P_{3,1}(\rho,w) = P_{0,2}(\rho,w) = P_{1,3}(\rho,w) =P_{2,3}(-\rho,w),\\\notag
&P_{1,1}(\rho,w) = P_{2,2}(\rho,w), \hspace{0.05in} P_{1,2}(\rho,w) = P_{2,1}(\rho,w) = P_{2,2}(-\rho,w),\\\notag
&P_{0,0}(\rho,w) = P_{3,3}(\rho,w), \hspace{0.05in} P_{0,3}(\rho,w) = P_{3,0}(\rho,w) = P_{3,3}(-\rho,w).
\end{align}
\subsection{Region Probabilities and Their Derivatives}
We use the following standard notation for the normal distribution pdf $\phi(x)$ and cdf $\Phi(x)$:
\begin{align}\notag
\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}, \hspace{0.3in} \Phi(x) = \int_{-\infty}^x \phi(x) dx.
\end{align}
After some tedious calculations (which are skipped), the probabilities of the three regions are
\begin{align}\notag
&P_{2,2}(\rho,w)=\int_{0}^{w}\phi(x)\left[ \Phi\left(\frac{w-\rho x}{\sqrt{1-\rho^2}}\right)- \Phi\left(\frac{-\rho x}{\sqrt{1-\rho^2}}\right)\right]dx\\\notag
&P_{2,3}(\rho,w)=\int_{0}^{w}\phi(x)\Phi\left(\frac{-w+\rho x}{\sqrt{1-\rho^2}}\right)dx,\\\notag
&P_{3,3}(\rho,w) = \frac{1}{4}+\frac{\arcsin\rho}{2\pi}-P_{2,2}(\rho,w)-2P_{2,3}(\rho,w)
\end{align}
Their first derivatives (with respect to $\rho$) are
\begin{align}\notag
&P_{2,2}^\prime=\frac{\partial P_{2,2}(\rho,w)}{\partial \rho}=\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}\left[1 - 2e^{-\frac{w^2}{2(1-\rho^2)}} + e^{-\frac{w^2}{1+\rho}}\right]\\\notag
&P_{2,3}^\prime=\frac{\partial P_{2,3}(\rho,w)}{\partial \rho} =\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}\left[e^{-\frac{w^2}{2(1-\rho^2)}} - e^{-\frac{w^2}{1+\rho}}
\right]\\\notag
&P_{3,3}^\prime=\frac{\partial P_{3,3}(\rho,w)}{\partial \rho} =\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}e^{-\frac{w^2}{1+\rho}}
\end{align}
Their second derivatives are
\begin{align}\notag
&P_{2,2}^{\prime\prime}=\frac{\partial^2 P_{2,2}(\rho,w)}{\partial \rho^2} =\frac{1}{2\pi}\frac{\rho}{(1-\rho^2)^{3/2}}\\\notag
&\hspace{0.5in}-\frac{1}{2\pi}\frac{2\rho}{(1-\rho^2)^{3/2}}e^{-\frac{w^2}{2(1-\rho^2)}}\left[1-\frac{w^2}{1-\rho^2}\right]\\\notag
&\hspace{0.5in}+\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}e^{-\frac{w^2}{1+\rho}}
\left[\frac{\rho}{1-\rho^2}+\frac{w^2}{(1+\rho)^2}\right]\\\notag
&P_{2,3}^{\prime\prime}=\frac{\partial^2 P_{2,3}(\rho,w)}{\partial \rho^2} =\frac{1}{2\pi}\frac{\rho}{(1-\rho^2)^{3/2}}e^{-\frac{w^2}{2(1-\rho^2)}}\left[1-\frac{w^2}{1-\rho^2}\right]\\\notag
&\hspace{1in}-\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}e^{-\frac{w^2}{1+\rho}}
\left[\frac{\rho}{1-\rho^2}+\frac{w^2}{(1+\rho)^2}\right]\\\notag
&P_{3,3}^{\prime\prime}=\frac{\partial^2 P_{3,3}(\rho,w)}{\partial \rho^2} =\frac{1}{2\pi}\frac{1}{\sqrt{1-\rho^2}}e^{-\frac{w^2}{1+\rho}}
\left[\frac{\rho}{1-\rho^2}+\frac{w^2}{(1+\rho)^2}\right]
\end{align}
Because $\rho$ is bounded, we can tabulate the above probabilities and their derivatives for the entire range of $\rho$ and selected $w$ values. Note that in practice, we anyway have to first specify a $w$. In other words, the computations of the probabilities and derivatives are a simple matter of efficient table look-ups.
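For instance, the three region probabilities can be evaluated by one-dimensional numerical integration and stored over a grid of $\rho$ for a fixed $w$ (a sketch; scipy is assumed to be available):
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def region_probs(rho, w):
    """P_{2,2}, P_{2,3}, P_{3,3} for threshold w and correlation rho."""
    s = np.sqrt(1.0 - rho ** 2)
    p22 = quad(lambda x: norm.pdf(x) * (norm.cdf((w - rho * x) / s)
                                        - norm.cdf((-rho * x) / s)), 0.0, w)[0]
    p23 = quad(lambda x: norm.pdf(x) * norm.cdf((-w + rho * x) / s), 0.0, w)[0]
    p33 = 0.25 + np.arcsin(rho) / (2.0 * np.pi) - p22 - 2.0 * p23
    return p22, p23, p33

w = 0.75
grid = np.linspace(-0.99, 0.99, 199)
table = np.array([region_probs(r, w) for r in grid])   # tabulated probabilities
\end{verbatim}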
\subsection{Likelihood}
Suppose we use in total $k$ projections. Due to symmetry (as shown in Figure~\ref{fig_16region}), the log-likelihood is a sum of 6 terms (6 cells).
\begin{align}\notag
&l(\rho,w) = \sum_{i,j} k_{i,j}\log P_{i,j}\left(\rho,w\right)\\\notag
&=\left(k_{2,2}+k_{1,1}\right)\log{P_{2,2}(\rho,w)}\\\notag
& + \left(k_{2,3}+k_{3,2}+k_{0,1}+k_{1,0}\right)\log{P_{2,3}(\rho,w)}\\\notag
&+\left(k_{3,3}+k_{0,0}\right)\log{P_{3,3}(\rho,w)}
+ \left(k_{1,2}+k_{2,1}\right)\log{P_{2,2}(-\rho,w)}\\\notag
&+ \left(k_{0,2}+k_{1,3}+k_{2,0}+k_{3,1}\right)\log{P_{2,3}(-\rho,w)}\\\notag
& + \left(k_{0,3}+k_{3,0}\right)\log{P_{3,3}(-\rho,w)}
\end{align}
Corresponding to Figure~\ref{fig_16region}, $k_{1,1}$ is the number of observations (among $k$ observations) in the region (1,1). $k_{0,0}$, $k_{0,1}$ etc are defined similarity. Note that there is a natural constraint:
\begin{align}\notag
k =& \left(k_{2,2}+k_{1,1}\right)+ \left(k_{2,3}+k_{3,2}+k_{0,1}+k_{1,0}\right)
+\left(k_{3,3}+k_{0,0}\right)\\\notag
+& \left(k_{1,2}+k_{2,1}\right)
+ \left(k_{0,2}+k_{1,3}+k_{2,0}+k_{3,1}\right)+ \left(k_{0,3}+k_{3,0}\right)
\end{align}
In other words, this 6-cell problem only has 5 degrees of freedom. In fact, we can also choose to collapse some cells together to reduce this to an even smaller problem. For example, later we will show that if we reduce the 6-cell problem to a 5-cell problem, the estimation accuracy will not be affected much.
There is more than one way to solve for the MLE of $\rho$, i.e., the value which maximizes the likelihood $l(\rho,w)$. Note that this is merely a one-dimensional optimization problem (at a fixed $w$) and we can tabulate all the probabilities (and their derivatives). In other words, it is not a difficult problem. We can use binary search, gradient descent, Newton's method, etc. Here we provide the first and second derivatives of $l(\rho,w)$. The first derivative is
\begin{align}\notag
&l^\prime(\rho,w) = \frac{\partial l(\rho,w)}{\partial \rho}=\left(k_{2,2}+k_{1,1}\right)\frac{P_{2,2}^\prime(\rho,w)}{P_{2,2}(\rho,w)}\\\notag
&+ \left(k_{2,3}+k_{3,2}+k_{0,1}+k_{1,0}\right)\frac{P_{2,3}^\prime(\rho,w)}{P_{2,3}(\rho,w)}\\\notag
&+\left(k_{3,3}+k_{0,0}\right)\frac{P_{3,3}^\prime(\rho,w)}{P_{3,3}(\rho,w)}
- \left(k_{1,2}+k_{2,1}\right)\frac{P_{2,2}^\prime(-\rho,w)}{P_{2,2}(-\rho,w)}\\\notag
&-\left(k_{0,2}+k_{1,3}+k_{2,0}+k_{3,1}\right)\frac{P_{2,3}^\prime(-\rho,w)}{P_{2,3}(-\rho,w)}\\\notag
& - \left(k_{0,3}+k_{3,0}\right)\frac{P_{3,3}^\prime(-\rho,w)}{P_{3,3}(-\rho,w)}
\end{align}
and the second derivative is
\begin{align}\notag
&l^{\prime\prime}\left(\rho\right) =
\left(k_{2,2}+k_{1,1}\right)\frac{P_{2,2}^{\prime\prime}(\rho,w)P_{2,2}(\rho,w) - \left(P_{2,2}^{\prime}(\rho,w)\right)^2}{\left(P_{2,2}(\rho,w)\right)^2} \\\notag
&+ \left(k_{2,3}+k_{3,2}+k_{0,1}+k_{1,0}\right)\frac{P_{2,3}^{\prime\prime}(\rho,w)P_{2,3}(\rho,w) - \left(P_{2,3}^{\prime}(\rho,w)\right)^2}{\left(P_{2,3}(\rho,w)\right)^2} \\\notag
&+\left(k_{3,3}+k_{0,0}\right)\frac{P_{3,3}^{\prime\prime}(\rho,w)P_{3,3}(\rho,w) - \left(P_{3,3}^{\prime}(\rho,w)\right)^2}{\left(P_{3,3}(\rho,w)\right)^2} \\\notag
&+\left(k_{1,2}+k_{2,1}\right)\frac{P_{2,2}^{\prime\prime}(-\rho,w)P_{2,2}(-\rho,w)-\left(P_{2,2}^\prime(-\rho,w)\right)^2}{\left(P_{2,2}(-\rho,w)\right)^2}\\\notag
&+\left(k_{0,2}+k_{1,3}+k_{2,0}+k_{3,1}\right)\frac{P_{2,3}^{\prime\prime}(-\rho,w)P_{2,3}(-\rho,w) - \left(P_{2,3}^{\prime}(-\rho,w)\right)^2}{\left(P_{2,3}(-\rho,w)\right)^2} \\\notag
&+\left(k_{0,3}+k_{3,0}\right)\frac{P_{3,3}^{\prime\prime}(-\rho,w)P_{3,3}(-\rho,w) - \left(P_{3,3}^{\prime}(-\rho,w)\right)^2}{\left(P_{3,3}(-\rho,w)\right)^2}
\end{align}
If we use Newton's method, we can find the solution iteratively from $\rho^{(t)} = \rho^{(t-1)} - \frac{l^\prime(\rho)}{l^{\prime\prime}(\rho)}$, by starting from a good guess, e.g., the estimate using 1-bit information. Normally a small number of iterations will be sufficient. Recall that these derivatives and second derivatives are pre-computed and stored in look-up tables.
\vspace{0.08in}
For this particular 2-bit coding scheme, it is possible to completely avoid the numerical procedure by further exploiting look-up table tricks. Suppose we tabulate the MLE results for each $k_{i,j}/k$, spaced at 0.01. Then a 6-cell scheme would only require $O\left(10^{10}\right)$ space, which is not too large. (Recall there are only 5 degrees of freedom). If we adopt a 5-cell scheme, then the space would be reduced to $O(10^8)$. Of course, if we hope to use more than 2 bits, then we can not avoid numerical computations.
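As one concrete (non-tabulated) route, the one-dimensional MLE can also be obtained by directly maximizing the log-likelihood with a bounded scalar optimizer, reusing \texttt{region\_probs} from the earlier sketch; this is only an illustration of the numerical procedure, not the only way to solve it:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def mle_rho(counts, w):
    """counts: 4x4 matrix of cell counts k_{i,j} (see the 2-bit coding figure)."""
    k = counts
    n22 = k[2, 2] + k[1, 1]
    n23 = k[2, 3] + k[3, 2] + k[0, 1] + k[1, 0]
    n33 = k[3, 3] + k[0, 0]
    m22 = k[1, 2] + k[2, 1]
    m23 = k[0, 2] + k[1, 3] + k[2, 0] + k[3, 1]
    m33 = k[0, 3] + k[3, 0]

    def neg_loglik(rho):
        p22, p23, p33 = region_probs(rho, w)
        q22, q23, q33 = region_probs(-rho, w)
        return -(n22 * np.log(p22) + n23 * np.log(p23) + n33 * np.log(p33)
                 + m22 * np.log(q22) + m23 * np.log(q23) + m33 * np.log(q33))

    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
\end{verbatim}
With the \texttt{counts} produced in the earlier coding sketch, \texttt{mle\_rho(counts, w=0.75)} returns the estimate.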
\subsection{Fisher Information and\\ Asymptotic Variance of the MLE}
The asymptotic (for large $k$) variance of the MLE (i.e., the $\rho$ which maximizes the log likelihood $l(\rho,w)$) can be computed from classical statistical estimation theory. Denote the MLE by $\hat{\rho}_{2,MLE}$. Then its asymptotic variance should be
\begin{align}
Var\left(\hat{\rho}_{2,MLE}\right) = \frac{1}{I_{2,\rho,w}} + O\left(\frac{1}{k^2}\right)
\end{align}
where $I_{2,\rho,w} = -E(l^{\prime\prime}(\rho))$ is the {\em Fisher Information}.
\begin{theorem}\label{thm_2bit_Info}
The Fisher Information is
\begin{align}\label{eqn_Fisher2}
&I_{2,\rho,w} = 2k\left[ A\right],\hspace{0.2in}\text{ where}\\\notag
&A = \frac{\left(P_{2,2}^\prime(\rho,w)\right)^2}{P_{2,2}(\rho,w)} + 2\frac{\left(P_{2,3}^\prime(\rho,w)\right)^2}{P_{2,3}(\rho,w)}
+\frac{\left(P_{3,3}^\prime(\rho,w)\right)^2}{P_{3,3}(\rho,w)}\\\notag
& + \frac{\left(P_{2,2}^\prime(-\rho,w)\right)^2}{P_{2,2}(-\rho,w)}
+ 2\frac{\left(P_{2,3}^\prime(-\rho,w)\right)^2}{P_{2,3}(-\rho,w)} + \frac{\left(P_{3,3}^\prime(-\rho,w)\right)^2}{P_{3,3}(-\rho,w)}.
\end{align}
\textbf{Proof}:\hspace{0in} We need to compute $I_{2,\rho,w} = -E(l^{\prime\prime}(\rho))$. Because the expectation $E\left(k_{2,2}+k_{1,1}\right) = 2P_{2,2}(\rho,w)$, the expression $E(l^{\prime\prime}(\rho))$ can be simplified substantially. Then we take advantage of the fact that $\sum_{i,j} P_{i,j}(\rho,w) =1 $, $ \sum_{i,j} P_{i,j}^\prime(\rho,w) = \sum_{i,j} P_{i,j}^{\prime\prime}(\rho,w)=0$, to obtain the desired result. $\hfill\Box$
\end{theorem}
While the expressions appear sophisticated, the Fisher Information and variance can be verified by simulations; see Figure~\ref{fig_simu}.
\subsection{The 2-Bit Linear Estimator}
A linear estimator only uses the information whether the code of $x$ equals the code of $y$. In other words, linear estimators only use the diagonal information in Figure~\ref{fig_16region}. With a 2-bit scheme, $\rho$ can be estimated from counts in collapsed cells, by solving for $\rho$ from
\begin{align}\notag
(k_{0,0}+k_{1,1}+ k_{2,2}+k_{3,3})/k = P_{0,0}+P_{1,1}+P_{2,2}+P_{3,3},
\end{align}
which still requires a numerical procedure (or tabulation). The analysis of the linear estimator was done in~\cite{Proc:Li_ICML14}, and can also be inferred from the analysis of the nonlinear estimator in this paper.
\subsection{The 1-Bit Estimator}
This special case can be derived from the results of 2-bit random projections by simply letting $w\rightarrow \infty$. The estimator, by counting the observations in each quadrant, has a simple closed-form~\cite{Article:Goemans_JACM95,Proc:Charikar}, i.e., $\mathbf{Pr}\left(sgn(x)=sgn(y)\right) = 1-\frac{1}{\pi}\cos^{-1}\rho$. The Fisher Information of this estimator, denoted by $I_{1,\rho}$, is then\\
\begin{align}\notag
I_{1,\rho}
=&2k\left[\frac{\left(P_{2,2}^\prime(\rho,\infty)\right)^2}{P_{2,2}(\rho,\infty)} +
\frac{\left(P_{2,2}^\prime(-\rho,\infty)\right)^2}{P_{2,2}(-\rho,\infty)}
\right]\\\notag
=&2k\frac{1}{4\pi^2(1-\rho^2)}\left[\frac{1}{\frac{1}{4}+\frac{\arcsin\rho}{2\pi}} + \frac{1}{\frac{1}{4}-\frac{\arcsin\rho}{2\pi}}
\right]
\end{align}
The ratio
\begin{align}\label{eqn_Rw}
R_{\rho,w} = \frac{I_{2,\rho,w}}{I_{1,\rho}}
\end{align}
characterizes the reduction of variance by using the 2-bit scheme and the MLE, as a function of $\rho$ and $w$.
We provide the following theorem to show that the ratio $R_{\rho,w}$ is close to 2 when $\rho\rightarrow0$. Later we will see that, for high similarity regions, the ratio can be substantially higher than 2.
\begin{theorem}\label{thm_R0}
For (\ref{eqn_Rw}) and $\rho\rightarrow 0$, we have $R_{0,w}
=\left[g(w)\right]^2$,
\begin{align}\label{eqn_R0}
\text{where}\hspace{0.2in} g(w) =\frac{1}{2}\left[\frac{\left[1-e^{-\frac{w^2}{2}}\right]^2}{\Phi(w)-\frac{1}{2}} + \frac{e^{-w^2}}{1- \Phi\left(w\right)}\right].\hspace{0.2in}\Box
\end{align}
\end{theorem}
Figure~\ref{fig_gs} shows that $g(w)$ has a unique maximum = 1.3863 (i.e., maximum of $\left[g(w)\right]^2$ is 1.9218), attained at $w = 0.9816$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 1.8in]{fig/gs.eps}
\end{center}
\vspace{-0.25in}
\caption{The curve of $g(w)$ as defined in (\ref{eqn_R0}).
}\label{fig_gs}
\end{figure}
\vspace{-0.1in}
\subsection{The Choice of $w$}
The performance depends on $w$ (and $\rho$). In practice, we need to pre-specify a value of $w$ for random projections and we have to use the same $w$ for all data points because this coding process is non-adaptive. Figure~\ref{fig_Rw} and Figure~\ref{fig_Rw_small} plot the ratio $R_{\rho,w}$ (left panels) for selected $w$ values, confirming that $w=0.75$ should be an overall good choice. In addition, we present some additional work in the right panels of Figure~\ref{fig_Rw} and Figure~\ref{fig_Rw_small} to show that if we collapse some cells appropriately (from a 6-cell model to a 5-cell model), the performance would not degrade much (not at all for high similarity region, which is often more interesting in practice).
According to Figure~\ref{fig_16region}, we collapse the three cells (0,3), (0,2), and (1,3) into one cell. Note that (0,2) and (1,3) have the same probabilities and are already treated as one cell. Due to symmetry, the other three cells (3,0), (2,0), and (3,1) are also collapsed into one. This way, we have in total 5 distinct cells. The intuition is that if we are mostly interested in highly similar regions, most of the observations will fall around the diagonal. This treatment simplifies the estimation process and does not lead to an obvious degradation of the accuracy, at least for high similarity regions, according to Figure~\ref{fig_Rw} and Figure~\ref{fig_Rw_small}.
\begin{figure}[h!]
\begin{center}
\mbox{\includegraphics[width = 1.75in]{fig/Rw6.eps}\hspace{-0.12in}
\includegraphics[width = 1.75in]{fig/Rw5.eps}}
\end{center}
\vspace{-0.25in}
\caption{The ratio $R_{\rho,w}$ (\ref{eqn_Rw}) at $w=0.4, 0.6, 0.75, 1, 1.5$, which characterizes the improvement of the MLE ($\hat{\rho}_{2,MLE}$) over the 1-bit estimator $\hat{\rho}_1$. It appears that $w=0.75$ provides an overall good trade-off. The problem is a 6-cell (i.e., left panel) contingency table estimation problem. To demonstrate the simplification of the process by using 5 cells (see the main text for the description of the procedure), we also include the same type of improvements for using the reduced 5-cell model in the right panel. }\label{fig_Rw}\vspace{-0.1in}
\end{figure}
\begin{figure}[h!]
\begin{center}
\mbox{\includegraphics[width = 1.75in]{fig/Rw6_small.eps}\hspace{-0.12in}
\includegraphics[width = 1.75in]{fig/Rw5_small.eps}}
\end{center}
\vspace{-0.25in}
\caption{The ratio $R_{\rho,w}$ (\ref{eqn_Rw}) at $w=0.6, 0.7, 0.75, 0.8, 0.9$, to show that $w=0.75$ is an overall good trade-off. There is no space to label $w=0.7, 0.75, 0.8$ but the order of the curves should be a good indicator. We plot $w=0.75$ in red, if color is available.}\label{fig_Rw_small}
\end{figure}
\subsection{Simulations}\label{sec_2bit_simu}
\begin{figure}[h!]
\hspace{-0.2in}
\mbox
{\includegraphics[width = 1.3in]{fig/MSER095.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER09.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER08.eps}
}
\hspace{-0.2in}
\mbox{
\includegraphics[width = 1.3in]{fig/MSER07.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER06.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER05.eps}
}
\hspace{-0.2in}
\mbox
{\includegraphics[width = 1.3in]{fig/MSER03.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER02.eps}\hspace{-0.08in}
\includegraphics[width = 1.3in]{fig/MSER01.eps}
}
\vspace{-0.15in}
\caption{Mean square errors (MSE) from the simulations to verify the nonlinear MLE. The empirical MSEs essentially overlap the asymptotic variances predicted by the Fisher information (\ref{eqn_Fisher2}), confirming the theoretical results. In addition, we also plot the empirical MSEs of the 1-bit estimator to verify the substantial improvement of the MLE. }\label{fig_simu}\vspace{-0.1in}
\end{figure}
As presented in Figure~\ref{fig_simu}, a simulation study is conducted to confirm the theoretical results of the MLE, for a wide range of $\rho$ values. The plots confirm that the MLE substantially improves the 1-bit estimator, even at low similarities. They also verify that the theoretical asymptotic variance predicted by the Fisher Information (\ref{eqn_Fisher2}) is accurate, essentially no different from the empirical mean square errors. We hope this experiment might help readers who are less familiar with the classical theory of Fisher Information.
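Such a check can be reproduced in a few lines by reusing \texttt{two\_bit\_code} and \texttt{mle\_rho} from the earlier sketches: draw correlated normal pairs directly, code them with 2 bits, and compare the empirical mean square error of the MLE with the inverse Fisher information. The sketch below is deliberately small (each MLE call integrates numerically, so large replication counts are slow):
\begin{verbatim}
import numpy as np

def simulate_mse(rho, w, k=200, reps=200, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    errs = []
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=k)
        cx, cy = two_bit_code(xy[:, 0], w), two_bit_code(xy[:, 1], w)
        counts = np.zeros((4, 4), dtype=int)
        np.add.at(counts, (cx, cy), 1)
        errs.append(mle_rho(counts, w) - rho)
    return float(np.mean(np.square(errs)))
\end{verbatim}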
\newpage
\section{Other Common Coding Schemes}\label{sec_others}
In this section, we review two common coding strategies: (i) the scheme based on windows + random offset; (ii) the scheme based on simple uniform quantization. Note that both of them are, strictly speaking, infinite-bit coding schemes, although (ii) can be effectively viewed as a finite-bit scheme.
\subsection{Quantization with Random Offset}
\cite{Proc:Datar_SCG04} proposed the following well-known coding scheme, which uses
windows and a random offset:
\begin{align}\label{eqn_hwq}
h_{w,q}^{(j)}(u) = \left\lfloor\frac{x_j + q_j}{w}\right\rfloor,\hspace{0.3in} h_{w,q}^{(j)}(v) = \left\lfloor\frac{y_j + q_j}{w}\right\rfloor
\end{align}
where $q_j\sim uniform(0,w)$, $w>0$ is the bin width and $\left\lfloor . \right\rfloor$ is the standard floor operation.
\cite{Proc:Datar_SCG04} showed that the collision probability can be written as a monotonic function of the Euclidean distance:
\begin{align}\notag
P_{w,q} = &\mathbf{Pr}\left(h_{w,q}^{(j)}(u) = h_{w,q}^{(j)}(v)\right)
= \int_0^w\frac{1}{\sqrt{d}}2\phi\left(\frac{t}{\sqrt{d}}\right)\left(1-\frac{t}{w}\right)dt
\end{align}
where $d = ||u-v||^2= 2(1-\rho)$ is the distance between $u$ and $v$.
\subsection{Uniform Quantization without Offset}
A simpler (and in fact better) scheme than (\ref{eqn_hwq}) is based on uniform quantization without offset:
\begin{align}\label{eqn_hw}
h_{w}^{(j)}(u) = \left\lfloor x_j/w\right\rfloor,\hspace{0.4in} h_{w}^{(j)}(v) = \left\lfloor y_j/w\right\rfloor
\end{align}
The collision probability for (\ref{eqn_hw}) is
\begin{align}\notag
&P_{w} =\mathbf{Pr}\left(h_{w}^{(j)}(u) = h_{w}^{(j)}(v) \right)\\\notag
=& 2\sum_{i=0}^\infty\int_{iw}^{(i+1)w}\phi(z)\left[\Phi\left(\frac{(i+1)w-\rho z}{\sqrt{1-\rho^2}}\right)- \Phi\left(\frac{iw-\rho z}{\sqrt{1-\rho^2}}\right)\right]dz
\end{align}
$P_w$ is a monotonically increasing function of $\rho\geq0$. \\
The fact that $P_w$ is monotonically increasing in $\rho$ makes (\ref{eqn_hw}) an appropriate coding scheme for approximate near neighbor search under the general framework of locality sensitive hashing (LSH). Note that while $P_{w}$ appears sophisticated, the expression is just for the analysis. Without using the offset, the scheme (\ref{eqn_hw}) itself is operationally simpler than the popular scheme (\ref{eqn_hwq}).
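Because $\phi$ decays rapidly, the infinite sum defining $P_w$ can be truncated after a few terms, so the probability is easy to evaluate numerically (a sketch; scipy is assumed):
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def collision_prob_uniform(rho, w, tail=10.0):
    """P_w = Pr(floor(x/w) == floor(y/w)) for standard bivariate normal (x, y)."""
    s = np.sqrt(1.0 - rho ** 2)
    total, i = 0.0, 0
    while i * w < tail:                   # phi(z) is negligible beyond the tail cutoff
        f = lambda z: norm.pdf(z) * (norm.cdf(((i + 1) * w - rho * z) / s)
                                     - norm.cdf((i * w - rho * z) / s))
        total += quad(f, i * w, (i + 1) * w)[0]
        i += 1
    return 2.0 * total
\end{verbatim}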
In the prior work, \cite{Proc:Li_ICML14} studied the coding scheme (\ref{eqn_hw}) in the context of similarity estimation using linear estimators with application to building large-scale linear classifiers. In this paper, we conduct the study of (\ref{eqn_hw}) for sublinear time near neighbor search by building hash tables from coded projected data. This is a very different task from similarity estimation. Moreover, much of the space of the paper is allocated to the design and analysis of nonlinear estimators which are very useful in the ``re-ranking'' stage of near neighbor search after the potentially similar data points are retrieved.
\vspace{0.08in}
There is another important distinction between (\ref{eqn_hw}) and (\ref{eqn_hwq}). By using a window and a random offset, (\ref{eqn_hwq}) is actually an ``infinite-bit'' scheme. On the other hand, with only a uniform quantization, (\ref{eqn_hw}) is essentially a finite-bit scheme, because the data are normalized and the Gaussian (with variance 1) density decays very rapidly at the tail. If we choose (e.g.,) $w\geq3$ (note that $1-\Phi(3)=1.3\times10^{-3}$), we essentially have a 1-bit scheme (i.e., by recording the signs of the projected data), because the analysis can show that using $w\geq 3$ is not essentially different from using $w=\infty$. Note that the 1-bit scheme~\cite{Article:Goemans_JACM95,Proc:Charikar} is also known as ``sim-hash'' in the literature.
\vspace{0.08in}
In this paper, we will show, through analysis and experiment, that often a 2-bit scheme (i.e., a uniform quantization with $w\geq 1.5$) is better for LSH (depending on the data similarity). Moreover, we have developed nonlinear estimators for 2-bit scheme which significantly improve the estimator using the 1-bit scheme as well as the linear estimator using the 2-bit scheme.
\section{Sublinear Time $c$-Approximate\\ Near Neighbor Search}\label{sec_LSH}
In this section, we compare the two coding schemes in Section~\ref{sec_others}: (i) the scheme based on windows + random offset, i.e., (\ref{eqn_hwq}); (ii) the scheme based on simple uniform quantization, i.e., (\ref{eqn_hw}), in the setting of approximate near neighbor search. We will show that (\ref{eqn_hw}) is more effective and in fact only a small number of bits are needed.
\vspace{0.08in}
Consider a data vector $u$. Suppose there exists another vector whose Euclidian distance ($\sqrt{d}$) from $u$ is at most $\sqrt{d_0}$ (the target distance). The goal of {\em$c$-approximate $\sqrt{d_0}$-near neighbor} algorithms is to return data vectors (with high probability) whose Euclidian distances from $u$ are at most $c\times \sqrt{d_0}$ with $c>1$.
\vspace{0.08in}
Recall that, in our definition, $d = 2(1-\rho)$ is the squared Euclidian distance. To be consistent with the convention in~\cite{Proc:Datar_SCG04}, we present the results in terms of $\sqrt{d}$. Corresponding to the target distance $\sqrt{d_0}$, the target similarity $\rho_0$ can be computed from $d_0 = 2(1-\rho_0)$ i.e., $\rho_0 = 1-d_0/2$. To simplify the presentation, we focus on $\rho\geq 0$ (as is common in practice), i.e., $0\leq d\leq 2$. Once we fix a target similarity $\rho_0$, $c$ can not exceed a certain value:
\begin{align}\notag
c\sqrt{2(1-\rho_0)}\leq \sqrt{2} \Longrightarrow c \leq \sqrt{\frac{1}{1-\rho_0}}
\end{align}
For example, when $\rho_0 =0.5$, we must have $1\leq c \leq \sqrt{2}$.
The performance of an LSH algorithm largely depends on the difference (gap) between the two collision probabilities $P^{(1)}$ and $P^{(2)}$ (respectively corresponding to $\sqrt{d_0}$ and $c\sqrt{d_0}$):
\begin{align}\notag
&P^{(1)}_w = \mathbf{Pr}\left(h_w(u) = h_w(v) \right) \hspace{0.2in} \text{when } d = ||u - v||^2_2 = d_0\\\notag
&P^{(2)}_w = \mathbf{Pr}\left(h_w(u) = h_w(v) \right) \hspace{0.2in} \text{when } d = ||u - v||^2_2 = c^2d_0
\end{align}
The probabilities $P^{(1)}_{w,q}$ and $P^{(2)}_{w,q}$ are analogously defined for $h_{w,q}$.
\vspace{0.08in}
A larger difference between $P^{(1)}$ and $P^{(2)}$ implies a more efficient LSH algorithm. The following ``$G$'' values ($G_w$ for $h_w$ and $G_{w,q}$ for $h_{w,q}$, respectively) characterize the gaps:
\begin{align}\label{eqn_rho_M}
&G_w =\frac{\log 1/P_w^{(1)} }{\log 1/P_w^{(2)} },\hspace{0.5in} G_{w,q} =\frac{\log 1/P_{w,q}^{(1)} }{\log 1/P_{w,q}^{(2)} }
\end{align}
A smaller $G$ (i.e., larger difference between $P^{(1)}$ and $P^{(2)}$) leads to a potentially more efficient LSH algorithm and $G <\frac{1}{c}$ is particularly desirable~\cite{Proc:Indyk_STOC98}. The general theory of LSH says the query time for $c$-approximate $d_0$-near neighbor is dominated by $O(N^G)$ distance evaluations, where $N$ is the total number of data vectors in the collection. This is better than $O(N)$, the cost of a linear scan.
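Given the collision probability above, the gap $G_w$ for a target similarity $\rho_0$ and approximation factor $c$ follows directly, and a simple grid search over $w$ reproduces the ``optimum $w$'' used in the comparisons below (a sketch reusing \texttt{collision\_prob\_uniform}; the grid is an arbitrary choice):
\begin{verbatim}
import numpy as np

def gap_uniform(rho0, c, w):
    """G_w = log(1/P^(1)) / log(1/P^(2)) for target similarity rho0 and factor c."""
    d0 = 2.0 * (1.0 - rho0)               # target squared distance
    rho2 = 1.0 - (c ** 2) * d0 / 2.0      # similarity at distance c*sqrt(d0); needs c <= sqrt(1/(1-rho0))
    p1 = collision_prob_uniform(rho0, w)
    p2 = collision_prob_uniform(rho2, w)
    return np.log(1.0 / p1) / np.log(1.0 / p2)

def best_w(rho0, c, ws=np.arange(0.25, 5.01, 0.25)):
    gaps = [gap_uniform(rho0, c, w) for w in ws]
    return ws[int(np.argmin(gaps))], float(min(gaps))
\end{verbatim}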
\subsection{Theoretical Comparison of the Gaps}
Figure~\ref{fig_GwqOpt} compares $G_w$ with $G_{w,q}$ at their ``optimum'' $w$ values, as functions of $c$, for a wide range of target similarity $\rho_0$ levels. Basically, at each $c$ and $\rho_0$, we choose the $w$ to minimize $G_w$ and the $w$ to minimize $G_{w,q}$. This figure illustrates that $G_w$ is smaller than $G_{w,q}$, noticeably so in the low similarity region.\\
\begin{figure}[h!]
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR01Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR02Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR03Opt.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR04Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR06Opt.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR07Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR08Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR085Opt.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR095Opt.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR099Opt.eps}
}
\vspace{-.15in}
\caption{Comparison of the optimum gaps (smaller the better) for $h_w$ and $h_{w,q}$. For each $\rho_0$ and $c$, we can find the smallest gaps individually for $h_w$ and $h_{w,q}$, over the entire range of $w$. For all target similarity levels $\rho_0$, both $h_{w,q}$ and $h_w$ exhibit better performance than $1/c$. $h_w$ always has smaller gap than $h_{w,q}$, although in high similarity region both perform similarly. }\label{fig_GwqOpt}\vspace{-0.1in}
\end{figure}
Figure~\ref{fig_GwqR09C} and Figure~\ref{fig_GwqR05C} present $G_w$ and $G_{w,q}$ as functions of $w$, for $\rho_0 = 0.9$ and $\rho_0 = 0.5$, respectively. In each figure, we plot the curves for a wide range of $c$ values. These figures illustrate where the optimum $w$ values are obtained. Clearly, in the high similarity region, the smallest $G$ values are obtained at low $w$ values, especially at small $c$. In the low (or moderate) similarity region, the smallest $G$ values are usually attained at relatively large $w$.
\begin{figure}[h!]
\hspace{-.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09C1.05.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C1.1.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C1.2.eps}
}
\hspace{-.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09C1.3.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C1.4.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C1.5.eps}
}
\hspace{-.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09C1.7.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C2.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09C2.5.eps}
}
\vspace{-.15in}
\caption{The gaps $G_w$ and $G_{w,q}$ as functions of $w$, for $\rho_0 = 0.9$. The lowest points on the curves are reflected in Figure~\ref{fig_GwqOpt}.}\label{fig_GwqR09C}\vspace{-0.1in}
\end{figure}
\begin{figure}[h!]
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05C1.05.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05C1.1.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05C1.2.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05C1.3.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05C1.35.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05C1.4.eps}
}
\vspace{-.15in}
\caption{The gaps $G_w$ and $G_{w,q}$ as functions of $w$ for~$\rho_0~=~0.5$}\label{fig_GwqR05C}\vspace{-0.1in}
\end{figure}
\begin{figure}[h!]
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09W0.25.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W0.5.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W0.75.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09W1.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W1.25.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W1.5.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09W1.75.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W2.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W2.5.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR09W3.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W4.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR09W5.eps}
}
\vspace{-.15in}
\caption{The gaps $G_w$ and $G_{w,q}$ as functions of $c$, for $\rho_0 = 0.9$. In each panel, we plot $G_w$ and $G_{w,q}$ for one $w$ value. }\label{fig_GwqR09W}\vspace{-0.1in}
\end{figure}
\begin{figure}[h!]
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05W0.25.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W0.5.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W0.75.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05W1.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W1.25.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W1.5.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05W1.75.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W2.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W2.5.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/GwqR05W3.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W4.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/GwqR05W5.eps}
}
\vspace{-.15in}
\caption{The gaps $G_w$ and $G_{w,q}$ as functions of $c$, for $\rho_0 = 0.5$. In each panel, we plot $G_w$ and $G_{w,q}$ for one $w$ value. }\label{fig_GwqR05W}\vspace{-0.1in}
\end{figure}
\newpage
\begin{figure}[h!]
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/OptGc105.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/OptGc11.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/OptGc13.eps}
}
\hspace{-0.15in}
\mbox{
\includegraphics[width = 1.25in]{fig/OptWc105.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/OptWc11.eps}\hspace{-0.08in}
\includegraphics[width = 1.25in]{fig/OptWc13.eps}
}
\vspace{-0.15in}
\caption{\textbf{Upper panels}: the optimal (smallest) gaps at given $c$ values and the entire range of $\rho$. We can see that $G_{w,q}$ is always larger than $G_w$, confirming that it is better to use $h_w$ instead of $h_{w,q}$. \textbf{Bottom panels}: the optimal values of $w$ at which the optimal gaps are attained. When the target similarity $\rho$ is very high, it is best to use a relatively small $w$. }\label{fig_OptGW1}\vspace{-0.1in}
\end{figure}
\subsection{The Optimal Gaps}
In practice, we normally have to pre-specify the bin width $w$, for all $c$ and $\rho_0$ values. In other words, the ``optimum'' $G$ values presented in Figure~\ref{fig_GwqOpt} are in general not attainable. Thus, Figure~\ref{fig_GwqR09W} and Figure~\ref{fig_GwqR05W} present $G_w$ and $G_{w,q}$ as functions of $c$, for $\rho_0 = 0.9$ and $\rho_0 = 0.5$, respectively. In each figure, we plot the curves for a wide range of $w$ values. These figures again confirm that $G_w$ is smaller than $G_{w,q}$, i.e., the scheme without offset (\ref{eqn_hw}) is better.
To view the optimal gaps more closely, Figure~\ref{fig_OptGW1} plots the best gaps (upper panels) and the optimal $w$ values (bottom panels) at which the best gaps are attained, for selected values of $c$. These plots again confirm the previous comparisons:\vspace{-0.07in}
\begin{itemize}
\item We should always replace $h_{w,q}$ with $h_{w}$. At any $\rho$ and $c$, the optimal gap $G_{w,q}$ is at least as large as the optimal gap $G_{w}$. At relatively low similarities, the optimal $G_{w,q}$ can be substantially larger than the optimal $G_{w}$.\vspace{-0.07in}
\item If we use $h_w$ and target at very high similarity, a reasonable choice of the bin width $w$ might be $w=1\sim 1.5$.\vspace{-0.07in}
\item If we use $h_{w}$ and the target similarity is not too high, then we can safely use $w=2\sim3$.\vspace{-0.07in}
\end{itemize}
We should also mention that, although the optimal $w$ values for $h_w$ appear to exhibit a ``jump'' in the right panels of Figure~\ref{fig_OptGW1}, the choice of $w$ does not influence the performance much, as shown in previous plots. In Figures~\ref{fig_GwqR09C} and~\ref{fig_GwqR05C}, we have seen that even when the optimal $w$ appears to approach ``$\infty$'', the actual gaps are not much different between $w=3$ and $w\gg3$. In the real data evaluations in the next section, we will see the same phenomenon for $h_w$.
Note that the Gaussian density decays very rapidly at the tail, for example, $1-\Phi(3) = 1.3\times10^{-3}$ and $1-\Phi(6) = 9.9\times10^{-10}$. If we choose $w\geq 1.5$, then we practically just need (at most) 2 bits to code each hashed value, that is, we can simply quantize the data according to $(-\infty,\ -w], (-w,\ 0], (0,\ w], [w, \infty)$ (see Figure~\ref{fig_16region}).
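As a concrete illustration (our own snippet, not code from any library), such a 2-bit coding of the projected values can be written in a few lines:
\begin{verbatim}
import numpy as np

def code_2bit(projections, w):
    # regions (-inf, -w], (-w, 0], (0, w], [w, inf) coded as 0, 1, 2, 3
    return np.digitize(projections, bins=[-w, 0.0, w], right=True)

# example with w = 0.75
print(code_2bit(np.array([-2.1, -0.3, 0.4, 1.9]), 0.75))   # [0 1 2 3]
\end{verbatim}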
\vspace{-0.1in}
\section{Re-Ranking for LSH}
In the process of using hash tables for sublinear time near neighbor search, there is an important step called ``re-ranking''. With a good LSH scheme, the fraction of retrieved data points could be relatively low (e.g., $1\%$). But the absolute number of retrieved points can still be very large (e.g., $1\%$ of a billion points is still large). It is thus crucial to have a re-ranking mechanism, for which one will have to either compute or estimate the actual similarities.
When the original data are massive and high-dimensional, i.e., a data matrix in $\mathbb{R}^{n\times D}$ with both $n$ and $D$ being large, it can be challenging to evaluate the similarities. For example, it is often not possible to load the entire dataset in the memory. In general, we cannot store all pairwise similarities, because the required $O(n^2)$ space is impractical even for merely $n=10^6$. In addition, the query might be a new data point so that we will have to compute the similarities on the fly anyway. If the data are high-dimensional, computing the exact similarities itself can be too time-consuming.
A feasible solution is to estimate the similarities on the fly for re-ranking, from a small projected data stored in the memory. This has motivated us to develop \textbf{nonlinear estimators} for a 2-bit coding scheme, by exploiting full information of the bits.
\vspace{0.08in}
There are other applications of nonlinear estimators too. For example, we can use random projections and nonlinear estimators for computing nonlinear kernels for SVM. Another example is to find nearest neighbors by random projections (to reduce the dimensionality and data size) and brute-force linear scan of the projected data, which is simple to implement and easy to run in parallel.
\vspace{0.07in}
\noindent\textbf{Two-stage coding}. \ \ Note that the coding scheme for building hash tables should be separate from the coding scheme for developing accurate estimators. Once we have projected the data and place the points into the buckets using a designated coding scheme, we can actually discard the codes. In other words, we can code the same projected data twice. In the second time, we store the codes of (a fraction of) the projected data for the task of similarity estimation.
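To make the workflow explicit, the sketch below shows the re-ranking step given the codes stored in memory. Since the 2-bit MLE developed in this paper is not restated here, the sketch falls back on the classical 1-bit estimator $\hat{\rho}=\cos\big(\pi(1-\hat{p})\big)$, where $\hat{p}$ is the fraction of matching signs; in practice one would substitute the 2-bit MLE (e.g., via a precomputed table). All names are ours and the code is purely illustrative.
\begin{verbatim}
import numpy as np

def estimate_rho_1bit(query_code, stored_codes):
    # fraction of matching signs p_hat  ->  rho_hat = cos(pi * (1 - p_hat))
    p_hat = np.mean(stored_codes == query_code, axis=-1)
    return np.cos(np.pi * (1.0 - p_hat))

def rerank(candidate_ids, stored_codes, query_code, top_m):
    # rank the candidates retrieved from the hash tables by estimated similarity
    ids = np.asarray(list(candidate_ids))
    rho_hat = estimate_rho_1bit(query_code, stored_codes[ids])
    return ids[np.argsort(-rho_hat)][:top_m]

# stored_codes: (n, k) array of coded projections kept in memory;
# candidate_ids: union of bucket members returned by the hash tables.
\end{verbatim}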
\vspace{-0.1in}
\section{Re-Ranking Experiments for LSH}
We conduct a set of experiments on LSH and re-ranking to demonstrate the advantage of the proposed nonlinear estimator for the 2-bit coding scheme. Again, we adopt the standard $(K,L)$-LSH scheme~\cite{Proc:Indyk_STOC98}. That is, we concatenate $K$ (independent) hash functions to build each hash table and we independently build $L$ such hash tables. Note that here we use the capital letter $K$ to differentiate it from $k$, which we use for sample size (or number of projections) in the context of similarity estimation.
\vspace{0.07in}
We have shown that, for building hash tables, it is good to use uniform quantization with bin width (e.g.,) $w_1=1.5$ if the target similarity is high and $w_1\geq3$ if the target similarity is not so high. Here we use $w_1$ to indicate that it is the bin width for building hash tables. For simplicity, we fix $w_1=1.5$ (for table building) and $w=0.75$ (for similarity estimation). We choose $K=10$ and $L\in\{50, 100\}$. The results (especially the trends) we try to present are not too sensitive to those parameters $K$ and $L$.
Once we have built the hash tables, we need to store a fraction of the coded projected data. To save space, we should store $k\ll K\times L$ projections. Here we choose $k=100$ and $k=200$, which appear to be sufficient to provide accurate estimates of the similarity for re-ranking of retrieved data points.
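For completeness, a minimal sketch of the standard $(K,L)$ table construction and bucket lookup, using the offset-free uniform quantization with bin width $w_1$, is given below; the function names are ours and the code is only meant to illustrate the procedure.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def build_tables(X, K=10, L=50, w1=1.5, seed=0):
    # standard (K, L)-LSH: L tables, each keyed by K concatenated hash values
    rng = np.random.default_rng(seed)
    n, D = X.shape
    A = rng.standard_normal((L, K, D))                 # projection directions
    tables = []
    for l in range(L):
        keys = np.floor(X @ A[l].T / w1).astype(int)   # offset-free quantization
        table = defaultdict(list)
        for i in range(n):
            table[tuple(keys[i])].append(i)
        tables.append(table)
    return A, tables

def query_candidates(x, A, tables, w1=1.5):
    # union of the L buckets that the query falls into
    candidates = set()
    for l, table in enumerate(tables):
        key = tuple(np.floor(A[l] @ x / w1).astype(int))
        candidates.update(table.get(key, []))
    return candidates
\end{verbatim}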
\vspace{0.07in}
We target the top-$T$ nearest neighbors, for $T\in\{10, 20, 50, 100\}$. We re-rank the retrieved points according to estimated similarities based on 3 different estimators: (i) the MLE (nonlinear) for 2-bit coding as studied in this paper; (ii) the 2-bit linear estimator; (iii) the 1-bit estimator. We present the results in terms of precision-recall curves (higher is better) for retrieving the top-$T$ points. That is, we first rank all retrieved points according to estimated similarities. Then for a particular $T$, we examine the top-$m$ entries of the list to compute one (precision, recall) tuple. By varying $m$, we obtain a precision-recall curve for each $T$, averaged over all query points.
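In code, each precision-recall curve is obtained roughly as follows (an illustrative helper of ours, with the averaging over query points omitted):
\begin{verbatim}
def precision_recall_curve(ranked_ids, true_top_T):
    # one (precision, recall) point per prefix length m of the ranked list
    true_set, hits, curve = set(true_top_T), 0, []
    for m, idx in enumerate(ranked_ids, start=1):
        hits += idx in true_set
        curve.append((hits / m, hits / len(true_set)))
    return curve
\end{verbatim}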
As shown in Figure~\ref{fig_YoutubeK10}, Figure~\ref{fig_PeekaboomK10}, and Figure~\ref{fig_LabelMeK10}, in all our experiments, we see that the 2-bit MLE substantially improves the 2-bit linear estimator, which substantially improves the 1-bit estimator.
\begin{figure*}[t]
\begin{minipage}[c][\textheight]{\textwidth}
\begin{center}
\mbox
{
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top10k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top20k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top50k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top100k100.eps}
}
\mbox{
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top10k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top20k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top50k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L50Top100k200.eps}
}
\mbox
{
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top10k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top20k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top50k100.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top100k100.eps}
}
\mbox{
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top10k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top20k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top50k200.eps}
\includegraphics[width = 1.7in]{fig/YoutubeReRankK10L100Top100k200.eps}
}
\end{center}
\caption{\textbf{Youtube}: precision-recall curves (higher is better) for retrieving the top-10, -20, -50, -100 nearest neighbors using standard $(K,L)$-LSH scheme and 3 different estimators of similarities (for the retrieved data points). The Youtube dataset is a subset from the publicly available UCL-Youtube-Vision dataset. We use 97,934 data points for building hash tables and 5,000 data points for the query. The results are averaged over all the query points. In the LSH experiments, we fix $K=10$ and $L=50$ (upper two layers) and $L=100$ (bottom two layers). We estimate the similarities using two different sample sizes, for $k=100$ and $k=200$. We can see that for any combinations of parameters, the nonlinear MLE (labeled as ``MLE'') always substantially improves the 2-bit linear estimator (labeled as ``2-bit''), which substantially improves the 1-bit estimator (labeled as ``1-bit''). }\label{fig_YoutubeK10}
\end{minipage}
\end{figure*}
\clearpage\newpage
\begin{figure*}[h!]
\begin{center}
\mbox
{
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T10k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T20k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T50k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T100k100.eps}
}
\vspace{-0.025in}
\mbox{
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T10k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T20k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T50k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L50T100k200.eps}
}
\vspace{-0.025in}
\mbox
{
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T10k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T20k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T50k100.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T100k100.eps}
}
\vspace{-0.025in}
\mbox{
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T10k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T20k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T50k200.eps}
\includegraphics[width = 1.4in]{fig/PeekaboomReRankK10L100T100k200.eps}
}
\end{center}
\vspace{-0.25in}
\caption{\textbf{Peekaboom}: precision-recall curves (higher is better) for retrieving the top-10, -20, -50, -100 nearest neighbors using standard $(K,L)$-LSH scheme and 3 different estimators of similarities (for the retrieved data points). Peekaboom is a standard image retrieval dataset with 20,019 data points for building the tables and 2,000 data points for the query. }\label{fig_PeekaboomK10}\vspace{-0.14in}
\end{figure*}
\begin{figure*}[h!]
\begin{center}
\mbox
{
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T10k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T20k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T50k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T100k100.eps}
}
\vspace{-0.025in}
\mbox{
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T10k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T20k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T50k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L50T100k200.eps}
}
\vspace{-0.025in}
\mbox
{
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T10k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T20k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T50k100.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T100k100.eps}
}
\vspace{-0.025in}
\mbox{
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T10k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T20k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T50k200.eps}
\includegraphics[width = 1.4in]{fig/LabelMeReRankK10L100T100k200.eps}
}
\end{center}
\vspace{-0.25in}
\caption{\textbf{LabelMe}: precision-recall curves (higher is better) for retrieving the top-10, -20, -50, -100 nearest neighbors using standard $(K,L)$-LSH scheme and 3 different estimators of similarities (for the retrieved data points). LabelMe is a standard image retrieval dataset with 55,599 data points for building the tables and 1,998 data points for the query.
}\label{fig_LabelMeK10}\vspace{-0.15in}
\end{figure*}
\newpage\clearpage
\section{Conclusion}
The method of random projections is a standard tool for many data processing applications which involve massive, high-dimensional datasets (which are common in Web search and data mining). In the context of approximate near neighbor search by building hash tables, it is mandatory to quantize (code) the projected data into integers. Prior to this work, there were two popular coding schemes: (i) an ``infinite-bit'' scheme~\cite{Proc:Datar_SCG04} by using uniform quantization with a random offset; and (ii) a ``1-bit'' scheme~\cite{Article:Goemans_JACM95,Proc:Charikar} by using the signs of the projected data. This paper bridges these two strategies.\\
In this paper, we show that, for the purpose of building hash tables in the framework of LSH, using uniform quantization without the offset leads to improvement over the prior work~\cite{Proc:Datar_SCG04}. Our method only needs a small number of bits for coding each hashed value. Roughly speaking, when the target similarity is high (which is often interesting in practice), it is better to use 2 or 3 bits. But if the target similarity is not so high, 1 or 2 bits often suffice. Overall, we recommend the use of a 2-bit scheme for LSH. Not surprisingly, as an additional benefit, using 2-bit scheme typically halves the preprocessing cost compared to using the 1-bit scheme.\\
For approximate near neighbor search, an important (and sometimes less well-discussed) step is the ``re-ranking'', which is needed in order to identify the truly similar data points among the large number of candidates retrieved from hash tables. This re-ranking step requires a good estimator of the similarity, because storing all pre-computed pairwise similarities is normally not feasible and computing the exact similarities on the fly can be time-consuming especially for high-dimensional data. In this paper, we propose the use of nonlinear estimators and we analyze the 2-bit case in detail. Although the analysis appears sophisticated, the estimation procedure is computationally feasible and simple, for example, by tabulations. Compared to the standard 1-bit and 2-bit linear estimators, the proposed nonlinear estimator significantly improves the accuracy, both theoretically and empirically.\\
In summary, our paper advances the state-of-the-art of random projections in the context of approximate near neighbor search.
\vspace{-0.1in}
\bibliographystyle{plain}
Direct-imaging of Earth-like planets is a quite challenging but
indispensable technique to revolutionize our understanding of planets
in the near future. The amplitude modulation of a photometric
lightcurve from a {\it color-changing dot} is sensitive to its surface
pattern, and thus would reveal the presence of lands, oceans, clouds
and even vegetation on the surface of the planets
\citep[e.g.,][]{1993Natur.365..715S, 2001Natur.412..885F,
2009ApJ...700..915C, 2009ApJ...700.1428O, 2010ApJ...715..866F,
2011ApJ...738..184F, 2019asbi.book..441S,Rushby2019}. Indeed,
continuous monitoring of oblique planets over their orbital periods
may even enable one to reconstruct their two-dimensional surface map
\citep{2010ApJ...720.1333K,2011ApJ...739L..62K,
2012ApJ...755..101F,2018AJ....156..146F}. The feasibility of the
mapping has recently been tested using continuous Earth observations
by Deep Space Climate Observatory orbiting at an altitude of 150 km
\citep{2018AJ....156...26J,2019ApJ...882L...1F,aizawa2020}.
In addition, the lightcurve carries complementary information for the
planet as well. The auto-correlation analysis of the photometric
variation roughly provides us the rotation period of the planet
\citep{2008ApJ...676.1319P}. The obliquity can also be inferred from a
simultaneous fitting of the spin vector and planet surface
\citep[e.g.][]{2010ApJ...720.1333K,2016MNRAS.457..926S,2018AJ....156..146F}.
Such dynamical parameters of the planet are of interest for a general
circulation modeling of Earth-like planets
\citep[e.g.][]{2015ApJ...804...60K,2018AJ....155..266D,2019ApJ...883...46K}.
Strictly speaking, an apparent photometric period observed by a
distant observer is not necessarily identical to the true spin
rotation period due to the planetary orbital motion. This is related
to the reason why a sidereal day of our Earth $P_{\rm spin}$ is
approximately $365.24/366.24 \times 24 \approx 23.934$ hours, which
corresponds to the true spin frequency $f_{\rm spin} \approx 1.00274$
[day$^{-1}$], instead of the $f_{\rm spin, heliocentric} = 1$
[day$^{-1}$]. The difference between the observed and true spin
rotation frequencies, $f_{\rm obs}$ and $f_{\rm spin}$, is
time-dependent, and sensitive to the geometrical configuration of the
system including the planetary obliquity, $\zeta$, the inclination of
the planetary orbital plane for the observer, $i$, and the observer's
direction (the orbital phase angle $\Theta_{\rm eq}$ measured from the
ascending node, for instance).
Thus the corresponding frequency modulation of the periodicity in the
lightcurve may reveal those parameters, through the presence of
the large-scale inhomogeneity of the surface. We emphasize that the
frequency modulation signal is much less sensitive to the specific
distribution pattern of the surface than the amplitude
modulation. \citet[][hereafter K16]{2016ApJ...822..112K} proposed
a novel idea to measure the planetary obliquity from the frequency
modulation, and demonstrated its feasibility successfully using a
static cloud-subtracted Earth model.
\begin{figure}[htbp]
\centering \includegraphics[width=10cm]{fig1.pdf}
\caption{A schematic illustration for the periodicity in the
photometric lightcurve of a planet observed from the direction of
$i=0^\circ$ relative to the normal vector of the orbital plane.
Panels a and b indicate prograde ($\zeta=0^\circ$) and
retrograde ($\zeta=180^\circ$) planets, respectively, in a
{\it geocentric} frame. The host star moves around the planet
from S to S' after one heliocentric day of the planet. The
illuminated and visible part of the planet from a face-on
observer (shaded region) changes accordingly, and the reflective
point at each epoch also moves from R to R'.}
\label{fig:fig1}
\end{figure}
The basic principle of frequency modulation can be understood from
Figure \ref{fig:fig1}. For a perfectly prograde planet
($\zeta=0^\circ$), the illuminated and visible part of the planet
viewed from a face-on observer ($i=0^\circ$) moves along the same
direction of the planetary spin (Panel a). The reflective point, at
which the reflected flux of the star is maximal on the planetary
surface, moves accordingly, and thus it takes slightly more than one
spin rotation period $P_{\rm spin}$ for the observer to see exactly
the same part of the planet. Therefore the observed photometric
variation frequency becomes $f_{\rm obs}=f_{\rm spin}-f_{\rm orb}$.
Applying the same argument, one can easily understand that $f_{\rm
obs}=f_{\rm spin}+f_{\rm orb}$ for a perfectly retrograde planet
($\zeta=180^\circ$) as illustrated in Panel b of Figure
\ref{fig:fig1}.
\begin{figure}[htbp]
\centering \includegraphics[width=10cm]{fig2.pdf}
\caption{Same as Figure \ref{fig:fig1} but for
$\zeta$ = 90$^\circ$ and $i=0^\circ$. } \label{fig:fig2}
\end{figure}
In general, the photometric variation frequency $f_{\rm obs}$ is not
constant and varies according to the mutual geometry between the star
and the planet, leading to a frequency modulation of the photometric
lightcurve of the planet. Figure \ref{fig:fig2} illustrates an
example of the time-dependent frequency modulation for a
$\zeta=90^\circ$ planet viewed from a distant observer at $i=0^\circ$.
In this case, the motion of the reflective point on the planetary
surface changes the direction relative to the planetary spin axis in a
time-dependent fashion, resulting in the frequency modulation of the
observed period.
When the star is located in S1 (and also in S3), the reflective point
on the planetary surface moves along a constant longitude, and the
planet exhibits a nearly identical illuminated and visible part of its
surface after one spin rotation period. This implies that $f_{\rm obs}
\approx f_{\rm spin}$. In contrast, when the star is located at
S2 (S4), the reflective point after one spin rotation period moves
slightly westward (eastward), leading to $f_{\rm obs} \approx f_{\rm
spin}+f_{\rm orb}$ ($f_{\rm obs} \approx f_{\rm spin}-f_{\rm orb}$).
While the above frequency modulation is basically determined by the
geometrical configuration of the system characterized by $\zeta$, $i$,
and $\Theta_{\rm eq}$ as mentioned above (see also Figure
\ref{fig:fig3} below), the most important uncertain factor in modeling
the lightcurve is the time-dependent cloud pattern. A planet
completely covered by the thick homogeneous clouds, for instance, does
not exhibit any photometric variation, and thus one cannot probe the
surface information at all. In the case of our Earth, approximately
50-60 percent of the surface is covered by clouds on average. Thus, it
is not clear to what extent the interpretation from the frequency
modulation of the lightcurve is affected or even biased by the
properties and time-dependent distribution pattern of clouds.
Since the planetary obliquity is supposed to sensitively change the
cloud pattern among others, the feasibility study of the obliquity
measurement from the frequency modulation requires a self-consistent
modeling of clouds over the entire surface of a planet. This is why
we perform the GCM (General Circulation Model)\footnote{``General
Climate Model'' is also referred to as GCM. The two terms are
often used interchangeably, but sometimes ``General Circulation
Model'' more specifically refers to a part of the modules in
``General Climate Model''. In this sense, our model may be
referred to as ``an atmospheric General Circulation Model'', but
we do not distinguish between them in the present paper.}
simulation and analyze the simulated lightcurves for different
planetary obliquities.
The rest of the paper is organized as follows. Section
\ref{sec:method} describes the basic model of the frequency modulation
in the lightcurve, the GCM simulation of the Earth with different
obliquities, and radiation transfer to simulate lightcurves. Section
\ref{sec:result} shows the analysis method of the frequency modulation
and the result of the frequency modulation signal extracted from
simulated lightcurves. Finally section \ref{sec:conclusion} is
devoted to the summary and conclusion of the present paper.
\section{Computational Methods \label{sec:method}}
\subsection{Basic strategy to estimate the planetary
obliquity from photometric variation \label{subsec:strategy}}
For simplicity, we consider a star-planet system in a circular orbit,
which is schematically illustrated in Figure \ref{fig:fig3}. In
order to compute the photometric variation of the {\it planet}, it is
convenient to define a {\it geocentric} frame in which the planet is
located at the origin. The stellar orbit defines the $xy$-plane, and
the star orbits around the $z$-axis in a counter-clockwise manner. The
unit vector of the planetary spin is on the $yz$-plane, and expressed as
$(0, \sin\zeta, \cos\zeta)$ in terms of the planetary obliquity
$\zeta$. Thus the direction of the $x$-axis corresponds to that of the
vernal equinox.
The unit vector toward a distant observer is given by
$(\cos\Theta_{\rm eq} \sin i, -\sin\Theta_{\rm eq} \sin i, \cos i)$,
where $i$ is the inclination, and $\Theta_{\rm eq}$ is the phase
angle measured clockwise from the $x$-axis ({\it i.e., the vernal
equinox}).
In this frame, the location of the star on the orbit is specified by
its phase angle $\Theta(t)$ measured from the observer's projected
direction. Since we consider a circular orbit below, $\Theta(t)=2\pi f_{\rm
orb}t$ (mod $2\pi$).
\begin{figure}[htbp]
\centering \centering\includegraphics[width=12cm]{fig3.pdf}
\caption{A schematic
configuration of the system in a {\it geocentric} frame.
The directions of the observer
and the planetary spin vector do not vary in time, while the
direction of the star is time-dependent.} \label{fig:fig3}
\end{figure}
K16 computed the frequency modulation based on a {\it maximum-weighted longitude approximation}, and derived the following formula
for $f_{\rm obs}$ in the case of a circular orbit:
\begin{equation}
\label{eq:fmodel}
f_{\rm model}=f_{\rm spin}+\epsilon_\zeta(\Theta)f_{\rm orb},
\end{equation}
where $\epsilon_\zeta(\Theta)$ is the modulation factor
\footnote{Equation \ref{eq:emodel} is the correct version of equation
(13) in K16, which contains a couple of typos in signs.},
\begin{eqnarray}
\label{eq:emodel}
\epsilon_\zeta(\Theta)
= \frac{-\cos\zeta\left[1+\cos\Theta\sin i\right]+\sin\zeta\cos i\sin(\Theta-\Theta_{\rm eq})}
{\left[\cos(\Theta-\Theta_{\rm eq})+\sin i\cos\Theta_{\rm eq}\right]^2
+\left[\cos\zeta\sin(\Theta-\Theta_{\rm eq})
-\cos\zeta\sin i\sin\Theta_{\rm eq}-\sin\zeta\cos i\right]^2}.
\end{eqnarray}
We apply the maximum-weighted longitude approximation and derive a
general formula for non-circular orbits in Appendix \ref{sec:epsilon}.
In the present analysis, however, we focus on a circular orbit, and
adopt equation (\ref{eq:emodel}) for the frequency
modulation template.
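For reference, equations (\ref{eq:fmodel}) and (\ref{eq:emodel}) translate directly into a short script (angles in radians; the function names are ours and the snippet is only illustrative):
\begin{verbatim}
import numpy as np

def epsilon_zeta(Theta, zeta, i, Theta_eq):
    # modulation factor epsilon_zeta(Theta) as defined in the text
    num = (-np.cos(zeta) * (1.0 + np.cos(Theta) * np.sin(i))
           + np.sin(zeta) * np.cos(i) * np.sin(Theta - Theta_eq))
    den = ((np.cos(Theta - Theta_eq) + np.sin(i) * np.cos(Theta_eq))**2
           + (np.cos(zeta) * np.sin(Theta - Theta_eq)
              - np.cos(zeta) * np.sin(i) * np.sin(Theta_eq)
              - np.sin(zeta) * np.cos(i))**2)
    return num / den

def f_model(t, f_spin, f_orb, zeta, i, Theta_eq):
    # instantaneous frequency f_model = f_spin + epsilon * f_orb (circular orbit)
    Theta = (2.0 * np.pi * f_orb * t) % (2.0 * np.pi)
    return f_spin + epsilon_zeta(Theta, zeta, i, Theta_eq) * f_orb

# sanity check: a prograde planet (zeta = 0) seen face-on (i = 0) gives
# f_model = f_spin - f_orb, as expected for the prograde face-on case above.
\end{verbatim}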
Following K16, we use the pseudo-Wigner distribution to estimate the
frequency modulation of the photometric variation of a given
lightcurve. The pseudo-Wigner distribution is the Fourier
transform of the auto-correlation of the data, emphasizing the
periodicity near the time of interest, and reducing the cross terms
and noises. Further detail will be described in Section \ref{sec:result}.
\subsection{GCM simulation of the Earth with different obliquities
\label{subsec:GCM}}
We would like to emphasize that the main purpose of the present paper
is to examine the feasibility of the planetary obliquity measurement
through the frequency modulation of the lightcurve. The cloud covering
pattern and fraction are important factors that would degrade the
measurement. On the other hand, the precise modeling of the climate is
not supposed to be essential for the feasibility. Therefore various
assumptions and limitations of our current GCM simulation described
below need to be clarified and understood, but do not change the main
conclusion of the present paper.
We use the GCM code {\tt DCPAM5} (the Dennou-Club Planetary
Atmospheric Model), which has been developed by GFD-Dennou
Club\footnote{\tt http://www.gfd-dennou.org/, and
http://www.gfd-dennou.org/library/dcpam/. } for planetary
climate modeling. {\tt DCPAM5} has been developed with the aim of
being able to calculate the atmospheric conditions of various
terrestrial planets, using general formulae as much as possible, by
excluding properties and modules specific to the Earth
\citep[e.g.,][]{2017Icarus..282..1N}. {\tt DCPAM5} employs the
primitive equation system assuming that the vertical component of
the equation of motion is hydrostatic.
\subsubsection{Setup and Sub-grid physical processes}
\label{GCMsetup}
We set the computational grid to $32\times64\times26$ points in the
latitudinal, longitudinal, and vertical directions, respectively.
We carry out calculations in the region up to about 6 mbar, which
includes the whole troposphere and a part of the stratosphere. The
vertical extent of the model domain is enough for our study to express
the generation and motion of clouds because clouds are generated and
advected in the troposphere. Our simulation resolves the typical
Hadley cell with $\sim 5\times 10$ grid points, and thus reproduces the
global meridional circulation observed on the Earth reasonably well.
We use some parameterized physical processes. In the shortwave
(visible and near infrared, corresponding to the range of incident
stellar flux) radiation process, we take account of absorption by
H$_2$O and CO$_2$, absorption and scattering by clouds, and the
Rayleigh scattering. In the longwave (mid and far infrared,
corresponding to the range of planetary thermal emission) radiation
process, we take account of absorption by H$_2$O, CO$_2$ molecules and
clouds. The level-2.5 closure scheme of \citet{1982RvGSP..20..851M}
is used for turbulent diffusion. The methods of
\citet{1991JApMe..30..327B} and \citet{1995QJRMS.121..255B} are used
for surface flux calculation. Moist convection is parameterized by
the Relaxed Arakawa-Schubert scheme described in
\citet{1992MWRv..120..978M}. Large scale condensation (non-convective
condensation) is parameterized by the scheme of
\cite{1991ClDy....5..175L}. The amount of cloud water is calculated
by integrating a time dependent equation including condensation,
evaporation, advection, turbulent diffusion, and sedimentation of
cloud water. Extinction rate of cloud water is assumed to be
proportional to the amount of cloud water, and extinction time is
given as an external parameter. The bucket model of
\cite{1969MWRv...97..739M} is used for soil moisture calculation. We
use a slab ocean model and set its depth to 60 m, following
\cite{doi:10.1002/2014JD022659}.
Our simulation is intended to produce a simulated lightcurve for
an Earth-twin but with different obliquity $\zeta$. Thus we
basically adopt the known parameters of the Earth, except for its
obliquity. For simplicity, we set the orbital eccentricity
and the orbital period to be
$e= 0$ and $P_{\rm orb}=365.0$ day.
We solve surface temperature and sea ice concentration directly from
our simulation, instead of adopting the observed value for the Earth
with $\zeta$ = 23.44$^\circ$, since those values change with the
different values of $\zeta$. We use observational data of
surface geological properties,
neglecting that the change of climate also affects those parameters.
Surface albedo is calculated at each grid point according to the surface
geological properties, land moisture, and temperature
\footnote{The model codes and related data for the GCM
experiments are available at
{\tt http://www.gfd-dennou.org/library/dcpam/sample/}}.
Because our GCM does not include the microphysics of cloud
formation, cloud parameters are fixed to those for the Earth;
effective radii of water and ice cloud particles are set to be
10$\mu$m and 50$\mu$m, respectively. Lifetimes of water and ice
clouds are chosen to be 3240 seconds and 8400 seconds,
respectively.
\subsubsection{Initial Conditions \label{GCMinitial}}
In the present paper, we consider six different values for the
obliquity; $\zeta=0^\circ$, $30^\circ$, $60^\circ$, $90^\circ$,
$150^\circ$, and $180^\circ$. The simulation runs for $\zeta=0^\circ$,
$150^\circ$, and $180^\circ$ start from the isothermal atmosphere of
temperature $T_{\rm init}=280$K and surface pressure $p_{\rm
s}=10^5$Pa with initially vanishing specific humidity and wind
speed. Then we evolve those three models for 20 years so that they
reach equilibrium. We call this process the relaxation run.
For $\zeta=30^\circ$, $60^\circ$, and $90^\circ$, we first run the
case of $\zeta=15^\circ$ with exactly the same initial conditions
mentioned above for 20 years as the relaxation run. We adopt the
final result of the relaxation run with $\zeta$ as the initial
condition for the next model with $\zeta+\Delta\zeta$. We set
$\Delta\zeta=5^\circ$. The system with $\zeta+\Delta\zeta$ becomes
almost in equilibrium after 10 years since the final epoch of the
relaxation run for $\zeta$; the annual mean of the total atmospheric
energy is constant within the level of 0.1\% for the last 5 years.
Thus we stop the relaxation run in 10 years. We repeat the process up
to $\zeta=90^\circ$. The incremental procedure is mainly to save the
computation time. In the retrograde runs for $\zeta=150^\circ$ and
$180^\circ$, we skip the incremental procedure, and made sure that the
results reach the equilibrium state after the 20 years relaxation run
directly from the isothermal atmosphere.
The simulated data that we analyze below is computed for an additional
one year after each relaxation run. We output the physical parameters
every 3 hours for the entire period, which is the required time
resolution for detecting the rotation frequency (corresponding to 24
hours) and its modulation from the simulated lightcurve.
\subsubsection{Climate of earths with different obliquities}\label{GCMresult}
Figure \ref{fig:fig4} shows the annual mean cloud
column density distribution of planets with different obliquities.
Results for $\zeta\leq$ 30$^\circ$ show the cloud belts on the equator and
mid-latitudes. The clouds around the equator are generated by the
Hadley circulation. This circulation also produces subtropical
highs, which are shown as the partially cloudless continents around
the latitude $\lambda=20-30^\circ$.
The cloud patterns for $\zeta=150^\circ$ and $180^\circ$ are very
similar to those for $\zeta=30^\circ$ and $0^\circ$,
respectively. This is due to the symmetry with respect to the stellar
location for the cases of $\zeta$ and $180^\circ-\zeta$. The results
for $\zeta=60^\circ$ and $90^\circ$ have the different cloud patterns
due to their atmospheric circulation from the day-side pole to the
equator. The present result is roughly consistent with that shown in
\citet{2003IJAsB...2....1W}, but quantitative comparison is beyond the
scope of this paper. As we mentioned earlier, however, the precise
modeling of the climate is not the focus of this work. We plan to
make further comparison elsewhere.
\begin{figure*}
\centering \centering\includegraphics[width=14cm]{fig4.pdf}
\caption{Annual mean cloud column density $[{\rm g/m}^2]$
of GCM experiments with six individual obliquities $\zeta$.
\label{fig:fig4}}
\end{figure*}
\subsection{Simulated lightcurves \label{subsec:lightcurve}}
\subsubsection{Scattering model and
radiative transfer through the planetary atmosphere}
The total flux of the scattered light from the planet $F(\lambda)$ at
wavelength $\lambda$ is computed by integrating the intensities $I$
over the illuminated({\rm I}) and visible({\rm V}) region of the
planetary surface:
\begin{eqnarray}
\label{eq:RadFlux}
F(\lambda) = \int_{\rm I\cap V}
I(\vartheta_0, \vartheta_1, \varphi; \lambda) \cos\vartheta_1 dS
\frac{1}{D_{\rm obs}^2},
\end{eqnarray}
where $\cos\vartheta_1 dS$ is the projected area element of the
planetary surface viewed by the observer located at a distance of
$D_{\rm obs}$.
The location of each planetary surface area element is specified
by the three angles ($\vartheta_0$, $\vartheta_1$ and $\varphi$) as
illustrated in Figure \ref{fig:fig5}.
Then the intensity $I$ from the planetary surface area element
is given by
\begin{equation}
\label{eq:intensity}
I(\vartheta_0, \vartheta_1, \varphi; \lambda)
= F_{*,p}(\lambda) \cos\vartheta_0 ~f(\vartheta_0, \vartheta_1, \varphi; \lambda),
\end{equation}
where $F_{*,p}$ is the incident flux and $f$ is the BRDF (bi-directional
reflectance distribution function) that characterizes the scattering
properties of the planetary surface.
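Before turning to the full treatment of $f$, the structure of equations (\ref{eq:RadFlux}) and (\ref{eq:intensity}) may be illustrated by a minimal numerical sketch in which the BRDF is replaced by a Lambertian surface of uniform albedo, $f=A/\pi$; this simplification and the function names are ours, purely for illustration, and are not the scattering model adopted below.
\begin{verbatim}
import numpy as np

def lambert_flux(s_hat, o_hat, albedo=0.3, R=1.0, F_inc=1.0, D_obs=1.0,
                 n_lat=90, n_lon=180):
    # integrate over the illuminated-and-visible surface with f = albedo / pi
    lat = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
    lon = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
    dlat, dlon = lat[1] - lat[0], lon[1] - lon[0]
    LAT, LON = np.meshgrid(lat, lon, indexing="ij")
    n = np.stack([np.cos(LAT) * np.cos(LON),        # outward unit normals
                  np.cos(LAT) * np.sin(LON),
                  np.sin(LAT)], axis=-1)
    cos0 = n @ s_hat                                # incidence (toward the star)
    cos1 = n @ o_hat                                # emission (toward the observer)
    mask = (cos0 > 0) & (cos1 > 0)                  # illuminated AND visible
    dS = R**2 * np.cos(LAT) * dlat * dlon           # surface area element
    I = F_inc * cos0 * (albedo / np.pi)             # Lambertian intensity
    return np.sum((I * cos1 * dS)[mask]) / D_obs**2
\end{verbatim}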
\begin{figure}[htbp]
\centering\includegraphics[width=12cm]{fig5.pdf}
\caption{A schematic configuration of the scattering geometry.
\label{fig:fig5}}
\end{figure}
Because $f$ includes the entire radiative effects of atmosphere,
clouds and solid/liquid planetary surface, we need to perform a
numerical radiative transfer calculation through the planetary
atmosphere. For that purpose, we compute $f$ using a public code {\tt
  libRadtran} \citep{gmd-9-1647-2016,acp-5-1855-2005}, which solves
the radiative transfer based on various detailed models of optical
properties of Earth's atmosphere, clouds, aerosols, lands, and ocean
\footnote{We use the {\tt libRadtran} version 2.0.1.
URL:{\tt http://www.libradtran.org/doku.php}}.
The {\tt libRadtran} provides several different options for specific
models. We choose the following options.
\begin{enumerate}
\item We choose REPTRAN \citep{2014JQSRT.148...99G} for optical
properties of the planetary atmosphere.
\item We compute optical properties of clouds according to
\citet{1993JCli....6..728H}. We adopt 10 $\mu$m for the
effective radius of water cloud particles, as assumed in our GCM
simulation.
\item We select the Ross-Li BRDF model \citep{JGRD:JGRD3769} for
land scattering. We adopt three Ross-Li parameters that are
      required as inputs to {\tt libRadtran}, taking them from a remote sensing
project of the Earth, MODIS \citep[MODerate resolution Imaging
Spectroradiometer;][]{1989ITGRS..27..145S}. More
specifically, we choose their data set ``snow-free gap-filled
MODIS BRDF Model Parameters''. In doing so, we employ the data
in March, neglecting the annual variation. Also, we sample the
three parameters at the center of each grid on the
planetary surface ($32\times64$), instead of averaging over the
entire grid. We adopt the above approximation just for
simplicity.
\item Since the above particular data set does not have sufficient
information for Antarctica, we assume the Lambert scattering and
employ the ice albedos of (0.948, 0.921, 0.891, 0.837, 0.562,
0.233), corresponding to the six MODIS bands from 1 to 6
      described below. These values of the ice albedo are taken from the
      data ``snow-free gap-filled MODIS BRDF Model Parameters'' at
      (N69$^\circ$.20, W39$^\circ$.35). This approximation is not
      serious because the ice albedo does not vary much from area to
      area.
\item We select the ocean reflection BRDF model of
\citet{1983JQSRT..29..521N} that is implemented in {\tt
libRadtran}. We choose 4 m s$^{-1}$ for the wind speed at 10 m
above the ocean. Further detail can be found in
\citet{2010ApJ...715..866F}.
\item Finally, we solve the radiative transfer equation through
the atmosphere under a plane parallel approximation. We choose
DISORT \citep[DIScrete-Ordinate-method Radiative Transfer
model;][]{1988ApOpt..27.2502S}.
\end{enumerate}
We use the GCM output of water cloud density, ice cloud density,
temperature, air density, and vapor mixing ratio as the input vertical
profiles of atmosphere and clouds for {\tt libRadtran}. While our GCM
simulations distinguish between ice cloud and water cloud, we regard
the ice cloud as water cloud in {\tt libRadtran} so as to reduce the
computational cost. For simplicity, we ignore the radiative transfer
outside the region of GCM simulation ($z\sim$ 0-30 km), including
effects due to the upper atmosphere of the planet, exo-zodiacal dust and
the interstellar medium.
We compute the intensity in six photometric bands centered at the
wavelengths of the MODIS bands (Table \ref{tab:band}) but with an
expanded bandwidth of $\Delta\lambda=0.1\mu$m.
\begin{table}[htb]
\caption{Photometric bands of our mock observation}
\centering
\begin{tabular}{cc} \hline
band number & MODIS central wavelength\\ \hline
1 & 0.469 $\mu$m \\
2 & 0.555 $\mu$m \\
3 & 0.645 $\mu$m \\
4 & 0.858 $\mu$m \\
5 & 1.240 $\mu$m \\
6 & 1.640 $\mu$m \\ \hline
\end{tabular}
\label{tab:band}
\end{table}
The MODIS project selected their photometric bands so as to
characterize the reflection properties of the Earth's surface by
remote sensing. Figure \ref{fig:fig6}a shows examples of effective
albedo (reflectance) spectrum for different components of the Earth's
surface; soil, vegetation, and ocean. Three bands (1-3) roughly
correspond to the visible color of blue, green, and red, respectively.
Figure \ref{fig:fig6}a exhibits a clear difference among the three
components, ocean, soil, and vegetation. Incidentally, the MODIS
project chooses 3 near-IR bands that correspond to observational
windows of Earth's atmosphere (Figure \ref{fig:fig6}b).
As we have already emphasized, the cloud distribution is the most
important ingredient in our mock simulation. In order to examine the
dependence on their properties, we generate a simple cloud
distribution as follows. Our GCM result for $\zeta = 30^\circ$
indicates that the simulated cloud distribution has typical column
densities of $0.040^{+0.050}_{-0.025}$ kg/m$^2$. Thus we
redistribute all the clouds homogeneously within 0.0-0.3, 0.5-1.0, and
3.0-8.0 km, which roughly correspond to the typical heights for mist, lower clouds, and middle clouds for the Earth.
Figure \ref{fig:fig7} shows the resulting effective albedos for
those mock clouds, indicating that the albedos are mainly determined
by the column density, and fairly insensitive to the height of clouds.
\begin{figure*}
\gridline{
\fig{fig6a.pdf}
{0.5\textwidth}
{a. Reflectance of the Earth's
surface components through the Earth's atmosphere.
}
\fig{fig6b.pdf}
{0.5\textwidth}
{b. Transmittance of the Earth's atmosphere
and positions of the MODIS photometric bands.
}
}
\caption{Effective albedo and the transmittance of the atmosphere of
the Earth. Numbers from 1 to 6 above panels indicate numbers of
bands in the MODIS project (see Table \ref{tab:band}). For
atmospheric profile, the US Standard Atmosphere
\citep{1986aacp.book.....A} without cloud is used. \label{fig:fig6}}
\end{figure*}
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{fig7.pdf}
\caption{Effective albedo of clouds calculated with {\tt libRadtran}
from our GCM outputs with the obliquity $\zeta=30^\circ$.
\label{fig:fig7}}
\end{figure}
\subsubsection{Simulated images and lightcurves of an Earth-twin}
Before performing the frequency modulation analysis, let us present
examples of the apparent images and lightcurves from our mock
observation.
Figures \ref{fig:fig8a} and \ref{fig:fig8b} show the images of an
Earth-twin in January and July, respectively, with different
obliquities viewed from a distant observer at $i=0^\circ$. Plotted
from left to right are input surface distribution, illuminated and
visible part of the cloudless earth with atmosphere, illuminated and
visible part of the earth with both cloud and atmosphere from our GCM
simulation, and the corresponding cloud distribution. The arrows
indicate the incident direction of the starlight.
The input surface distribution (the left images) is computed from the
intensity of land alone, neglecting the contribution of the ocean
reflection. The land is assumed to be covered by the US Standard
Atmosphere \citep{1986aacp.book.....A} and land scattering is
approximated by Lambertian. Since those images are just for reference,
we assume the geometric configuration with $(\vartheta_0,
\vartheta_1)=(0^\circ, 0^\circ)$.
The different surface components are illustrated in orange, green
and blue for continents, vegetation and oceans, respectively. In the
left images, one may identify North and South America, Eurasia,
Africa, and Antarctica. The images of the cloudless earth
exhibit the colors of the surface below the atmosphere relatively well, and also
show an oceanic glint (oceanic mirror reflection) in the illuminating
direction \citep{1993Natur.365..715S,Robinson2010}. Those
signatures of the surface components are significantly degraded by the
clouds, but one may still identify the presence of the Sahara desert
for $\zeta < 60^\circ$ in Figure \ref{fig:fig8a}, for instance.
Although one may not identify the Sahara desert for $\zeta
=90^\circ$ in January (Figure \ref{fig:fig8a}), the Sahara desert
appears in the visible and illuminated part in July (Figure
\ref{fig:fig8b}). Thus it can still be used as a frequency modulation
indicator during part of the year.
Figures \ref{fig:fig8a} and \ref{fig:fig8b} reconfirm that the cloud
distribution weakens the surface information in photometric
monitoring, but still indicate that the diurnal variation and
possibly its frequency modulation detection are feasible if there
exists a good tracer of the global planetary surface like the Sahara
desert.
\renewcommand{\thefigure}{\arabic{figure}a}
\begin{figure*}[ht!]
\centering \includegraphics[width=16cm]{fig8a.pdf}
\caption{Images of an Earth-twin from our GCM simulations with
different obliquities viewed from a distant observer at $i=0^\circ$
in January. From left to right, we
plot the input surface images, illuminated and visible part of the
cloudless earth with atmosphere, illuminated and visible part of the
earth with both cloud and atmosphere from our GCM simulation, and
the corresponding cloud distribution. The orange arrows show the
direction of stellar illumination. We adopt the RGB flux ratio to
be the intensity ratio of band 3:2:1 (0.645 $\mu$m: 0.555 $\mu$m:
0.469 $\mu$m) and apply the gamma correction with $\gamma$= 1/2.2 so
as to roughly represent the apparent colors.}
\label{fig:fig8a}
\end{figure*}
\renewcommand{\thefigure}{\arabic{figure}b}
\addtocounter{figure}{-1}
\begin{figure}
\centering \includegraphics[width=16cm]{fig8b.pdf}
\caption{Same as Figure \ref{fig:fig8a} but in July.}
\label{fig:fig8b}
\end{figure}
\renewcommand{\thefigure}{\arabic{figure}}
Mock photometric monitoring of the images presented in Figures
\ref{fig:fig8a} and \ref{fig:fig8b} generates the corresponding
simulated lightcurves. Throughout the analysis in what follows, we
consider an observer located at $i=0^\circ$ for simplicity. Since we
output results of our GCM simulations every three hours,
we construct simulated lightcurves from those discrete snapshots.
Then we ignore the change of the lightcurve during the three
hours, and construct mock lightcurves sampled every three hours.
While this approximate method significantly affects the lightcurve
variation on a time-scale less than three hours, the variation
around a planetary spin period (24 hours) of our interest is hardly
affected.
Figure \ref{fig:fig9} shows an example of one-week lightcurves in
January for Earth-twins with different obliquities; left and right
panels correspond to those in band 1 and 4, respectively. We assume
that the star-planet system is located at a distance $D_{\rm obs}$
away from the telescope of diameter $D_{\rm tel}$ and exposure time of
$t_{\rm exp}$. In an idealized case where both the light
from the host star and other instrumental noises are completely
neglected, the photon counts at band $i$ with the bandwidth of
$\Delta\lambda$ are scaled as
\begin{equation}
\label{eq:photon-counts}
N_i(t) = N_{i,0}(t)
\left(\frac{D_{\rm obs}}{10\;{\rm pc}}\right)^{-2}
\left(\frac{D_{\rm tel}}{4\;{\rm m}}\right)^2
\left(\frac{t_{\rm exp}}{3\;{\rm hr}}\right)
\left(\frac{\Delta\lambda}{0.1\,\mu{\rm m}}\right) .
\end{equation}
The photon counts in Figure \ref{fig:fig9} correspond to $N_{1,0}$
(left panel) and $N_{4,0}$ (right panel) for the bands 1 and 4 in
equation (\ref{eq:photon-counts}). In practice, we compute
$N_i(t)$ from snapshots every three hours, assuming $t_{\rm exp}=3$ hours.
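In code, the scaling of equation (\ref{eq:photon-counts}) reads (an illustrative helper of ours):
\begin{verbatim}
def photon_counts(N_i0, D_obs_pc=10.0, D_tel_m=4.0, t_exp_hr=3.0, dlam_um=0.1):
    # scaling of the photon counts relative to the fiducial counts N_{i,0}
    return (N_i0 * (D_obs_pc / 10.0)**(-2) * (D_tel_m / 4.0)**2
            * (t_exp_hr / 3.0) * (dlam_um / 0.1))

# e.g. a 6 m telescope, a target at 5 pc, and 1-hour exposures rescale the
# counts by (5/10)**(-2) * (6/4)**2 * (1/3) = 3.0
\end{verbatim}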
The simulated lightcurves for $\zeta\leq$ 60$^\circ$ exhibit a kind of
diurnal periodicity, which does not reflect the surface information
directly, but comes mainly from the cloud pattern correlated with the
surface distribution. As $\zeta$ increases ($\zeta \geq 90^\circ$),
the diurnal periodicity is not easy to identify. As we mentioned
above, the Sahara desert played an important role as a tracer of
the planetary rotation, and the annual-averaged cloud pattern is also
correlated with the distribution of the surface components. This is why
the diurnal periodicity is more visible for photometric monitoring of
the Northern Hemisphere in the case of the Earth. Although in the case of
$\zeta=60^\circ$ the northern part of South America plays that role
as well, it eventually moves out of the visible and illuminated region as
$\zeta$ increases.
\begin{figure*}
\gridline{
\fig{fig9a.pdf}{0.5\textwidth}{}
\fig{fig9b.pdf}{0.5\textwidth}{}
}
\caption{Examples of simulated lightcurves
of an Earth-twin with different obliquities for an observer at
$i=0^\circ$. The photon counts $N_{1,0}$ and $N_{4,0}$ correspond to
the set of parameters, $D_{obs}=10\;{\rm pc}$, $D_{\rm tel}=4\;{\rm m}$,
$t_{\rm exp}=3\;{\rm hr}$, and $\Delta\lambda=0.1\,\mu{\rm m}$, and
are scaled as equation (\ref{eq:photon-counts}). The quoted error-bars
consider the photon shot-noise alone.
\label{fig:fig9}}
\end{figure*}
\section{Time-frequency analysis of simulated lightcurves
and parameter estimation \label{sec:result}}
Given the simulated lightcurves, we perform the frequency modulation
analysis following K16. In practice, we use a numerical code {\tt
juwvid} to compute the pseudo-Wigner distribution, which is publicly
available from the
web site\footnote{\tt https://github.com/HajimeKawahara/juwvid}.
Our time-frequency analysis proceeds as follows. First we compute
$N_i(t)$ from our simulated lightcurves every three hours over an
orbital period of one year. Then we sample $N_{i, {\rm obs}}(t)$ from
the Poisson distribution with the expectation value of $N_i(t)$. In
other words, we consider the shot noise alone in the analysis below.
In total, we have $\rm N_{data}=2920$ ($=$ 1 year/3 hours) data points,
and periodically duplicate the data points with a period of 1 year.
We divide each lightcurve into 73 segments consisting of 40
consecutive data points ({\it i.e.}, 3 hours $\times$ 40 $\approx$5
days). Then we compute the mean $\mu$ and standard deviation $\sigma$
of $N_i(t)$ in each segment, and convert to the {\it normalized}
lightcurve $s(t) \equiv (N_i(t)-\mu)/\sigma$. Finally we compute the
pseudo-Wigner distribution:
\begin{equation}
\label{eq:pseudow}
g(f,t)=\int_{-\infty}^{\infty}
H(\tau)z(t+\tau/2)z^*(t-\tau/2)e^{-2\pi if\tau}d\tau,
\end{equation}
where
\begin{equation}
\label{eq:analytics}
z(t) = \frac{1}{\pi}\int_0^\infty \tilde{s}(w) e^{iwt}dw
\end{equation}
is the analytic signal of $s(t)$ with $\tilde{s}(w)$ being the Fourier
transform of the normalized lightcurve $s(t)$ in the present case. We
choose the window function $H(\tau)$ as the following Hamming window
function:
\begin{eqnarray}
\label{eq:Hamming}
H(\tau;T_{\rm w}) =
\left\{
\begin{array}{lr}
0.54 + 0.46\cos(2\pi \tau/T_{\rm w})
& ~~~~~~ {\rm for} ~~|\tau| \leq T_{\rm w}/2 \\
0 & ~~~~ {\rm otherwise} .
\end{array}
\right.
\end{eqnarray}
In practice, we adopt $T_{\rm w}=0.25$ year for the
window width of the Hamming window function.
The pseudo-Wigner distribution is an appropriate time-frequency
distribution for extracting the instantaneous frequency
\citep[e.g.][]{BA25293241}, as explained below. Let us consider a
single mode signal $z = A(t) e^{i \psi(t)}$ with an instantaneous
phase $\psi(t)$, where $A(t) \in \mathbb{R}$ is the amplitude of the
mode. The ideal time-frequency representation is a delta function
$\rho(f,t) = A(t)^2 \delta_D (f - f_{\rm ins}(t)) $, where $f_{\rm
ins} (t) $ is the instantaneous frequency defined by
\begin{equation}
\label{eq:if}
f_{\rm ins}(t) \equiv \frac{1}{2 \pi} \frac{d \psi(t)}{d t}.
\end{equation}
Then, the inverse Fourier transform of $\rho$ can be written as
\begin{equation}
\label{eq:invf}
\hat{\rho} (\tau, t) = A(t)^2 e^{2 \pi i f_{\rm ins}(t) \tau}
= A(t)^2 e^{i \tau \psi^\prime(t)}
\approx A(t)^2 e^{i \psi(t+\tau/2) - i \psi(t-\tau/2)}
= z(t+\tau/2) z^\ast(t-\tau/2),
\end{equation}
where we use the linear approximation $\psi^\prime (t) \approx
[\psi(t+\tau/2) - \psi(t-\tau/2)]/\tau$ in the last two
terms.
Performing the Fourier transform of equation (\ref{eq:invf}) with the
time window, we obtain the pseudo-Wigner distribution. Because the
linear approximation is valid only for the linear frequency modulation
such as $f_{\rm ins}(t) \approx a t + b $ ($a, b$ are constant
values), the width of the window should be chosen to be comparable to
the scale of the non-linear feature of the frequency modulation. The
derivative of equation (\ref{eq:invf}) with respect to $\tau$ at $\tau=0$ provides
\begin{equation}
\label{eq:der}
\frac{d}{d \tau} [z(t+\tau/2) z^\ast(t-\tau/2)] |_{\tau=0}
= i A(t)^2 \psi^\prime(t) = 2 \pi i \int_{-\infty}^{\infty} f \rho(f,t) df.
\end{equation}
Also, the mode amplitude is rewritten as
\begin{equation}
\label{eq:derampl}
|z(t)|^2 = A(t)^2 = \int_{-\infty}^{\infty} \rho(f,t) df.
\end{equation}
Then, the instantaneous frequency is formally estimated by the
weighted form as,
\begin{equation}
\label{eq:wei}
f_{\rm ins}(t) = \frac{1}{2 \pi} \frac{d \psi(t)}{d t}
= \frac{ \int_{-\infty}^{\infty} f \rho(f,t) df}
{ \int_{-\infty}^{\infty} \rho(f,t) df}.
\end{equation}
In practice, one can estimate the instantaneous frequency from the
peak of the pseudo-Wigner distribution so as to reduce the effect of
noise. This expression requires a complex-valued signal containing
only non-negative frequency components, which is why we convert the
real-valued signal $s(t)$ to the analytic signal $z(t)$ in equation
(\ref{eq:analytics}).
We calculate the pseudo-Wigner distribution $g(f,t)$ over the range of
$f_{\rm min}<f<f_{\rm max}$ using equation
(\ref{eq:pseudow}). Specifically we choose $f_{\rm min}=0.98$
[day$^{-1}$] and $f_{\rm max}=1.02$[day$^{-1}$] throughout the
analysis. Since our lightcurves are sampled every 3 hours, the
corresponding frequency resolution is not good enough to determine the
value of $f_{\rm spin}$ precisely. Therefore we adopt a non-uniform
FFT scheme \citep{doi:10.1137/S003614450343200X} following K16, and
achieve the frequency resolution of
$\delta f= (f_{\rm max}-f_{\rm min})/{\rm N}_f$ after applying an
appropriate smoothing of the lightcurves. We choose ${\rm N}_f = 1024$
in what follows; the resulting resolution
$\delta f \approx 4\times 10^{-5}$ [day$^{-1}$] is smaller than the
modulation amplitude seen in Figure \ref{fig:fig16} by a factor of 100.
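In code, the frequency grid quoted above and the ridge $f_{\rm data, max}(t)$ used in Figures \ref{fig:fig12} and \ref{fig:fig14} can be set up as follows (a minimal sketch with illustrative variable names; the direct evaluation sketched above plays the role of the non-uniform FFT here):
\begin{verbatim}
import numpy as np

f_min, f_max, N_f = 0.98, 1.02, 1024          # [day^-1]
freqs = np.linspace(f_min, f_max, N_f)
delta_f = (f_max - f_min) / N_f               # ~3.9e-5 day^-1

def ridge(g, freqs):
    """f_data,max(t): frequency of the |g(f,t)| maximum at each epoch."""
    return freqs[np.argmax(np.abs(g), axis=0)]
\end{verbatim}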
\subsection{Single-band analysis \label{subsec:singleband}}
Consider first the frequency modulation for single-band lightcurves.
Figure \ref{fig:fig10} is similar to Figure
\ref{fig:fig9}, but plots simulated {\it noiseless} (without shot
noise) lightcurves in the photometric bands 1 to 6 for $(\zeta,
i)=(0^\circ, 0^\circ)$.
As clearly indicated by Figure \ref{fig:fig10}, the apparent diurnal
variation in each band originates from the cloud pattern that is
correlated with the land-ocean distribution. These surface-correlated
clouds were also found in Earth observations by the Deep Space Climate
Observatory as the second component of a Principal Component
Analysis \citep{2019ApJ...882L...1F}. While our analysis does not
directly identify this component, it appears to be imprinted in the
diurnal variation in a single band. The amplitude of the single-band
lightcurves is basically determined by the cloud albedo (Figure
\ref{fig:fig7}) multiplied by the incident solar flux. This is why the
amplitude of the diurnal variation with cloud in Figure
\ref{fig:fig10} is relatively large around the visible wavelength
(bands 2 and 3), and declines sharply in the near-infrared (bands 4 to
6).
We note that Figure \ref{fig:fig10} also indicates
the anti-correlation of the lightcurve modulation between cloudless
and cloudy cases. For a cloudless case, the photometric variation is
mainly due to the land component that has larger albedos (Figures
\ref{fig:fig6}a and Figure \ref{fig:fig7}). Since clouds
are much brighter, however, the photometric variation of a cloudy
case is sensitive to the location of clouds, which tend to avoid the
continents, in particular desert regions, and rather form
preferentially above the ocean. Thus the locations of lands and
clouds are anti-correlated, leading to the anti-correlation
illustrated in Figure \ref{fig:fig10}. This also
explains that the periodic signature of the lightcurve for a cloudy
case is weaker for redder bands, because lands become brighter in
redder bands and compensate for the variation due to clouds.
\begin{figure*}
\centering \includegraphics[width=16cm] {fig10.pdf}
\caption{Same as Figure \ref{fig:fig9} for $\zeta=0^\circ$, but
in photometric bands 1 to 6 without photometric noise. Solid and dotted
symbols indicate the lightcurves with and without clouds,
respectively.
\label{fig:fig10}}
\centering \includegraphics[width=16cm]{fig11.pdf}
\caption{Time-frequency representation corresponding to the
noiseless lightcurves of Figure \ref{fig:fig10}.
\label{fig:fig11}}
\end{figure*}
The corresponding color-map for the pseudo-Wigner distribution on the
time-frequency plane (Figure \ref{fig:fig11}) clearly illustrates
the above trend that redder bands have a weaker signal. The color
indicates the absolute value of the time-frequency distribution
density $g(f,t)$, whose maximum value is normalized as unity. Since
Figure \ref{fig:fig10} is for $\zeta=0^\circ$, the
period for the apparent diurnal variation should be constant, and does
not show any frequency modulation. The tiny frequency modulation $\sim
0.001$ day$^{-1}$ visible in Figure \ref{fig:fig11} is simply due to
the time-dependent inhomogeneous distribution of clouds.
Consider next the time-frequency representation of the band-1
lightcurves for different obliquities (Figure \ref{fig:fig12}). We
adopt band 1 because it produces the clearest ridge on time-frequency
representation in Figure \ref{fig:fig11}. The dashed lines show the
model frequency modulation $f_{\rm model}(t)$, equation
(\ref{eq:fmodel}). The signature of the frequency modulation from the
single-band lightcurves is not strong, and barely identifiable only
for $\zeta \leq 30^\circ$. Though the amplitude of frequency
modulation is zero for $\zeta = 0^\circ$, the signature of the {\it
constant} apparent frequency is visible clearly. This obliquity
dependence reflects the specific distribution pattern of land and ocean
on the Earth. As shown in the $\zeta=150^\circ$ image in Figure
\ref{fig:fig8a}, the illuminated and visible part in winter is
dominated by Antarctica, and there is no significant diurnal variation
in the lightcurve. On the other hand, in summer, Antarctica is almost
invisible and parts of Africa and South America generate the diurnal
variation signal instead.
\begin{figure*}
\centering \includegraphics[width=16cm]{fig12a.pdf}
\centering\includegraphics[width=16cm]{fig12b.pdf}
\caption{Time-frequency representation of band 1 noiseless signal for
different obliquities ($\zeta=0^\circ$, $30^\circ$, $60^\circ$,
$90^\circ$, $150^\circ$ and $180^\circ$), corresponding to the
results with cloud ({\it upper panel}) and without cloud ({\it lower
panel}) plotted in Figure \ref{fig:fig11}. Thick dashed lines
are the prediction based on the maximum-weighted longitude
approximation model, $f_{\rm model}(t)$; see equations
(\ref{eq:fmodel}) and (\ref{eq:emodel}) in the main text.
Thin blue points indicate $f_{\rm data, max}(t)$, the frequency
corresponding to the maximum value of $g(f,t)$ over a range of
$f_{\rm min}<f<f_{\rm max}$ at each epoch; see equation
(\ref{eq:R1}).
\label{fig:fig12}}
\end{figure*}
\subsection{Multi-band analysis \label{subsec:multiband}}
As shown in Section \ref{subsec:singleband}, single-band analysis does
not properly extract the information of the correct frequency
modulation due to the anti-correlation between lands and clouds. In
order to detect the diurnal period due to the planetary surface
distribution, therefore, we need to remove the time-dependent cloud
pattern as much as possible.
As inferred from the wavelength dependence of albedos for lands
and clouds, bands 1 and 4 are mainly sensitive to clouds and
clouds$+$lands, respectively (see Figure \ref{fig:fig6}a and
Figure \ref{fig:fig7}). Thus the difference of the photon counts
$N_1(t)$ and $N_4(t)$ roughly removes the contribution from clouds.
For definiteness, we choose the following
linear combination of bands 1 and 4:
\begin{eqnarray}
\label{eq:C14}
C_{1-4}(t) &=& N_1(t)-\alpha_{1-4}N_4(t), \\
\alpha_{1-4} &=& \frac{F_{*1}\lambda_1 \Delta\lambda}
{F_{*4}\lambda_4 \Delta\lambda}\sim 1.12 .
\end{eqnarray}
The above combination is derived assuming that the albedo of clouds is
roughly independent of the wavelength, and thus the contribution of
the clouds is canceled, at least partially as shown in Figure
\ref{fig:fig13}. While the cloud effect may be removed more
efficiently by combining other
bands appropriately, it is beyond the scope of the present paper.
Thus we perform the frequency modulation analysis using
equation (\ref{eq:C14}) in what follows.
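A minimal sketch of this combination (an illustrative helper of our own; the stellar fluxes and central wavelengths of the two bands are passed in explicitly) reads:
\begin{verbatim}
import numpy as np

def cloud_subtracted(N1, N4, F_star1, F_star4, lam1, lam4):
    """C_{1-4}(t) = N_1(t) - alpha_{1-4} N_4(t); the common Delta-lambda
    cancels in the ratio, which is ~1.12 for the adopted bands."""
    alpha_14 = (F_star1 * lam1) / (F_star4 * lam4)
    return np.asarray(N1) - alpha_14 * np.asarray(N4)
\end{verbatim}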
\begin{figure}
\centering \includegraphics[width=16cm]
{fig13.pdf}
\caption{Simulated noiseless $C_{1-4}$ lightcurves for different
obliquities ($\zeta=0^\circ$, $30^\circ$, $60^\circ$, $90^\circ$,
$150^\circ$, and $180^\circ$). We adopt the same set of parameters
as in Figure \ref{fig:fig9}.
\label{fig:fig13}}
\end{figure}
\begin{figure}[ht!]
\centering\includegraphics[width=16cm]{fig14.pdf}
\caption{The pseudo-Wigner distribution $g(f,t)$ of the noiseless
$C_{1-4}$. Thick dashed lines indicate $f_{\rm model}(t)$, while
thin blue points indicate $f_{\rm data, max}(t)$, the frequency
corresponding to the maximum value of $g(f,t)$ over a range of
$f_{\rm min}<f<f_{\rm max}$ at each epoch; see equation
(\ref{eq:R1}). Due to the quality of the data, the values of $f_{\rm
data, max}(t)$ are not robust for $\zeta=150^\circ$ and
$180^\circ$, and appear discontinuous.
\label{fig:fig14}}
\end{figure}
Figure \ref{fig:fig13} shows an example of simulated
noiseless lightcurves using $C_{1-4}(t)$, and Figure
\ref{fig:fig14} is the corresponding time-frequency
representation. Comparison between Figures \ref{fig:fig12} and
\ref{fig:fig14} clearly indicates that the multi-band
analysis suppresses the time-dependent cloud effect, and significantly
improves the frequency modulation signal.
We note that the amplitude of the frequency modulation signature
depicted in Figure \ref{fig:fig14} sensitively depends on the value of
$\zeta$, reflecting the specific surface distribution on the Earth.
As indicated in Figures \ref{fig:fig8a} and \ref{fig:fig8b}, the
Southern Hemisphere, especially around the South Pole, of the Earth is
occupied by Antarctica and ocean, in an approximately axisymmetric
manner. Thus the diurnal variation of the Southern Hemisphere (for
example, viewed from the direction of $i=0^\circ$ if
$\zeta=180^\circ$) is difficult to detect. This also applies to
the $\zeta =150^\circ$ case in which the frequency modulation signal
is clear only in summer, as described at the end of subsection
\ref{subsec:singleband}.
In contrast, the Northern Hemisphere is roughly divided into two major
distinct components: the Eurasian continent and the Pacific
Ocean. This large-scale inhomogeneity, in particular the Sahara
desert, acts as a good tracer of an asymmetric surface pattern,
yielding a relatively large amplitude signal of the frequency
modulation (see Figures \ref{fig:fig8a} and \ref{fig:fig8b}). This is
why a clear frequency modulation signal in the case of $\zeta \le
90^\circ$ can be detected for an observer located at $i=0^\circ$.
\subsection{Feasibility of the obliquity estimate \label{subsec:feasibility}}
All of the pseudo-Wigner distributions above (Figures \ref{fig:fig11},
\ref{fig:fig12}, and \ref{fig:fig14}) are based on noiseless data.
Now we are in a position to examine to what extent one can estimate
the planetary obliquity from the long-term photometric monitoring via
the frequency modulation method. For that purpose, we assume a
dedicated space mission with the telescope aperture of $D_{\rm
tel}=2$, 4 and 6 m. Again we consider idealized cases in which the
photometric noise comes from the photon shot-noise alone, and generate
a set of $C_{1-4}(t)$ lightcurves from the photon counts $N_1(t)$ and
$N_4(t)$ obeying the Poisson statistics. Examples of the resulting
frequency modulation are presented in Figure \ref{fig:fig15}.
\begin{figure*}
\centering\includegraphics[width=16cm]{fig15.pdf}
\caption{The pseudo-Wigner distribution for oblique Earth-twins from
the shot-noise limited photometric monitor. The top, middle, and
bottom panels are for the space telescope aperture $D_{\rm tel}=2$,
4, and 6 m, and the left, center, and right panels for the planetary
obliquity $\zeta=30^\circ$, $60^\circ$ and $90^\circ$. We assume
that $D_{\rm obs}=10\;{\rm pc}$, $t_{\rm exp}=3\;{\rm hr}$,
and $\Delta\lambda=0.1\,\mu{\rm m}$. \label{fig:fig15}}
\end{figure*}
The model frequency modulation is determined by the five parameters
($\zeta$, $f_{\rm spin}$, $\Theta_{\rm eq}$, $i$, $f_{\rm orb}$) that
are listed in Table \ref{tab:fit}; the planetary obliquity $\zeta$,
the planetary spin frequency $f_{\rm spin}$, the angle of the vernal
equinox measured from the location of the observer projected on the
orbital plane $\Theta_{\rm eq}$, the observer's inclination $i$, and
the orbital frequency of the planet $f_{\rm orb}$. Among them, $i$ and
$\Theta_{\rm eq}$ simply specify the location of the observer relative
to the system, and are not so interesting. The remaining three
parameters, $\zeta$, $f_{\rm spin}$ and $f_{\rm orb}$, are important
since they characterize the star-planet system.
In order to estimate $\zeta$, which cannot be estimated otherwise and
thus is of our primary interest, we eventually need to perform a
joint analysis of the five parameters in a Bayesian fashion. In the
present study, however, we would like to examine the feasibility of
the determination of $\zeta$ and $f_{\rm spin}$, assuming that $i$ and
$f_{\rm orb}$ are known, for simplicity. The precise spectroscopic
and astrometric data would determine $i$ and $f_{\rm orb}$. Also,
$f_{\rm spin}$ may be estimated from the photometric data on
relatively short-time scales apart from the uncertainty of
$\epsilon_\zeta(\Theta)f_{\rm orb}$ in equation (\ref{eq:fmodel}).
Under a similar assumption, K16 attempted
to find the best-fit values for $\zeta$, $f_{\rm spin}$, and
$\Theta_{\rm eq}$ by minimizing
\begin{equation}
\label{eq:R1}
R_1(\Theta_{\rm eq},\zeta, f_{\rm spin})
= \sum_{j=1}^{\rm N_{data}} \left|f_{\rm data, max}(t_j)
-f_{\rm model}(t_j;\Theta_{\rm eq},\zeta, f_{\rm spin}) \right|^2 ,
\end{equation}
where $f_{\rm model}(t)$ is the frequency derived from the {\it
maximum-weighted longitude approximation}, equation
(\ref{eq:fmodel}), and $f_{\rm data, max}(t)$ corresponds to the
maximum value of $g(f,t)$ over a range of $f_{\rm min}<f<f_{\rm max}$
at each epoch $t$. We tried the same fitting, but the result is not
robust against the shot noise especially when the frequency modulation
signal is weak.
Therefore we empirically improve the fit by taking account of the
distribution around the $f_{\rm data, max}(t)$ as well. More
specifically, we construct a Gaussian weighted model
$\tilde{g}_{\rm model}(f,t)$ for the time-frequency distribution:
\begin{equation}
\label{eq:tildeg-model}
\tilde{g}_{\rm model}(f,t) =
\exp\Bigg[-\frac{(f-f_{\rm model}(t;\Theta_{\rm eq},\zeta, f_{\rm spin}))^2}
{2\sigma_f^2}\Bigg],
\end{equation}
where $\sigma_f$ is a new fitting parameter that is introduced to
account for the finite width of the frequency distribution
around $f_{\rm model}$.
Then we minimize the following quantity:
\begin{equation}
\label{eq:R2}
R_2(\Theta_{\rm eq},\zeta, f_{\rm spin},\sigma_f)
= \sum_{i=1}^{{\rm N}_f}\sum_{j=1}^{\rm N_{data}}
\left| \frac{g_{\rm data}(f_i,t_j)}{g_{\rm data}(f_{\rm data, max}(t_j),t_j)}
-\tilde{g}_{\rm model}(f_i,t_j) \right|^2,
\end{equation}
to find the best-fit $\zeta$, $\Theta_{\rm eq}$, $f_{\rm spin}$, and
$\sigma_f$. The value of $\sigma_f$ should be roughly equal to
$1/T_{\rm w}$ because the time-frequency representation of a signal
$z(t)=e^{2\pi if_{\rm ins}t}$ based on the pseudo-Wigner distribution
has a dispersion corresponding to the Fourier transform of the window
function, $\tilde{H}(f-f_{\rm ins}; T_{\rm w})$, and this dispersion
is further broadened by the noise and the non-linear frequency
modulation.
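As an illustrative sketch (not the actual fitting code), the residual vector whose sum of squares is $R_2$ can be assembled and handed to a standard Levenberg-Marquardt least-squares routine, e.g.\ {\tt scipy.optimize.least\_squares}, which plays the role of the {\tt mpfit} step described below; the function {\tt f\_model} evaluating equation (\ref{eq:fmodel}) is assumed to be supplied separately:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def r2_residuals(params, f_grid, t_grid, g_norm, f_model):
    """Residuals whose sum of squares is R_2.

    g_norm[i, j] is g_data(f_i, t_j) normalized by its maximum over f at
    each epoch t_j; f_model(t, Theta_eq, zeta, f_spin) is user-supplied."""
    Theta_eq, zeta, f_spin, sigma_f = params
    fm = f_model(t_grid, Theta_eq, zeta, f_spin)          # shape (N_data,)
    g_model = np.exp(-(f_grid[:, None] - fm[None, :])**2
                     / (2.0 * sigma_f**2))
    return (g_norm - g_model).ravel()

# result = least_squares(r2_residuals, x0,
#                        args=(f_grid, t_grid, g_norm, f_model),
#                        method='lm')
# x0 holds the initial guesses for (Theta_eq, zeta, f_spin, sigma_f).
\end{verbatim}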
In practice, we use the Levenberg-Marquardt algorithm as implemented
in {\tt mpfit} \citep{2009ASPC..411..251M} to find the best-fit
parameters. This algorithm is a practical and fast least-squares
method for non-linear models. We fit the time-frequency
distribution for $\zeta=30^\circ$ and $60^\circ$. Table
\ref{tab:fit} summarizes our initial parameters in addition to the
fixed orbital parameters that we assume to be {\it a priori} known.
\begin{table}[htb]
\caption{Initial and fixed parameter sets for best fit search}
\centering
\begin{tabular}{|l|c|} \hline
Initial parameter & value\\ \hline \hline
obliquity $\zeta$ & $15k^\circ$ ($k=1,2,\cdots,12$)\\
$\Theta_{\rm eq}$ & $60j^\circ$ ($j=1,2,\cdots,5$)\\
spin frequency $f_{\rm spin}$ & 366 [year$^{-1}$]\\ \hline \hline
Fixed parameter & value\\ \hline \hline
orbital inclination $i$ & 0$^\circ$\\
orbital frequency $f_{\rm orb}$ & 1 [year$^{-1}$]\\ \hline
\end{tabular}
\label{tab:fit}
\end{table}
Figure \ref{fig:fig16} shows the distribution of the best-fit
estimates on the $\zeta$-$f_{\rm spin}$ plane from 1000 different
realizations. The black cross symbols indicate the input values,
($\zeta,$ $f_{\rm spin}$) = ($30^\circ$, 366 year$^{-1}$) and
($60^\circ$, 366 year$^{-1}$), for left and right panels,
respectively. The top and bottom panels show the results based
on the shot-noise limited observations with $D_{\rm tel}=$ 4 m and 6 m,
respectively. The numbers in each panel
denote the mean and 1$\sigma$ estimated from 1000 realizations.
The systematic offsets of $(\Delta\zeta)_{\rm sys} \approx 3^\circ$
and $(\Delta f_{\rm spin})_{\rm sys}\approx 0.03$ year$^{-1}$ result
most likely from the specific pattern of the continents on the
Earth. Indeed the previous simplified analysis by K16 also found a
similar level of the systematic offset of the planetary obliquity
($\sim$ several degrees; see Figure 8 of K16). K16 added noise
empirically to his mock data, neglecting the time-dependent cloud
distribution that we compute here.
The fact that the systematic offsets between the two analyses are
similar indicates, therefore, that they should be ascribed to the
specific surface pattern of the Earth itself. Indeed Eurasia, North
Africa, and South America are distributed roughly from northeast to
southwest directions. This latitudinal pattern is consistent with the
positive systematic offset of the obliquity exhibited in Figure
\ref{fig:fig16}. Since the amplitude of the systematic offset would
depend on the specific pattern of the planetary surface to some
extent, it is difficult to predict it {\it a priori}, but it is important to
bear in mind that it could amount to several degrees, much larger than
the statistical uncertainty as shown in Figure \ref{fig:fig16}.
\begin{figure*}
\centering \includegraphics[width=14cm]{fig16.pdf}
\caption{Spin parameters ($\zeta$, $f_{\rm spin}$) estimated from the
normalized Gaussian model $\tilde{g}_{\rm model}$. Black crosses show
the input spin values ($\zeta$, $f_{\rm spin}$) = (30$^\circ$, 366
year$^{-1}$) and (60$^\circ$, 366 year$^{-1}$) for left and right
panels, respectively. We plot the best fit values from 1000
realizations of shot-noise limited observations. The shot noise
corresponds to a telescope diameter of $D_{\rm tel} = 4$ m and
6 m for the top and bottom panels, respectively.}
\label{fig:fig16}
\end{figure*}
\section{Summary and conclusion \label{sec:conclusion}}
The direct imaging of Earth-like planets is very challenging, but,
if eventually successful, will provide ground-breaking datasets for
astronomy, planetary science, and biology. One notable example is the
reconstruction of the surface components
\citep[e.g.,][]{1993Natur.365..715S, 2001Natur.412..885F,
2010ApJ...715..866F, 2011ApJ...738..184F, 2011ApJ...739L..62K,
2012ApJ...755..101F,2019asbi.book..441S}, and it may even be
possible to measure the planetary obliquity through the frequency
modulation of the photometric lightcurves of future directly imaged
Earth-like planets, as proposed by \citet{2016ApJ...822..112K}.
We have examined the feasibility of the methodology by creating
simulated lightcurves of our Earth-Sun systems but with different
planetary obliquities. First, we performed the GCM simulation for
those systems with particular emphasis on the time-dependent cloud
distribution. Second, we computed the scattered light in 6 photometric
bands by solving the radiation transfer of the incident starlight
through the cloud and atmosphere taking into account the scattering
due to the different surface components under the parameterized
bi-directional reflectance distribution function models
\citep{JGRD:JGRD3769,1983JQSRT..29..521N}. Third, the resulting light
from the planet was mock-observed every three hours over the orbital
period of one year, and simulated lightcurves were constructed by
combining the different photometric bands so as to suppress the effect
of the time-dependent cloud pattern. Finally, we computed the
frequency modulation of the lightcurves using the pseudo-Wigner
distribution and attempted to estimate the planetary
obliquities for photon-shot noise dominated cases.
We found that the frequency modulation signal is crucially dependent
on the presence of the large-scale inhomogeneity on the planetary
surface. Indeed this is the case for the Northern Hemisphere of our
Earth; in particular the Sahara desert turned out to be a useful
tracer of the planetary spin rotation. The Southern Hemisphere, on the
other hand, is relatively featureless, and the frequency modulation
signal is weak.
As a result, we found that a dedicated 4 m space telescope, located
10 pc away from a system viewed face-on, can in principle estimate
the planetary obliquity within an uncertainty of several degrees (in
the shot-noise limited case). Although this conclusion is based on several idealized
assumptions at this point, we believe that it is very encouraging for
the future exploration of the direct imaging of Earth-like planets.
\acknowledgements
We thank an anonymous referee for numerous constructive suggestions
and comments that significantly improved the early manuscript of the
paper. This work is supported by Japan Society for the Promotion of
Science (JSPS) Core-to-Core Program “International Network of
Planetary Sciences”, the European Research Council under the European
Union's Horizon 2020 research and innovation programme (Grant
Agreement 679030/WHIPLASH), JSPS KAKENHI Grant Numbers JP18H01247 and
JP19H01947 (Y.S.), JP17K14246 and JP18H04577 (H.K.), JP19H01947
(M.I.), JP17H06457 (K.K. and Y.O.T), and the Astrobiology Center
Program of National Institutes of Natural Sciences (Grant Number
AB311025). Numerical computations and analyses were partly carried
out on PC cluster at Center for Computational Astrophysics, National
Astronomical Observatory of Japan. Y.N. acknowledges the support by
fellowship from the Advanced Leading Graduate Course for Photon
Science at the University of Tokyo.
\software{
{\tt DCPAM5} ({\tt http://www.gfd-dennou.org/library/dcpam}),
{\tt libRadtran}\citep{gmd-9-1647-2016,acp-5-1855-2005},
{\tt REPTRAN}\citep{2014JQSRT.148...99G},
{\tt mpfit}\citep{2009ASPC..411..251M},
{\tt juwvid} ({\tt https://github.com/HajimeKawahara/juwvid}).
}
\begin{appendix}
\section{Behavior of the frequency modulation factor
$\epsilon_\zeta(\Theta)$ for an eccentric orbit \label{sec:epsilon}}
The modulation factor, equation (\ref{eq:emodel}), first derived by
K16, assumes a circular orbit for simplicity. We compute a generalized
expression for an eccentric orbit, and present the effect of
eccentricity on the frequency modulation based on the {\it
maximum-weighted longitude approximation}.
For an eccentric orbit, it is more convenient to consider a geocentric
frame where the $x$-axis is the direction toward the periapsis as
shown in Figure \ref{fig:eccentric-frame}, instead of vernal equinox
({\it c.f.,} Figure \ref{fig:fig3}). In this frame, the spin vector
is no longer on the $yz$-plane, and we introduce a new parameter
$\beta$, which denotes the azimuthal angle of the planetary spin
measured from the $y$-axis. Similarly, the location of the observer is
specified by the phase angle from the periapsis $\Theta_{\rm per}$,
and $\Theta$ is now the azimuthal angle measured clockwise from the
periapsis, {\it i.e.}, the true anomaly. The frame reduces to that
shown in Figure \ref{fig:fig3} for $e \to 0$, $\beta \to 0^\circ$,
$\Theta_{\rm per} \to \Theta_{\rm eq}$, and $\Theta \to
\Theta-\Theta_{\rm eq}$.
\begin{figure}[htbp]
\centering\includegraphics[width=12cm]{fig17.pdf}
\caption{A schematic configuration of a
geocentric frame for eccentric orbits.
\label{fig:eccentric-frame}}
\end{figure}
Following K16, we assume that the longitude of the reflective point on
the planetary surface, $\hat{\phi}_{\rm M}$, traces faithfully the
observable periodicity of the planetary scattered light. Then the
observed frequency $f_{\rm obs}$ is given in terms of $\hat{\phi}_{\rm
M}$ as
\begin{eqnarray}
\label{eq:eccentric-fobs}
f_{\rm obs}(t) &=& -\frac{1}{2\pi}\frac{{\rm d} \hat{\phi}_{\rm M}}{{\rm d} t}
= -\frac{1}{2\pi}\frac{{\rm d} \Theta}{{\rm d} t}
\frac{\partial \hat{\phi}_{\rm M}}{\partial \Theta} \cr
&=& f_{\rm spin} + \frac{1}{2\pi}\frac{{\rm d} \Theta}{{\rm d} t}
\epsilon_\zeta(\Theta),
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:eccentric-epsilon}
\epsilon_\zeta(\Theta)
& \equiv & - \frac{\partial (\hat{\phi}_{\rm M}+\Phi)}{\partial \Theta}
= - \frac{\kappa '(\Theta)}{1+\kappa(\Theta)^2}, \\
\label{eq:eccentric-kappa}
\kappa(\Theta)
& \equiv & \tan(\hat{\phi}_{\rm M}+\Phi), \\
\label{eq:eccentric-Phi}
\Phi & \equiv & 2\pi f_{\rm spin} t.
\end{eqnarray}
For a circular orbit, $d\Theta/dt$ is equal to $2\pi f_{\rm orb}$. For $e
\not=0$, however, it cannot be written explicitly in terms of $t$, but
is expressed as
\begin{equation}
\label{eq:eccentric-dtheta-dt}
\frac{d\Theta}{dt}
= 2\pi f_{\rm orb}\;(1-e^2)^{-3/2}(1+e\cos\Theta )^2.
\end{equation}
In the geocentric frame, unit vectors toward the star and the
observer, $\vec{e}_s$ and $\vec{e}_o$, are given as $\vec{e}_s=(\cos
\Theta, \sin \Theta, 0)$, and $\vec{e}_o = (\cos \Theta_{\rm per}\sin
i, -\sin \Theta_{\rm per}\sin i, \cos i)$, respectively. Thus the unit
vector towards the reflective point, $\vec{e}_{\rm M}$, is
\begin{eqnarray}
\label{eq:eccentric-vec-M}
\vec{e}_{\rm M}
= \frac{\vec{e}_s+\vec{e}_o}{|\vec{e}_s+\vec{e}_o|}
=
\frac{1}{L}\left(
\begin{array}{c}
\cos \Theta + \cos \Theta_{\rm per}\sin i\\
\sin \Theta - \sin \Theta_{\rm per}\sin i\\
\cos i
\end{array} \right) ,
\end{eqnarray}
where $L\equiv|\vec{e}_s+\vec{e}_o|=\sqrt{2+2\cos(\Theta+\Theta_{\rm
per})\sin i}$.
Consider a point on the planetary surface specified by the latitude
$\lambda$ and longitude $\phi$ in the rest frame of the planet. The
surface normal unit vector at the point is $\vec{e_R}'(\phi, \lambda)
= (\cos\phi\cos\lambda, \sin\phi\cos\lambda, \sin\lambda)$. One can
transform $\vec{e_R}'$ to $\vec{e_R}$ in the geocentric frame as
\begin{equation}
\label{eq:eccentric-transform-forward}
\vec{e}_R = R_z(\beta) R_x(-\zeta) \hat{\rm S}(\Phi) \vec{e_R}',
\end{equation}
where $\hat{\rm S}(\Phi)$ is a spin rotation operator
($\phi\to\phi+\Phi$), and $R_i$ is the rotation matrix
counterclockwise around the $i$-axis. Note that $R_z(\beta)$ is
required for a non-circular case.
\begin{figure*}
\gridline{
\fig{fig18a.pdf}
{0.5\textwidth}
{(a) }
\fig{fig18b.pdf}
{0.5\textwidth}
{(b)}
}
\caption{Samples of frequency modulation applied to eccentric orbits;
(a) $\zeta=30^\circ$ and (b) $\zeta=60^\circ$. Each line shows the
frequency modulation specific to the spin configuration of the
planet. Solid lines are eccentric cases ($e=0.2$) and dashed lines
are circular cases ($e=0.0$). \label{fig:eccentric-FM}}
\end{figure*}
We apply the generic transformation, equation
(\ref{eq:eccentric-transform-forward}), to compute the component of
the reflective point in the planetary frame. Then we obtain
\begin{eqnarray}
\label{eq:eccentric-transform-backward}
\vec{e_{\rm M}}'(\hat{\phi}_{\rm M}+\Phi)
&=& \hat{\rm S}(\Phi) \vec{e_{\rm M}}'(\hat{\phi}_{\rm M})
= R_x(\zeta) R_z(-\beta) \vec{e_{\rm M}} \cr
& = & \frac{1}{L}\left(
\begin{array}{c}
\cos(\Theta-\beta)+\sin i \cos(\Theta_{\rm per} + \beta)\\
\cos\zeta\{\sin(\Theta-\beta)-\sin i \sin(\Theta_{\rm per} + \beta)\}
-\sin\zeta\cos i\\
\sin\zeta\{\sin(\Theta-\beta)-\sin i \sin(\Theta_{\rm per} + \beta)\}
+\cos\zeta\cos i
\end{array} \right) .
\end{eqnarray}
The ratio of the $x$- and $y$-components in equation
(\ref{eq:eccentric-transform-backward}) yields
\begin{equation}
\label{eq:eccentric-kappa-formula}
\tan(\hat{\phi}_{\rm M}+\Phi)=
\frac{\cos\zeta\{\sin(\Theta-\beta)-\sin i \sin(\Theta_{\rm per} + \beta)\}
-\sin\zeta\cos i}
{\cos(\Theta-\beta)+\sin i \cos(\Theta_{\rm per} + \beta)}.
\end{equation}
Therefore equation (\ref{eq:eccentric-epsilon}) reduces to
\begin{equation}
\label{eq:eccentric-epsilon-formula}
\epsilon_\zeta(\Theta)=
\frac{-\cos\zeta\{1+\sin i \cos(\Theta+\Theta_{\rm per})\}
+\sin\zeta\cos i\sin(\Theta - \beta) }
{\Bigl[\cos(\Theta-\beta)+\sin i \cos(\Theta_{\rm per} + \beta)\Bigr]^2
+\Bigl[\cos\zeta\{\sin(\Theta-\beta)
-\sin i \sin(\Theta_{\rm per} + \beta)\} -\sin\zeta\cos i\Bigr]^2}.
\end{equation}
Finally we obtain the eccentric frequency modulation in terms of true
anomaly $\Theta$:
\begin{equation}
\label{eq:eccentric-fobs-formula}
f_{\rm obs} = f_{\rm spin} +
\frac{f_{\rm orb}\;(1-e^2)^{-3/2}(1+e\cos\Theta )^2
\Bigl[-\cos\zeta\{1+\sin i \cos(\Theta+\Theta_{\rm per})\}
+\sin\zeta\cos i\sin(\Theta - \beta)\Bigr] }
{\Bigl[\cos(\Theta-\beta)+\sin i \cos(\Theta_{\rm per} + \beta)\Bigr]^2
+\Bigl[\cos\zeta\{\sin(\Theta-\beta)
-\sin i \sin(\Theta_{\rm per} + \beta)\} -\sin\zeta\cos i\Bigr]^2}.
\end{equation}
The above equation reproduces equation (\ref{eq:emodel}) for $e \to
0$, $\beta \to 0^\circ$, $\Theta_{\rm per} \to \Theta_{\rm eq}$, and
$\Theta \to \Theta-\Theta_{\rm eq}$.
In the main part of the paper we consider the circular orbit alone
just for simplicity, but the effect of $e$ is important as well. To
show this, we plot equation (\ref{eq:eccentric-fobs-formula}) for an
Earth-like planet viewed from $i=0^\circ$ for $e=0$ and $0.2$ in
Figure \ref{fig:eccentric-FM}. The horizontal axis indicates the time
in units of $P_{\rm orb}$ that is numerically computed from the true
anomaly. The left and right panels correspond to the obliquity of
$\zeta=30^\circ$ and $\zeta=60^\circ$. Different curves indicate
cases for $\beta=0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$.
The dashed ($e=0$) and solid ($e=0.2$) curves clearly exhibit different
amplitudes and phases of the frequency modulation. Thus the effect of
eccentricity biases the estimates of $\zeta$ and $\Theta_{\rm eq}$ if
the formula for $e=0$ is used in fitting the data. Since the orbit of
direct-imaging targets is likely to be determined precisely prior to
the photometric monitoring, one can use equation
(\ref{eq:eccentric-fobs-formula}) as a fitting template with the estimated
value of $e$.
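For reference, equation (\ref{eq:eccentric-fobs-formula}) can be evaluated as a function of time with the following short numerical sketch (illustrative only; the Kepler solver and function names are our own choices), obtaining the true anomaly $\Theta(t)$ by Newton iteration:
\begin{verbatim}
import numpy as np

def true_anomaly(t, f_orb, e, n_iter=50):
    """Theta(t) from Kepler's equation M = E - e sin(E), solved by Newton."""
    M = 2.0 * np.pi * f_orb * np.asarray(t, dtype=float)   # mean anomaly
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(0.5 * E),
                            np.sqrt(1.0 - e) * np.cos(0.5 * E))

def f_obs_eccentric(Theta, zeta, i, Theta_per, beta, e, f_spin, f_orb):
    """Frequency modulation of the eccentric formula above."""
    num = (-np.cos(zeta) * (1.0 + np.sin(i) * np.cos(Theta + Theta_per))
           + np.sin(zeta) * np.cos(i) * np.sin(Theta - beta))
    den = ((np.cos(Theta - beta) + np.sin(i) * np.cos(Theta_per + beta))**2
           + (np.cos(zeta) * (np.sin(Theta - beta)
                              - np.sin(i) * np.sin(Theta_per + beta))
              - np.sin(zeta) * np.cos(i))**2)
    jac = f_orb * (1.0 - e**2)**(-1.5) * (1.0 + e * np.cos(Theta))**2
    return f_spin + jac * num / den
\end{verbatim}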
\end{appendix}
|
2,869,038,156,734 | arxiv | \section{Introduction}
Field-atom entanglement is one of the hallmarks of strongly interacting cavity quantum electrodynamics (CQED) systems. This fundamental process is often used as a building block in the formation of entangled states of two or more atoms that serve as a resource for quantum information processing. A notable example is the generation of a two-atom Bell state by means of consecutive interaction with a resonant microwave cavity mode prepared in the vacuum state \cite{ens1}. In effect, the cavity mode stores the entanglement with one atom, that is then transferred to the second atom. The question addressed in this work is: Can entanglement be generated when the initial field state becomes semiclassical? On one hand quantum fluctuations persist also in the semiclassical limit, while on the other hand the correspondence principle states that in this limit classical physics should be recovered, where the notion of entanglement is not even defined.
The semiclassical states of the field mode are wave packets that are localized in phase space, where the canonical coordinates are the field quadratures, such as coherent states and squeezed coherent states, and have a large mean photon number. When a two-level atom interacts with a coherent state wave packet, its polarization undergoes Rabi oscillations whose amplitude exhibits collapse-revival dynamics \cite{eberly} as a consequence of the splitting of the initial wave packet in two mutually orthogonal sub-wave packets, and of their periodic collisions \cite{geaprl,geapra}. Each sub wave-packet is a product of squeezed field state and an atomic state. After splitting, the atom-field system is highly entangled, but slow atomic state evolution turns the field-atom state again into a product state \cite{geaprl,geapra}, and the field state is a superposition of two well-separated wave packets, sometimes called a Schr\"odinger cat state. This semiclassical dynamics can be described as a flow in a double phase space \cite{jc}.
Here we study a field mode interacting consecutively with two two-level atoms. The interaction of the field wave packet with the first atom causes it to split, as explained, and to become entangled with the atom. When the interaction with the second atom begins, each of these sub-wave packets splits again, and the atom-field-atom system becomes a superposition of four wave packets (see Sec. \ref{sec:afa}, Fig.\ \ref{fig:field}). \emph{Atom-atom} entanglement can only be obtained if two or more of the sub-wavepackets overlap, and this occurs when the normalized interaction times of the field with the two atoms are equal. The system state is then in general a superposition of three wave packets, and the atom-atom state is an entangled mixed state.
We show in Sec.\ \ref{sec:aa} that it is possible to generate entanglement that reaches a finite, less than maximal, limit when the photon number tends to infinity, although the time required to generate the entangled state diverges in this limit. The optimal interaction time is determined as a trade-off between the mixing of three pure atomic states and the degree of entanglement of one of them, as shown in Fig.\ \ref{fig:2atom}. We consider the effects of photon loss in Sec.\ \ref{sec:dec} and show that the entanglement generation is degraded by dephasing of the sub wave packets, and that a coherent superposition of wave packets cannot be maintained when the mean photon number is larger than the inverse decoherence rate to the $\frac23$ power. Beyond this point entanglement generation proceeds only through small quantum fluctuations, and it decreases as the inverse of the mean photon number to the sixth power, see Fig.\ \ref{fig:decoherence}.
The analysis demonstrates that a semiclassical preparation can serve as mediator for appreciable entanglement of atoms, if the decoherence is weak enough. For this purpose it is necessary that the field becomes entangled with the atoms and evolve into macroscopic superposition states. In contrast, when decoherence is too strong to allow wave packet splitting, atom-atom entanglement is generated only through small quantum fluctuations, and it is therefore weak.
The resonant interaction of a field mode in excited states with two atoms has been investigated theoretically in \cite{mondragon,tcm,tcm2} for simultaneous interaction and in \cite{nayak1,nayak2} for consecutive interaction, revealing wavepacket splitting, slow atomic state evolution, and collapse-revival dynamics of entanglement similar to those observed in the single atom-field system.
The off-resonant interaction of a field wave packet with two atoms was studied experimentally and theoretically in \cite{ens-dec,bppaper} as a means of measurement of the cavity decay rate and its effect on the dephasing of Schr\"odinger cat states. The steady state entanglement of two atoms in a highly excited leaky cavity was studied \cite{qd}. Our focus is the process of entanglement of two two-level systems by interaction with an intermediary that tends to the semiclassical limit, for which CQED provides a prominent realization.
Field-mediated atom-atom entanglement is often carried out in the highly-detuned Raman regime \cite{guo,ens2} where the field is only virtually excited to avoid field-induced decoherence. In a sense this is the opposite limit to the one studied here, and indeed we show that the atom-atom entanglement is very sensitive to dephasing of the field wave packet. Furthermore, although it is often argued that in the Raman regime entanglement is independent of the field state, the Raman regime entails an arbitrarily large detuning when the field state photon number tends to infinity.
\section{Atom-field-atom interaction}\label{sec:afa}
\begin{figure*}[htb]
\includegraphics[width=0.4\linewidth]{wi1.eps}\hspace{1cm} \includegraphics[width=0.4\linewidth]{fig3new.eps}
\caption{\label{fig:field}The Wigner function of the field state $\mathop\mathrm{Tr}_\text{atoms}\ket{2}\bra{2}$, for unequal effective interaction times $\gamma_1\ne\gamma_2$ (left) and equal interaction times (right), numerically calculated for $\alpha=4$. In the left panel the arrows signify the direction of phase space rotation of the four sub wave packets. The Wigner function of the initial coherent state wave packet is circularly symmetric, and the deformation of the four sub wave packets in the left panel, and the two flank sub-wave packets in the right panel is a consequence of the squeezing.
}
\end{figure*}
We study the dynamics of two two-level atoms interacting consecutively on resonance with a single electromagnetic mode. Modelling the interaction by the Jaynes-Cummings Hamiltonian in the rotating wave approximation, the Hamiltonian is $H(t)=H_1,\,0<t<t_1$, and $H_2,\,t_1<t<t_1+t_2$, where
\begin{equation}\label{eq:h}
H_k=\hbar\omega(a^\dagger a+\sigma_k^\dagger\sigma_k)+\hbar \Omega_k(a^\dagger\sigma_k+a\sigma_k^\dagger)
\end{equation}
where $\omega$ is the frequency of the electromagnetic mode with energy states $\ket{n}$, $n=0,1,\ldots~$ created by $a^\dagger$, and coherent states $\ket{\alpha}$, $\hbar\omega$ is the level spacing of the two-level atoms with energy states $\ket{g}_k,\ket{e}_k$ (atom k) raised by $\sigma^\dagger_k$, and $\hbar \Omega_k$ are the
field-atom interaction energies. The system is prepared in a product state $\ket{\alpha}\otimes\ket{g}_1\otimes\ket{g}_2$.
When the field interacts with the first atom, the energy states are polariton states $\ket{n}_{\pm}\equiv\frac{1}{\sqrt2}(\ket{n-1}\otimes\ket{e}_1\pm\ket{n}\otimes\ket{g}_1)$, with energies $\hbar(n\omega\pm\sqrt{n}\Omega_1)$, and the absolute ground state $\ket{0}\otimes\ket{g}_1$. The field-atom state evolves into a superposition of two products of a squeezed coherent state and an atomic state \cite{eberly,geaprl,geapra}. Expressing the Hamiltonian in terms of polariton number operator $\hat n$, and the projections $P_\pm$ on the $\pm$ subspaces \cite{jc},
$H_1=H_+P_++H_-P_-$, where $H_\pm=\hbar(\hat n\omega\pm \sqrt{\hat n}\Omega_1)$,
the initial state is $\frac1{\sqrt2}(\ket{\alpha}_+-\ket{\alpha}_-)$, where
$\ket{\alpha}_\pm=\pm\sqrt{2}P_\pm(\ket{\alpha}\otimes\ket{g}_1)$, and the field-first atom state at time $t_1$ is
\begin{equation}
\ket{1}=\ket{1_+}-\ket{1_-}\ ,\quad \ket{1_\pm}={\textstyle\frac1{\sqrt2}}e^{-\frac{i}{\hbar}H_\pm t_1}\ket{\alpha}_\pm
\end{equation}
For large $|\alpha|$ the dynamics generated by $H_\pm$ is well-approximated by classical dynamics in the polariton phase spaces, that is, phase spaces corresponding to the $\pm$ sub-Hilbert spaces, where the canonical coordinates $q,p$ are the field quadratures. For convenience, we join a $\ket{0}_\pm$ state to the $\pm$ subspaces (respectively) with occupations that remain exponentially small throughout. The initial states in both phase spaces are then coherent state Gaussian wave packets, and they evolve according to the rules of semiclassical phase space dynamics \cite{lj86}. In this approximation the wave packet evolution is determined completely by classical data generated by the classical Hamiltonians
\begin{equation}
H_\pm^{\text{(cl)}}=\textstyle\frac{1}{2}(q^2+p^2+\hbar) \omega +\sqrt{\frac{\hbar}{2}(q^2+p^2+\hbar)}\Omega_1
\end{equation}
obtained from the Weyl phase-space representation of the quantum Hamiltonians. $H_\pm^{\text{(cl)}}$ generate nonlinear oscillations, that is rotation in phase space with an amplitude-dependent frequency.
The wave packet propagation consists of three parts: A phase-space translation from the initial point $(q,p)$, $\alpha=\frac{1}{\sqrt2}(q+ip)$ to the final point $(q_\pm,p_\pm)$, $\alpha_\pm=\frac{1}{\sqrt2}(q_\pm+ip_\pm)$ determined by the classical trajectory of the center of the wave packet, squeezing determined by the phase-space deformation generated by the nonlinear oscillations with squeeze parameters $\xi_\pm$, and an overall phase factor $e^{-i\phi_\pm}$ determined by the classical action of the classical orbits, so that the wave packets evolve to the squeezed coherent state
\begin{align}
\ket{1_\pm}&=e^{-i\phi_\pm}\ket{\alpha_\pm,\xi_\pm}\label{eq:af-pol}
\end{align}
where
\begin{equation}\ket{\alpha,\xi}_\pm=e^{\alpha b^\dagger-\alpha^* b} e^{\frac12(\xi^*b^2-\xi(b^\dagger)^2)}\ket{0}_{\pm}
\end{equation}
$b$ being the polariton annihilation operator defined with the usual properties $b\ket{n}_{\pm}=\sqrt{n}\ket{n-1}_\pm$, $[b,b^\dagger]=1$.
The values of the wave packet parameters in a frame rotating with angular speed $\omega$
are $\phi_\pm=\phi(\pm\gamma_1)$, $\alpha_\pm=\alpha(\pm\gamma_1)$, $\xi_\pm=\xi(\pm\gamma_1)$ with
\begin{align}
\phi(\gamma)&=\textstyle\frac{\gamma}{2}|\alpha|^2-\frac12\arctan(\frac14\gamma)\\
\alpha(\gamma)&=\alpha e^{-\frac{i}{2}\gamma}\\
\xi(\gamma)&= \textstyle\text{arcsinh}(\frac14\gamma)e^{i(\gamma+\text{arccot}(\frac14\gamma))}
\end{align}
Here $\gamma_1=\frac{\Omega_1t_1}{|\alpha|}$ is twice the phase space angle of rotation, clockwise and counter-clockwise (respectively), of the $+$ and $-$ wave packet; $\frac{\Omega_1}{2|\alpha|}$ is the classical (angular) frequency of nonlinear oscillations.
The atom-field states can be expressed in terms of \emph{photon} states with the help of the identity
$\ket{n}_\pm=\frac1{\sqrt2}(a\frac{1}{\sqrt{a^\dagger a}}\ket{n}\otimes\ket{e}_1\pm\ket{n}\otimes\ket{g}_1)$ as
\begin{align}\label{eq:af-ph}
\ket{1_\pm}&=\textstyle e^{-i\phi(\pm\gamma_1)}\ket{\alpha(\pm\gamma_1),\xi(\pm\gamma_1)}\nonumber\\&\otimes(e^{i(\gamma_0\mp\frac12\gamma_1)}\ket{e}_1\pm\ket{g}_1)
\end{align}
where $\gamma_0=\arg\alpha$. Here we used the fact that the squeezed state wavepackets are localized in phase-space, so that they are approximate eigenstates of $a$ and $a^\dagger$ with eigenvalues $\alpha(\pm\gamma_1)$ and $\alpha(\pm\gamma_1)^*$ (respectively).
The atom-field state $\ket{1}$ is therefore a superposition of two wavepackets that are well-separated in phase space except when $\gamma_1$ is close to an integer multiple of $2\pi$. The splitting of the wave packet generates field-atom entanglement that is almost maximal when $\gamma_1\gtrsim|\alpha|^{-1}$ and decreases to zero for $\gamma_1=\pi$ \cite{geaprl}.
The interaction with the second atom proceeds analogously. The final field-two atom state at time $t_1+t_2$ is
\begin{equation}\textstyle
\ket{2}=e^{-\frac{i}{\hbar}H_2t_2}(\ket{1_+}-\ket{1_-})\otimes\ket{g}_2
\end{equation}
where each of the two wave packets $\ket{1}_\pm\otimes\ket{g}_2$ splits again to two sub wavepackets, that continue to undergo phase-space rotation and squeezing according to the field-second atom polariton sign. $\ket{2}$ is therefore a superposition of \emph{four} localized wave packets labeled by two sign choices
\begin{align}\textstyle
&\ket{2}=\frac12(\ket{2_{++}}-\ket{2_{+-}}-\ket{2_{-+}}+\ket{2_{--}})\\
&\ket{2_{rs}}=\textstyle\frac12e^{-i\phi(\Gamma_{rs})}\label{eq:afa-ph}
\ket{\alpha(\Gamma_{rs}), \xi(\Gamma_{rs})}\\
&\quad\otimes(e^{i(\gamma_0-\frac12r\gamma_1)}\ket{e}_1+r\ket{g}_1)\otimes(e^{i(\gamma_0-\frac12\Gamma_{rs})}\ket{e}_2+s\ket{g}_2) \nonumber
\end{align}
where $\Gamma_{rs}=(r\gamma_1+s\gamma_2)$ and $\gamma_2=\frac{\Omega_2t_2}{|\alpha|}$.
\section{Atom-atom entanglement}\label{sec:aa}
\begin{figure*}[htb]
\includegraphics[width=0.45\linewidth]{ectest.eps} \includegraphics[width=0.45\linewidth]{purtest.eps}
\caption{\label{fig:mixing}The entanglement entropy $E_c$ of the center wavepacket two-atom state $\ket{c}$ (left), and the purity $P$ of the full two-atom state $\rho_a$ (right) as a function of the phase-space rotation angle $\gamma$, shown for $\alpha=4$ (thin red), $\alpha=8$ (medium green), and asymptotically for $|\alpha|\to\infty$ (thick blue). The initial rise and later drop in the degree of entanglement of the full two-atom state can be understood as a trade-off between the increase in $P$ and the decrease of $E_c$. $P(\alpha\to\infty)$ is discontinuous at $\gamma=0,\pi$, and these points are excluded from the graph.}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[width=0.45\linewidth]{conctest.eps}\hspace{8mm}\includegraphics[width=0.45\linewidth]{eof-logneg.eps}
\caption{\label{fig:2atom}Left: The entanglement of formation $E_f$ of the two-atom state $\rho_a$ as a function of the phase-space rotation angle $\gamma$, shown for $\alpha=4$ (thin red), $\alpha=8$ (medium green), and asymptotically for $|\alpha|\to\infty$ (thick blue). Right: The entanglement of formation (upper, blue) and negativity (lower, violet) as a function of $\gamma$ for $|\alpha|\to\infty$. The two entanglement monotones attain their maxima (shown by vertical dashed lines) at close but different values of $\gamma$, showing that they are nonequivalent.}
\end{figure*}
The field-two atom state $\ket{2}$ is a superposition of four sub-wave packets each with a well-defined atomic product state. Superpositions of product states can give rise to entanglement, but for most values of $\gamma_1$ and $\gamma_2$ the four sub-wave packets are separate in phase space (see Fig.\ \ref{fig:field}), and label the different atomic components, so that the two-atom density matrix $\rho_a=\mathop\mathrm{Tr}_\text{field}\ket{2}\bra{2}$ is a classical mixture of four product states in the limit $|\alpha|\gg1$, which is by definition separable. The atomic states form a coherent superposition when the corresponding sub-wave packets overlap in phase space.
We therefore let $\gamma_1=\gamma_2=\gamma$ so that the field factors in the $\ket{2_{+-}}$ and $\ket{2_{-+}}$ terms are identical and equal to the initial coherent state $\ket{\alpha}$, while being separate from the field factors of $\ket{2_{++}}$ and $\ket{2_{--}}$, unless $\gamma$ is an integer multiple of $2\pi$. If $\gamma$ is not an integer multiple of $\pi$, the field factors of $\ket{2_{++}}$ and $\ket{2_{--}}$ are also phase-space separate, so that the atomic state is a mixture
\begin{equation}\textstyle\label{eq:densitya}
\rho_a=\frac14(\ket{l}\bra{l}+2\ket{c}\bra{c}+\ket{r}\bra{r})
\end{equation}
of the states
\begin{align}
\ket{l}=&\textstyle\frac12(\ket{g}_1- e^{i\gamma_0}e^{\frac{i}{2}\gamma}\ket{e}_1)\otimes(\ket{g}_2- e^{i\gamma_0}e^{i\gamma}\ket{e}_2)\ ,\notag\\
\ket{c}=&\textstyle\frac1{\sqrt2}(e^{2i\gamma_0}\cos(\frac12\gamma)\ket{e}_1\otimes\ket{e}_2\notag\\
&\textstyle+ie^{i\gamma_0}\sin(\frac12\gamma)\ket{e}_1\otimes\ket{g}_2-\ket{g}_1\otimes\ket{g}_2)\ ,\ \text{and}\notag\\
\ket{r}=&\textstyle\frac12(\ket{g}_1+ e^{i\gamma_0}e^{-\frac{i}{2}\gamma}\ket{e}_1)\otimes(\ket{g}_2+ e^{i\gamma_0}e^{- i\gamma}\ket{e}_2)\ .\notag
\end{align}
The state $\ket{c}$ is entangled for all $\gamma$ not an integer multiple of $\pi$, with monotonically decreasing entanglement entropy as a function of $\gamma$ between $0$ and $\pi$ in the limit $|\alpha|\to\infty$ (see Fig.\ \ref{fig:mixing}), while the states $\ket{l}$ and $\ket{r}$ are separable. The full two-atom state is entangled if $\rho_a$ cannot be expressed as a mixture of separable states. The entanglement of formation \cite{wootters} is an entanglement measure that vanishes if and only if the state is separable, displayed in Fig.\ \ref{fig:2atom} that shows that the two atoms are entangled for most values of $\gamma$ for finite $\alpha$, and for all $\gamma$ except integer multiples of $\pi$ in the limit $|\alpha|\to\infty$. When nonzero, the entanglement of formation of the two-atom state, being equal to the minimal weighted average entanglement entropy of pure states mixed in $\rho_a$, obtains its maximum of $\approx0.21$ at a kink singularity at $\gamma=\frac\pi2$. Unfortunately, there exist nonequivalent entanglement monotones for mixed states \cite{hor}. As an example, the negativity, that is minus the sum of the negative eigenvalues of the partial transpose of $\rho_a$, obtains its maximum at $\gamma \approx0.511\pi$ (see Fig.\ \ref{fig:2atom}). This result indicates that although there is no precise meaning for optimal interaction time, values of $\gamma$ near $\frac12\pi$ are best for entanglement generation in this system. Furthermore, changing the initial condition of the second atom to $\ket{e}$ leaves the entanglement of formation unchanged, and slightly shifts the maximum of the negativity to $\gamma\approx0.513\pi$.
An important aspect of the two-atom entanglement generation process is that although the atom-field entanglement, and therefore the entanglement entropy of the center wave packet $\ket{c}$, reaches maximal value after a short effective interaction time $\gamma$ of $O(|\alpha|^{-1})$, the degree of entanglement of the full two-atom state is very low for such small $\gamma$. This is a result of the incoherent mixing of $\ket{c}$ with the flank wave packets $\ket{l},\ket{r}$ whose overlap with $\ket{c}$ is low for small $\gamma$, as follows from the low purity of $\rho_a$ for these $\gamma$, see Fig.\ \ref{fig:mixing}. When $\gamma$ increases, the entanglement entropy of $\ket{c}$ decreases until it becomes separable for $\gamma=\pi$, but at the same time the overlap of $\ket{c}$ with $\ket{l}$ and $\ket{r}$, and consequently the purity of $\rho_a$, increase. As a result, the degree of entanglement of $\rho_a$ increases initially, reaches a maximum near $\gamma=\frac12\pi$, and decreases to 0 for $\gamma=\pi$, as shown in Fig.\ \ref{fig:2atom}.
Although Eq.\ (\ref{eq:densitya}) is not valid for $\gamma$ an integer multiple of $\pi$, the preceding argument is valid for these values of $\gamma$ showing that the entanglement of formation is a continuous function of $\gamma$.
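For readers who wish to reproduce the curves of Fig.\ \ref{fig:2atom}, the following minimal numerical sketch (ours; the basis ordering, helper names, and the use of {\tt numpy} are arbitrary choices) builds $\rho_a$ of Eq.\ (\ref{eq:densitya}) and evaluates the Wootters entanglement of formation and the negativity:
\begin{verbatim}
import numpy as np

def two_atom_state(gamma, gamma0=0.0):
    """rho_a of Eq. (densitya); basis order |ee>, |eg>, |ge>, |gg>.
    gamma0 = arg(alpha) enters only as a local phase."""
    p = np.exp(1j * gamma0)
    l1 = np.array([-p * np.exp(0.5j * gamma), 1.0]) / np.sqrt(2)  # (e, g) of atom 1
    l2 = np.array([-p * np.exp(1.0j * gamma), 1.0]) / np.sqrt(2)
    r1 = np.array([ p * np.exp(-0.5j * gamma), 1.0]) / np.sqrt(2)
    r2 = np.array([ p * np.exp(-1.0j * gamma), 1.0]) / np.sqrt(2)
    l, r = np.kron(l1, l2), np.kron(r1, r2)
    c = np.array([p**2 * np.cos(0.5 * gamma),      # |ee>
                  1j * p * np.sin(0.5 * gamma),    # |eg>
                  0.0,                             # |ge>
                  -1.0]) / np.sqrt(2)              # |gg>
    return (0.25 * np.outer(l, l.conj()) + 0.5 * np.outer(c, c.conj())
            + 0.25 * np.outer(r, r.conj()))

def concurrence(rho):
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    Y = np.kron(sy, sy)
    ev = np.linalg.eigvals(rho @ Y @ rho.conj() @ Y).real
    lam = np.sqrt(np.abs(np.sort(ev)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    C = concurrence(rho)
    if C <= 0.0:
        return 0.0
    x = 0.5 * (1.0 + np.sqrt(1.0 - C**2))
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def negativity(rho):
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(pt)
    return float(-ev[ev < 0.0].sum())
\end{verbatim}
For example, {\tt entanglement\_of\_formation(two\_atom\_state(np.pi/2))} evaluates the mixed-state entanglement at $\gamma=\pi/2$, near the maximum discussed above.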
\section{Effects of decoherence}\label{sec:dec}
\begin{figure*}[htb]
\subfigure{\epsfig{file=deco3.eps,width=0.45\linewidth}}\hfill
\subfigure{\hspace{-0.2cm}\epsfig{file=con5.eps,width=0.45\linewidth}}\hspace{5mm}
\caption{\label{fig:decoherence} The entanglement of formation $E_f$ of the two-atom state as a function of the phase-space angle $\gamma$ for $y\equiv\frac{2\lambda|\alpha|^3}{\Omega_1+\Omega_2}=0,0.15,0.4,0.7,1$ respectively (left) and $y=0,4,5,6,7,8$ (right) from top to bottom, where $\lambda$ is the cavity loss rate and $\Omega_{1,2}$ are the atom-field interactions. The dashed line in the right panel traces the maxima of $E_f$ over $\gamma$ for given values of $y$.}
\end{figure*}
The preceding results apparently contradict the correspondence principle by displaying entanglement, a purely quantum phenomenon, at the classical limit. However, a given value of entanglement is reached for a fixed $\gamma=\frac{\Omega_1t_1}{|\alpha|}=\frac{\Omega_2t_2}{|\alpha|}$, and therefore the required interaction time diverges when $|\alpha|\to\infty$. This is a demonstration of the singular nature of the semiclassical limit: for a fixed interaction time classical physics is recovered as $|\alpha|\to\infty$, but the classical and the long-time limits do not commute. A similar phenomenon was observed in \cite{lg}, where the classical limit and the limit of large squeezing do not commute.
Physically, longer interaction times imply a stronger effect of environment coupling, and it is this effect that guarantees the emergence of classical physics for large $|\alpha|$, both because of stronger dephasing and because of longer interaction times. We demonstrate this statement using the standard Markovian model of cavity loss, with Lindblad generator $a$ and rate $\lambda$, so that the master equation for the full system density matrix $\rho$ is \cite{zurek}
\begin{equation}\label{eq:master}
\textstyle\partial_t\rho=-\frac{i}{\hbar}[H,\rho]+\frac12\lambda(a^\dagger a\rho-2a\rho a^\dagger+\rho a^\dagger a)
\end{equation}
A superposition of distinct coherent states $c(\ket{\alpha}+\ket{\beta})$ experiences as a result of cavity loss, in addition to the overall decay with rate $\lambda$, a much faster dephasing with rate $\lambda|\alpha-\beta|^2$ that affects the coefficients of the coherence terms in the density matrix proportional to $\ket{\alpha}\bra{\beta}$ and its conjugate \cite{zurek}.
In order to analyze this process in conjunction with the wave packet dynamics we assume that $\lambda|\alpha|\ll \Omega_1,\Omega_2$ so that we can ignore the energy decay of the state. The full-system density matrix after the interaction with the first atom is then
\begin{align}
\rho_1&=\ket{1_+}\!\bra{1_+}+x_1\ket{1_+}\!\bra{1_-}+x_1^*\ket{1_-}\!\bra{1_+}
+\ket{1_-}\!\bra{1_-}
\end{align}
with
$
x_k=e^{-\frac{\lambda|\alpha|^3}{\Omega_k}(\gamma+i(e^{-i\gamma}-1))}.
$
A similar dephasing affects the coefficients of the twelve coherence terms during the interaction with the second atom.
The significance of dephasing for the generation of two-atom entanglement is the reduction of the coherence between the two wavepackets $\ket{2_{+-}}$ and $\ket{2_{-+}}$ that generates the entangled atomic state $\ket{c}$ when they collide, so that the term $\ket{2_{+-}}\bra{2_{-+}}$ and its conjugate in the full-state density matrix are multiplied by $x_1x_2$
and $(x_1x_2)^*$ (respectively). It follows that the term $\frac12 \ket{c}\bra{c}$ in $\rho_a$ is replaced by
\begin{align}
\rho_c&=\textstyle\frac14(\ket{c_A} \bra{c_A} +x_1x_2\ket{c_A}\bra{c_B}+(x_1x_2)^*\ket{c_B}\bra{c_A}
\nonumber\\&+ \ket{c_B} \bra{c_B})
\label{eq:od-dec}\end{align}
with
\begin{equation*}
\ket{c_{A,B}}=\textstyle\frac12(\ket{g}_1\mp e^{ \frac{i}{2}\gamma_0}e^{\pm\frac{i}{2}\gamma}\ket{e}_1)\otimes(\ket{g}_2\pm e^{\frac{i}{2}\gamma_0}\ket{e}_2)
\end{equation*}
where upper (lower) signs correspond to $A$ ($B$), respectively,
while the terms proportional to $\ket{l}\bra{l}$ and $\ket{r}\bra{r}$ are negligibly affected by decoherence. For $|x_1x_2|<1$, $\rho_c$ is the density matrix of a mixed state. Since $\ket{c_A}$ and $\ket{c_B}$ are product states, the degree of entanglement of $\rho_c$, and therefore of $\rho_a$, is a decreasing function of the decoherence rate $\lambda$.
It follows from Eq.\ (\ref{eq:od-dec}) that the degree of entanglement depends on the decoherence rate only through the combination $y=\frac{2\lambda|\alpha|^{3}}{\Omega_1+\Omega_2}$. Fig.\ \ref{fig:decoherence} shows the entanglement of formation as a function of $y$ and $\gamma$. Evidently, the maximum achievable entanglement decreases with increasing $y$, and this maximum is achieved earlier. Detailed analysis shows that when $y$ is large the maximum of the entanglement is achieved at $\gamma=\frac1y$, and its value is proportional to $y^{-4}\log y$. This phenomenon has the simple interpretation that a shorter effective interaction time allows less time for entanglement generation, but also less time for decoherence, and that this trade-off leads to a shorter optimal interaction time for stronger decoherence; for large $y$ the entanglement is completely destroyed when $\gamma=\frac{3}{2y}$. In this limit the interaction stops before the initial wave packet splits so that entanglement is generated only by weak quantum fluctuations, and this allows a power law rather than an exponential decay in the degree of entanglement---the entanglement of formation for the optimal interaction time is approximately $\frac19$ of the entanglement achieved in an {ideal} system for the same interaction time, see Fig.\ \ref{fig:decoherence} (right panel).
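A corresponding sketch for the decohered center block of Eq.\ (\ref{eq:od-dec}) is given below (again an illustrative sketch of our own; $\gamma_0$ enters only as a local phase and does not affect any entanglement measure). Adding the flank terms $\frac14(\ket{l}\bra{l}+\ket{r}\bra{r})$ from the previous sketch, the resulting $4\times4$ matrix can be fed to the concurrence routine above to evaluate the entanglement as a function of $\gamma$ and the dephasing strength:
\begin{verbatim}
import numpy as np

def coherence_factor(gamma, lam, abs_alpha, Omega):
    """Dephasing factor x_k suppressing the |2_{+-}><2_{-+}| coherence."""
    return np.exp(-(lam * abs_alpha**3 / Omega)
                  * (gamma + 1j * (np.exp(-1j * gamma) - 1.0)))

def rho_c_decohered(gamma, x12, gamma0=0.0):
    """Center block of Eq. (od-dec); basis |ee>, |eg>, |ge>, |gg>.
    x12 = x_1*x_2 interpolates between the pure state |c> (x12 = 1) and an
    equal mixture of the product states |c_A>, |c_B> (x12 = 0)."""
    p = np.exp(0.5j * gamma0)
    cA = np.kron(np.array([-p * np.exp( 0.5j * gamma), 1.0]),
                 np.array([ p, 1.0])) / 2.0
    cB = np.kron(np.array([ p * np.exp(-0.5j * gamma), 1.0]),
                 np.array([-p, 1.0])) / 2.0
    return 0.25 * (np.outer(cA, cA.conj()) + np.outer(cB, cB.conj())
                   + x12 * np.outer(cA, cB.conj())
                   + np.conj(x12) * np.outer(cB, cA.conj()))
\end{verbatim}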
\section{Conclusions}
Our first conclusion is that two qubits \emph{can} be entangled solely by interaction mediated by a macroscopic field preparation, and the degree of entanglement tends to a positive value in the classical limit. The entanglement process proceeds through splitting of the field wave packet into four components, each carrying a different atomic product state, and a subsequent merging of two of the sub wave packets to form an entangled superposition of two atomic product states. Thus, although the initial field state is semiclassical with a large photon number and well-defined quadratures, the field necessarily evolves into highly non-classical states during the entanglement generation process. Nevertheless, the semiclassical wave packet dynamics approximation for the propagation of the wave packet stays valid during the \emph{full} entanglement generation process, although in a multiple phase space---one phase space copy for each sub wave packet.
The splitting of the wave packet is a slow process where a macroscopic field state evolves by interaction with two qubits.
Naturally, it is not an efficient method of creating entangled pairs, as the interaction time required to obtain entanglement diverges in the classical limit. The correspondence principle is nonetheless obeyed in the sense that classical physics, \textit{i.e.}\/ no entanglement, is obtained in the limit of large photon number for a \emph{fixed} interaction time.
The final two-atom state is a coherent superposition of two product states associated with the center wave packet, incoherently mixed with two additional product states associated with the flank wave packets. Thus, the two-atom entanglement is determined by the overlap of the evolving atomic states. In particular, wave packet splitting and atom-field entanglement are necessary but not sufficient for the generation of atom-atom entanglement.
A subtler issue is the fact that the two atoms are not maximally entangled by interaction with the semiclassical field mode. Although for short interaction periods the center wave packet carries an almost Bell-like atomic state, the atom-atom entanglement is degraded almost completely by the mixture with the flank wave packets, while for long interaction times the field and atoms decouple and the atomic state at this stage is separable. Unlike the vacuum field, therefore, the semiclassical field does not function as a quantum gate, and we conjecture that this is an example of a general principle.
We thank D Aharonov, I Arad, H Eisenberg, and N Katz, for helpful discussions. This work was supported by the ISF grant {1002/07}.
|
2,869,038,156,735 | arxiv | \section{Introduction}
Decades after the experimental detection of the neutrino \cite{Reines 1953},
it was generally accepted that the neutrino mass $m_{0\nu}$ was rigorously
zero. The crucial experiments with the 50 kton neutrino detector
Super-Kamiokande found strong evidence for oscillations (and hence - mass) in
the atmospheric neutrinos \cite{Fukuda 1998}. The direct neutrino measurements
made it possible to bound the neutrino mass. The upper limit for the mass of the
lightest neutrino flavor $\nu_{e}$ was obtained from experiments measuring
the high-energy part of the tritium $\beta$-spectrum, and recent
experiments yielded a $\sim2\operatorname{eV}$ upper limit
\cite{Weinheimer 1999, Lobashev 1999}. As a result of these recent
experiments, the upper mass limits of $\nu_{\mu}$ and $\nu_{\tau}$ are
$170\operatorname{keV}$ \cite{Assamagan 1996} and $18.2\operatorname{MeV}$
\cite{Barate 1998}, respectively. The Solar neutrino experiments
(\textit{SNE}) and Atmospheric neutrino experiments (\textit{ANE}) make it
possible to find the squared mass differences
$\bigtriangleup m_{12}^{2}=m_{2}^{2}-m_{1}^{2}$ and
$\bigtriangleup m_{23}^{2}=m_{3}^{2}-m_{2}^{2}$, but not the absolute values
of the neutrino masses. The astrophysical constraint on the neutrino mass is
${\textstyle\sum}m_{\nu}<2\operatorname{eV}$ \cite{Bahcall 1996}. Recent
extensions of the Standard Model lead to non-zero neutrino masses, which lie
within the large range of $10^{-6}\operatorname{eV}\div1\operatorname{eV}$.
In the classical $SU(5)$ model the mass relations between charged leptons and
down quark masses are simply identities: $m_{e}=m_{d},$ $m_{\mu}=m_{s}$ and
$m_{\tau}=m_{b}$. The mass relations of Georgi-Jarlskog \cite{Georgi 1979}
ensue from the $SO(10)$ model and relate charged leptons and down quark
masses: $m_{e}=m_{d}/3,$ $m_{\mu}=3m_{s}$ and $m_{\tau}=m_{b}$. However, these
mass relations deviate from the experimental data by factors of several. Moreover,
similar mass relations are unsuitable for the neutral lepton (neutrino) masses.
The seesaw mechanism naturally generates small Majorana neutrino mass $m_{\nu
}$ from reasonable Dirac mass $m_{D}$ and very heavy Majorana sterile neutrino
mass $M_{N}$, namely $m_{\nu}\sim\frac{m_{D}^{2}}{M_{N}}\ll m_{D}$. But there
are many seesaw models that differ in the scale $M_{N}$ and Dirac mass. The
Grand unified theories (\textit{GUT}) are the main candidates for seesaw
models, with $M_{N}$ at or a few orders of magnitude below \textit{GUT} scale.
Successful \textit{GUT} models should essentially generate
Cabibbo-Kobayashi-Maskawa (\textit{CKM}) quark mixing matrix \cite{Cabibbo
1963, Cobayashi 1973} and Maki-Nakagawa-Sakata (\textit{MNS}) lepton mixing
matrix \cite{Maki 1962} and predict results compatible with the data from
\textit{SNE} and \textit{ANE}. Yet, it is admitted that the predictions of the
quark-lepton mass spectrum are the least successful aspect of the unified
gauge theory \cite{Fukugita 1999, Falcone 2002}.
The purpose of this paper is to find simple and reliable quark-lepton mass
relations, based on experimental data and estimations for quark and lepton
masses. The next step is to estimate neutrino masses by means of these mass
relations and data from \textit{SNE} and \textit{ANE}.
\section{Power law approximation for the masses of charged leptons and up
quarks}
According to the Standard model, the fundamental constituents of matter
are 6 quarks and 6 leptons. The fundamental fermions group in three
generations, having similar properties and increasing masses. The three
generations of the fundamental fermions and their masses are presented in
Table \ref{Table 1}. The estimations of quark masses are taken from
\cite{Manohar 2000} and the upper mass limits of the neutrino flavors are
taken from \cite{Weinheimer 1999, Lobashev 1999, Assamagan 1996, Barate 1998}.
\begin{table}[htb] \centering
\caption{Three generations of fundamental fermions and their masses
(MeV).}
\begin{tabular}
[c]{lllllll}\hline\hline
Fermions & \multicolumn{2}{l}{$1^{st}$ generation} &
\multicolumn{2}{l}{$2^{nd}$ generation} & \multicolumn{2}{l}{$3^{rd}$
generation}\\\hline
Up quarks & $u$ & $3$ & $c$ & $1.25\times10^{3}$ & $t$ & $1.74\times10^{5}$\\
Down quarks & $d$ & $6$ & $s$ & $122$ & $b$ & $4.2\times10^{3}$\\
Charged leptons & $e$ & $0.511$ & $\mu$ & $106$ & $\tau$ & $1.78\times10^{3}$\\
Neutral leptons & $\nu_{e}$ & $<2\times10^{-6}$ & $\nu_{\mu}$ & $<0.17$ &
$\nu_{\tau}$ & $<18.2$\\\hline\hline
\end{tabular}
\label{Table 1}
\end{table}
\bigskip
A clear feature of the quark and charged lepton mass spectrum is the hierarchy
of masses belonging to different generations:
\begin{center}
\begin{equation}
m_{u}\ll m_{c}\ll m_{t},\text{ }m_{d}\ll m_{s}\ll m_{b}\text{, }m_{e}\ll
m_{\mu}\ll m_{\tau} \label{eqn1}
\end{equation}
\end{center}
Most likely, a similar hierarchy of the masses of the neutral leptons (neutrinos)
could be anticipated: $m_{\nu e}\ll m_{\nu\mu}\ll m_{\nu\tau}$. Based on the
experimental data, we search by least squares for a simple relation between the
masses of the charged leptons ($m_{cl}$) and the respective up quarks
($m_{uq}$). Although the linear regression
$m_{cl}\approx0.0102m_{uq}\operatorname{eV}$ shows a close correlation, it
yields an electron mass many times lower than the experimental value. After
examination of other simple approximations (logarithmic, exponential and power
law) we found that the power law fits the experimental data best:
\begin{center}
\begin{equation}
m_{cl}=k_{0}m_{uq}^{\alpha}
\operatorname{eV}
\label{eqn2}
\end{equation}
\end{center}
where $k_{0}=9.33$ and $\alpha=0.749\approx3/4.$
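
As a simple consistency check, the fit in Eq. (\ref{eqn2}) can be reproduced by
an ordinary least-squares fit in log-log coordinates. The following minimal
sketch (Python with NumPy is assumed; it is an illustration rather than the
original fitting procedure, and it uses the central mass values of Table
\ref{Table 1}) recovers $\alpha\approx0.75$ and $k_{0}\approx9.3$:
\begin{verbatim}
import numpy as np

# Central masses in eV (Table 1)
m_uq = np.array([3.0e6, 1.25e9, 1.74e11])   # u, c, t
m_cl = np.array([0.511e6, 106.0e6, 1.78e9]) # e, mu, tau

# Fit log(m_cl) = log(k0) + alpha*log(m_uq), i.e. m_cl = k0*m_uq^alpha
alpha, log_k0 = np.polyfit(np.log(m_uq), np.log(m_cl), 1)
print(alpha, np.exp(log_k0))   # ~0.75 and ~9.3
\end{verbatim}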
Despite the large uncertainty of the $u$-quark mass (from
$1.5\operatorname{MeV}$ to $5\operatorname{MeV}$) and of the $d$-quark mass
(from $3\operatorname{MeV}$ to $9\operatorname{MeV}$), the slope remains
within the narrow interval from 0.683 to 0.782. Although
the approximation is based on only three points, the correlation coefficient is
very high ($r=0.993$) and the maximal ratio of the predicted mass to the
respective experimental value is $\Delta=m_{pr}/m_{ex}=1.74$ (for the muon).
The predicted masses of the electron and tau lepton differ from the respective
experimental values by less than 40\%. Therefore, the mass relation
(\ref{eqn2}) can be accepted as satisfactory.
\section{Mass relation for neutral leptons and down quarks and estimations of
neutrino masses}
We suggest that a mass relation similar to (\ref{eqn2}) connects the masses of
the neutral leptons ($m_{nl}$) and the respective down quarks ($m_{dq}$):
\begin{center}
\begin{equation}
m_{nl}=km_{dq}^{\alpha}
\operatorname{eV}
\label{eqn3}
\end{equation}
\end{center}
where $\alpha=0.749\approx3/4$ and $k$ is an unknown constant.
For $k=k_{0}=9.33$, formula (\ref{eqn3}) yields
$m_{\nu e}\approx1.13\operatorname{MeV}$,
$m_{\nu\mu}\approx10.8\operatorname{MeV}$ and
$m_{\nu\tau}\approx153.66\operatorname{MeV}$. These values are several orders
of magnitude bigger than the experimental upper limits of the neutrino masses
(see Table \ref{Table 1}), therefore $k\ll k_{0}$.
Astrophysical constraints allow the value of \textit{k} to be limited more
closely, since they give $m_{\nu}<\sum m_{\nu}<2\operatorname{eV}$. Thus, from
equation (\ref{eqn3}) we obtain:
\begin{center}
\begin{equation}
k=\frac{m_{\nu\tau}}{m_{b}^{3/4}}<\frac{\sum m_{\nu}}{m_{b}^{3/4}}
\sim1.21\times10^{-7} \label{eqn4}
\end{equation}
\end{center}
\textit{ANE} \cite{Ashie 2005} determine the squared mass difference:
\begin{center}
\begin{equation}
m_{\nu\tau}^{2}-m_{\nu\mu}^{2}\approx2.2\times10^{-3}
\operatorname{eV}^{2} \label{eqn5}
\end{equation}
\end{center}
Relation (\ref{eqn3}) yields:
\begin{center}
\begin{equation}
\frac{m_{\nu\mu}}{m_{\nu e}}\sim(\frac{m_{s}}{m_{d}})^{3/4}\approx9.60
\label{eqn6}
\end{equation}
\begin{equation}
\frac{m_{\nu\tau}}{m_{\nu\mu}}\sim(\frac{m_{b}}{m_{s}})^{3/4}\approx14.17
\label{eqn7}
\end{equation}
\end{center}
\bigskip
Solving the system (\ref{eqn5})--(\ref{eqn7}) we obtain
$m_{\nu e}\approx3.4\times10^{-4}\operatorname{eV}$,
$m_{\nu\mu}\approx3.3\times10^{-3}\operatorname{eV}$ and
$m_{\nu\tau}\approx4.7\times10^{-2}\operatorname{eV}$. These results support
the normal hierarchy of neutrino masses.
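
For transparency, the simple algebra behind these estimates is sketched below
(Python is assumed; this is only an illustration that solves
Eqs. (\ref{eqn5})--(\ref{eqn7}), and the same routine also covers the solar
splittings, Eqs. (\ref{eqn8}) and (\ref{eqn9}), used further below):
\begin{verbatim}
import numpy as np

r12 = (122.0 / 6.0) ** 0.75     # Eq. (6): m_numu/m_nue   ~ 9.60
r23 = (4200.0 / 122.0) ** 0.75  # Eq. (7): m_nutau/m_numu ~ 14.17

def from_atmospheric(dm2):      # Eq. (5): m_nutau^2 - m_numu^2 = dm2
    m2 = np.sqrt(dm2 / (r23**2 - 1.0))
    return m2 / r12, m2, m2 * r23

def from_solar(dm2):            # Eqs. (8), (9): m_numu^2 - m_nue^2 = dm2
    m2 = np.sqrt(dm2 / (1.0 - 1.0 / r12**2))
    return m2 / r12, m2, m2 * r23

print(from_atmospheric(2.2e-3)) # ~ (3.4e-4, 3.3e-3, 4.7e-2) eV
print(from_solar(7.9e-5))       # LMA: ~ (9.3e-4, 8.9e-3, 0.13) eV
print(from_solar(6.0e-6))       # SMA: ~ (2.6e-4, 2.5e-3, 3.4e-2) eV
\end{verbatim}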
On the other hand, the Large mixing angle (\textit{LMA})
Mikheyev--Smirnov--Wolfenstein (\textit{MSW}) solution for \textit{SNE}
\cite{Bandyopadhyay 2005} yields:
\begin{center}
\begin{equation}
m_{\nu\mu}^{2}-m_{\nu e}^{2}\approx7.9\times10^{-5}
\operatorname{eV}^{2} \label{eqn8}
\end{equation}
\end{center}
This equation, together with (\ref{eqn6}) and (\ref{eqn7}), yields
$m_{\nu e}\approx9.3\times10^{-4}\operatorname{eV}$,
$m_{\nu\mu}\approx8.9\times10^{-3}\operatorname{eV}$ and
$m_{\nu\tau}\approx0.13\operatorname{eV}$. These values are almost three times
bigger than the values obtained from the Super-Kamiokande data and therefore
do not fit well with the latter.
However, the Small mixing angle (\textit{SMA}) \textit{MSW} solution for
\textit{SNE} \cite{Albright 2001} yields:
\begin{center}
\begin{equation}
m_{\nu\mu}^{2}-m_{\nu e}^{2}\approx6\times10^{-6}
\operatorname{eV}^{2} \label{eqn9}
\end{equation}
\end{center}
This equation, together with (\ref{eqn6}) and (\ref{eqn7}), yields
$m_{\nu e}\approx2.6\times10^{-4}\operatorname{eV}$,
$m_{\nu\mu}\approx2.5\times10^{-3}\operatorname{eV}$ and
$m_{\nu\tau}\approx3.4\times10^{-2}\operatorname{eV}$. These values differ by
less than 25\% from the results obtained above from the Super-Kamiokande
data, which shows that, within the suggested approach, the \textit{SMA MSW}
solution fits the \textit{ANE} better than the \textit{LMA MSW} one.
Thus, the obtained quark-lepton mass relations and the results of the solar
and atmospheric neutrino experiments make it possible to estimate the masses
of $\nu_{e}$, $\nu_{\mu}$ and $\nu_{\tau}$ as
$(2.6\div3.4)\times10^{-4}\operatorname{eV}$,
$(2.5\div3.3)\times10^{-3}\operatorname{eV}$ and
$(3.4\div4.7)\times10^{-2}\operatorname{eV}$, respectively. These values are
close to the neutrino masses ($2.1\times10^{-4}\operatorname{eV}$,
$2.5\times10^{-3}\operatorname{eV}$ and $5.0\times10^{-2}\operatorname{eV}$)
found in \cite{Valev 2008} by the mass relation connecting the masses of four
stable particles and the coupling constants of the fundamental interactions.
We can calculate the constant $k$ using the most trustworthy data for the
neutrino and down-quark masses, namely the $\nu_{\tau}$ and $b$-quark masses:
\begin{center}
\begin{equation}
k=\frac{m_{\nu\tau}}{m_{b}^{3/4}}\sim2.42\times10^{-9} \label{eqn10}
\end{equation}
\bigskip
\end{center}
\begin{figure}
[t]
\begin{center}
\includegraphics[
natheight=2.859900in,
natwidth=5.306500in,
height=2.904in,
width=5.3627in]
{Fig1.png}
\caption[Quark Lepton Mass]{Mass relations for the charged lepton and up
quark masses (thick solid line) and for the neutral lepton and down quark
masses (thin solid line). The dashed line shows the upper limit of the
neutrino masses obtained from astrophysical constraints.}
\label{Figure1}
\end{center}
\end{figure}
\bigskip
The obtained mass relations (\ref{eqn2}) and (\ref{eqn3}) are shown in Fig.
\ref{Figure1}. It shows that the neutrino masses estimated from the
\textit{SMA MSW} solution are close to the neutrino masses estimated from
\textit{ANE}, i.e. the two sets of estimates are compatible.
The attempt to relate the masses of the charged leptons to the masses of the
down quarks and the masses of the neutral leptons to the masses of the up
quarks did not yield satisfactory results, since the data from \textit{SNE}
and \textit{ANE} did not fit within the framework of the suggested approach.
Besides, the respective mass relations predict a muon mass which is nearly
three times smaller than the experimental value (see Table \ref{Table 2}) and
an electron neutrino mass less than $10^{-7}\operatorname{eV}$.
\bigskip
\begin{table}[htbp] \centering
\caption{Masses of charged leptons calculated by various approaches and
experimental values (MeV).}
\begin{tabular}
[c]{lllllll}\hline\hline
Model & \textit{SU(5)} & \textit{SO(10)} & Power law & Linear & Power law &
Exp.\\
& & & (Down & (Up & (Up & data\\
& & & quarks) & quarks) & quarks) & \\\hline
Electron & 6 & 2 & 0.905 & 0.031 & \textbf{0.663} & \textbf{0.511}\\
Muon & 122 & 366 & 36.9 & 12.8 & \textbf{60.7} & \textbf{105.7}\\
Tau & 4200 & 4200 & 2877 & 1775 & \textbf{2449} & \textbf{1777}\\
$\Delta_{\max}$ & 11.74 & 3.91 & 2.86 & 16.48 & \textbf{1.74} & \textbf{1}
\\\hline\hline
\end{tabular}
\label{Table 2}
\end{table}
\bigskip
Table \ref{Table 2} shows the masses of charged leptons calculated by
different approaches and experimental values. The last row of the table shows
the maximal ratio (deviation) $\Delta_{\max}$ of the masses predicted by the
respective approach in relation to the experimental values $\Delta
=m_{pr}/m_{ex}$. Clearly, the power law relating the masses of the charged
leptons to the up quark masses fits the experimental data best.
\bigskip
\section{Conclusions}
Based on the experimental data and estimations of the charged lepton and quark
masses, a power law with exponent $3/4$ has been found connecting the charged
lepton masses and the up quark masses. It has been shown that this
approximation is considerably better than any known approach. A similar mass
relation has been suggested for the neutral leptons and down quarks. The
latter mass relation and the results of \textit{ANE} and \textit{SNE} have
been used to estimate the neutrino masses. The values of the neutrino masses
obtained from \textit{ANE} are close to the ones obtained from the
\textit{SMA MSW} solution. The masses of $\nu_{e}$, $\nu_{\mu}$ and
$\nu_{\tau}$ are estimated to be
$(2.6\div3.4)\times10^{-4}\operatorname{eV}$,
$(2.5\div3.3)\times10^{-3}\operatorname{eV}$ and
$(3.4\div4.7)\times10^{-2}\operatorname{eV}$, respectively, and they support
the normal hierarchy of neutrino masses.
\bigskip
|
2,869,038,156,736 | arxiv | \section{methods}
A schematic
of the spin valve configuration is depicted in Fig.~\ref{schematic}.
We model the
nanostructure
as a
${SF_1NF_2}$ layered system,
where $S$ represents the superconducting layer,
$N$ denotes the normal metallic intermediate layer,
and $F_1$, $F_2$
are the inner (free) and outer (pinned)
magnets, respectively.
The
layers are assumed to be infinite in the $y-z$ plane with a total thickness
$d$ in the $x$ direction, which is perpendicular to the interfaces between layers.
The ferromagnet
$F_2$ has width $d_{F_2}$, and
fixed direction of magnetization along $z$,
while
the free magnetic layer $F_1$ of width $d_{F_1}$
has a variable magnetization
direction.
The superconducting layer of thickness $d_S$ is in contact with the free layer.
The magnetizations in the $F$ layers are modeled by effective Stoner-type exchange fields ${\bm h}(x)$
which vanish in the non-ferromagnetic layers.
To accurately describe the physical properties of our systems with sizes in the
nanometer scale and over a broad range of exchange fields,
where quasiclassical approximations
are limited,
we numerically
solve the microscopic BdG equations within a fully self-consistent framework.
The general
spin-dependent BdG equations for the quasiparticle energies, $\varepsilon_n$,
and quasiparticle wavefunctions, $u_{n\sigma}, v_{n\sigma}$, are written:
\begin{align}
&\begin{pmatrix}
{\cal H}_0 -h_z&-h_x&0&\Delta(x) \\
-h_x&{\cal H}_0 +h_z&-\Delta(x)&0 \\
0&-\Delta(x)&-({\cal H}_0 -h_z)&-h_x\\
\Delta(x)&0&-h_x&-({\cal H}_0+h_z) \\
\end{pmatrix} \nonumber \\
&\times
\begin{pmatrix}
u_{n\uparrow}\\u_{n\downarrow}\\v_{n\uparrow}\\v_{n\downarrow}
\end{pmatrix}
=\varepsilon_n
\begin{pmatrix}
u_{n\uparrow}\\u_{n\downarrow}\\v_{n\uparrow}\\v_{n\downarrow}
\end{pmatrix}\label{bogo2},
\end{align}
where $h_i$
($i=x,z$) are components of the
exchange field. In
Eqs.~(\ref{bogo2}),
the single-particle Hamiltonian ${\cal H}_0=
-{1}/{(2m)}{d^2}/{dx^2}-E_F+U(x)$
contains the Fermi energy, $E_F$, and
an effective interfacial scattering potential described by
delta functions of strength $H_j$ ($j$ denotes the different interfaces), namely:
$U(x)= H_{1}\delta(x-d_S)$
$+$
$H_{2}\delta(x-d_S-d_{F_1})$
$ +$
$H_{3}\delta(x-d_S-d_{F_1}-d_N)$,
where $H_j={k_F H_{B j}}/{m}$ is written in terms of the dimensionless
scattering strength $H_{B j}$.
We assume $h_{x,i}=h_i\cos \theta_i$ and
$h_{z,i}=h_i\sin \theta_i$
in $F_i$, where $h_i$ is the magnitude of exchange field, and $i$ denotes the region.
To minimize the free energy of the system at temperature $T$,
the singlet pair potential $\Delta(x)$ is calculated self-consistently \cite{gennes}:
\begin{equation}
\label{del}
\Delta(x) = \frac{g(x)}{2}{\sum_n}
\bigl[u_{n\uparrow}(x)v_{n\downarrow}(x) +
u_{n\downarrow}(x)v_{n\uparrow}(x)\bigr]\tanh\left(\frac{\varepsilon_n}{2T}\right),
\end{equation}
where the sum is over all eigenstates
with $\varepsilon_n$ that lie within a characteristic
Debye energy $\omega_D$, and
$g(x)$ is the superconducting coupling strength, taken to be constant in the $\rm S$ region and zero elsewhere.
The pair potential gives direct information regarding
superconducting correlations within the $S$ region only,
since it vanishes in the remaining spin valve regions where $g(x) = 0$.
Greater insight
into the singlet superconducting correlations throughout the structure, and
the extraction of the proximity effects
is most easily obtained by considering the pair
amplitude, $f_3$, defined as $f_3\equiv \Delta(x)/g(x)$.
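
To make the self-consistency procedure concrete, a minimal schematic of the
iteration between Eqs.~(\ref{bogo2}) and (\ref{del}) is sketched below. This
is an illustration rather than the production code used here; Python with
NumPy is assumed, and \texttt{solve\_bdg} is a placeholder for the numerical
diagonalization of Eq.~(\ref{bogo2}) on the chosen basis:
\begin{verbatim}
import numpy as np

def self_consistent_gap(delta0, g, solve_bdg, T, tol=1e-6, max_iter=200):
    # Iterate the BdG diagonalization and the gap equation until Delta(x)
    # stops changing. solve_bdg(delta) is assumed to return the positive
    # eigenvalues below the Debye cutoff and the amplitudes u_up, u_dn,
    # v_up, v_dn sampled on the grid, each of shape (n_states, n_x).
    delta = delta0.copy()
    for _ in range(max_iter):
        e, u_up, u_dn, v_up, v_dn = solve_bdg(delta)
        w = np.tanh(e / (2.0 * T))[:, None]           # thermal factor
        new_delta = 0.5 * g * np.sum((u_up * v_dn + u_dn * v_up) * w, axis=0)
        if np.max(np.abs(new_delta - delta)) < tol:   # converged
            return new_delta
        delta = new_delta
    return delta
\end{verbatim}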
To analyze the correlation between the behavior of the
superconducting transition temperatures and the existence of
odd triplet superconducting correlations in our system,
we compute the induced triplet pairing amplitudes which we denote as $f_0$
(with $m=0$ spin projection) and $f_1$ (with $m=\pm 1$ spin projection)
according to
the following equations \cite{Halterman2007}:
\begin{subequations}
\begin{eqnarray}
f_0 (x,t) & = \frac{1}{2} \sum_n \left[ u_{n\uparrow} (x) v_{n\downarrow}(x)-
u_{n\downarrow}(x) v_{n\uparrow} (x) \right] \zeta_n(t),\;\;\;\; \;\;\;
\label{f0defa} \\
f_1 (x,t) & = -\frac{1}{2} \sum_n \left[ u_{n\uparrow} (x) v_{n\uparrow}(x)+
u_{n\downarrow}(x) v_{n\downarrow} (x) \right] \zeta_n(t),\;\;\;\; \;\;\;
\label{f1defa}
\end{eqnarray}
\end{subequations}
where $\zeta_n(t) \equiv \cos(\varepsilon_n t)-i \sin(\varepsilon_n t) \tanh(\varepsilon_n /(2T))$,
and $t$ is the time difference in the Heisenberg picture.
These triplet pair amplitudes are odd in $t$ and vanish at $t=0$,
in accordance with the Pauli exclusion principle.
The quantization axis
in Eqs.~(\ref{f0defa}) and (\ref{f1defa})
is along the $z$ direction.
When studying the triplet correlations in $F_1$,
we
align the quantization axis
with the
local
exchange field direction, so that after
rotating, the triplet amplitudes $f_0$ and $f_1$
become linear combinations of the $f_0$ and $f_1$
in the original unprimed system \cite{klaus_zep}:
$f_0^\prime(x,t)$$=$ $f_0(x,t) \cos\theta - f_1(x,t) \sin\theta$,
and $f_1^\prime(x,t)$$=$$f_0(x,t) \sin\theta + f_1(x,t) \cos\theta$.
Thus, when the exchange fields in $F_1$ and $F_2$ are
orthogonal ($\theta=\pi/2$),
the roles of the equal-spin and opposite-spin triplet correlations are reversed.
The singlet pair amplitude however is naturally invariant under these rotations.
The study of single-particle
excitations in these systems can
reveal important
signatures
in the proximity induced singlet and triplet pair correlations.
A useful experimental tool that probes these single-particle states is tunneling
spectroscopy, where information
measured by a scanning tunneling microscope (STM)
can reveal the local DOS, $N(x,\varepsilon)$, as a function of position $x$
and energy $\varepsilon$.
We write $N(x,\varepsilon)$ as a
sum of each spin component ($\sigma=\uparrow,\downarrow$)
to the DOS:
$N(x, \varepsilon) = N_\uparrow(x, \varepsilon) + N_\downarrow(x, \varepsilon)$,
where,
\begin{eqnarray}
N_\sigma(x, \varepsilon) =\sum_n\left[u_{n\sigma}^2(x) \delta(\varepsilon-\varepsilon_n)+
v_{n\sigma}^2(x) \delta(\varepsilon+\varepsilon_n)
\right].\;\;\;\;\;
\end{eqnarray}
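
When comparing with tunneling spectra it is convenient to broaden the delta
functions above to a finite width. A minimal sketch is given below (Python
with NumPy assumed; the quasiparticle arrays are those of the previous sketch,
and the Lorentzian width \texttt{gamma} is an illustrative smearing parameter
rather than a physical input of the model):
\begin{verbatim}
import numpy as np

def local_dos(ix, energy_grid, e, u_up, u_dn, v_up, v_dn, gamma=1e-3):
    # N(x, eps) at grid point ix, with each delta function replaced by a
    # Lorentzian of width gamma (mimicking finite energy resolution).
    lor = lambda w: (gamma / np.pi) / (w**2 + gamma**2)
    N = np.zeros_like(energy_grid)
    for u, v in ((u_up, v_up), (u_dn, v_dn)):
        for n in range(len(e)):
            N += u[n, ix]**2 * lor(energy_grid - e[n]) \
               + v[n, ix]**2 * lor(energy_grid + e[n])
    return N
\end{verbatim}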
\section{results}
We now proceed to present the self-consistent
numerical results
for the transition temperature, triplet amplitudes,
and local DOS for the spin-valve structure depicted in Fig.~\ref{schematic}.
We normalize
the temperature in the calculations by
$T_{0}$, the transition
temperature of a pure bulk S sample.
In the low-$T$ limit,
we take $T = 0.05 T_{0}$.
All length scales
are normalized by the
Fermi wavevector $k_F$,
so that the coordinate
$x$ is written $X = k_F x$,
and the $F_1$
and $F_2$
widths are
written
$D_{F_i} = k_F d_{F_i}$, for $i = 1, 2$.
The thick
half-metallic ferromagnet $F_2$
has width
$D_{F_2} = 400$, and $F_1$ is a standard ferromagnet
with $h_1=0.1E_F$.
We set
$d_{F_1}=\xi_F$, where $\xi_F=v_F/(2 h_1)$
is the length scale describing the
propagation of spin-0 pairs.
In dimensionless units we thus have,
$D_{F_1}=(h_1/E_F)^{-1}=10$,
which optimizes
spin mixing of superconducting correlations in the system.
The $S$ width is normalized similarly
by
$D_{S} = k_F d_{S}$,
and its scaled coherence length
is taken to be $k_F \xi_0 = 100$.
Natural units, e.g., $\hbar = k_B = 1$, are used throughout.
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{tc_hstudy.pdf}
\caption{
(Color online). Critical temperature $T_c$ as a function of the
relative exchange field orientation angle $\theta$ at differing values
of the
ratio of the exchange field in the $F_2$ region, $h_2$
to the Fermi energy $E_F$.
The legend depicts the range of $h_2/E_F$ considered, ranging from
a relatively weak ferromagnet with $h_2/E_F=0.1$, to
a fully spin polarized half-metallic phase, corresponding to $h_2/E_F=1$.
}
\label{tc_h}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{triplet_collection_varioush2_inS.pdf}
\includegraphics[width=0.99\columnwidth]{triplet_collection_variousS_inS.pdf}
\caption{
(Color online). The magnitudes of the normalized
triplet ($f_0, f_1$) and singlet ($f_3$)components
are shown averaged over the $S$ region and plotted as a function of
the relative magnetization angle $\theta$.
The temperature is set at $T=0.05 T_0$.
The top panels (a)-(c) depict
differing values of the
exchange field in the $F_2$ region as shown.
All other system parameters are the same as those used in Fig.~\ref{tc_h}.
Panels (d)-(f) correspond to $F_2$ with an
optimal exchange field of $h_2/E_F=1$,
and various $S$ widths, as labeled.
}
\label{trip_h}
\end{figure}
\subsection{Critical Temperature and Triplet Correlations}
We first study the critical temperature of the spin valve system.
The linearized
self-consistency expression near $T_c$ takes the form,
$\Delta_i=\sum_q {\cal G}_{iq}\Delta_q$,
where $\Delta_i$ are the expansion coefficients
for $\Delta(x)$ in the chosen basis.
The ${\cal G}_{iq}$ are the corresponding matrix elements,
which involve sums
of the normal state energies and wavefunctions.
To determine $T_c$, we compute the eigenvalues
$\lambda$, of the corresponding eigensystem
${\bm \Delta}=\lambda {\cal G} {\bm \Delta}$.
When $\lambda>1$ at a given temperature,
the system is in the superconducting state.
Many of the computational details can be found in Ref.~\onlinecite{ilya},
and are omitted here.
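
Schematically, once the matrix ${\cal G}(T)$ has been assembled from the
normal-state spectrum, $T_c$ can be located by bisection on its largest
eigenvalue. The sketch below (plain Python) is illustrative only;
\texttt{largest\_eigenvalue} is a placeholder for that assembly and
diagonalization, and a monotonic decrease of the eigenvalue with $T$ is
assumed:
\begin{verbatim}
def critical_temperature(largest_eigenvalue, T_low, T_high, tol=1e-4):
    # lambda(T) > 1: superconducting; lambda(T_c) = 1 defines T_c.
    while T_high - T_low > tol:
        T_mid = 0.5 * (T_low + T_high)
        if largest_eigenvalue(T_mid) > 1.0:
            T_low = T_mid      # still superconducting, T_c lies above
        else:
            T_high = T_mid     # normal state, T_c lies below
    return 0.5 * (T_low + T_high)
\end{verbatim}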
It was experimentally observed \cite{singh}
that a $S F_1 F_2$ spin valve
is most effective at converting singlet Cooper
pairs to
spin polarized triplet pairs when $F_2$ is in a
half-metallic phase. To examine this theoretically,
we investigate the critical temperature and
corresponding triplet pair generation as a function of $h_2/E_F$ and $\theta$
($h_1/E_F=0.1$ remains fixed).
The width of the
superconducting layer is maintained at $D_S=130$, and the nonmagnetic insert
has a set width corresponding to $D_N=5$.
The exchange field
$h_2$ varies from $0.1E_F$ to $E_F$ where $h_2=E_F$ corresponds to
the situation where only one
spin species exists in this region (i.e. the half-metallic phase).
As seen in
Fig.~\ref{tc_h}, $T_c$ is nearly constant over the full range of $\theta$ when
both ferromagnets are of the same type, i.e.,
when $h_2/E_F=0.1$.
Upon increasing $h_2$ towards the half-metallic limit,
it is apparent that
the spin valve effect becomes
dramatically enhanced, whereby
rapid changes in $T_c$ occur when varying $\theta$.
This result therefore clearly supports
the assertion that the
use of a half-metal generates
the optimal spin-valve effectiveness \cite{singh}.
Large variations in $T_c$ have also been
found using a diffusive
quasiclassical approach involving
$S F_1 F_2$ heterostructures lacking the
normal layer insert \cite{Mironov,fomin}.
When
comparing $T_c$ in the two collinear
magnetic orientations,
the
self-consistently calculated critical temperatures
in Fig.~\ref{tc_h} reveal that the parallel state ($\theta=0^\circ$) has
a smaller $T_c$ compared to
the antiparallel state ($\theta=180^\circ$)
for moderate exchange field strengths.
For these cases, the two magnets
can counter one another, leading to a reduction
of their effective pair-breaking effects. This creates
a
more favorable
situation for the superconducting state, causing $T_c$ to be larger.
The situation reverses
for stronger magnets with $h_2/E_F \gtrsim 0.8$,
and the maximum $T_c$ now arises for parallel relative orientations
of the magnetizations.
In between the parallel and antiparallel states,
$T_c$ undergoes a minimum
that
occurs not at the orthogonal orientation ($\theta=90^\circ$),
but slightly away from it.
This behavior
has been observed in ballistic \cite{wu} and
diffusive \cite{fomin} systems where
the minimum in $T_c$ arises from the
leakage of Cooper pairs
that are coupled to the outer $F$ layer
via the generation of the triplet component $f_1$
that is largest near $\theta=90^\circ$.
To demonstrate the correlation
between the strong
$T_c$ variations
and the generation of
triplet and singlet
pairs, Fig.~\ref{trip_h}
shows
the magnitudes
of the
equal-spin triplet amplitudes ($f_1$), opposite-spin triplet amplitudes ($f_0$),
and
the singlet pair amplitudes ($f_3$), each
averaged over the $S$ region.
For the triplet correlations,
a representative value for the normalized relative time $\tau$ is set at
$\tau\equiv \omega_D t = 4$.
When the ferromagnet ($F_2$) possesses a large exchange field,
and the relative magnetization angle between $F_1$ and $F_2$ approaches
an orthogonal state, superconductivity becomes
severely weakened. Indeed, as Fig.~\ref{tc_h} demonstrated,
the singlet pair correlations can become completely destroyed at
low temperatures ($T\simeq 0.05\,T_0$), and
orientations in the vicinity of
$\theta\simeq90^\circ$, whereby the system
has transitioned to a normal resistive state.
This is consistent with Fig.~\ref{trip_h}(c), where
the $f_3$ amplitudes
vanish in the neighborhood of $\theta\approx90^\circ$ and $h_2/E_F = 1$.
As Fig.~\ref{trip_h}(a) and (b)
illustrate, the triplet amplitudes also
vanish due to the
absence of singlet correlations at those orientations.
For weaker magnets however,
the superconducting state
never transitions to a normal resistive state over the entire range of $\theta$,
and
the well known situation arises whereby
the equal-spin triplet
pairs are largest for
orthogonal magnetization
configurations,
i.e., when the misalignment angle is greatest ($\theta\simeq90^\circ$).
In all cases however,
the $f_1$ components must always
vanish at $\theta=0$ and $\theta=180^\circ$,
where the relative collinear magnetization alignments are either in the parallel or antiparallel state
respectively.
It is clear from Figs.~\ref{trip_h}(a) and \ref{trip_h}(b)
that the average behavior of $|f_0|$ and $|f_1|$
exhibits their most extreme
values when $T_c$ undergoes its steepest variations around
$\theta \approx 20^\circ$ [see Fig.~\ref{tc_h}].
In particular,
at the half-metallic phase, $f_1$ is greatly enhanced
while $f_0$ is dramatically
suppressed.
Therefore,
the considerable variations in $T_c$
are correlated with the
fact that $100\%$ spin-polarized compounds such as ${\rm CrO_2}$
result in the optimal generation of spin
triplet correlations \cite{singh}.
The suppression of $f_0$ at $\theta \approx 20^\circ$
is fairly robust to changes in
the size of the $S$ region.
As the bottom panels in Figs.~\ref{trip_h} illustrate,
increasing $D_S$ by several coherence lengths
causes very little change in the location of the first minimum in
$f_0$ at $\theta\approx 20^\circ$.
The angle $\theta$ that corresponds to a peak in $f_1$ however,
noticeably shifts to larger $\theta$, so that at $\theta \approx 20^\circ$,
$f_1$ is no longer at its peak value.
Therefore, the thinnest $S$ layer width considered
here, $D_S=130$, leads to the most favorable conditions for the
generation of $f_1$ triplet pairs in the superconductor and
limited coexistence with the $f_0$ triplet correlations.
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{tc_merged_vert.pdf}
\caption{
(Color online). Critical temperature $T_c$ as a function of the relative exchange field orientation angle $\theta$.
In (a) the normal metal insert has a width of $D_N=5$, and the $S$ width varies as shown in the legend,
from $D_S$=100 to $D_S=200$.
In (b) the $S$ width is fixed at $D_S=130$, while the $N$ spacer is varied.
In (c) the effects of interfacial scattering are examined, with $D_S=130$, $D_N=5$.
The legend depicts the various scattering strengths $H_B$ considered.
}
\label{tc}
\end{figure}
Next, Fig.~\ref{tc} shows $T_c$ as a function of the out-of-plane
misalignment angle $\theta$ for differing (a) superconductor widths $D_S$,
(b) normal layer
widths $D_N$, and (c) spin-independent
interface
scattering strengths $H_B$.
If the
relative magnetizations were to rotate in-plane,
the $T_c$ behavior discussed here
would be identical,
thus providing additional experimental options for
observing the predicted effects.
In (a), the sensitivity of $T_c$ to the $S$ layer width is shown.
The importance of having thin $S$ layers
with $d_S \sim \xi_0$ (100 in our units) is clearly seen.
In essence,
extremely narrow
$S$ boundaries restrict
Cooper pair formation,
causing the ordered superconducting state to
effectively become
more ``fragile",
consistent with other
$F/S$ systems containing thin $S$ layers \cite{wu}.
Indeed, for the thinnest case,
$D_S=100$,
superconductivity
completely vanishes
for most
magnetization configurations,
except when $\theta$ is near
the parallel or antiparallel orientations.
At the thickest $D_S$ shown ($D_S=200$), the sensitivity to $\theta$ has dramatically
diminished, as pair-breaking effects from the adjacent ferromagnet
now have a limited overall effect
in the larger superconductor.
For all $S$ widths considered,
the minimum in $T_c$
occurs when $\theta$
lies slightly off the orthogonal
configuration ($\theta=90^\circ$),
consistent with
some quasiclassical
systems \cite{fomin}.
Next,
in Fig.~\ref{tc}(b)
the $S$ layer thickness is set
to $D_S=130$,
while several nonmagnetic
$N$ metal spacer widths
are considered.
The presence of the $N$ layer clearly
plays a crucial
role in the thermodynamics of
the spin valve. Indeed, an optimum $D_N\approx 5$ exists which yields the greatest
$\Delta T_c (\theta)$:
Increasing or decreasing $D_N$ around this value
can significantly
reduce the size
of the spin valve effect.
Physically,
this behavior
is related to the
spin-triplet conversion
that takes place
in the ferromagnets
and corresponding enhancement of the equal-spin triplet correlations
in the $N$ layer. This will be discussed in greater detail below.
For
$D_N$
much
larger than the optimal
width,
a severe reduction in
magnetic interlayer coupling occurs and
$T_c$ exhibits little variation with $\theta$.
Finally,
in Fig.~\ref{tc}(c),
we
incorporate spin-independent scattering at each of the spin valve
interfaces.
A wide range of
scattering strengths are considered.
We assume $H_j\equiv H \,(j=1,2,3)$, so that
interface
scattering can be written solely in terms of the
dimensionless parameter
$H_B= H/v_F$.
Overall,
the general features and trends for $T_c$
seen previously
are
retained.
With moderate amounts of interface scattering,
$H_B=0.1$, we find $\Delta T_c\equiv T_c(\theta=0^\circ)-
T_c(\theta=90^\circ) \approx 0.3 T_0$.
It is immediately evident that
samples must have interfaces as transparent as possible \cite{singh,klaus_zep}:
the variations in $T_c$
with $\theta$
become severely reduced
with increasing $H_B$,
as the phase coherence of the superconducting correlations becomes destroyed.
In all cases, we observe some degree of
asymmetry in $T_c$ as a function of $\theta$,
similar to what has been reported in both diffusive \cite{fomin} and clean \cite{wu} spin valves lacking half-metallic elements.
If it is assumed that the
band splitting in $F_2$ is
sufficiently large so that only one spin species can exist,
a quasiclassical approach has shown
that $T_c$ becomes symmetric with respect to $\theta$
in the diffusive regime \cite{Mironov}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{triplets.pdf}
\caption{
(Color online).
Normalized triplet
($f_0$, $f_1$) and singlet ($f_3$)
amplitudes
versus the relative
magnetization
angle $\theta$.
The magnitude
of each pair correlation is
averaged over a given region in the $SF_1NF_2$
spin valve, as identified in the legend.
The top, middle, and bottom
rows correspond to $D_S=130$,
$D_S=150$, and
$D_S=300$ respectively.
}
\label{triplets}
\end{figure}
To correlate the large spin-valve
effect observed in Fig.~\ref{tc}
with the odd-time triplet correlations, we employ
the expressions in Eqs.~({\ref{f0defa}) and (\ref{f1defa}), which describe the spatial
and temporal behavior of the triplet amplitudes.
We normalize
the triplet correlations,
computed in the low $T$ limit, to
the value of the singlet pair amplitude
in the bulk $S$. The
normalized averages of $|f_0|$ and $|f_1|$
are plotted as functions of $\theta$
in Fig.~\ref{triplets}, at a dimensionless characteristic
time of $\tau = 4$.
For comparison purposes,
the singlet pair correlations,
$f_3$, are also shown
(third column).
In each panel,
spatial averages over different segments of the spin valve are
displayed as separate curves (see caption).
Each row of figures
corresponds to different
$D_S$: $D_S=130,150,300$ (from top to bottom).
One of the most striking observations is the effect of the normal metal
spacer, which contains a substantial portion of the equal-spin triplet pairs.
We will see below that the $f_1$ triplet correlations
within the normal metal tend to propagate into
the adjacent regions of the spin valve as time evolves.
Examining the top two panels of Fig.~\ref{triplets}, the equal-spin $f_1$ triplet component in $S$
clearly dominates
its opposite spin counterpart when $\theta\approx 20 ^\circ$.
Thus, only slight deviations from the parallel state ($\theta=0^\circ$)
generates
triplet correlations within $S$
that have spin projection $m=\pm1$.
For each $D_S$ case studied, the singlet $f_3$ amplitudes are
clearly largest in the $S$ region
where they originate,
and then decline further in each subsequent segment.
It is evident also that the $f_1$ triplet pair amplitudes
are anticorrelated to $T_c$ (governed by the behavior of the singlet amplitudes),
which indicates a singlet-triplet conversion process.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{pair_spatial.pdf}
\caption{
(Color online).
Normalized triplet
($f_0$, $f_1$) and singlet ($f_3$)
amplitudes
versus the dimensionless coordinate $X$.
The relative magnetization orientation is
set to $\theta=20^\circ$. The dashed vertical lines identify the locations of
the interfaces for the $S F_1 N F_2$ structure.
Each segment corresponds to the following ranges:
$X<130$ ($S$ region),
$130\leq X \leq 140$ ($F_1$ region),
$140 < X \leq 145$ ($N$ region),
and $X > 145$ ($F_2$ region).
The singlet component has been reduced by a factor of 10
for comparison purposes.
}
\label{pair_spatial}
\end{figure}
Therefore as more singlet
superconductivity leaks into the ferromagnet side, $T_c$ is
suppressed, and triplet superconductivity is enhanced.
It is evident that both triplet components vanish around $\theta=90^\circ$,
as was also observed in Fig.~\ref{trip_h}.
This is due to the highly sensitive nature of the gapless superconducting state
that arises in thin $S$ systems,
whereby the
singlet
pair correlations become rapidly destroyed as the magnetization vector in $F_1$
approaches the orthogonal configuration.
Increasing the size of the superconductor
causes the superconducting state to become more robust to
changes in $\theta$, and consequently
the system no longer transitions to a resistive state
at $\theta \approx 90^\circ$.
The triplet correlations reflect this aspect as seen in the middle and bottom panels of
Fig.~\ref{triplets}, whereby both triplet components have finite values
for the orthogonal orientation.
Overall, there is a dramatic
change in both triplet components when the $S$ part of the spin valve
is increased in size.
For example, the $f_1$ triplet correlations in $N$
and in $F_2$ evolve from having two peaks
to a single maximum at $\theta = 90^\circ$.
The $D_S$ trends also reflect the importance of self-consistency of the pair potential $\Delta(x)$
for thinner superconductors, where a self-consistent singlet component $f_3(x)$
can substantially decline, or vanish altogether, in contrast to a simple step-function profile.
Indeed, the observed
disappearance of the singlet and triplet
pair correlations
for thin superconductors at $\theta\simeq 90^\circ$ (see top panels),
can only occur if
the pair potential is calculated self-consistently [Eq.~(\ref{del})],
thus ensuring that the free energy of the system is lowest \cite{gennes}.
As will be seen below,
this important step permits the proper description of the proximity effects leading to
nontrivial spatial behavior of $\Delta(x)$ in and around
the interfaces for both the superconductor and ferromagnets \cite{klaus_first}.
In common non self-consistent approaches, where $\Delta(x)$ is treated
phenomenologically as a prescribed constant
in the $S$ region, this vital behavior is lost.
Next, in Fig.~\ref{pair_spatial} we present
the spatial behavior of the
real parts of the
triplet and singlet
pair correlations
throughout each segment of the spin valve.
We choose $\theta=20^\circ$ in order to optimize the
$f_1$ triplet component in $S$.
The other parameters used correspond to $D_S=130$,
$D_N=5$, and $T=0.05\,T_0$.
Proximity effects are seen to result in a reduction
of the singlet $f_3$ correlations in the $S$ region
near the interface at $X=130$. As usual, this decay occurs over the
coherence length $\xi_0$. The singlet amplitude then declines within
the $F_1$ region before undergoing oscillations and
quickly dampening out in the half-metal. Thus, as expected,
the singlet Cooper pairs cannot be sustained in the half-metallic segment
where only one spin species exists.
Within the half-metal, the triplet component,
$f_0$ (also comprised of opposite-spin pairs),
undergoes damped oscillations similar to
the $f_3$ correlations. It is notable that the
triplet $f_0$ component
is severely limited in the $S$ region, in stark contrast to the singlet correlations. Therefore,
the $f_0$ correlations in this situation
are confined mainly to the $F_1$ and $N$ regions.
The equal-spin $f_1$ triplet component on the other hand, is seen to
pervade every segment of the spin valve: The $f_1$ correlations are
enhanced in the $N$ region, similar
in magnitude to $f_0$, but then exhibit a slow decay in both the $S$ and half-metallic regions.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{triplets_time.pdf}
\caption{
(Color online).
Time evolution of the localized spatial dependence of the $f_0$ and $f_1$
triplet correlations. The insets depict magnifications of the $N$ regions
($140\leq X \leq145$).
The dimensionless time parameter $\tau\equiv \omega_D t$ varies from
$0.8$ to $5.6$ in increments of $0.8$. Initially, the $f_1$ component
predominately populates the $N$ region,
and then progressively moves outward into each segment
of the spin valve with increasing time.
The $f_0$ component initially occupies the $F_1$ and $N$ layers,
and then remains confined to those regions at higher $\tau$.
Each dashed vertical line identifies the $S$ interface.
}
\label{triplets_time}
\end{figure}
To further clarify the role of the triplet correlations in the spin valve,
we now discuss the explicit relative time evolution
of the triplet states
in Fig.~\ref{triplets_time}. Snapshots of the real parts of the
triplet amplitudes are shown in equal increments of the relative time parameter $\tau$.
The angle $\theta$ is fixed at $\theta=20^\circ$, again corresponding to when
the triplet correlations with $m=\pm1$
projection of the $z$-component of the total spin in the superconductor is largest (see Fig.~\ref{triplets}).
The spatial range shown permits
visualization of both triplet components throughout much of the system.
Starting at the earliest time $\tau=0.8$, we find that $f_1$
mainly populates the nonmagnetic $N$ region,
and then as $\tau$ increases, propagates into the $F_1$ and $F_2$ regions before extending into
the superconductor (left of the dashed vertical line).
Meanwhile, $f_0$ is essentially confined to the $F_1$ and $N$ regions,
with limited presence in
the $S$ and $F_2$ layers.
Since the characteristic length $\xi_F$ over which the $f_0$ correlations
modulate in $F_2$ is inversely proportional to $h_2$,
$f_0$ declines
sharply in the half-metallic region.
Also, in agreement with Fig.~\ref{triplets}, for $\theta=20^\circ$ and $D_S=130$, there is also
a limited presence of $f_0$ in the superconductor.
The superconductor therefore has $|f_1| \gg |f_0|$,
which by using the appropriate experimental probe, can reveal
signatures detailing the presence of equal-spin pairs $f_1$ \cite{bernard}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{dos2.pdf}
\caption{
(Color online). Signatures of equal-spin triplet correlations:
The normalized local DOS in the superconductor for various relative
magnetization orientations,
$\theta$. In the range $0^\circ\leq \theta \leq20^\circ$, the DOS
possesses peaks at zero energy
which grow until they become inverted at $\theta=30^\circ$.
The well defined,
prominent ZEP at $\theta=20^\circ$
corresponds to the maximal generation of equal-spin triplet amplitudes in the $S$ region,
as shown in Fig.~\ref{triplets}.
}
\label{dos}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig8.pdf}
\caption{
(Color online). Top panels:
The normalized
spatially and energy resolved DOS at three
different orientations of the relative magnetization angle:
(a) $\theta=10^\circ$, (b) $\theta=20^\circ$, and (c) $\theta=30^\circ$.
Panels (a)-(c) pertain to a single system with a narrow $S$ layer
of width $D_S=130$.
The spatial region extending from $X=0$ to $130$ therefore
corresponds to the
superconducting region, and $X>130$ pertains to the
remaining
layers of the spin valve.
Bottom panels:
the DOS is shown for three
different $S$ layer thicknesses: (d) $D_S=150$, (e) $D_S=200$,
and (f) $D_S=300$, where $\theta$ is now fixed at $20^\circ$.
The dashed vertical lines identify the interface between $S$ and $F_1$.
}
\label{2ddos}
\end{figure*}
\subsection{Density of States}
To explore these proximity induced
signatures further,
we investigate
the experimentally relevant local DOS.
An important spectroscopic tool for
exploring
proximity
effects on an atomic scale with sub-meV energy resolution
is the scanning tunneling microscope (STM).
We are interested in determining the local DOS in the outer
$S$ segment of the $SF_1 N F_2$ spin valve.
By positioning a nonmagnetic STM tip at the edge of
the $S$ region, the tunneling current ($I$) and voltage ($V$)
characteristics can be measured \cite{bernard}.
This technique yields a direct probe
of the available electronic states with energy $eV$ near the tip.
The corresponding differential conductance $dI(V )/dV$ over the
energy range of interest is then proportional to the local DOS.
The vast majority of past works
only considered the DOS in the ferromagnet side where
the $f_1$ correlations were expected
to dominate \cite{bernard,klaus_zep,golu}.
However unavoidable experimental
issues related to noise and thermal broadening can
yield inconclusive data.
As we have shown above, with the proper alignment of relative magnetizations,
one can generate a finite $f_1$ in $S$ accompanied by relatively limited $f_0$,
thus presenting an opportunity to detect the important
triplet pairs with spin $s=\pm 1$.
By avoiding comparable admixtures of the two triplet components,
experimental signatures of the equal-spin
triplet correlations should be discernible.
To investigate this further,
the six
panels in Fig.~\ref{dos}
show the normalized DOS evaluated near
the edge of the superconductor for a wide variety of
orientation angles $\theta$.
All plots are normalized to the corresponding
value in a bulk sample of $S$ material in its normal state.
As shown, each panel ranges from
a mutually parallel ($\theta=0^\circ$) to
a nearly orthogonal magnetization state ($\theta=80^\circ$).
In each
case considered, we again have $D_N=5$ and $D_S=130$.
Examining the top row of panels,
traces are seen of the well-known BCS peaks that
have now been shifted to subgap energies
due to proximity and size effects.
There also exist bound states at low energies that
arise from quasiparticle interference effects.
By sweeping the angle $\theta$ from the relative parallel
case ($\theta=0^\circ$) to slightly out of plane ($\theta=20^\circ$),
the zero energy quasiparticle states
become significantly more pronounced.
This follows from the fact that
strong magnets
tend to shift the relative magnetizations leading to maximal $f_1$ generation
away
from the expected
orthogonal alignment at $\theta=90^\circ$ \cite{klaus_zep}.
The top panels reflect the
gapless superconducting state often found in $F/S$ heterostructures \cite{gap},
superimposed with
the triplet induced zero-energy peaks.
The modifications to the superconducting state
in the form of
a subgap DOS in the superconductor
is another signature that is
indicative of the presence of
spin-triplet pair correlations \cite{bernard}.
Finally, as $\theta$ rotates further
out of plane ($\theta>20^\circ$),
the former ZEP's become inverted and vanish
when
$\theta=80^\circ$, exhibiting a relatively flat DOS
where the system has essentially transitioned to the normal state (see Fig.~\ref{tc}).
A complementary global view of the above phenomena is presented in
Fig.~\ref{2ddos}, where both the spatially and energy resolved DOS
is shown at various $\theta$ (top panels) and $D_S$ (bottom panels). The top panels (a)-(c)
depict the DOS
for the same parameters and normalizations used in Fig.~\ref{dos},
and at three orientations: $\theta=0^\circ, 10^\circ, 20^\circ$.
It is evident that increasing the misalignment
angle $\theta$, causes the ZEP in the $S$ region
to become enhanced, reaching its maximum at
$\theta\approx 20^\circ$.
At this angle the ZEP extends through
much of the system, including to a small extent, the $F_2$ side. However, within
$S$, the ZEP is clearly more dominant \cite{bernard}.
For the bottom panels, (d)-(f),
the relative magnetization orientation is fixed at $\theta=20^\circ$,
and three larger $S$ layers are shown:
$D_S=150$, $D_S=200$, and $D_S=300$.
Increasing the $S$ layer widths illustrates
the ZEP evolution towards a familiar gapped DOS of a BCS form.
As seen, the ZEP is maximal in the
superconducting region near the $S/F_1$ interface. By
increasing $D_S$, the ZEP in the $S$ side becomes diminished until for
sufficiently
large $D_S$, that is, $D_S\approx 200$, the
well-known singlet superconducting
gap begins to emerge throughout much of the superconductor.
At an even larger $D_S$ ($D_S=300$),
the ZEP has clearly weakened even further.
Finally, for the experiment reported in Ref. \onlinecite{singh}, a
peak in the resistive transitions at external fields of $B>0.25\, {\rm T}$ was
observed immediately before the critical temperature whereby the system
has transitioned to the superconducting phase. This peak in the transition curves was
believed to be caused by the influence of the external field, effectively creating
a $SF_1F'F_2$ type of configuration.
We investigated
such a configuration for various strengths and orientations of the
$F'$ ferromagnet, and no evidence was found that was
suggestive of anomalous behavior
near $T_c$ for
$F'$ with weak exchange fields.
Note that the system under consideration is
translationally invariant in the $yz$ plane (see Fig. \ref{schematic}). Therefore,
the spin valve structure may experience a Fulde Ferrell-Larkin-Ovchinnikov phase during
its phase transition from the superconducting to normal phase, although in a narrow region of parameter space \cite{loff1,loff2}.
\subsection{Spin Currents}
To reveal further details of the exchange interaction which
controls the
behavior and type of triplet correlations present in the system,
we next examine the characteristics of the spin currents that exist within the spin valve.
When the magnetizations in $F_1$ and $F_2$ are noncollinear,
the exchange interaction in
the ferromagnets creates a
spin current ${\bm S}$ that flows in parts of the system,
even in the absence of a charge current.
If the spin current varies spatially,
the corresponding nonconserved spin currents
in $F_1$ and $F_2$
generate
a mutual torque that tends to rotate the magnetizations of
the two ferromagnets.
This process is embodied in the spin-torque continuity equation \cite{joe1,joe2}
which describes the time evolution of the spin density $\bm \eta$:
\begin{align}
\frac{\partial}{\partial t} \langle \eta_i (x)\rangle + \frac{\partial}{\partial x} S_i(x) = \tau_i(x), \quad i=x,y,z,
\label{cont}
\end{align}
where $\bm \tau(x)$ is the
spin transfer
torque (STT): ${\bm \tau}(x) = -(2/\mu_B) {\bm m} (x) \times {\bm h}(x)$, ${\bm m}(x)$ is the magnetization,
and $\mu_B$ is the Bohr magneton (see Appendix~\ref{appA}).
The spin current tensor here has been reduced to vector form
due to the quasi-one dimensional nature of the geometry.
We calculate
${\bm S}(x)$
by performing the appropriate sums
of quasiparticle amplitudes and energies [see Eq.~(\ref{spinall})].
In
the steady state, the continuity equation, Eq.~(\ref{cont}), determines
the torque by simply evaluating
the derivative of the spin current as a function of position:
${ \tau}_i(x) = \partial {S}_i(x)/\partial x$.
The net torque acting within the boundaries of e.g., the $F_1$ layer,
is therefore
the change in spin current across the two interfaces bounding that region:
\begin{align} \label{tnet}
S_y(d_S+d_{F_1})-S_y(d_S)=\int_{F_1} dx \tau_y.
\end{align}
In equilibrium,
the net $\tau_y$ in $F_2$ is opposite to its counterpart in $F_1$.
Since no spin current flows in the superconductor, we have
$S_y(d_S)=0$, and the net torque in $F_1$
is equivalent to the spin current flowing through $N$.
In our setup, the exchange field in $F_1$ is
directed
in the $x-z$ plane,
and therefore the spin current
and torque are directed orthogonal to this plane (along the interfaces in the $y$ direction).
Likewise, if the magnetizations were varied in the $y-z$ plane, the spin currents would be directed along $x$.
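
Numerically, once $S_y(x)$ has been assembled from Eq.~(\ref{spinall}), the
local and net torques follow directly from the steady-state form of the
continuity equation. A minimal sketch is given below (Python with NumPy
assumed; finite differences on the spatial grid, purely illustrative):
\begin{verbatim}
import numpy as np

def torque_density(x, S_y):
    # Steady state of the continuity equation: tau_y(x) = dS_y/dx
    return np.gradient(S_y, x)

def net_torque(x, S_y, x_left, x_right):
    # Net torque in a layer equals the change of S_y across that layer
    mask = (x >= x_left) & (x <= x_right)
    return np.trapz(torque_density(x, S_y)[mask], x[mask])
\end{verbatim}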
Figure~\ref{spin} thus illustrates the normalized spin current $S_y$ as a function of
the dimensionless position $X$. The normalization factor $S_0$
is written in terms of $n_e v_F$, where $n_e=k_F^3/(3\pi^2)$,
and $v_F=k_F/m$.
Several equally spaced magnetization orientations $\theta$ are considered, ranging from
parallel ($\theta=0^\circ$), to orthogonal ($\theta=90^\circ$).
Within the two $F$ regions, $S_y$ tends to undergo damped oscillations,
while in $N$ there is no exchange interaction (${\bm h}=0$), and consequently the spin current is constant for a given $\theta$.
The main plot shows that when $\theta=0^\circ$,
$S_y$ vanishes
throughout the entire system, as expected
for parallel magnetizations.
By varying $\theta$, spin currents are induced due to the misaligned magnetic moments in the $F$ layers.
If the exchange field is rotated
slightly out of plane, such that $\theta \lesssim 30^\circ$,
it
generates, on average, negative
spin currents in the $N$ and $F_1$ regions.
As shown, these spin currents
reverse their polarization direction for larger $\theta$.
This
behavior is consistent with the inset,
which shows how tuning $\theta$ affects $S_y$ (or equivalently, the net torque) in $N$.
Thus, by manipulating $\theta$, the strength and direction of the spin current in the normal metal
can be controlled, or even eliminated completely
at $\theta\approx 34^\circ$.
By varying $\theta$ about this angle, the
overall torque, which tends to align the magnets in a particular direction,
can then reverse in a given magnet.
For $\theta \approx 15^\circ$ and $\theta \approx 160^\circ$, the
inset also clearly shows an enhancement of the magnitude of the spin currents,
which coincides approximately
with the orientations leading to an increase
in the spin-polarized triplet pairs observed in Fig.~\ref{triplets}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{spin_final.pdf}
\caption{
(Color online). Spin current $S_y$ as a function of position $X$ in the spin valve. Several
magnetization orientations $\theta$ are considered as shown in the legend. The dashed vertical lines identify the interfaces
of each layer as labeled. The inset corresponds to the spin current within the $N$ region.
}
\label{spin}
\end{figure}
In conclusion,
motivated by recent experiments \cite{singh,bernard}, a hybrid $S F_1 N F_2 $ spin valve containing a
half-metallic ferromagnet has been theoretically investigated, revealing a
sizable spin-valve effect
for thin superconductors with widths close to $\xi_0$.
Through self-consistent numerical calculations,
the contributions from both the
equal-spin ($f_1$) and opposite-spin ($f_0$) triplet
correlations have been identified as the relative magnetization angle $\theta$ varies.
We found that when the magnetization in $F_1$ is directed slightly out-of-plane,
the magnitude of $f_1$ in $S$ is maximized, while for $f_0$ it is very small.
By investigating the DOS in the superconductor
over a broad range of $\theta$,
we were able to identify the emergence of
zero energy peaks (ZEPs) in the DOS
that coincide with peaks in the averaged $|f_1|$.
Our results show,
to a large extent, good agreement with experimental
observations as well as the physical origins of these effects.
We have thus established a clear, experimentally identifiable role that the triplet correlations
play in this new class of half-metallic spin valve structures.
For future work, it would be interesting to
study the transport properties of these types of spin valves
by investigating the self-consistent
charge and spin currents as they pertain to dissipationless spintronics applications.
\section{Acknowledgements}
This work was supported
in part by ONR and a grant of HPC resources
from the DOD HPCMP. We thank N. Birge for
a careful reading of the manuscript and helpful comments.
|
2,869,038,156,737 | arxiv | \section{ Gradient estimates}
Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow, and $u$ be a positive solution to the conjugate heat equation
\begin{equation*}
(\frac{\partial}{\partial t}+\Delta_{g(t)}-R_{g(t)})u=0
\end{equation*}
coupled with the Ricci flow on $M\times [0,T]$. Sometimes we'll write $\partial_t$ for $\frac{\partial}{\partial t}$, and $u_t$ for $\frac{\partial u}{\partial t}$. Let $f=\log u$. Then
\begin{equation*}
f_t=-\Delta f-|\nabla f|^2+R.
\end{equation*}
Fix $\alpha >1$.
Let $\tau=T-t$, and
\begin{equation*}
F=\tau(\frac{|\nabla u|^2}{u^2}+\alpha \frac{u_t}{u}-\alpha R)=\tau(|\nabla f|^2+\alpha f_t-\alpha R).
\end{equation*}
Let $x_0\in M$. Assume that the parabolic cube $Q_{2r,T}(x_0,T)$ is compact, and $-K_0\leq Ric \leq K_0$, $|\nabla R|\leq K_1$, and $\Delta R \leq K_2$ on $Q_{2r,T}(x_0,T)$.
\begin{lem} \label{lem 2.1} With the above assumption, for any $\varepsilon >0$ we have
\begin{equation*}
\begin{split}
(\Delta+\partial_t) F & \geq -2\langle\nabla f, \nabla F\rangle +\frac{2\tau}{n+\varepsilon}(f_t+|\nabla f|^2-R)^2-(|\nabla f|^2+\alpha f_t-\alpha R)\\
& -2\tau |2-\alpha|K_0|\nabla f|^2-2\tau (\alpha-1)K_1|\nabla f|-\frac{n(n+\varepsilon)}{2\varepsilon}\alpha^2\tau K_0^2-\alpha \tau K_2 \\
\end{split}
\end{equation*}
on $Q_{2r,T}(x_0,T)$.
\end{lem}
\noindent {\bf Proof}. We have
\begin{equation*}
\begin{split}
\Delta |\nabla f|^2 &=2|\nabla^2f|^2+ 2\langle \nabla f, \nabla \Delta f\rangle+ 2Ric(\nabla f, \nabla f),\\
\Delta f_t &= (\Delta f)_t-2\langle Ric, \nabla^2f\rangle, \\
\tau \Delta f&=\tau(\alpha-1)(f_t-R)-F, \\
(|\nabla f|^2)_t &=2\langle \nabla f, \nabla f_t \rangle+2Ric(\nabla f, \nabla f), \\
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
& \Delta F \\
=& \tau[2|\nabla^2f|^2+2\langle \nabla f, \nabla \Delta f\rangle+2 Ric(\nabla f, \nabla f)+\alpha ((\Delta f)_t-2\langle Ric, \nabla^2f\rangle)-\alpha \Delta R]\\
=& 2\tau |\nabla^2f|^2-2\langle\nabla f, \nabla F\rangle+2\tau(\alpha-1)\langle \nabla f, \nabla f_t \rangle-2\tau(\alpha-1)\langle \nabla R, \nabla f\rangle\\
& +2\tau Ric(\nabla f, \nabla f)+\alpha\tau(-f_{tt}-(|\nabla f|^2)_t+R_t-2\langle Ric, \nabla^2f\rangle)-\alpha\tau \Delta R \\
=& 2\tau |\nabla^2f|^2-2\langle\nabla f, \nabla F\rangle-F_t-\frac{F}{\tau}-2\alpha\tau \langle Ric, \nabla^2 f\rangle +2\tau(2-\alpha)Ric(\nabla f, \nabla f)\\
& -2\tau(\alpha-1)\langle \nabla R, \nabla f\rangle -\alpha \tau\Delta R. \\
\end{split}
\end{equation*}
See p. 61 of \cite{CTY}.
Note that
\begin{equation*}
|\langle Ric, \nabla^2f\rangle| \leq |Ric||\nabla^2f| \leq \frac{\alpha (n+\varepsilon)}{4\varepsilon}|Ric|^2 + \frac{\varepsilon}{\alpha (n+\varepsilon)}|\nabla^2f|^2
\end{equation*}
for any $\varepsilon>0 $, and
\begin{equation*}
|\nabla^2f|^2 \geq \frac{1}{n}(\Delta f)^2=\frac{1}{n}(f_t+|\nabla f|^2-R)^2,
\end{equation*}
then the lemma follows. \hfill{$\Box$}
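\noindent {\bf Remark}. For the reader's convenience we indicate how the two estimates displayed at the end of the proof of Lemma \ref{lem 2.1} are combined with the formula for $\Delta F$ (recall $(\Delta+\partial_t)F=\Delta F+F_t$, so the $-F_t$ term cancels and $-\frac{F}{\tau}=-(|\nabla f|^2+\alpha f_t-\alpha R)$). Multiplying the first estimate by $2\alpha\tau$ and using $|Ric|^2\leq nK_0^2$ gives
\begin{equation*}
-2\alpha\tau \langle Ric, \nabla^2f\rangle \geq -\frac{2\varepsilon}{n+\varepsilon}\tau|\nabla^2f|^2-\frac{n(n+\varepsilon)}{2\varepsilon}\alpha^2\tau K_0^2,
\end{equation*}
and the Hessian term that survives this absorption is controlled by the second estimate:
\begin{equation*}
2\tau|\nabla^2f|^2-\frac{2\varepsilon}{n+\varepsilon}\tau|\nabla^2f|^2=\frac{2n}{n+\varepsilon}\tau|\nabla^2f|^2\geq \frac{2\tau}{n+\varepsilon}(f_t+|\nabla f|^2-R)^2.
\end{equation*}
The remaining terms are estimated directly: $2\tau(2-\alpha)Ric(\nabla f,\nabla f)\geq -2\tau|2-\alpha|K_0|\nabla f|^2$, $-2\tau(\alpha-1)\langle\nabla R,\nabla f\rangle\geq -2\tau(\alpha-1)K_1|\nabla f|$, and $-\alpha\tau\Delta R\geq -\alpha\tau K_2$.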
\begin{prop} \label{prop 2.2} Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow and $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$. Let $x_0\in M$. Suppose that $Q_{2r,T}(x_0,T)$ is compact and $-K_0\leq Ric \leq K_0$, $|\nabla R|\leq K_1$, $\Delta R \leq K_2$ on $Q_{2r,T}(x_0,T)$.
For any $\alpha>1$ and $\varepsilon>0$, we have
\begin{equation*}
\frac{|\nabla u|^2}{u^2}+\alpha \frac{u_t}{u}-\alpha R \leq \frac{(n+\varepsilon)\alpha^2}{2(T-t)}+C(r^{-2}+r^{-1}+1) \hspace{6mm} on \hspace{2mm} Q_{r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \},
\end{equation*}
where the constant $C$ depends only on $n,\alpha, \varepsilon, K_0,K_1$ and $K_2$.
\end{prop}
\noindent {\bf Proof}.
Choose a smooth cutoff function $\psi: [0,\infty)\rightarrow [0,1]$ with $\psi=1$ on the interval $[0,1]$, $\psi=0$ on $[2,\infty)$, and
\begin{equation*}
\psi'\leq 0, \hspace{2mm} \frac{|\psi'|^2}{\psi}\leq C, \hspace{2mm} \psi''\geq-C.
\end{equation*}
Let $\phi(x,t)=\psi(\frac{d(x,x_0,t)}{r})$. Suppose that the maximum of the function $\phi F$ is positive, otherwise the result follows trivially. Assume that $\phi F$ achieves its positive maximum at the point $(x_1, t_1)$. Then $(x_1,t_1) \in Q_{2r,T}(x_0,T)$ with $t_1 \neq T$. By Calabi's trick \cite{C} we may assume that $\phi F$ is smooth at $(x_1, t_1)$. Let $\tau_1=T-t_1$. We compute at the point $(x_1, t_1)$ using Lemma 2.1,
\begin{equation*}
\begin{split}
0 \geq & (\Delta+\partial_t) (\phi F) \\
\geq & \tau_1\phi \frac{2}{n+\varepsilon}(f_t+|\nabla f|^2-R)^2-CF\sqrt{\phi}r^{-1}|\nabla f|-\phi \frac{F}{\tau_1}\\
& -C\tau_1\phi |\nabla f|^2-C\tau_1 \phi-CF(r^{-2}+r^{-1}+1), \\
\end{split}
\end{equation*}
where the constant $C$ depends only on $n,\alpha, \varepsilon, K_0,K_1$ and $K_2$; compare \cite{CTY}, \cite{Li}, and \cite{S}. Then we proceed as in \cite{LY} and \cite{CTY}. \hfill{$\Box$}
\vspace*{0.4cm}
The following corollary is a slight improvement of Lemma 4.1 in \cite{CTY} in the Ricci flow case.
\begin{cor} \label{cor 2.3} Suppose that $(M^n,g(t))$, $t\in [0,T]$, is a complete Ricci flow with $-K_0\leq Ric \leq K_0$, $|\nabla R|\leq K_1$, $\Delta R \leq K_2$ on $M\times [0,T]$, and that $u$ is a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$.
For any $\alpha>1$ and $\varepsilon>0$, at $(x,t)\in M\times [0,T)$ we have
\begin{equation*}
\frac{|\nabla u|^2}{u^2}+\alpha \frac{u_t}{u}-\alpha R \leq \frac{(n+\varepsilon)\alpha^2}{2(T-t)}+C,
\end{equation*}
where the constant $C$ depends only on $n,\alpha, \varepsilon, K_0,K_1$ and $K_2$.
\end{cor}
\begin{prop} \label{prop 2.4} Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow and $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$. Let $x_0\in M$. Suppose that $Q_{2r,T}(x_0,T)$ is compact and $-K_0\leq Ric \leq K_0$, $|\nabla R|\leq K_1$, $\Delta R \leq K_2$ on $Q_{2r,T}(x_0,T)$.
For any $\alpha>1$ and $\varepsilon>0$, at $(x,t) \in Q_{r,T}(x_0,T)$ with $t\neq T$ we have
\begin{equation*}
\begin{split}
\frac{|\nabla u|^2}{u^2}&+\alpha \frac{u_t}{u}-\alpha R \leq \frac{n\alpha^2}{T-t}+\frac{C\alpha^2}{r^2}(r\sqrt{K_0}+\frac{\alpha^2}{\alpha-1})+C\alpha^2K_0\\
& +\frac{n\alpha^2}{\alpha-1}(|2-\alpha| K_0+\frac{\alpha-1}{2}K_1)+n\alpha^2K_0+\alpha \sqrt{n(\alpha-1)K_1}+\alpha\sqrt{n\alpha K_2},
\end{split}
\end{equation*}
where the constant $C$ depends only on $n$.
\end{prop}
\noindent {\bf Proof}. In Lemma 2.1 we let $\varepsilon=n$ and get
\begin{equation*}
\begin{split}
(\Delta+\partial_t) F & \geq -2\langle\nabla f, \nabla F\rangle +\frac{\tau}{n}(f_t+|\nabla f|^2-R)^2-(|\nabla f|^2+\alpha f_t-\alpha R)\\
& -\tau (2|2-\alpha|K_0+(\alpha-1)K_1)|\nabla f|^2-n\alpha^2\tau K_0^2-(\alpha-1)\tau K_1-\alpha \tau K_2. \\
\end{split}
\end{equation*}
Then we proceed as in \cite{Li} and \cite{S}. \hfill{$\Box$}
\section{Hessian estimates}
Let $(M^n, g(t))$, $t\in [0,T]$, be a Ricci flow and $u$ be a smooth positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$.
Fix $\alpha >1$. Let
\begin{equation*}
F_1=\frac{|\nabla^2u|}{u}+\alpha \frac{|\nabla u|^2}{u^2}+5\alpha \frac{u_t}{u},
\end{equation*}
and $F_2=\tau F_1$, where $\tau=T-t$. Assume that $Q_{4r,T}(x_0,T)$ is compact and $|Rm|\leq k_0$, $|\nabla Rm|\leq k_1$, and $|\nabla^2R|\leq k_2$ on $Q_{4r,T}(x_0,T)$.
\begin{lem} \label{lem 3.1} With the above assumption, for $\gamma-1>0$ sufficiently small (depending only on the dimension $n$) we have
\begin{equation*}
\begin{split}
(\frac{\partial}{\partial t}+\Delta)F_2 & \geq -2\langle \nabla F_2, \nabla \log u\rangle -Ck_0F_2 -\alpha^2 F_2\frac{u_t}{u}-\alpha^2 F_2\frac{|\nabla u|^2}{5u^2}-\frac{F_2}{\tau} \\
& +\frac{\alpha F_2^2}{10\tau} -2C\tau(\frac{7}{40}\alpha^3+\frac{2\alpha}{9})(\frac{1}{\tau}+\frac{1}{r^2}+k_0+k_1+\sqrt{k_1}+\sqrt{k_2})^2 \\
& -C\tau(\frac{k_0^2}{(\gamma-1)^2} +\frac{k_0^2}{\alpha}+k_1^{4/3}\alpha^{1/3} +k_0^2\alpha+k_2 \alpha) \\
\end{split}
\end{equation*}
on $Q_{2r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \}$ when $|\nabla^2u|\neq 0$, where $C$ depends only on $n$ and $\alpha$.
\end{lem}
\noindent {\bf Proof}. Compare \cite{Li} and \cite{S}. We have
\begin{equation*}
\frac{\partial}{\partial t}|\nabla^2u| \geq \frac{\langle \nabla^2u, \nabla^2 \partial_t u\rangle}{|\nabla^2u|}-Ck_0|\nabla^2u|-Ck_1|\nabla u|,
\end{equation*}
and
\begin{equation*}
\Delta |\nabla^2u| \geq \frac{\langle \nabla^2u, \nabla^2 \Delta u \rangle}{|\nabla^2u|}-Ck_0|\nabla^2u|-Ck_1|\nabla u|
\end{equation*}
when $|\nabla^2u|\neq 0$, and
\begin{equation*}
\begin{split}
& (\frac{\partial}{\partial t}+\Delta)\frac{|\nabla^2u|}{u} \\
= & \frac{1}{u}(\frac{\partial}{\partial t}+\Delta) |\nabla^2u| -\frac{2}{u}\langle \nabla \frac{|\nabla^2u|}{u}, \nabla u\rangle - \frac{R|\nabla^2u|}{u} \\
\geq & \frac{\langle \nabla^2u, \nabla^2(Ru)\rangle }{u|\nabla^2u| } -Ck_0\frac{|\nabla^2u|}{u}-Ck_1\frac{|\nabla u|}{u} -2 \langle \nabla \frac{|\nabla^2u|}{u}, \nabla \log u\rangle - \frac{R|\nabla^2u|}{u} \\
\geq & -2 \langle \nabla \frac{|\nabla^2u|}{u}, \nabla \log u\rangle -Ck_0\frac{|\nabla^2u|}{u}-Ck_1\frac{|\nabla u|}{u} - Ck_2
\end{split}
\end{equation*}
when $|\nabla^2u|\neq 0$, where $C$ depends only on $n$.
We also have
\begin{equation*}
\frac{\partial}{\partial t}|\nabla u|^2 = 2\langle \nabla u, \nabla \partial_t u\rangle+ 2Ric(\nabla u, \nabla u),
\end{equation*}
and by the Bochner formula
\begin{equation*}
\Delta |\nabla u|^2 =2|\nabla^2u|^2+ 2\langle \nabla u, \nabla \Delta u\rangle+ 2Ric(\nabla u, \nabla u),
\end{equation*}
so
\begin{equation*}
\begin{split}
& (\frac{\partial}{\partial t}+\Delta)|\nabla u|^2 \\
= & 2|\nabla^2u|^2+ 2\langle \nabla u, \nabla (Ru) \rangle+ 4Ric(\nabla u, \nabla u) \\
\geq & 2|\nabla^2u|^2-Ck_0|\nabla u|^2 -Ck_1u|\nabla u|, \\
\end{split}
\end{equation*}
where $C$ depends only on $n$.
For $0<\delta <1$ we have
\begin{equation*}
\frac{1}{u^3}|\langle \nabla |\nabla u|^2, \nabla u\rangle| \leq \frac{2}{u^3}|\nabla^2u| |\nabla u|^2 \leq (1-\delta)\frac{|\nabla^2u|^2}{u^2}+\frac{1}{1-\delta}\frac{|\nabla u|^4 }{u^4},
\end{equation*}
so
\begin{equation*}
\begin{split}
& (\frac{\partial}{\partial t}+\Delta)\frac{|\nabla u|^2}{u^2} \\
= & \frac{1}{u^2}(\frac{\partial}{\partial t}+\Delta)|\nabla u|^2-\frac{2|\nabla u|^2}{u^4}(Ru^2+|\nabla u|^2)-4\langle \nabla \frac{|\nabla u|^2}{u^2}, \nabla \log u\rangle \\
= & \frac{1}{u^2}(\frac{\partial}{\partial t}+\Delta)|\nabla u|^2 -2 \langle \nabla \frac{|\nabla u|^2}{u^2}, \nabla \log u\rangle - \frac{2}{u^3}\langle \nabla |\nabla u|^2, \nabla u\rangle + \frac{2 |\nabla u|^4 }{u^4}-\frac{2R|\nabla u|^2}{u^2} \\
\geq & 2\delta \frac{|\nabla^2u|^2}{u^2}-2 \langle \nabla \frac{|\nabla u|^2}{u^2}, \nabla \log u\rangle -\frac{2\delta}{1-\delta}\frac{|\nabla u|^4 }{u^4}-Ck_0\frac{|\nabla u|^2}{u^2} -Ck_1\frac{|\nabla u|}{u},\\
\end{split}
\end{equation*}
where $C$ depends only on $n$.
We have
\begin{equation*}
\frac{\partial}{\partial t}\Delta u=\Delta \frac{\partial}{\partial t}u+2\langle Ric, \nabla^2 u\rangle,
\end{equation*}
\begin{equation*}
(\frac{\partial}{\partial t}+\Delta) u_t=\frac{\partial}{\partial t}(u_t+\Delta u)-2\langle Ric, \nabla^2 u\rangle=\frac{\partial}{\partial t}(Ru)-2\langle Ric, \nabla^2 u\rangle,
\end{equation*}
and
\begin{equation*}
Ck_0\frac{|\nabla^2u|}{u} \leq \varepsilon \frac{|\nabla^2u|^2}{u^2}+\frac{1}{4\varepsilon}C^2k_0^2
\end{equation*}
for $\varepsilon>0$, so
\begin{equation*}
\begin{split}
& (\frac{\partial}{\partial t}+\Delta)\frac{u_t}{u} \\
= & \frac{1}{u}(\frac{\partial}{\partial t}+\Delta)u_t -\frac{u_t}{u^2}(\frac{\partial}{\partial t}+\Delta)u-\frac{2}{u}\langle \nabla \frac{u_t}{u}, \nabla u\rangle \\
= & R_t -\frac{2}{u}\langle Ric, \nabla^2 u\rangle -2\langle \nabla \frac{u_t}{u}, \nabla \log u\rangle \\
\geq & \Delta R+|Ric|^2-Ck_0\frac{|\nabla^2u|}{u} -2\langle \nabla \frac{u_t}{u}, \nabla \log u\rangle \\
\geq & -2\langle \nabla \frac{u_t}{u}, \nabla \log u\rangle - \varepsilon \frac{|\nabla^2u|^2}{u^2}-\frac{C}{\varepsilon}k_0^2 -Ck_2 \\
\end{split}
\end{equation*}
for $0<\varepsilon <1$.
Now let $\beta=5\alpha$. We have
\begin{equation*}
\begin{split}
(\frac{\partial}{\partial t}+\Delta)F_1 & \geq (2\alpha \delta-\beta \varepsilon) \frac{|\nabla^2 u|^2}{u^2}-2\langle \nabla F_1, \nabla \log u\rangle-Ck_0\frac{|\nabla^2 u|}{u} -\frac{2\alpha \delta}{1-\delta}\frac{|\nabla u|^4}{u^4}\\
& -Ck_0\alpha \frac{|\nabla u|^2}{u^2}-Ck_1\alpha \frac{|\nabla u|}{u}-\frac{Ck_0^2\beta}{\varepsilon}-Ck_2 \beta. \\
\end{split}
\end{equation*}
Note that
\begin{equation*}
\frac{|\nabla^2 u|}{u}\leq F_1-\beta \frac{u_t}{u},
\end{equation*}
\begin{equation*}
\begin{split}
\frac{|\nabla^2 u|^2}{u^2} &=(F_1-\alpha \frac{|\nabla u|^2}{u^2} -\beta \frac{u_t}{u})^2 \\
&= F_1^2+\alpha^2 \frac{|\nabla u|^4}{u^4}+\beta^2 \frac{u_t^2}{u^2}-2\alpha F_1\frac{|\nabla u|^2}{u^2}-2\beta F_1\frac{u_t}{u}+2\alpha \beta \frac{|\nabla u|^2}{u^2}\frac{u_t}{u}, \\
\end{split}
\end{equation*}
\begin{equation*}
|k_0\beta \frac{u_t}{u}| \leq \beta^2 (\gamma-1)^2\frac{u_t^2}{u^2} + \frac{k_0^2}{4(\gamma-1)^2}
\end{equation*}
for $\gamma >1$,
\begin{equation*}
Ck_0\alpha \frac{|\nabla u|^2}{u^2} \leq \frac{C^2}{2\alpha \delta}k_0^2+ \frac{\alpha^3\delta}{2}\frac{|\nabla u|^4}{u^4},
\end{equation*}
\begin{equation*}
Ck_1\alpha \frac{|\nabla u|}{u}\leq \frac{3}{4}(\frac{Ck_1\alpha}{\alpha^{3/4} \delta^{1/4}})^{4/3}+\frac{1}{4}(\alpha^{3/4} \delta^{1/4} \frac{|\nabla u|}{u})^4=\frac{3}{4}C^{4/3}k_1^{4/3}(\frac{\alpha}{\delta})^{1/3}+\frac{1}{4} \alpha^3 \delta \frac{|\nabla u|^4}{u^4},
\end{equation*}
and
\begin{equation*}
|2\alpha \beta \frac{|\nabla u|^2}{u^2}\frac{u_t}{u}| \leq \frac{1}{2}\beta^2 \frac{u_t^2}{u^2} + 2 \alpha^2 \frac{|\nabla u|^4}{u^4}.
\end{equation*}
So
\begin{equation*}
\begin{split}
(\frac{\partial}{\partial t}+\Delta)F_1 & \geq (\frac{1}{2}\alpha \beta^2 \delta-C\beta^2(\gamma-1)^2)\frac{u_t^2}{u^2}+(\alpha \delta-\beta \varepsilon) \frac{|\nabla^2 u|^2}{u^2}-2\langle \nabla F_1, \nabla \log u\rangle \\
& -Ck_0F_1-(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta})\frac{|\nabla u|^4}{u^4} +\alpha\delta F_1^2 -2\alpha\beta\delta F_1\frac{u_t}{u}-2\alpha^2\delta F_1\frac{|\nabla u|^2}{u^2} \\
& -C\frac{k_0^2}{(\gamma-1)^2} -\frac{C}{\alpha\delta}k_0^2-Ck_1^{4/3}(\frac{\alpha}{\delta})^{1/3} -\frac{Ck_0^2\beta}{\varepsilon}-Ck_2 \beta. \\
\end{split}
\end{equation*}
We have
\begin{equation*}
(\frac{|\nabla u|^2}{u^2})^2 \leq 2(\frac{|\nabla u|^2}{u^2}+ \gamma \frac{u_t}{u})^2 + 2\gamma^2 \frac{u_t^2}{u^2},
\end{equation*}
and
\begin{equation*}
\begin{split}
& (\frac{1}{2}\alpha \beta^2 \delta-C\beta^2(\gamma-1)^2)\frac{u_t^2}{u^2}-(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta})\frac{|\nabla u|^4}{u^4} \\
\geq & (\frac{1}{2}\alpha \beta^2 \delta-2\gamma^2(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta}) -C\beta^2(\gamma-1)^2)\frac{u_t^2}{u^2} \\
& -2(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta})(\frac{|\nabla u|^2}{u^2}+ \gamma \frac{u_t}{u})^2. \\
\end{split}
\end{equation*}
Choosing $\delta=\frac{1}{10}$, we see that
\begin{equation*}
\frac{1}{2}\alpha \beta^2 \delta-2\gamma^2(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta}) -C\beta^2(\gamma-1)^2 >0
\end{equation*}
when $\gamma-1$ is sufficiently small (depending only on the dimension). We also choose $\varepsilon=\frac{1}{50}$ so $\alpha \delta-\beta \varepsilon=0$.
Then we have
\begin{equation*}
\begin{split}
(\frac{\partial}{\partial t}+\Delta)F_1 & \geq -2\langle \nabla F_1, \nabla \log u\rangle -2(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta})(\frac{|\nabla u|^2}{u^2}+ \gamma \frac{u_t}{u})^2\\
& -Ck_0F_1 +\alpha\delta F_1^2 -2\alpha\beta\delta F_1\frac{u_t}{u}-2\alpha^2\delta F_1\frac{|\nabla u|^2}{u^2} \\
& -C\frac{k_0^2}{(\gamma-1)^2} -\frac{C}{\alpha}k_0^2-Ck_1^{4/3}\alpha^{1/3} -Ck_0^2\beta-Ck_2 \beta. \\
\end{split}
\end{equation*}
Now Lemma 3.1 follows by using Proposition 2.4.
\hfill{$\Box$}
\begin{prop}\label{prop 3.2} Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow, and $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$ with $0 < u \leq A$, where $A$ is a positive constant. Let $x_0\in M$. Assume that the parabolic cube $Q_{4r,T}(x_0,T)$ is compact.
Then we have
\begin{equation*}
\frac{|\nabla^2u|}{u}+\alpha \frac{|\nabla u|^2}{u^2}+5\alpha \frac{u_t}{u}\leq \frac{C_0}{T-t}+\frac{C_0}{r^2}+C_1 \hspace{8mm} on \hspace{2mm} Q_{r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \},
\end{equation*}
where $C_0$ is a constant depending only on $n$ and $\alpha$, and $C_1$ depends only on $n$, $\alpha$, and the upper bounds of $|Rm|$, $|\nabla Rm|$, and $|\nabla^2 R|$ on $Q_{4r,T}(x_0,T)$.
\end{prop}
\noindent {\bf Proof}. Let $\phi$ be as in the proof of Proposition 2.2. We may assume that the function $\phi F_2$ attains its positive maximum at $(x_1,t_1)$.
Then $(x_1,t_1) \in Q_{r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \}$. Clearly we may also assume that $|\nabla^2u|(x_1,t_1)\neq 0$.
Let $\gamma$ be as in Lemma 3.1, $\delta=\frac{1}{10}$, and $s=\alpha^2\delta(\frac{5}{\gamma}-1)$. Then using Proposition 2.4 we have
\begin{equation*}
\begin{split}
& \alpha\beta\delta \frac{u_t}{u}+\alpha^2\delta \frac{|\nabla u|^2}{u^2}+s\frac{|\nabla u|^2}{u^2}\\
\leq &(\alpha^2\delta+s)[C(\frac{1}{\tau_1}+\frac{1}{r^2}+k_0+k_1+\sqrt{k_1}+\sqrt{k_2})+C\gamma k_0], \\
\end{split}
\end{equation*}
on $Q_{2r,T}(x_0,T)$,
so at $(x_1,t_1)$ we have
\begin{equation*}
\begin{split}
0&\geq (\frac{\partial}{\partial t}+\Delta)(\phi F_2)\\
& \geq \frac{\alpha\delta}{\tau_1}\phi F_2^2 -\phi F_2C(2s+2\delta\alpha^2)(\frac{1}{\tau_1}+\frac{1}{r^2}+k_0+k_1+\sqrt{k_1}+\sqrt{k_2}) \\
& -CF_2(\frac{1}{r^2}+\frac{\sqrt{k_0}}{r})-\frac{CF_2}{2sr^2}-Ck_0F_2-Ck_0\phi F_2-\frac{\phi F_2}{\tau_1}\\
& -C\phi\tau_1(\frac{k_0^2}{(\gamma-1)^2}+\frac{k_0^2}{\alpha}+k_1^{4/3}\alpha^{1/3} +k_0^2\beta+k_2 \beta) \\
& -C\phi\tau_1(\frac{7}{4}\alpha^3\delta+\frac{2\alpha\delta}{1-\delta})( \frac{1}{\tau_1}+\frac{1}{r^2}+k_0+k_1+\sqrt{k_1}+\sqrt{k_2})^2\\
\end{split}
\end{equation*}
by using Lemma 3.1, where $\tau_1=T-t_1$. Then Proposition 3.2 follows. Compare \cite{Li} and \cite{S}.
\hfill{$\Box$}
\vspace*{0.4cm}
\begin{lem}\label{lem 3.3} Let $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow with $0 < u \leq A$, and $f = \log \frac{u}{A}$. Let $u_{ij}$ denote the Hessian of $u$ in local coordinates, and $v_{ij}:=\frac{u_{ij}}{u(1-f)}$. In local coordinates we have
\begin{equation*}
\begin{split}
(\partial_t+\Delta &-\frac{2f}{1-f}\nabla f\cdot \nabla)v_{ij} =\frac{|\nabla f|^2+Rf}{1-f}v_{ij}+\frac{1}{u(1-f)}[2R_{kijl}u_{kl}+R_{il}u_{jl}+R_{jl}u_{il} \\
& +2(\nabla_iR_{jl}+\nabla_jR_{il}-\nabla_lR_{ij})\nabla_lu -u\nabla_i\nabla_jR-\nabla_iR\nabla_ju-\nabla_jR\nabla_iu-Ru_{ij}]. \\
\end{split}
\end{equation*}
\end{lem}
\noindent Here we adopt the usual convention on summations. For example, $R_{kijl}u_{kl}$ means $g^{ab}g^{pq}R_{aijp}u_{bq}$. Note also that our convention on the curvature tensor $R_{ijkl}$ is the same as that of R. Hamilton, but is different from that of Han-Zhang \cite{HZ}.
\vspace*{0.4cm}
\noindent {\bf Proof}. The proof is similar to that of Lemma 3.1 in \cite{HZ}. \hfill{$\Box$}
\vspace*{0.4cm}
\begin{lem}\label{lem 3.4} Let $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow with $0 < u \leq A$, and $f = \log \frac{u}{A}$. For a function $h$ we let $h_i$ denote the 1-form $dh$ in local coordinates. We also let $w_{ij}:=\frac{u_iu_j}{u^2(1-f)^2}$ denote the 2-tensor $\frac{du \otimes du}{u^2(1-f)^2}$ in local coordinates. In local coordinates we have
\begin{equation*}
\begin{split}
(\partial_t+\Delta -\frac{2f}{1-f}\nabla f\cdot \nabla)w_{ij}& =\frac{2(|\nabla f|^2+Rf)}{1-f}w_{ij}+\frac{(Ru)_iu_j+u_i(Ru)_j}{u^2(1-f)^2} \\
& + 2(v_{ki}+fw_{ki})(v_{kj}+fw_{kj}) +R_{ik}w_{kj}+R_{jk}w_{ki}, \\
\end{split}
\end{equation*}
where $v_{ij}$ is as in Lemma 3.3.
\end{lem}
\noindent {\bf Proof}. The proof is similar to that of Lemma 3.2 in \cite{HZ}. \hfill{$\Box$}
\begin{prop}\label{prop 3.5} Let $(M^n,g(t))$, $t\in [0,T]$, be a Ricci flow on a compact manifold, $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$ with $0 < u \leq A$. Then we have
\begin{equation*}
\nabla^2 u \leq u(\frac{18}{T-t}+C)(1+\log \frac{A}{u})g(t) \hspace{8mm} on \hspace{2mm} M\times [0,T),
\end{equation*}
and, in particular,
\begin{equation*}
\Delta u \leq u(\frac{18n}{T-t}+C)(1+\log \frac{A}{u}) \hspace{8mm} on \hspace{2mm} M\times [0,T),
\end{equation*}
where $C$ depends only on $n$, the upper bounds of $|Rm|$, $|\nabla Ric|$, and $|\nabla^2 R|$ on $M\times [0,T]$.
\end{prop}
\noindent {\bf Proof}.
Let $V=(v_{ij})$, $W=(w_{ij})$, $w=\text{tr}\hspace{0.5mm} W=g^{ij}w_{ij}=\frac{|\nabla f|^2}{(1-f)^2}$,
and
\begin{equation*}
L=\partial_t+\Delta -\frac{2f}{1-f}\nabla f\cdot \nabla.
\end{equation*}
Then by Lemmas 3.3 and 3.4 we have
\begin{equation*}
\begin{split}
LV&=(1-f)wV+P, \\
LW&=2(1-f)wW+2(V+fW)^2+Q, \\
\end{split}
\end{equation*}
where $P$ is a 2-tensor whose $(i,j)$-th component in local coordinates is given by
\begin{equation*}
\begin{split}
P_{ij} & =\frac{Rf}{1-f}v_{ij}+\frac{1}{u(1-f)}[2R_{kijl}u_{kl}+R_{il}u_{jl}+R_{jl}u_{il} \\
& +2(\nabla_iR_{jl}+\nabla_jR_{il}-\nabla_lR_{ij})\nabla_lu -u\nabla_i\nabla_jR-\nabla_iR\nabla_ju-\nabla_jR\nabla_iu-Ru_{ij}], \\
\end{split}
\end{equation*}
and $Q$ is a 2-tensor whose $(i,j)$-th component in local coordinates is given by
\begin{equation*}
Q_{ij} =\frac{2Rf}{1-f}w_{ij}+\frac{(Ru)_iu_j+u_i(Ru)_j}{u^2(1-f)^2} +R_{ik}w_{kj}+R_{jk}w_{ki}.
\end{equation*}
Now with the help of Theorem 10 in \cite{EKNT} and Corollary 2.3 here, we can proceed as in Han-Zhang \cite{HZ}. \hfill{$\Box$}
\vspace*{0.4cm}
\begin{prop}\label{prop 3.6} Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow, $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$ with $0 < u \leq A$. Let $x_0\in M$. Assume that the parabolic cube $Q_{4r,T}(x_0,T)$ is compact. Then we have
\begin{equation*}
\nabla^2 u \leq u(\frac{C_0}{T}+\frac{C_0}{r^2}+C_1)(1+\log \frac{A}{u})^2g(t) \hspace{8mm} on \hspace{2mm} Q_{r,\frac{T}{2}}(x_0,\frac{T}{2}),
\end{equation*}
where $C_0$ is a universal constant, and $C_1$ depends only on $n$, the upper bounds of $|Rm|$, $|\nabla Ric|$, and $|\nabla^2 R|$ on $Q_{4r,T}(x_0,T)$.
\end{prop}
\noindent {\bf Proof}. With the help of Theorem 10 in \cite{EKNT}, Proposition 2.2 here, and a space-time cutoff function, we can proceed as in Han-Zhang \cite{HZ} with minor modifications. \hfill{$\Box$}
\begin{cor}\label{cor 3.7} Let $(M^n,g(t))$, $t\in [0,T]$, be a (not necessarily complete) Ricci flow, $u$ be a positive solution to the conjugate heat equation coupled with the Ricci flow on $M\times [0,T]$ with $0 < u \leq A$. Let $x_0\in M$. Assume that the parabolic cube $Q_{4r,T}(x_0,T)$ is compact. Then we have
\begin{equation*}
\nabla^2 u \leq u(\frac{C_0}{T-t}+\frac{C_0}{r^2}+C_1)(1+\log \frac{A}{u})^2g(t) \hspace{8mm} on \hspace{2mm} Q_{r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \},
\end{equation*}
and, in particular,
\begin{equation*}
\Delta u \leq u(\frac{C_0n}{T-t}+\frac{C_0n}{r^2}+C_1)(1+\log \frac{A}{u})^2 \hspace{8mm} on \hspace{2mm} Q_{r,T}(x_0,T)\setminus \{(x,T) \hspace{1mm} | \hspace{1mm} x\in M \},
\end{equation*}
where $C_0$ is a universal constant, and $C_1$ depends only on $n$, the upper bounds of $|Rm|$, $|\nabla Ric|$, and $|\nabla^2 R|$ on $Q_{4r,T}(x_0,T)$.
\end{cor}
\vspace*{0.4cm}
Finally, Theorem 1.1 follows from Propositions 3.2 and 3.5, and Theorem 1.2 follows from Proposition 3.2 and Corollary 3.7.
\section{Perelman's W-entropy on noncompact manifolds}
Now as in for example \cite{Ku1}, \cite{Z12b}, and \cite{L}, we consider Perelman's W-entropy (see \cite{P})
\begin{equation*}
W(g,v, \tau)=\int_M [\tau(4|\nabla v|^2+Rv^2)-v^2 \ln v^2-\frac{n}{2}(\ln 4\pi \tau)v^2-nv^2]dg
\end{equation*}
on a complete noncompact Riemannian manifold $(M, g)$ of dimension $n$, where $v\in W^{1,2}(M,g)$, $\tau >0$ is a parameter, and $dg$ denotes the volume element of the metric $g$ as in \cite{Z12b}.
The following result extends the entropy formula in Perelman \cite{P} to the noncompact case. It is a slight improvement of Proposition 4.1 in \cite{Hu} and some results in \cite{Ku1}, \cite{Z12b}; compare also \cite{L}.
\begin{prop} \label{prop 4.1} (cf. \cite{Hu}, \cite{L}, \cite{Ku1} and \cite{Z12b})
Let $(M, g_0)$ be a complete noncompact Riemannian manifold with bounded curvature such that $\sup_M|\nabla Rm_{g_0}|< \infty$ and $\sup_M|\nabla^2 Rm_{g_0}|< \infty$. Let $(M, (g(t))_{t \in [0,T]})$ be the complete solution to the Ricci flow with $\sup_{M\times [0,T]}|Rm|< \infty$ and with $g(0)=g_0$.
Let $v_T \in C^\infty(M)$ with $ |v_T(x)| \leq Ae^{-ad_{T}(x,x_0)^2}$ for any $x\in M$ and $\int_M v_T^2dg(T)=1$, where $A$ and $a$ are positive constants, and $x_0$ is a fixed point in $M$. Assume that $u$ is the solution to the conjugate heat equation coupled to the Ricci flow, $ \frac{\partial u}{\partial t}+\Delta_{g(t)} u-Ru=0$, with $u(x,T)=v_T(x)^2$. Let $v(x,t)=\sqrt{u(x,t)}$ and $\tau (t)= T-t$ for $t\in [0,T)$.
Then $W(g(t),v(\cdot,t), \tau(t))$ is finite and
\begin{equation*}
\frac{d}{dt}W(g(t),v(\cdot,t), \tau(t))=2\tau (t) \int_M |Ric-\text{Hess} \ln u-\frac{1}{2\tau (t)}g(t)|^2u\,dg(t)
\end{equation*}
for $t\in [0,T)$.
\end{prop}
\noindent{\bf Proof}. With the help of Theorem 10 in \cite{EKNT} and our Theorem 1.2 we can get a control of $|\text{Hess} \ln u|$, then we can proceed as in the proof of Proposition 4.1 in \cite{Hu}.
\hfill{$\Box$}
\vspace*{0.4cm}
\noindent {\bf Remark}. With the help of Theorem 10 in \cite{EKNT} and our Theorem 1.2 we can also justify the second equality in (3.25) on p. 23 of \cite{Ku1}.
\vspace*{0.4cm}
\noindent {\bf Acknowledgements}. I would like to thank Professor Qi S. Zhang for helpful communications. I'm partially supported by Beijing Natural Science Foundation (Z190003) and by Laboratory of Mathematics and Complex Systems, Ministry of Education.
\hspace *{0.4cm}
\section*{Extended conjecture and counterexamples}
We begin with the following conjecture. We then provide
an infinite number of counterexamples.
\begin{conjecture}
\label{conjecture_higher}
Let $P$ be a simple polytope of dimension greater than or equal to $3$.
Then there exists a subset $S$ of the vertices of $P$ such that
the convex hull of $S$ has the same combinatorial type
as the dual polytope $P^*$.
\end{conjecture}
Ziegler conjectured this result in the
case of $3$-dimensional
simple polytopes; see~\cite[Exercise~4.19]{Ziegler}.
Note that the conjecture is true in dimensions at most $2$
and is immediately true for any $d$-dimensional simplex.
In the case of dimension $3$,
Andreas Paffenholz, in unpublished work, verified the conjecture
for the truncated tetrahedron, the truncated cube and
the truncated cross-polytope by giving an explicit realization.
It is a straightforward matter to verify
the conjecture holds for any
$d$-dimensional cube.
We will show that Conjecture~\ref{conjecture_higher}
is false in all dimensions greater than or equal
to $3$.
\begin{theorem}
\label{theorem}
Let $d$ be a positive integer. Let $P$ be a $d'$-dimensional polytope with $d' \ge 2$ and $n$ facets
such that
every vertex is incident with at most $\left\lceil (n+2d)/2^d\right\rceil-d'$
facets. If $Q$ is the Cartesian product of $P$ with the $d$-dimensional cube
then there is no subset $S$ of the vertices of $Q$
such that the convex hull of $S$ is combinatorially equivalent to the dual polytope
$Q^*$.
\end{theorem}
\begin{proof}
Suppose on the contrary that $S$ is a subset of the vertices
of the polytope $Q$ satisfying the convex hull of $S$
is combinatorially equivalent to the dual polytope $Q^*$.
Observe that the dual polytope $Q^*$
is combinatorially equivalent to the $d$
times iterated bipyramid over $P^*$
and thus has $n+2d$ vertices.
The vertices of $S$ can then be divided into disjoint subsets
$T$ and $U$, with $\abs{T} = n$, and $\abs{U} = 2d$
so that the convex hull of $T$ is combinatorially equivalent to
$P^*$. The vertices of $U$
correspond to the $2d$ vertices
created when taking $d$ iterated bipyramids over $P^*$
to create $Q^*$.
Since $Q$ is formed by a Cartesian product,
the vertices $V(Q)$ of $Q$ can be partitioned into $2^d$
disjoint sets, say $V(Q) = \bigcup_{i=1}^{2^d}Q_i$,
where the convex hull of the vertices in $Q_i$ is combinatorially
equivalent to a copy of the original polytope $P$
for $i=1,\dots,2^d$. By the pigeonhole principle, of the $n+2d$
vertices selected to form the set $S$, there is at least one
set of the vertex partition of $V(Q)$, say $Q_k$, that contains
at least $\left\lceil (n+2d)/2^d \right \rceil$ vertices of $S$.
Let $H$ be a supporting hyperplane for $Q$ at $Q_k$
which contains no other vertices of $Q$.
This is also a supporting hyperplane for $S$,
so the convex hull of the vertices in $S\cap Q_k$ forms a face of
$Q^*$, which has dimension at most $d'$.
By hypothesis each vertex of $P$ is incident with at most
$\left\lceil (n+2d)/2^d\right\rceil-d'$
facets. Any vertex of $P$ is in at least $d'$ facets, so
$\left\lceil (n+2d)/2^d\right\rceil-d' \ge d'$.
This implies $\left\lceil(n+2d)/2^d\right\rceil \ge 2d'$,
so $S\cap Q_k$ contains at least $2d'$ vertices.
We claim at most $d'-1$ of these vertices are
from $U$.
To see this, note that since $S\cap Q_k$
is the set of vertices of a face of the convex
hull of $S$,
it consists of some (possibly
empty) set of vertices from $T$, and
some (possibly empty) set of vertices from $U$.
These vertices from $U$ lie in general
position to one another and are additionally in general position
with respect to $T$. Thus the fact that the convex hull of $S\cap Q_k$
has dimension at most
$d'$ yields that $\abs{U\cap Q_k} \le d'+1$.
If $\abs{U\cap Q_k} = d'+1$ then $\abs{T\cap Q_k} \ge d'-1 \ge 1$,
hence $S\cap Q_k$ contains $d'+2$ vertices in general position,
so the convex hull has dimension greater than $d'$, a contradiction.
Similarly if $\abs{U\cap Q_k} = d'$ then $\abs{T\cap Q_k}\ge d'\ge 2$,
and once again $S\cap Q_k$ would contain $d'+2$ vertices in general
position.
Therefore $\abs{U\cap Q_k} \le d'-1$,
hence $\abs{T\cap Q_k} \ge \left\lceil (n+2d)/2^d\right\rceil -d'+1$.
Since these vertices are in $T$ and in a face of the convex
hull of $S$, the convex hull of $T\cap Q_k$ forms
a proper face of $P^*$. Therefore
$P^*$ has a facet with at least
$\left\lceil (n+2d)/2^d\right\rceil -d'+1$ vertices,
and thus $P$ has a vertex incident with the same
number of facets, contrary to assumption.
Hence there is no such subset $S$ of the vertices
of $Q$ such that the convex hull of $S$ is combinatorially
equivalent to $Q^*$.
\end{proof}
\begin{corollary}
Let $P$ be an $n$-gon with $n \ge 3\cdot 2^d - 2d+1$.
Let $Q$ be the simple polytope formed by taking the Cartesian
product of $P$ with the $d$-dimensional cube. Then there is no subset
$S$ of the vertices of $Q$ where the convex hull of $S$ has the same
combinatorial type as the dual polytope $Q^*$.
\end{corollary}
\begin{proof}
Observe that $P$ satisfies the hypothesis of
Theorem~\ref{theorem} as $d'=2$
and $n \ge 3\cdot 2^d - 2d+1$ implies
$\left\lceil (n+2d)/2^d\right\rceil - d' \ge 2$.
Each vertex of $P$ is incident with at most 2 facets,
so indeed by Theorem~\ref{theorem} there is
no subset $S$ of the vertices of $Q$ where the
convex hull of $S$ has the same combinatorial type
as the dual polytope $Q^*$.
\end{proof}
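For example, the smallest case $d=1$ and $n=5$ already falls under the corollary: the prism over a pentagon is a $3$-dimensional simple polytope, yet no subset of its $10$ vertices has convex hull combinatorially equivalent to its dual, the bipyramid over the pentagon.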
\begin{conjecture}
The truncated $d$-simplex, the truncated $d$-cube and the truncated $d$-cross-polytope
for $d\ge 4$ are families of polytopes where Conjecture~\ref{conjecture_higher} holds.
\end{conjecture}
\section*{Acknowledgements}
The author thanks Richard Ehrenborg and Margaret Readdy
for inspiring conversations
and comments on an earlier draft,
and G\'abor Hetyei for comments.
\newcommand{\bookf}[5]{{\sc #1,} ``#2,'' #3, #4, #5.}
\section{Introduction}
The Adversarial Multi Armed Bandit(MAB) problem proceeds as a sequential game of $T$ rounds between a player and an adversary. In each round $t=1,\dots, T$, the player selects a distribution $p_t$ over the $n$-arms and the adversary selects a loss vector $l_t$ belonging to some set $\mathcal{L} \subseteq \mathbb{R}^n$. An action $i_t$ is sampled from $p_t$ and the player observes the loss $l_t(i_t)$. The (expected) regret of the player is:
$$R_T = \mathbb{E}\left[\sum_{t=1}^T l_t(i_t) - \min_{i \in [n]} \sum_{t=1}^T l_t(i)\right]$$
We assume that the adversary is oblivious, i.e., the loss vectors $l_1,\dots, l_T$ are chosen before the game begins. So, the above expectation is with respect to the randomness in the player's strategy. The goal of the player is to sequentially select the distributions $p_1,\dots,p_T$ such that $R_T$ is minimized. The adversarial MAB problem has been studied extensively; we refer the reader to the texts of \cite{DBLP:journals/ftml/BubeckC12,lattimore_szepesvari_2020,DBLP:journals/ftml/Slivkins19} for further details. Assuming that $\mathcal{L}$ is bounded, and the $\|\cdot\|_\infty$-Lipschitz constant $G$ is known to the player in advance (i.e. $\sup_{l \in \mathcal{L}} \|l\|_\infty = G<\infty$), the minimax rate of regret is known to be $\Theta(G\sqrt{nT})$. The Exp3 algorithm \citep{DBLP:journals/siamcomp/AuerCFS02} has a $\mathcal{O}(G\sqrt{nT\log(n)})$ regret bound whereas the Poly-INF algorithm \citep{DBLP:conf/colt/AudibertB09} removes the $\sqrt{\log(n)}$ factor, achieving the optimal $\mathcal{O}(G\sqrt{nT})$ regret bound. Exp3 and Poly-INF use $G$ in tuning the learning rate, which helps them achieve a linear dependence on $G$.
In this paper, we address the case when the player has no knowledge of $\mathcal{L}$. We consider \textit{Scale-Free} bounds for MABs, which aim to bound the regret in terms of $n$ and norms of the loss vectors $l_1,\dots,l_T$ for any sequence of loss vectors chosen arbitrarily by adversary. Scale-free bounds have been studied in the \textit{full-information} setting (where the player sees the complete vector $l_t$ in each round). For the Experts problem, which is the full-information counterpart of adversarial MAB, the AdaHedge algorithm \citep{ DBLP:journals/jmlr/RooijEGK14} has a scale-free regret bound of $\mathcal{O}(\sqrt{\log(n)(\sum_{t=1}^T \|l_t\|_\infty^2)})$. For the same problem, the Hedge algorithm \citep{DBLP:journals/jcss/FreundS97} has a regret bound of $\mathcal{O}(G\sqrt{T\log(n)})$ with knowledge of $G$. The scale-free bound is more general as it holds for any $l_1,\dots,l_T \in \mathbb{R}^n$, whereas the bound achieved by the Hedge algorithm only holds provided that $\sup_t \|l_t\|_\infty < G$ where $G$ needs to be known in advance.
\subsection{Our Contributions}
We present an algorithm for the scale-free MAB problem. By appropriately
setting the parameters of this algorithm, we can achieve a scale-free regret upper-bound of either $\tilde{\mathcal{O}}(\sqrt{nL_2} + L_\infty\sqrt{nT})$, or $\tilde{\mathcal{O}}(\sqrt{nL_2} + L_\infty\sqrt{nL_1})$. Here $L_\infty = \sup_t \| l_t\|_\infty$, $L_2 = \sum_{t=1}^T \|l_t\|_2^2$, $L_1 = \sum_{t=1}^T \|l_t\|_1$, and the $\tilde{\mathcal{O}}$ notation suppresses logarithmic factors. Our algorithm is also \textit{any-time} as it does not need to know the number of rounds $T$ in advance. Assuming $\sup_t \|l_t\|_\infty < G$, our first regret bound achieves linear dependence on $G$ (sans the hidden logarithmic terms). This bound is only an $\tilde{\mathcal{O}}(\sqrt{n})$ factor larger than Poly-INF's regret of $\mathcal{O}(G\sqrt{nT})$. The second bound is the first completely data-dependent scale-free regret bound for MABs as it has no direct dependence on $T$. Moreover, these are the first MAB bounds that adapt to the $\|\cdot\|_2$, $\|\cdot\|_1$ norms of the losses. The only previously known scale-free result for MABs was $\mathcal{O}(L_\infty \sqrt{nT\log(n)})$ by \citet{hadiji2020adaptation}, which adapts to the $\| \cdot \|_\infty$ norm and is not completely data-dependent due to the $T$ in their bound.
In the analysis, we present a novel and
general technique to obtain \textit{local-norm} lower-bounds for \textit{Bregman divergences} induced by a special class of functions that are commonly used in online learning. These local-norm lower-bounds can be used to obtain regret inequalities as shown in \citet[Corollary 28.8]{lattimore_szepesvari_2020}. We use our technique to obtain a full-information regret inequality that holds for any arbitrary sequence of losses and is particularly useful in the bandit setting due to its local-norm structure. This technique could be of independent interest.
\subsection{Related Work}
\textbf{Scale-Free Regret.} As mentioned earlier, Scale-Free regret bounds were studied in the full information setting. The AdaHedge algorithm from \citet{DBLP:journals/jmlr/RooijEGK14} gives a scale-free bound for the experts problem. The AdaFTRL algorithm from \cite{DBLP:journals/tcs/OrabonaP18} extends these bounds to the general online convex optimization problem. We rely on the analysis of AdaFTRL as presented in \cite{koolen_2016}. For the MAB problem, \citet{hadiji2020adaptation} show a scale-free bound of $\mathcal{O}(L_\infty \sqrt{nT\log(n)})$, which is close to the $\mathcal{O}(G\sqrt{nT\log(n)})$ bound of Exp3. Our scale-free bounds are more versatile as they are able to adapt to additional structure in the loss sequence, such as the case of sparse losses with large magnitude, i.e., when $L_2 \ll L_\infty^2 nT$ and $L_1 \ll L_\infty nT$. Even in the worst case, our bounds are a factor of $\tilde{\mathcal{O}}(\sqrt{n})$ and $\tilde{\mathcal{O}}(\sqrt{nL_\infty})$ larger than their bound, respectively.
\\
\noindent \textbf{Data-dependent Regret.} These bounds use a ``measure of hardness" of the sequence of loss vectors instead of $T$. Algorithms that have a data-dependent regret bound perform better than the worst-case regret, when the sequence of losses is ``easy" according to the measure of hardness used. For instance, First-order bounds \citep{DBLP:conf/alt/AllenbergAGO06, DBLP:conf/nips/FosterLLST16, DBLP:conf/uai/PogodinL19}, also known as small-loss or $L^\star$ bounds depend on $L^\star = \min_{i \in [n]}\sum_{t=1}^T l_t(i)$. Bounds that depend on the empirical variance of the losses were shown in \cite{DBLP:journals/jmlr/HazanK11, DBLP:conf/alt/BubeckCL18}. Path length bounds that depend on $\sum_{t=1}^{T-1}\|l_t-l_{t+1}\|$ or a similar quantity appear in \cite{DBLP:conf/colt/WeiL18, DBLP:conf/colt/BubeckLLW19}. \citet{DBLP:journals/jmlr/ZimmertS21} give an algorithm that adapts to any stochastictiy present in the losses. Our bound is comparable to a result in \cite{DBLP:conf/alt/BubeckCL18}, where they derive a regret bound depending on $\sum_{t=1}^T \|l_t\|_2^2$. However, all these results assume either $\mathcal{L} = [0,1]^n$ or $\mathcal{L} = [-1,1]^n$.
\\
\noindent \textbf{Effective Range Regret.} The effective range of the loss sequence is defined as $\sup_{t,i,j}| l_t(i) - l_t(j)|$. \citet{DBLP:conf/nips/GerchinovitzL16} showed that it is impossible to adapt to the effective range in adversarial MAB. This result does not contradict the existence of scale-free bounds as the effective range could be much smaller than, for instance, the complete range $\sup_{t,s,i,j}| l_t(i) - l_s(j)|$. In fact, \citet{hadiji2020adaptation} already show a regret bound that adapts to the complete range. We do note that under some mild additional assumptions, \citet{DBLP:conf/alt/Cesa-BianchiS18} show that it is possible to adapt to the effective range.
\subsection{Organization}
In Section \ref{sec:algorithm} we present the scale-free MAB algorithm (Algorithm \ref{alg:SF_MAB}) and its scale-free regret bound (Theorem \ref{thm:main}). Section \ref{sec:preliminaries} introduces Potential functions, based on which we build our analysis. Section \ref{subsec:BregmanLowerBound} shows a technique for obtaining local-norm lower-bounds for Bregman divergences. Section \ref{subsec:FTRL} briefly discusses full-information FTRL, AdaFTRL and in Theorem \ref{thm:LogBarrierRegret} we obtain a regret inequality for AdaFTRL with the log-barrier regularizer. Theorem \ref{thm:main} is proved in Section \ref{sec:proof}.
\subsection{Notation}
Let $\Delta_n$ be the probability simplex $\{p\in \mathbb{R}^n: \sum_{i=1}^n p(i) = 1, p(i)\geq 0, i\in [n]\}$. Let $\textbf{1}^{i}$ be the vector with $\textbf{1}^{i}(i)=1$ and $\textbf{1}^{i}(j)=0$ for all $j\neq i$. For $\epsilon \in (0,1]$, let $\textbf{1}^i_\epsilon = (1-\epsilon) \textbf{1}^i + \epsilon/n$. The all ones and all zeros vector are denoted by $\textbf{1}$ and $\textbf{0}$ respectively. Let $H_t$ be the history from time-step $1$ to $t$, i.e., $H_t = \{l_1(i_1), l_2(i_2),\dots,l_t(i_t)\}$.
\section{Algorithm}
\label{sec:algorithm}
Consider for a moment, full-information strategies on $\Delta_n$. In the full information setting, in each round $t$, the player picks a point $p_t \in \Delta_n$. Simultaneously, the adversary picks a loss vector $l_t \in \mathbb{R}^n$. The player incurs a loss of $l_t^\top p_t$ and (unlike the bandit setting) {\it sees the entire vector $l_t$.} A full-information strategy $\mathcal{F}$ takes as input a sequence of loss vectors $l_1,\dots,l_t$ and outputs the next iterate $p_{t+1} \in \Delta_n$. A MAB strategy $\mathcal{B}$ can be constructed from a full-information strategy $\mathcal{F}$ along with two other components as follows:
\begin{enumerate}
\item A sampling scheme $\mathcal{S}$, which constructs a sampling distribution $p'_t$ from the current iterate $p_t$. An arm $i_t$ is then sampled from $p'_t$ and the loss $l_t(i_t)$ is revealed to the player.
\item An estimation scheme $\mathcal{E}$, that constructs an estimate $\tilde{l}_t$ of the loss vector $l_t$ using $l_t(i_t)$ and $p_t$.
\item A full-information strategy $\mathcal{F}$, which computes the next iterate $p_{t+1}$ using all the estimates $\tilde{l}_1,\dots,\tilde{l}_t$.
\end{enumerate}
In fact, most existing MAB strategies in the literature can be described in the above framework with different choices of ${\cal S}, {\cal E}, {\cal F}$.
A delicate balance needs to be struck between $\mathcal{S}, \mathcal{E}$ and $\mathcal{F}$ in order to achieve a good regret bound for $\mathcal{B}$. Suppose the best arm in hindsight is $i^\star = \arg\min_{i \in [n]} \sum_{t=1}^T l_t(i)$. The expected regret of MAB strategy $\mathcal{B}$ can be decomposed as follows:
\begin{align*}
&\mathbb{E}\left[\sum_{t=1}^T (l_t(i_t) - l_t(i^\star))\right] = \mathbb{E}\left[\sum_{t=1}^T l_t^\top(p'_t - \textbf{1}^{i^\star})\right] =\mathbb{E}\left[\sum_{t=1}^T l_t^\top(p'_t - p_t)\right] + \mathbb{E}\left[\sum_{t=1}^T l_t^\top(p_t - \textbf{1}^{i^\star})\right] \\
&= \underbrace{\mathbb{E}\left[\sum_{t=1}^T l_t^\top(p'_t - p_t)\right]}_{(1)} + \underbrace{\mathbb{E}\left[\sum_{t=1}^T (l_t-\tilde{l}_t)^\top(p_t - \textbf{1}^{i^\star})\right]}_{(2)}+
\underbrace{\mathbb{E}\left[\sum_{t=1}^T \tilde{l}_t^\top(p_t - \textbf{1}^{i^\star})\right]}_{(3)}
\end{align*}
Term (1) is due to the sampling scheme $\mathcal{S}$, term (2) is the effect of the estimation scheme $\mathcal{E}$ and term (3) is the expected regret of the full-information strategy $\mathcal{F}$ on the loss sequence $\tilde{l}_1,\dots,\tilde{l}_T$ compared to playing the fixed strategy $\textbf{1}^{i^\star}$.
\\
\noindent \textbf{Sampling Scheme.} A commonly used sampling scheme mixes $p_t$ with the uniform distribution using a parameter $\gamma$, i.e., $p'_t = (1-\gamma)p_t + \gamma/n$. Such schemes were first introduced in the seminal work of \citet{DBLP:journals/siamcomp/AuerCFS02} and have remained a mainstay in MAB algorithm design. We use a time-varying $\gamma$, i.e., we pick $p'_t = (1-\gamma_{t-1})p_t + \gamma_{t-1}/n$. Here $\gamma_{t-1}$ could be any measurable function of $H_{t-1}$.
\\
\noindent \textbf{Estimation Scheme.} We use the \textit{Importance Weighted}(IW) estimator which was also introduced by \citet{DBLP:journals/siamcomp/AuerCFS02}. It computes $\tilde{l}_t$ as:
$$\tilde{l}_t = \frac{l_t(i_t)}{p'_t(i_t)} \textbf{1}^{i_t}$$
Since the sampling distribution is $p'_t$, the IW estimator is an unbiased estimate of $l_t$:
$$\mathbb{E}_{i_t \sim p'_t}[\tilde{l}_t] = \sum_{i_t=1}^n p'_t(i_t)\frac{l_t(i_t)}{p'_t(i_t)} \textbf{1}^{i_t} = l_t$$
Note that $p_t$ is a measurable function of $H_{t-1}$. Using the tower rule and the fact that $\mathbb{E}_{i_t \sim p'_t}[\tilde{l}_t] = l_t$, we can see that term (2) is $0$.
\\
\noindent \textbf{Full-information strategy.} For $\mathcal{F}$, there is a large variety of full-information algorithms that one could pick from. Most if not all of them belong to one of the two principal families of algorithms: Follow The Regularized Leader (FTRL) or Online Mirror Descent (OMD). Further, one also has to choose a suitable \textit{regularizer} $F$ within these algorithms for the particular application at hand. We refer to \cite{DBLP:books/daglib/0016248, DBLP:journals/ftml/Shalev-Shwartz12, DBLP:journals/ftopt/Hazan16, DBLP:journals/corr/abs-1912-13213, DBLP:conf/alt/JoulaniGS17, DBLP:journals/tcs/JoulaniGS20} for a detailed history and comparison of these algorithms. The particular algorithm we use is FTRL with an $H_t$-measurable, adaptive learning rate $\eta_t$ that resembles the adaptive schemes in AdaHedge \citep{DBLP:journals/jmlr/RooijEGK14} and AdaFTRL \citep{DBLP:journals/tcs/OrabonaP18}.
The regret of $\mathcal{F}$ has a component called the \textit{stability} term $\Psi_{p}:\mathbb{R}^n \to \mathbb{R}$. In the bandit case, $\mathcal{F}$ receives the IW estimates $\tilde{l}_t$. So, it is important that the stability term be bounded with IW estimates. Without going into any technical details, we note that it is desirable to have a stability term bounded by $\Psi_p(l) \leq p^\top l^2$ as its expectation with IW estimates can be bounded.
Previous techniques to bound the stability term by $p^\top l^2$ relied on assumptions on $l$, such as either $l\geq \textbf{0}$ or $l\geq -\textbf{1}$ (see \cite[Page 5]{DBLP:journals/corr/abs-1907-05772}). For arbitrary $l \in \mathbb{R}^n$, we show that it is possible to bound the stability term by $p^\top l^2$ using the \textit{log-barrier} regularizer. The procedure we develop to obtain this bound is the main technical contribution of our paper.
The complete algorithm for the scale-free MAB problem is described below. We give two choices for the exploration parameter $\gamma_t$: a simple non-adaptive scheme that is similar to the one in \cite{hadiji2020adaptation}, where $\gamma_t \propto \frac{1}{\sqrt{t}}$, and an adaptive scheme that picks $\gamma_t$ in a fashion that resembles the adaptive learning rate scheme $\eta_t$.
\begin{algorithm2e}
\caption{Scale-Free Multi Armed Bandit}
\label{alg:SF_MAB}
\DontPrintSemicolon
Starting Parameters: $\eta_0=n,\gamma_0=1/2$\;
Regularizer $\displaystyle F(q) = \sum_{i=1}^n (f(q(i)) - f(1/n))$, where $f(x) = -\log(x)$\;
First iterate $p_1 = (1/n,\dots,1/n)$\;
\For{$t = 1$ to $T$}{
Sampling Scheme: $\displaystyle p'_t = (1-\gamma_{t-1})p_t + \frac{\gamma_{t-1}}{n}$\;
Sample Arm $i_t \sim p'_t$ and see loss $l_t(i_t)$.\;
Estimation Scheme: $\displaystyle \tilde{l}_t = \frac{l_t(i_t)}{p'_t(i_t)} \textbf{1}^{i_t}$\;
Compute $\gamma_t$ for next step: \\(Option 1) Non-adaptive $\gamma_t = \min(1/2,\sqrt{n/t})$ \\(Option 2) Adaptive $\displaystyle \gamma_t = \frac{n}{2n + \sum_{s=1}^t \Gamma_s(\gamma_{s-1})}$ where $\displaystyle \Gamma_t(\gamma) = \frac{\gamma |l_t(i_t)|}{(1-\gamma) p_t(i_t) + \gamma/n}$\;
Compute $\displaystyle \eta_t = \frac{n}{1+\sum_{s=1}^t M_s(\eta_{s-1})} $ where $\displaystyle M_t(\eta) = \sup_{q \in \Delta_n} \left[ \tilde{l}_t^\top (p_t-q) - \frac{1}{\eta} \text{Breg}_{F}(q\|p_t) \right]$\;
Find next iterate using FTRL:
$\displaystyle p_{t+1} = \arg \min_{q \in \Delta_n} \left[ F(q) + \eta_{t} \sum_{s=1}^{t} q^\top \tilde{l}_s \right]$
}
\end{algorithm2e}
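To make the steps above concrete, the following is a minimal NumPy sketch of Algorithm~\ref{alg:SF_MAB} with the adaptive exploration rate (Option 2); it is not the authors' implementation. Both the FTRL step and the mixability gap $M_t$ reduce, by a standard Lagrangian computation for the log-barrier, to locating the multiplier $\lambda$ of the simplex constraint so that the coordinates $1/(a_i+\lambda)$ sum to one; the helper \texttt{solve\_simplex} finds it by bisection. All names, tolerances, and the toy data at the end are our own choices.
\begin{verbatim}
import numpy as np

def solve_simplex(a, iters=100):
    # Return the probability vector with coordinates q_i = 1/(a_i + lam),
    # where the multiplier lam is found by bisection so that sum_i q_i = 1.
    # At lam = 1 - min(a) the sum is >= 1; at lam = n - min(a) it is <= 1.
    a = np.asarray(a, dtype=float)
    lo, hi = 1.0 - a.min(), len(a) - a.min()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(1.0 / (a + mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    q = 1.0 / (a + hi)
    return q / q.sum()          # renormalise to absorb the bisection tolerance

def breg_log_barrier(q, p):
    # Bregman divergence of F(x) = sum_i -log x(i) (additive constants cancel).
    return np.sum(q / p - 1.0 - np.log(q / p))

def mixability_gap(l_hat, p, eta):
    # M_t(eta) = sup_{q in simplex} [ l_hat^T (p - q) - Breg_F(q||p)/eta ];
    # first-order conditions give q_i = 1/(1/p_i + eta*l_hat_i + lam).
    q_star = solve_simplex(1.0 / p + eta * l_hat)
    return float(l_hat @ (p - q_star) - breg_log_barrier(q_star, p) / eta)

def scale_free_mab(losses, seed=0):
    # Run Algorithm 1 (adaptive gamma, Option 2) on a T x n matrix of losses.
    rng = np.random.default_rng(seed)
    T, n = losses.shape
    eta, gamma = float(n), 0.5          # eta_0 = n, gamma_0 = 1/2
    sum_M, sum_Gamma = 0.0, 0.0
    cum_l_hat = np.zeros(n)             # running sum of the loss estimates
    p = np.full(n, 1.0 / n)             # first FTRL iterate
    total = 0.0
    for t in range(T):
        p_prime = (1.0 - gamma) * p + gamma / n        # sampling scheme
        i_t = rng.choice(n, p=p_prime)
        loss = losses[t, i_t]
        total += loss
        l_hat = np.zeros(n)
        l_hat[i_t] = loss / p_prime[i_t]               # importance-weighted estimate
        sum_Gamma += gamma * abs(loss) / p_prime[i_t]  # Gamma_t(gamma_{t-1})
        gamma = n / (2.0 * n + sum_Gamma)              # adaptive exploration rate
        sum_M += mixability_gap(l_hat, p, eta)         # M_t(eta_{t-1})
        eta = n / (1.0 + sum_M)                        # adaptive learning rate
        cum_l_hat += l_hat
        p = solve_simplex(eta * cum_l_hat)             # log-barrier FTRL step
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, n = 2000, 5
    losses = rng.normal(scale=3.0, size=(T, n)) + np.linspace(0.0, 2.0, n)
    regret = scale_free_mab(losses) - losses.sum(axis=0).min()
    print("empirical regret against the best arm:", round(float(regret), 2))
\end{verbatim}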
Our main result is the following regret bound for Algorithm \ref{alg:SF_MAB}.
\begin{restatable}{theorem}{main}
\label{thm:main}
For any $l_1,\dots,l_T \in \mathbb{R}^n$, the expected regret of Algorithm \ref{alg:SF_MAB} is at most:
\begin{enumerate}
\item $\tilde{\mathcal{O}}(\sqrt{nL_2} + L_\infty\sqrt{nT})$ if $\gamma_t$ is non-adaptive (Option 1) and $T\geq 4n$
\item $\tilde{\mathcal{O}}(\sqrt{nL_2} + L_\infty\sqrt{nL_1})$ if $\gamma_t$ is adaptive (Option 2)
\end{enumerate}
Here $L_\infty = \max_t \| l_t\|_\infty$, $L_2 = \sum_{t=1}^T \|l_t\|_2^2$, and $L_1 = \sum_{t=1}^T \|l_t\|_1$.
\end{restatable}
\section{Preliminaries}
We begin by recalling a few definitions.
\label{sec:preliminaries}
\begin{definition}[Legendre function]
A continuous function $F:\mathcal{D} \to \mathbb{R}$ is Legendre if $F$ is strictly convex, continuously differentiable on $\text{Interior}(\mathcal{D})$ and $\lim_{x \to \mathcal{D}/\text{Interior}(\mathcal{D})} \|\nabla F(x)\| = +\infty$.
\end{definition}
For instance, the functions $x\log(x)-x$, $-\sqrt{x}$, $-\log(x)$ are all Legendre on $(0,\infty)$.
\begin{definition}[Bregman Divergence]
The Bregman Divergence of function $F$ is:
$$\text{Breg}_F(x\|y) = F(x)-F(y) - \nabla F(y)^\top (x-y).$$
\end{definition}
\begin{definition}[Potential Function] A function $\psi: (-\infty,a) \to (0,+\infty)$ for some $a \in \mathbb{R} \cup \{+\infty\}$ is called a Potential if it is convex, strictly increasing, continuously differentiable and satisfies:
$$\lim_{x \to -\infty} \psi(x) = 0 \quad\text{ and } \quad \lim_{x \to a} \psi(x) = +\infty$$
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{potential_function.png}
\caption{Potential Function}
\label{fig:potential}
\end{figure}
For instance, $\exp(x)$ is a potential with $a=\infty$ and $-1/x$ is a potential with $a=0$. A potential function typically looks like Figure \ref{fig:potential}. Potentials were introduced in \citet{DBLP:conf/colt/AudibertB09, DBLP:journals/jmlr/AudibertBL11, DBLP:journals/mor/AudibertBL14} for analyzing the Implicitly Normalized Forecaster(INF) algorithm, of which Poly-INF is a specific case.
Associated with a potential $\psi$, we define a function $f_\psi$ as the indefinite integral $f_\psi(z) = \int \psi^{-1}(z) dz + C$. Since the domain of $\psi^{-1}$ is $(0,\infty)$, the domain of $f_\psi$ is also $(0,\infty)$. For instance, if $\psi(x) = -1/x$ on the domain $(-\infty,0)$, the associated function is $f_\psi(x) = -\log(x) + C$.
Observe that $f_\psi'(z) = \psi^{-1}(z)$ and $f_\psi''(z) = \left[ \psi'(\psi^{-1}(z))\right]^{-1}$. Since $\psi$ is strictly convex and increasing, $\psi'>0$ and thus $f''_\psi >0$, making $f_\psi$ strictly convex. Moreover, $\lim_{z \to 0}\mid f_\psi'(z)\mid = \lim_{z \to 0}\mid \psi^{-1}(z)\mid = +\infty$. Thus $f_\psi$ is a Legendre function on $(0,\infty)$. Define the function $F_\psi:\mathbb{R}^n \to \mathbb{R}$ as $F_\psi(x) = \sum_{i=1}^n [f_\psi(x(i)) - f_\psi(1/n)]$. This function is Legendre on $(0,\infty)^n$.
Given a potential $\psi:(-\infty,a) \to (0,+\infty)$ and its associated function $f_\psi$, the Legendre-Fenchel dual of $f_\psi$ is $f_\psi^\star:(-\infty,a) \to \mathbb{R}$ defined as $f_\psi^\star(u) = \sup_{z > 0}(zu - f_\psi(z))$. The supremum is achieved at $z={f'_\psi}^{-1}(u)=\psi(u)$. So we have that $f_\psi^\star(u) = u \psi(u) - f_\psi(\psi(u))$. This implies $ {f_\psi^\star}'(u) = \psi(u) $ and ${f_\psi^\star}''(u) = \psi'(u)$. Further, using integration by parts on $\int \psi(u) du$ and substituting $\psi(u)=s$:
$$
\int \psi(u) du = u\psi(u) -\int u \psi'(u) du= u\psi(u) - \int \psi^{-1}(s)ds = u\psi(u)-f_\psi(\psi(u)) + C = f^\star_\psi(u) + C
$$
Thus $f^\star_\psi(u) = \int \psi(u) du - C$. Here $C$ is the same constant of integration picked when defining $f_\psi(z) = \int \psi^{-1}(z) dz + C$. We have the following property (proof in Appendix \ref{app:potentials}):
\begin{restatable}{Lemma}{bregtransform}
\label{lem:breg_transform}
Let $x,y$ be such that $x=\psi(u)$ and $y=\psi(v)$. Then $\text{Breg}_{f_\psi}(y\|x) = \text{Breg}_{f_\psi^\star}(u\|v)$
\end{restatable}
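For instance, for the potential $\psi(u)=-1/u$ on $(-\infty,0)$ we have $f_\psi(z)=-\log(z)+C$ and, up to an additive constant, $f_\psi^\star(u)=-\log(-u)$; writing $x=\psi(u)$ and $y=\psi(v)$, so that $y/x=u/v$, a direct computation confirms the lemma in this case:
\begin{equation*}
\text{Breg}_{f_\psi}(y\|x)=\frac{y}{x}-1-\log\frac{y}{x}=\frac{u}{v}-1+\log\frac{v}{u}=\text{Breg}_{f_\psi^\star}(u\|v).
\end{equation*}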
\section{New local-norm lower-bounds for Bregman divergences}
\label{subsec:BregmanLowerBound}
Let $\psi$ be a potential and $x,y \in \mathbb{R}_+$. We show a general way of using potential functions to obtain lower-bounds of the form:
$$\text{Breg}_{f_\psi}(y\|x) \geq \frac{1}{2 w(x)}(x-y)^2$$
where $w$ is some positive function.
\begin{lemma}
\label{lem:lowerbound}
Let $\psi$ be a potential and $x\in \mathbb{R}_+$ such that $x=\psi(u)$ for some $u$. Let $\phi$ be a non-negative function such that $\psi(u+\phi(u))$ exists. Define the function $m(z) = \frac{\psi(z+\phi(z))-\psi(z)}{\phi(z)}$. For all $0< y \leq \psi(u+\phi(u))$ we have the lower bound:
$\text{Breg}_{f_\psi}(y\|x) \geq \frac{1}{2} \frac{(x-y)^2}{m(\psi^{-1}(x))}$
\end{lemma}
\begin{proof}
Let $v$ be such that $y=\psi(v)$. Using Lemma \ref{lem:breg_transform}, we have $\text{Breg}_{f_\psi}(y\|x) = \text{Breg}_{f_\psi^\star}(u\|v)$. Using the fact that $f_\psi^\star(u) = \int \psi(u) du - C$, we have:
\begin{align*}
\text{Breg}_{f_\psi^\star}(u\|v) &= f_\psi^\star(u)-f_\psi^\star(v)-{f_\psi^\star}'(v)(u-v) = \int_{v}^u \psi(s)\,ds - y(u-v)
\end{align*}
We can visualize $\text{Breg}_{f_\psi^\star}(u\|v)$ using the potential function. When $v\leq u$, it is the area with green borders in Figure \ref{fig:case1} and when $u\leq v$, it is the area with green borders in Figure \ref{fig:case2}.
\begin{figure}[h]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{lowerbound1.png}
\caption{$v\leq u$}
\label{fig:case1}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{lowerbound2.png}
\caption{$u\leq v\leq u+\phi(u)$}
\label{fig:case2}
\end{minipage}
\end{figure}
Consider the line passing through $(u,x)$ and $(u+\phi(u),\psi(u+\phi(u)))$. Its slope is $m(u) \geq \psi'(u)> 0$. In both cases, the height of the red triangle is $|x-y|$ and its base is $\frac{|x-y|}{m(u)}$. So, the area of the red triangle will be $\frac{1}{2} \frac{(x-y)^2}{m(u)}$. Since the area of the triangle is always at most $\text{Breg}_{f_\psi^\star}(u\|v)$, we have the lower bound $\text{Breg}_{f_\psi}(y\|x) \geq \frac{1}{2} \frac{(x-y)^2}{m(\psi^{-1}(x))}$.
\end{proof}
In the context of online learning, local-norm lower-bounds have been studied before, see for example \cite{DBLP:journals/corr/abs-1912-13213}. However, these relied upon Taylor's theorem to show that $\text{Breg}_{f_\psi}(y\|x) = \frac{1}{2}(x-y)^2 f''_\psi(z)$ for some $z \in [x,y]$. Then, they used further conditions on $x,y$ to argue that $c f''_\psi(x) \leq f''_\psi(z)$ for some positive constant $c$ and thus arrive at $\text{Breg}_{f_\psi}(y\|x) \geq \frac{c}{2}(x-y)^2 f''_\psi(x)$. We generalize this argument in Lemma \ref{lem:lowerbound}, through which we are able to generate a more rich class of lower-bounds. We illustrate with an example below:
\begin{corollary}
\label{cor:LogBarrierLowerbound}
Let $\psi(u)=-1/u$ in the domain $(-\infty,0)$. For $x,y \in (0,1]$, we have the lower-bound $$\text{Breg}_{f_\psi}(y\|x) = \frac{y}{x}-1-\ln\left( \frac{y}{x} \right)\geq \frac{1}{2} \frac{(x-y)^2}{x}$$
\end{corollary}
\begin{proof}
For any $x \in (0,1]$, let $u \in (-\infty,-1]$ be such that $\psi(u)=x$. Let $\phi(u) = -1-u$. Clearly, $\phi(u) \geq 0$ and $\psi(u+\phi(u)) = \psi(-1)=1$. We have $$m(u) = \frac{\psi(u+\phi(u))-\psi(u)}{\phi(u)} = \frac{1+\frac{1}{u}}{-1-u} = \frac{-1}{u} = \psi(u) = x$$
Applying Lemma \ref{lem:lowerbound}, we have the lower-bound for all $0<y\leq 1$:
$$\text{Breg}_{f_\psi}(y\|x) = \frac{y}{x}-1-\ln\left( \frac{y}{x} \right) \geq \frac{1}{2} \frac{(x-y)^2}{m(\psi^{-1}(x))} =\frac{1}{2} \frac{(x-y)^2}{x} $$
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width = 0.47\textwidth]{inequality.png}
\caption{$\frac{y}{x}-1-\ln\left( \frac{y}{x} \right) \geq \frac{1}{2} \frac{(x-y)^2}{x} $}
\label{fig:inequality}
\end{figure}
The result of Corollary \ref{cor:LogBarrierLowerbound} is illustrated in Figure \ref{fig:inequality}. The shaded region is $\lbrace(x,y): x\geq 0, y\geq 0, \frac{y}{x}-1-\ln\left( \frac{y}{x} \right) \geq \frac{1}{2} \frac{(x-y)^2}{x} \rbrace$. Clearly the region $\{(x,y):0\leq x\leq 1, 0\leq y\leq 1\}$ is within the shaded region.
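As a quick numerical sanity check of Corollary~\ref{cor:LogBarrierLowerbound} (the snippet below is ours and not part of the analysis), one can sample random points of $(0,1]^2$ and verify the inequality directly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(1e-6, 1.0, size=(2, 10**6))  # random points of (0, 1]^2
breg = y / x - 1.0 - np.log(y / x)              # Breg_{f_psi}(y || x), f_psi = -log
lower = 0.5 * (x - y) ** 2 / x                  # claimed local-norm lower bound
print(bool(np.all(breg >= lower - 1e-12)))      # prints True
print(float(np.min(breg - lower)))              # smallest slack, essentially zero near y = x
\end{verbatim}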
\section{Full-Information FTRL and AdaFTRL}
\label{subsec:FTRL}
The iterates of FTRL with the regularizer $F_\psi(x) = \sum_{i=1}^n [f_\psi(x(i)) - f_\psi(1/n)]$ for some potential function $\psi$ and positive learning rates $\{\eta_t\}_{t=0}^T$, are of the form:
$$p_{t+1} = \arg \min_{q\in \Delta_n} \left[ F_\psi(q)+\eta_t \sum_{s=1}^t l_s^\top q \right]$$
Since $F_\psi$ is Legendre, the point $p_{t+1}$ always exists strictly inside $\Delta_n$.
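For instance, for the log-barrier regularizer $F_\psi(x) = \sum_{i=1}^n [\log(1/n)-\log(x(i))]$ used below, the first-order optimality conditions of this minimization (a standard Lagrangian computation, spelled out here only for convenience) give the iterate explicitly up to one scalar:
\begin{equation*}
p_{t+1}(i) = \frac{1}{\eta_t \sum_{s=1}^t l_s(i) + \lambda_t}, \qquad i \in [n],
\end{equation*}
where $\lambda_t$ is the multiplier of the constraint $\sum_{i=1}^n p_{t+1}(i)=1$; in practice $\lambda_t$ can be located by a one-dimensional bisection.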
\cite{DBLP:journals/corr/abs-1912-13213} and \cite{DBLP:conf/alt/JoulaniGS17, DBLP:journals/tcs/JoulaniGS20} provide general purpose regret analysis of FTRL. For the sake of completeness, we show a simple way of analyzing FTRL when the action set is $\Delta_n$ and the regularizer chosen is of the form $F_\psi(x) = \sum_{i=1}^n [f_\psi(x(i)) - f_\psi(1/n)]$ in Appendix \ref{app:ftrl_regret}.
The AdaFTRL strategy picks a specific sequence of learning rates $\eta_t$ based on the history $H_t$. This strategy was analyzed in \cite{DBLP:journals/tcs/OrabonaP18} and a simpler analysis was given by \cite{koolen_2016}. Our analysis is adapted from \citet[Section E.2.1]{hadiji2020adaptation}. We consider the adaptive learning rate: $$\eta_t = \frac{\alpha}{\beta + \sum_{s=1}^t M_s(\eta_{s-1})}$$
where $M_t(\eta) = \sup_{q \in \Delta_n} \left[ l_t^\top(p_t-q) - \frac{1}{\eta} \text{Breg}_{F_\psi}(q\|p_t)\right]$ is the \textit{mixability gap} and $\alpha,\beta>0$. Since $q = p_t$ is a feasible solution for this optimization problem, we have $M_t(\eta) \geq 0$. Let $p_t^\star$ be the value of $q$ attaining the supremum. We have the upper bound $$M_t(\eta) = l_t^\top(p_t-p_t^\star) - \frac{1}{\eta}\text{Breg}_{F_\psi}(p_t^\star\|p_{t}) \leq l_t^\top(p_t-p_t^\star) \leq 2 \|l_t\|_\infty$$ Since the $M_t(\eta)$ are non-negative and bounded, the sequence $\eta_t$ is non-increasing.
\begin{theorem}
\label{thm:LogBarrierRegret}
If the regularizer is the log-barrier $F_\psi(x) = \sum_{i=1}^n [\log(1/n)-\log(x(i))]$ then for any $i\in [n]$, $\epsilon \in (0,1]$ and any sequence of losses $l_1,\dots,l_T$, the iterates of AdaFTRL satisfy the regret inequality $\sum_{t=1}^T l_t^\top (p_t-\textbf{1}^i_\epsilon) $:
$$\leq n\log(\nicefrac{1}{\epsilon}) \left( \frac{\beta}{\alpha} + \frac{2 \sup_t\|l_t\|_\infty}{\alpha}\right) + 2\sup_t\|l_t\|_\infty + \sqrt{\sum_{t=1}^T p_t ^\top l_t^2} \left( \frac{n\log(\nicefrac{1}{\epsilon})}{\sqrt{\alpha}}+ \sqrt{\alpha} \right)$$
\end{theorem}
\begin{proof} The log-barrier regularizer $F_\psi(x) = \sum_{i=1}^n [\log(1/n)-\log(x(i))]$ is obtained by using the potential $\psi(u) = -1/u$ on the domain $(-\infty,0)$. Using Corollary \ref{cor:LogBarrierLowerbound}, we have the lower-bound:
\begin{align*}
\text{Breg}_{F_\psi}(p_t^\star\|p_{t}) = \sum_{i=1}^n \text{Breg}_{f_\psi}(p_t^\star(i)\|p_{t}(i)) \geq \sum_{i=1}^n \frac{1}{2} \frac{(p_t(i)-p^\star_t(i))^2}{p_t(i)}
\end{align*}
This gives us the upper-bound:
\begin{align*}
M_t(\eta) &= l_t^\top(p_t-p_t^\star) - \frac{1}{\eta}\text{Breg}_{F_\psi}(p_t^\star\|p_{t}) \leq \sum_{i=1}^n \left[ l_t(i)(p_t(i) - p_t^\star(i)) - \frac{(p_t(i)-p^\star_t(i))^2}{2\eta p_t(i)} \right]\\
&\leq \sum_{i=1}^n \sup_{s\in \mathbb{R}}\left[ l_t(i)s - \frac{1}{2\eta} \frac{s^2}{p_t(i)} \right] \leq \frac{\eta}{2} \sum_{i=1}^n p_t(i) l_t(i)^2 = \frac{\eta}{2} p_t ^\top l_t^2
\end{align*}
Thus, we have $$\frac{M_t(\eta_{t-1})}{\eta_{t-1}} \leq \frac{1}{2} p_t ^\top l_t^2$$
Applying Theorem \ref{thm:AdaFTRL_regret} (Appendix \ref{app:ftrl_regret}), for any $i \in [n]$ and $\epsilon \in (0,1]$ we have that $\sum_{t=1}^T l_t^\top(p_t-\textbf{1}^i_\epsilon)$:
\begin{align*}
\leq F_\psi(\textbf{1}^{i}_\epsilon) \left( \frac{\beta}{\alpha} + \frac{2 \sup_t\|l_t\|_\infty}{\alpha}\right) + 2\sup_t\|l_t\|_\infty + \sqrt{\sum_{t=1}^T p_t ^\top l_t^2} \left( \frac{F_\psi(\textbf{1}^{i}_\epsilon)}{\sqrt{\alpha}}+ \sqrt{\alpha} \right)
\end{align*}
The term $F_\psi(\textbf{1}^{i}_\epsilon)$ can be bounded as:
\begin{align*}
F_\psi(\textbf{1}^{i}_\epsilon) &= n\log(1/n) - (n-1)\log(\epsilon/n) - \log((1-\epsilon) + \epsilon/n)\\
&\leq n \log(1/n) - n\log(\epsilon/n) = n\log(1/\epsilon)
\end{align*}
\end{proof}
\noindent For $p\in \Delta_n$ and regularizer $F_\psi$, the stability term $\Psi$ is defined as $$\Psi_{p}(l) = \sup_{q \in \Delta_n} \left[ l^\top(p-q) - \text{Breg}_{F_\psi}(q\|p)\right]$$
Observe that $\eta M_t(\eta) = \Psi_{p_t}(\eta l_t)$. For the log-barrier regularizer, we have $M_t(\eta) \leq \eta p_t^\top l_t^2$. Thus, $\Psi_p(l) \leq p^\top l^2$ for all $l \in \mathbb{R}^n$. Previously, the only known way to achieve $\Psi_p(l) \leq p^\top l^2$ was by using the negative-entropy regularizer along with the assumption $l \geq -\textbf{1}$ (See \citet[Eq. 6 ]{DBLP:journals/corr/abs-1907-05772} or \citet[Eq. 37.15]{lattimore_szepesvari_2020}).
\section{Scale-free bandit regret bounds}
\label{sec:proof}
\main*
\begin{proof} Suppose the best arm in hindsight is $i_\star = \arg\min_{i \in [n]} \sum_{t=1}^T l_t(i) $. Let $\textbf{1}^{i_\star}$ be the vector with $\textbf{1}^{i_\star}(i_\star)=1$ and $\textbf{1}^{i_\star}(i)=0$ for all $i\neq i_\star$. Let $\textbf{1}^{i_\star}_\epsilon = (1-\epsilon)\textbf{1}^{i_\star} + \epsilon/n$.
The expected regret of Algorithm \ref{alg:SF_MAB} is:
\begin{align*}
\mathbb{E}\left[ \sum_{t=1}^T l_t(i_t) - l_t(i^\star) \right] &= \mathbb{E}\left[ \sum_{t=1}^T l_t^\top (p_t' - \textbf{1}^{i_\star}) \right] = \mathbb{E} \left[ \sum_{t=1}^T l_t^\top (\textbf{1}^{i_\star}_\epsilon - \textbf{1}^{i_\star}) \right] + \mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t' - \textbf{1}^{i_\star}_\epsilon) \right] \\
&= \underbrace{\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (\textbf{1}^{i_\star}_\epsilon - \textbf{1}^{i_\star}) \right]}_\textrm{(1)} + \underbrace{\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon) \right]}_\textrm{(2)} + \underbrace{\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t' - p_t) \right]}_\textrm{(3)}
\end{align*}
For term (1), we have:
\begin{align*}
\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (\textbf{1}^{i_\star}_\epsilon - \textbf{1}^{i_\star}) \right] &= \sum_{t=1}^T l_t^\top (\textbf{1}^{i_\star}_\epsilon - \textbf{1}^{i_\star}) \leq 2\epsilon \left\|\sum_{t=1}^T l_t \right\|_\infty = 2\epsilon S_\infty \label{eq:first_term}
\end{align*}
For term (2), we use the fact that $\mathbb{E}[\tilde{l}_t] = l_t$:
\begin{align*}
\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon) \right] &=\mathbb{E} \left[ \sum_{t=1}^T \tilde{l}_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon) \right]
\end{align*}
Since Algorithm \ref{alg:SF_MAB} runs log-barrier regularized AdaFTRL with the loss sequence $\tilde{l}_1,\dots, \tilde{l}_T$, we can bound the sum inside the expectation using Theorem \ref{thm:LogBarrierRegret} as $\sum_{t=1}^T \tilde{l}_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon) $:
\begin{align*}
\leq \log(\nicefrac{1}{\epsilon}) \left(1 + 2 \sup_t\|\tilde{l}_t\|_\infty \right) + 2\sup_t\|\tilde{l}_t\|_\infty + \sqrt{n \sum_{t=1}^T p_t ^\top \tilde{l}_t^2} \left( \log(\nicefrac{1}{\epsilon})+ 1 \right) \tag{$*$}
\end{align*}
Consider the term $\sup_t\|\tilde{l}_t\|_\infty$:
\begin{align*}
\sup_t\|\tilde{l}_t\|_\infty &= \sup_t \frac{|l_t(i_t)|}{p'_t(i_t)} = \sup_t \frac{|l_t(i_t)|}{(1-\gamma_{t-1})p_t(i_t) + \gamma_{t-1}/n}\leq n \sup_t \frac{|l_t(i_t)|}{\gamma_{t-1}}
\end{align*}
Since $\gamma_t$ is a positive, non-increasing sequence:
\begin{align*}
\sup_t\|\tilde{l}_t\|_\infty &\leq n \frac{\sup_t |l_t(i_t)|}{\gamma_{T}} \leq \frac{nL_\infty}{\gamma_T}
\end{align*}
Finally, consider the term $ p_t ^\top \tilde{l}_t^2$:
\begin{align*}
p_t ^\top \tilde{l}_t^2 & = p_t(i_t) \frac{l_t(i_t)^2}{p'_t(i_t)^2} = p_t(i_t) \frac{l_t(i_t)^2}{((1-\gamma_{t-1})p_t(i_t)+\frac{\gamma_{t-1}}{n})p'_t(i_t)}
\leq \frac{l_t(i_t)^2}{(1-\gamma_{t-1})p'_t(i_t)}
\end{align*}
Since $0\leq \gamma_{t-1}\leq 1/2$, we have $1\leq (1-\gamma_{t-1})^{-1} \leq 2$. Thus:
\begin{align*}
p_t ^\top \tilde{l}_t^2 &\leq 2\frac{l_t(i_t)^2}{p'_t(i_t)}
\end{align*}
Substituting these bounds in the regret inequality $(*)$, we have $\sum_{t=1}^T \tilde{l}_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon) $:
$$\leq \log(\nicefrac{1}{\epsilon}) + \sqrt{2n \sum_{t=1}^T \frac{l_t(i_t)^2}{p'_t(i_t)}} \left( \log(\nicefrac{1}{\epsilon}) + 1\right) + \frac{2n L_\infty}{\gamma_T} \left( \log(\nicefrac{1}{\epsilon}) + 1\right)$$
Applying expectation, we have $\mathbb{E}\left[ \sum_{t=1}^T \tilde{l}_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon)\right]$:
\begin{align*}
\leq \log(\nicefrac{1}{\epsilon}) + \mathbb{E} \left[ \sqrt{2n \sum_{t=1}^T \frac{l_t(i_t)^2}{p'_t(i_t)}}\right] \left( \log(\nicefrac{1}{\epsilon}) + 1\right) + 2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right) \mathbb{E} \left[ \frac{1}{\gamma_T}\right]
\end{align*}
For the expectation in the second term, we apply Jensen's inequality:
\begin{align*}
\mathbb{E} \left[ \sqrt{2n \sum_{t=1}^T \frac{l_t(i_t)^2}{p'_t(i_t)}}\right] &\leq \sqrt{2n \mathbb{E} \sum_{t=1}^T \left[\frac{l_t(i_t)^2}{p'_t(i_t)}\right]} = \sqrt{2n \sum_{t=1}^T \sum_{i=1}^n l_t(i)^2} = \sqrt{2nL_2}
\end{align*}
Thus term (2) can be bounded as $\mathbb{E}\left[ \sum_{t=1}^T l_t^\top (p_t - \textbf{1}^{i_\star}_\epsilon)\right]$:
\begin{align*}
&\leq \log(\nicefrac{1}{\epsilon}) + \sqrt{2nL_2} \left( \log(\nicefrac{1}{\epsilon}) + 1\right) + 2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right) \mathbb{E} \left[ \frac{1}{\gamma_T}\right]
\end{align*}
\subsection{Non-Adaptive Exploration}
First, we present a simple way to bound term (3):
\begin{align*}
\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t' - p_t) \right] &= \mathbb{E} \left[ \sum_{t=1}^T l_t^\top ((1-\gamma_{t-1})p_t + \gamma_{t-1}/n - p_t) \right] = \mathbb{E} \left[ \sum_{t=1}^T \gamma_{t-1} l_t^\top (1/n - p_t) \right]\\
&\leq \mathbb{E} \left[ 2 \sum_{t=1}^T \gamma_{t-1} \|l_t\|_\infty \right] \leq 2 L_\infty \mathbb{E} \left[ \sum_{t=1}^T \gamma_{t-1} \right]
\end{align*}
Combining the upper-bounds for term (1), (2) and (3), we have $\mathbb{E}\left[ \sum_{t=1}^T l_t(i_t) - l_t(i^\star) \right]$:
\begin{align*}
\leq 2\epsilon S_\infty + \log(\nicefrac{1}{\epsilon}) + \sqrt{2nL_2} \left( \log(\nicefrac{1}{\epsilon}) + 1\right) + 2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right) \mathbb{E} \left[ \frac{1}{\gamma_T}\right] + 2 L_\infty \mathbb{E} \left[ \sum_{t=1}^T \gamma_{t-1} \right]
\end{align*}
Pick $\epsilon = (1+S_\infty)^{-1}$ and the exploration rate $\gamma_t = \min(1/2,\sqrt{n/t})$. If $T \geq 4n$, the regret of Algorithm \ref{alg:SF_MAB} with non-adaptive exploration is bounded by:
\begin{align*}
&\leq 2 + \log(1+S_\infty) + \sqrt{2nL_2} (1 + \log(1+S_\infty)) + 2L_\infty \sqrt{nT} (2 + \log(1+S_\infty))\\
&\leq \left(2 + \log(1+S_\infty)\right) \left(1 + \sqrt{2nL_2} + 2L_\infty\sqrt{nT}\right)\\
&=\tilde{\mathcal{O}}( \sqrt{nL_2} + L_\infty\sqrt{nT} )
\end{align*}
\subsection{Adaptive Exploration}
An alternate way to bound term (3) is:
\begin{align*}
\mathbb{E} \left[ \sum_{t=1}^T l_t^\top (p_t' - p_t) \right] &= \mathbb{E} \left[ \sum_{t=1}^T \tilde{l}_t^\top (p_t' - p_t) \right] = \mathbb{E} \left[ \sum_{t=1}^T \gamma_{t-1} \frac{l_t(i_t)}{p'_t(i_t)} (1/n - p_t(i_t)) \right]\\
&\leq \mathbb{E} \left[ \sum_{t=1}^T \gamma_{t-1}\frac{|l_t(i_t)|}{p'_t(i_t)} \right]
\end{align*}
Combining the upper-bounds for term (1), (2) and (3), we have $\mathbb{E}\left[ \sum_{t=1}^T l_t(i_t) - l_t(i^\star) \right] $:
\begin{align*}
& \leq 2\epsilon S_\infty + \log(\nicefrac{1}{\epsilon}) + \sqrt{2nL_2} \left( \log(\nicefrac{1}{\epsilon}) + 1\right) + \mathbb{E} \left[\frac{2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right)}{\gamma_T} + \sum_{t=1}^T \gamma_{t-1}\frac{|l_t(i_t)|}{p'_t(i_t)} \right]
\end{align*}
Consider the expression inside the expectation. Let
$$\Gamma_t(\gamma) = \frac{ \gamma |l_t(i_t)|}{(1-\gamma)p_t(i_t) + \gamma/n} $$
When $0\leq \gamma \leq 1/2$, we have $0 \leq \Gamma_t(\gamma) \leq n|l_t(i_t)| \leq n L_\infty$. Moreover, we have $$\frac{\Gamma_t(\gamma_{t-1})}{\gamma_{t-1}} = \frac{|l_t(i_t)|}{p'_t(i_t)}$$
Pick $$\gamma_{t} = \frac{n}{2n + \sum_{s=1}^t \Gamma_s(\gamma_{s-1})}$$
We satisfy $0\leq \gamma_t\leq 1/2$. Applying Lemma \ref{lem:summation_lemma}, we have:
\begin{align*}
&\mathbb{E} \left[ \frac{2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right)}{\gamma_T} + \sum_{t=1}^T \gamma_{t-1}\frac{|l_t(i_t)|}{p'_t(i_t)} \right] = \mathbb{E} \left[\frac{2n L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right)}{\gamma_T} + \sum_{t=1}^T \Gamma_t(\gamma_{t-1}) \right] \\
&\leq 2n L_\infty(2 + L_\infty) \left(\log(\nicefrac{1}{\epsilon}) + 1\right) + nL_\infty + \left( 2 L_\infty \left(\log(\nicefrac{1}{\epsilon}) + 1\right) +1 \right) \mathbb{E} \left[\sqrt{2n \sum_{t=1}^T \frac{|l_t(i_t)|}{p'_t(i_t)}}\right]
\end{align*}
For the expectation above, we apply Jensen's inequality:
\begin{align*}
\mathbb{E} \left[ \sqrt{2n \sum_{t=1}^T \frac{|l_t(i_t)|}{p'_t(i_t)}}\right] &\leq \sqrt{2n \mathbb{E} \sum_{t=1}^T \left[\frac{|l_t(i_t)|}{p'_t(i_t)}\right]} = \sqrt{2 n\sum_{t=1}^T \sum_{i=1}^n |l_t(i)|} = \sqrt{2nL_1}
\end{align*}
Pick $\epsilon = (1+S_\infty)^{-1}$. The regret of Algorithm \ref{alg:SF_MAB} with adaptive exploration is bounded by:
\begin{align*}
& \leq 2 + \log(1+S_\infty) + \sqrt{2nL_2} \left( \log(1+S_\infty) + 1\right) \\
& \quad + 2n L_\infty(2 + L_\infty) \left(\log(1+S_\infty) + 1\right) + nL_\infty + \left( 2 L_\infty \left(\log(1+S_\infty) + 1\right) +1 \right) \sqrt{2nL_1}\\
&= \tilde{\mathcal{O}}(\sqrt{nL_2} + L_\infty\sqrt{nL_1})
\end{align*}
\end{proof}
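As a side remark, the adaptive exploration rate used in the second part of the proof is cheap to maintain online: only the running sum of the quantities $\Gamma_t(\gamma_{t-1})$ is required. The Python sketch below merely illustrates this bookkeeping; the sampling step is deliberately simplified (the FTRL iterate is replaced by the uniform distribution and the losses are drawn at random), so it exercises the $\gamma_t$ update only and is not an implementation of Algorithm \ref{alg:SF_MAB}.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def play_round(gamma, n):
    """Stand-in for one round (hypothetical): mix with uniform exploration,
    sample an arm, return the observed loss and the probability actually used."""
    p = (1.0 - gamma) * np.full(n, 1.0 / n) + gamma / n  # FTRL iterate ~ uniform here
    i = rng.choice(n, p=p)
    loss = rng.uniform(-1.0, 1.0)                        # arbitrary bounded loss l_t(i_t)
    return loss, p[i]

n, T = 10, 1000
gamma, cum = 0.5, 0.0                        # gamma_0 = n / (2n) = 1/2
for t in range(1, T + 1):
    loss, p_used = play_round(gamma, n)
    Gamma_t = gamma * abs(loss) / p_used     # Gamma_t(gamma_{t-1}) of the proof
    cum += Gamma_t
    gamma = n / (2.0 * n + cum)              # gamma_t stays in (0, 1/2]
\end{verbatim}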
\acks{We thank a bunch of people.}
\section{Introduction}
The $\beta$ Cephei stars are a group of pulsating main sequence
variables with early B spectral types. They oscillate in radial and
nonradial pressure modes with typical periods of several hours. Stankov
\& Handler (\cite{SH05}) provide an overview of those stars. Pigulski \&
Pojma{\'n}ski (\cite{PP08}) doubled the number of class members to about
200. As these are young massive stars (and are thus progenitors of type
II supernovae), they naturally occur in the galactic plane, in open
clusters, and stellar associations. In general, this statement also
holds for the less massive slowly pulsating B (SPB) stars. They
neighbour the $\beta$ Cephei stars in the HR diagram, but they are
cooler and less luminous, and they pulsate in gravity modes with periods
of a few days (see, e.g., De Cat \cite{PDC07}).
The physical origin of pulsation driving of the $\beta$ Cephei and SPB
stars is well established (Moskalik \& Dziembowski \cite{MD92}), and is
caused by the huge number of transitions inside the thin structure of
the electron shells in excited ions of the iron-group elements (Rogers
\& Iglesias \cite{RI94}): the $\kappa$ mechanism. Obviously, the power
of pulsational driving will strongly depend on the abundance of
iron-group elements and on their opacities in the driving zone. Credible
pulsational models must reflect the conditions inside the real stars,
reproducing all observables such as the extents of the $\beta$ Cephei
and SPB instability strips and their metallicity dependence. These
depend on the input data used in the models, which can therefore be
tested.
The metallicities of stellar aggregates can be determined in several
fundamental ways. The incidence of core hydrogen-burning B-type
pulsators among open cluster stars can then yield important constraints
on what abundance of metals (and, by extrapolation, amount of iron group
elements) is required to drive their oscillations. The aim of the
present and subsequent works is to determine observationally the
incidence of $\beta$ Cephei and SPB stars in a number of open clusters
exactly for this purpose.
Ground-based measurements of stellar variability are hampered by the
presence of the Earth's contaminated atmosphere. Scintillation and
variable transparency of the night sky limit the precision of stellar
brightness measurements. Therefore, the level at which the presence of
oscillations in a given star can be detected is finite. Although there are
techniques that optimize the precision of ground-based photometric
measurements (again, often taking advantage of stellar clusters),
observations from space are superior given a large enough telescope.
The {\it Kepler} mission, the most powerful instrument for measuring
stellar brightness variations to date (Koch et al. \cite{KBB10}), aims
at detecting transits of extrasolar planets in the habitable zone around
their host stars. As the only inhabited planet known so far revolves
around a middle-aged main sequence G star, the sample of target stars of
the {\it Kepler} mission was chosen to observe as many similar stars as
possible to the highest precision. Stars at such an age do not dominate
the population at low galactic latitudes, so the {\it Kepler} field was
chosen to be some 10\degr off the galactic plane (Batalha et al.
\cite{BBK10} and references therein). $\beta$~Cephei and SPB stars with
magnitudes of $V>7.5$ formed in the galactic plane would hardly reach
these galactic latitudes within their main sequence life times and are
thus expected to be unusual. Therefore, characterizing {\it Kepler}
$\beta$~Cephei and SPB star candidates is important.
The present paper reports the results of a study of bona fide and
candidate field and open cluster $\beta$ Cephei stars in the Str\"omgren
photometric system: 168 target stars were measured, 107 of them being
open cluster stars, 17 known cluster and field $\beta$ Cephei stars, and
42 Kepler targets. To transform the data into the standard system, 117
Str\"omgren photometric standards were measured as well. The outcome of
this study will be used in subsequent papers.
\section{Observations}
\subsection {Measurements and reductions}
The measurements were obtained with the 2.1-m telescope at McDonald
Observatory in Texas. Three observing runs were carried out in October
2008, March 2009, and August/September 2010. The first two observing
runs were dedicated to stars in open clusters and known field
$\beta$~Cephei stars, whereas the third run focused on Kepler targets
and on supplementary $H_\beta$ measurements of previous targets missing
this information.
In all runs, a two-channel photoelectric photometer was used, but only
channel 1 was employed. The same filter set, the same photomultiplier tube,
and the same operating voltage were used during all observations. The
only variables in the observational setup were the reflectivities of the
telescope's mirrors: the primary mirror was not cleaned or aluminized
within the time span of the observing runs, but dust on the secondary
mirror is blown off on a monthly basis.
Photometric apertures of 14.5 and $29\tt ''$ were used in most cases,
depending on the brightness of the target and sky background as well as
on crowding of the field. In a few cases of extreme crowding or of a
close companion, a $11\tt ''$ aperture and extremely careful (offset)
guiding had to be used. As the photometer's filter wheel can only carry
four filters at once, the $uvby$ measurements had to be taken separately
from the H$_{\beta}$ data. No H$_{\beta}$ measurements were taken for
open cluster targets that were immediately identified as non-OB stars
from their Str\"omgren "bracket quantities" (see Sect.\ 4 for details).
As the measurements aimed at obtaining estimates of the effective
temperatures and luminosities of as many targets as possible rather than
establishing new standard stars, most stars were observed only once. A
few exceptions were made for standard and target stars that were used
for purposes of determining extinction coefficients, for target stars
that were deemed the most interesting astrophysically, or where a
previous measurement appeared suspicious.
\subsection {Selection of standard stars}
A set of standard stars was selected to span the whole parameter range
of the targets in terms of $(b-y)$, $m_1$, $c_1$, $\beta$, and $E(b-y)$.
It was observed for transforming the measurements into the standard
system. For reasons of homogeneity in the colour transformations, the
majority of the adopted standard Str\"omgren indices were taken from the
work of a single group of researchers. The standard stars were chosen
from the papers on NGC 1502 (Crawford \cite{Cr94}), IC~4665 (Crawford \&
Barnes \cite{CB72}), NGC 2169 (Perry, Lee, \& Barnes \cite{PLB78}), NGC
6910 and NGC 6913 (Crawford, Barnes, \& Hill \cite{CBH77}), O-type stars
(Crawford \cite{Cr75}), h and $\chi$ Per (Crawford, Glaspey, \& Perry
\cite{CGP70}), Cep OB3 (Crawford \& Barnes \cite{CB70}), Lac OB1
(Crawford \& Warren \cite{CW76}), and on three field stars (Crawford et
al. \cite{CBG72}, Knude \cite{JK77}).
\subsection {Data reduction}
The data were reduced in a standard way. The instrumental system's
deadtime of 33 ns was determined by measuring the twilight sky, and then
was used to correct for coincidence losses. Sky background subtraction
was done next, followed by nightly extinction corrections determined
from measurements of extinction stars that also served as standards. The
applied extinction coefficients varied between $0.14 - 0.18$ in $y$,
$0.054 - 0.069$ in $(b-y)$, $0.050 - 0.064$ in $m_1$, and $0.126 -
0.157$ in $c_1$.
\section{Transformation equations}
The equation for $(b-y)$ only has two parameters, so we only needed to
calculate a linear fit to the data. However, as it turned out, the
photometric zeropoints of the three individual observing runs and seasons
were different (most likely as a consequence of the large temporal gaps
between the observing runs) and had to be determined separately. After
adjustment of the zeropoints, the slope of the transformation was
re-determined, and the procedure repeated until convergence. The final
transformation equation was
\begin{equation}
(b-y)=1.0563 (b-y)_N + zpt(b-y),
\end{equation}
where the subscript $N$ denotes the colour in the natural system, and
$zpt(b-y)$ is the zeropoint of the transformation equation, listed in
Table 1. The rms residual scatter of a single standard star measurement in
$(b-y)$ is an unsatisfactory 13.4 mmag.
\begin{table}
\caption{$(b-y)$ colour transformation zeropoints}
\begin{tabular}{llc}
\hline
Observing run & Standard stars & $zpt (b-y)$\\
\hline
Autumn 2008 & O stars & 1.3497 $\pm$ 0.0026\\
Autumn 2008 & Cep OB3 & 1.3514 $\pm$ 0.0034\\
Autumn 2008 & h \& $\chi$ Per & 1.3413 $\pm$ 0.0025\\
Autumn 2008 & NGC 6910/13 & 1.3566 $\pm$ 0.0021\\
\hline
Autumn 2008 & above combined & 1.3485 $\pm$ 0.0014\\
Spring 2009 & NGC 1502, 2169, 2244 & 1.3916 $\pm$ 0.0037\\
Autumn 2010 & Lac OB1, field & 1.3302 $\pm$ 0.0023\\
\hline
\end{tabular}
\end{table}
However, this high residual scatter does not mean that the present
measurements are imprecise. Some standard stars were measured more than
once, which indicates the precision of the data. The average rms scatter
of the $(b-y)$ values of standard stars that were measured three times
is only 2.0 mmag.
It is worth noting that the $(b-y)$ transformation zeropoints are
different by up to $6\sigma$ when standard stars from different
publications are considered (upper part of Table~1). This comparison
only uses data from the most fruitful observing run in Autumn 2008,
where several of the different groups of standard stars were measured in
the same nights. The total 13.4 mmag residual scatter in $(b-y)$ may
therefore be due to a combination of underestimation of the precision of
the data and of imperfections in the standard values adopted.
Because accuracy is more important than precision (see, e.g., Bevington
\cite{B69} for the distinction between these two terms) in the present
case, the same transformation slope was used for all $(b-y)$
measurements, but seasonal (lower part of Table 1) zeropoints were
applied. In other words, it is assumed that the changes in the seasonal
zeropoints of the colour equations are dominated by variations in the
instrumental system.
The remaining transformation equations are to be determined by a
(simultaneous) three-parameter fit to the measurements of the standard
stars, since a colour correction by means of the $(b-y)$ data is necessary.
The equation for $m_1$ derived by simultaneously fitting three
parameters appears biased owing to correlations between the $m_1$ and $(b-y)$
indices caused by reddening: $E(m_1)=-0.32E(b-y)$. The range spanned by the
$m_1$ values of the standard stars is 0.37 mag, the range in $(b-y)$ is
0.89 mag, 2.4 times larger.
Therefore, the measured and standard $m_1$ values were linearly
fitted first, and only then were the $(b-y)$ correction term and the
zeropoint fixed. This resulted in the following transformation equation
\begin{equation}
m_1=1.0195 m_{1,N}-0.0162 (b-y)_N-0.8469.
\end{equation}
Statistically insignificant variations occurred in the zeropoint when
different ensembles of comparison stars were considered. The rms residual
of a single standard $m_1$ measurement is 12.1 mmag.
Concerning $c_1$, correlations between the coefficients in the
transformation equation due to reddening are also to be expected, but
are less severe than in $m_1$ because the $c_1$ values have a much
wider spread than $m_1$ and because $c_1$ is less affected by reddening
than $m_1$. A simultaneous three-parameter linear fit yielded
\begin{equation}
c_1=1.0025 c_{1,N}+0.1018 (b-y)_N-0.5484.
\end{equation}
The seasonal zeropoints were roughly, but not fully satisfactorily,
consistent. Again, as accuracy is more important than precision, a
single zeropoint was adopted for all data sets. The residual scatter of
the standard star measurements transformed in this way is 15.6 mmag per
single point.
No difficulties with varying zeropoints were encountered when determining
the transformation equation for the $\beta$ value. This is no surprise as
it is a differential measurement at the same effective wavelength. The
transformation equation for $\beta$ is
\begin{equation}
\beta=0.8302\beta_N-0.0439(b-y)_N+0.9532,
\end{equation}
leaving a residual scatter of $11.4$~mmag per single measurement.
Finally, the transformation equations for the $V$ magnitude require
nightly zeropoints (Table 2) to take variable sky transparency into
account. As some papers reporting standard Str\"omgren colour indices do
not quote $V$ magnitudes, these values were supplemented by literature
data as supplied by the SIMBAD data base and cross-checked with the
original references. The final transformation was
\begin{table}
\caption{Nightly $V$ magnitude transformation zeropoints}
\begin{tabular}{lc}
\hline
Civil date & $zpt(y)$\\
\hline
07 Oct 2008 & $20.022 \pm 0.004$\\
08 Oct 2008 & $20.036 \pm 0.006$\\
09 Oct 2008 & $20.001 \pm 0.007$\\
16 Oct 2008 & $20.003 \pm 0.004$\\
04 Mar 2009 & $20.102 \pm 0.005$\\
01 Oct 2010 & $19.739 \pm 0.005$\\
02 Oct 2010 & $19.703 \pm 0.008$\\
\hline
\end{tabular}
\end{table}
\begin{equation}
V=0.9961y_N+0.0425(b-y)_N+zpt(y),
\end{equation}
resulting in a residual scatter of $22.2$~mmag per single measurement.
Observations yielding statistically significant outliers in each of the
transformation equations were excluded from the determination of its
parameters and are marked as such in the data tables that follow. It
cannot be judged whether this indicates a problem with the present
measurements or with the standard values used.
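For convenience, the five transformation equations can be collected into a single routine. The following Python sketch is not part of the original reduction pipeline (the function name and interface are ours); it simply applies Eqs.\ (1)-(5) to natural-system measurements, given the appropriate seasonal $(b-y)$ zeropoint from Table 1 and nightly $V$ zeropoint from Table 2:
\begin{verbatim}
def to_standard(by_N, m1_N, c1_N, beta_N, y_N, zpt_by, zpt_y):
    """Transform natural-system uvby-beta measurements into the standard system."""
    by   = 1.0563 * by_N + zpt_by                      # Eq. (1)
    m1   = 1.0195 * m1_N - 0.0162 * by_N - 0.8469      # Eq. (2)
    c1   = 1.0025 * c1_N + 0.1018 * by_N - 0.5484      # Eq. (3)
    beta = 0.8302 * beta_N - 0.0439 * by_N + 0.9532    # Eq. (4)
    V    = 0.9961 * y_N + 0.0425 * by_N + zpt_y        # Eq. (5)
    return by, m1, c1, beta, V
\end{verbatim}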
\section{Results}
With the transformation equations in place, the colour indices in the
standard system can be determined for all standard and target stars. The
results are listed in Tables 3 - 6. Table 3 contains the present
measurements of the standard stars themselves, transformed into the
standard system. Table 4 reports the $uvby\beta$ photometry for open
cluster target stars not previously known to pulsate. Table 5 lists the
Str\"omgren-Crawford photometry for known $\beta$~Cephei stars plus a
few other targets. Finally, Table 6 contains the results for stars in
the {\it Kepler} field.
In the following, stars in open clusters are always designated with the
cluster name followed by their identification in the WEBDA\footnote{\tt
http://www.univie.ac.at/webda/} data base. Measurements of standard
stars that were rejected for computing the transformation equations (or
where no $V$ magnitudes or $H_{\beta}$ values were available in the
literature) were treated in the same way as target star observations and
are marked with asterisks in Table 3.
Some of the stars used as standards have been shown to be intrinsically
variable in the literature. However, standard stars must have
temperatures and luminosities similar to the targets that would ideally
be pulsating variables. Therefore the use of variable standard stars of
low amplitude cannot be avoided. Intrinsic variability of standard stars
not exceeding the accuracy of the present data is therefore tolerable
and measurements that are significantly off the limits would be rejected
anyway.
\subsection{Comments on individual stars}
BD+36 4867 was mistakenly observed when intending to measure the
$uvby\beta$ standard star BD+36 4868. This error came from confusion of
the coordinates of the two stars in the SIMBAD data base at the time of
the measurements. The Str\"omgren indices of BD+36 4867 are listed for
completeness in Table 6, indicating a mid G-type star.
The published $V$ magnitudes of NGC 1893 196 vary between 12.30 and
12.79. This unusually wide range raises the suspicion of stellar
variability. Table 4 lists $V=12.637$ and $\beta=2.441$. The latter
value indicates strong hydrogen-line emission, as demonstrated
spectroscopically (Marco et al. \cite{MBN01}).
\addtocounter{table}{4}
\section{Analysis and discussion}
\subsection{Validity of the transformation equations}
The ranges in which the transformation equations are valid are examined in
Fig.\ 1. It shows the distributions of the standard and target star
measurements with respect to the different $uvby\beta$ colour indices and
reddening. The routines by Napiwotzki, Sch\"onberner, \& Wenske
(\cite{NSW93}) were used to derive the latter.
\begin{figure}
\includegraphics[width=80mm,viewport=-10 00 255 728]{16507fg1.ps}
\caption{Distributions of the different Str\"omgren-Crawford colour
indices amongst standard (thick histogram bars) and target (thin
histogram bars) stars.}
\end{figure}
The $(b-y)$ values of all but one target star (Roslund 2 13, a very red
object) are contained within the range spanned by the standard stars.
The same comment is true for the $c_1$ parameter and reddening $E(b-y)$.
Sixteen (i.e.\ 10\%) of the targets have more positive $m_1$ values than
any standard star. These are stars of later spectral types than A0 which
are not the prime interest of this work.
As far as $H_\beta$ is concerned, five stars with values below 2.55 were
observed, including two (supposed) standard and three target stars. Both
standard stars were rejected after determining the transformation
equations due to high residual deviations. It is suspected that all five
of these stars are Be stars. The hydrogen line emission of such stars is
often variable (e.g., McSwain, Huang, \& Gies \cite{MHG09}) which
explains the high residuals and makes the tabulated values unreliable.
They are listed for completeness only.
Considering the distribution in $E(b-y)$, about two thirds of the stars
with the smallest reddening are among the {\it Kepler} targets: the
satellite's field of view deliberately excludes the central galactic
plane. Two of the remaining targets are $\beta$ Cephei stars of rather
high galactic latitude, and the remainder are cool main sequence stars
in the foreground of some of the target open clusters. In Tables 4 - 6
the colour indices that are outside the range of those spanned by the
standard stars are marked with colons and should be used with caution.
\subsection{Are the present data on the standard system?}
Before inferring physical parameters of the targets, it must be made sure
that the data are commensurate with the standard system. It is a subtle
process to obtain accurate standard photometry of reddened early-type
stars, see, e.g., Crawford (\cite{Cr99}) for a discussion.
One test is to compare published $(U-B)$ colours with $(u-b)$ values
from Str\"omgren indices (see Crawford \cite{Cr94}) and to compare the
resulting relation with the one defined by standard stars. This is done
in Fig.\ 2, using the results for target stars with existing UBV
photometry. The $(U-B)$ values for the target stars were taken from the
General Catalogue of Photometric Data (Mermilliod, Mermilliod, \& Hauck
\cite{MMH97}).
For easier visual inspection, the slope of the $(U-B)$ vs.\ $(u-b)$
relation was removed by a linear fit. The residuals are compared with
those of the standard values for reddened O-type stars (Crawford
\cite{Cr75}) and for bright stars earlier than B5 (Crawford, Barnes, \&
Golson \cite{CBG71}), which are on the average considerably less
reddened than the O stars. For better illustration, we only show a fit
to the relations defined by the standard stars for comparison with the
data of the target stars.
\clearpage
\begin{figure}
\includegraphics[width=80mm,viewport=00 00 270 255]{16507fg2.ps}
\caption{Comparison of the present measurements and published Johnson
photometry. Circles are open cluster targets not yet known to pulsate,
diamonds are known $\beta$~Cephei stars with new Str\"omgren colour
indices, and star symbols are early-type targets in the {\it Kepler}
field. The dotted line is the relation defined by unreddened B stars,
whereas the full line is the relation inferred for reddened OB stars.
See text for more information.}
\end{figure}
The fits for the O and B-type stars in Fig.\ 2 are somewhat different.
However, the relation for the more strongly reddened target stars is not
systematically different from the one defined by the reddened O-type
standards, and the relation for the less reddened targets shows no
systematic offset from the one defined by the mildly reddened B-type
standards. The present $uvby\beta$ photometry is therefore on the standard
system.
\subsection{Distinguishing OB stars from cooler ones}
OB stars can be separated from objects of later spectral type by using
the reddening independent Str\"omgren ``bracket quantities"
$[m_1]=m_1+0.32(b-y)$ and $[c_1]=c_1-0.2(b-y)$. As a rule of thumb,
stars with $[m_1]<0.14$ are B type stars and stars with $[m_1]>0.22$ are
of spectral type A3 and later. Astrophysically, this separation is
caused by the changing curvature of the stellar energy distribution
depending on temperature. Figure 3 shows the distribution of the target
and standard stars in an $[m_1],[c_1]$ diagram.
\begin{figure}
\includegraphics[width=80mm,viewport=00 00 270 270]{16507fg3.ps}
\caption {Plot of the Str\"omgren "bracket quantities". These
reddening-free indices allow an easy separation between OB and cooler
stars; all objects with $[m_1]\simgt0.14$ are non-OB stars. One very cool
star lies outside the borders of this diagram. Filled circles are for
standard stars, open circles for the target stars.}
\end{figure}
All but one standard star were chosen to be of no later type than early
A: 83\% of the targets are in the same domain. Of the 30 target stars
that cannot be OB stars, twelve have been associated with the open
cluster Berkeley 4, and should therefore be foreground stars. Seven
non-OB stars are {\it Kepler} targets, and six were mentioned in
connection with NGC 7380, therefore also not being cluster members.
\subsection{Effective temperatures and luminosities of the target stars}
The effective temperatures and absolute magnitudes of the target stars
can be determined with the routines by Napiwotzki et al. (\cite{NSW93},
see their paper for accurate descriptions of the calibrations employed).
Bolometric corrections by Flower (\cite{F96}) and a bolometric magnitude
of $M_{\rm bol}=4.74$ for the Sun (Livingston \cite{L00}) were used to
derive stellar luminosities. Figure 4 shows the targets' locations in a
$\log T_{\rm eff} - \log L$ diagram, in comparison with theoretical
pulsational instability strips (Zdravkov \& Pamyatnykh \cite{ZP08}).
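The conversion from absolute magnitude to luminosity follows the standard relations $M_{\rm bol} = M_V + BC$ and $\log (L/L_{\sun}) = (M_{\rm bol,\sun} - M_{\rm bol})/2.5$; this explicit form is our addition, as the text only quotes the adopted constants. In code:
\begin{verbatim}
M_BOL_SUN = 4.74   # solar bolometric magnitude (Livingston 2000)

def log_luminosity(M_V, BC):
    """log10(L/L_sun) from the absolute visual magnitude and the
    bolometric correction, via M_bol = M_V + BC."""
    return (M_BOL_SUN - (M_V + BC)) / 2.5
\end{verbatim}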
\begin{figure*}
\includegraphics[width=120mm,viewport=-110 00 373 483]{16507fg4.ps}
\caption{$\log T_{\rm eff} - \log L$ diagram with the positions of
the target stars, derived as explained in the text, indicated.
Circles are open cluster targets not yet known to pulsate, diamonds are
known $\beta$~Cephei stars with new Str\"omgren colour indices, and star
symbols are early-type targets in the {\it Kepler} field. Some model
evolutionary tracks are shown for comparison and are marked with the
corresponding masses. The slanted full line is the zero age main
sequence, the dotted line defines the theoretical SPB star instability
strip, and the dashed-dotted line is the theoretical $\beta$~Cephei star
instability strip.}
\end{figure*}
All targets previously known as $\beta$~Cephei stars are located within
the corresponding instability strip. The catalogue of Galactic
$\beta$~Cephei stars (Stankov \& Handler \cite{SH05}) only contains one
object with a mass above 17 $M_{\sun}$, which could be a Be star and hence
have an overestimated luminosity from $H_{\beta}$ photometry. In contrast,
the present, considerably smaller, sample contains three stars with
$17.5<M/M_{\sun}<21$. There is one {\it Kepler} target in the high mass
domain, which, however, appears to be a close binary with no pulsational
light variation (Balona et al. \cite{BPD11}).
Table 7 summarizes how many of our target stars lie within the
$\beta$~Cephei and SPB star instability strips, respectively, and how
many are located in either and may therefore show both types of
oscillations. Each of the open clusters observed contains potential
pulsators and is therefore worthy of a variability search. As expected,
the {\it Kepler} field contains only a few high-mass stars.
\begin{table}
\caption{Numbers of target stars within the $\beta$~Cephei or SPB
star instability strip, or both}
\begin{tabular}{lccc}
\hline
Field & in $\beta$ Cep strip & in SPB strip & in both\\
\hline
ASCC 130 & 3/7 & 4/7 & 0/7\\
Berkeley 4 & 16/32 & 9/32 & 6/32\\
NGC 637 & 6/6 & 0/6 & 0/6\\
NGC 1893 & 10/12 & 2/12 & 1/12\\
NGC 2244 & 3/9 & 4/9 & 3/9\\
NGC 7380 & 12/27 & 9/27 & 3/27\\
Roslund 2 & 7/10 & 3/10 & 2/10\\
Kepler field & 10/42 & 26/42 & 8/42\\
\hline
\end{tabular}
\end{table}
Two stars in Fig.\ 4 appear to be post main sequence objects. Berkeley 4
513 is also known as LS I +63 98 and has been classified as an OBe star
(Hardorp et al. \cite{HRS59}). The low $H_{\beta}$ value for the star
supports this interpretation. A similar comment applies to NGC 7380 4
that has been spectrally classified as B6Vne (Hoag \& Applequist
\cite{HA65}). The post main sequence evolutionary status of these two
stars may therefore just be apparent: the calibrations of $uvby\beta$
photometry are not applicable to emission line stars.
\section{Summary}
New $uvby\beta$ photometry was acquired for 168 open cluster and field
stars, and was transformed into the standard system by means of
measurements of 117 standard stars. The data were demonstrated to be on
the standard system, and the limits in which these photometric results
are valid were determined. Most target stars are indeed OB stars, and
each cluster contains several stars that are located in the pulsational
instability strips of main sequence B stars.
These measurements are required to determine the effective temperatures
and luminosities of the targets. Published $uvby\beta$ photometry of the
target clusters may now be tied into the standard system, allowing
investigations of the clusters themselves, in terms of (differential)
reddening, distance, etc. This is the foundation for several forthcoming
papers devoted to individual clusters, including searches for stellar
variability. Balona et al. (\cite{BPD11}) discuss the variability of the
{\it Kepler} targets in detail.
\begin{acknowledgements}
This research is supported by the Austrian Fonds zur F\"orderung der
wissenschaftlichen Forschung under grant P20526-N16. This research has
made use of the WEBDA database, operated at the Institute for Astronomy
of the University of Vienna.
\end{acknowledgements}
\section{Introduction}
\label{intro}
Phase transitions and critical phenomena of quantum spin systems currently attract a great deal of
interest \cite{sach99}. As usual, the quantum Heisenberg model is used as a basic generating model
which should be appropriate for investigating quantum properties of insulating magnetic materials
\cite{gat85,car86,kah93,gat99}. However, a rigorous proof known as the Mermin-Wagner theorem \cite{mer66} prohibits a spontaneous long-range order for the isotropic spin-1/2 Heisenberg model on the one- and two-dimensional lattices and hence, the spontaneous ordering might in principle appear either if a three-dimensional magnetic structure is considered \cite{dys76} or a non-zero magnetic anisotropy is involved in the studied model Hamiltonian \cite{fro77}. On the other hand, it is currently well established that obvious quantum manifestations usually arise from a mutual combination of several factors, especially, when the low-dimensional magnetic structure is combined with as low coordination number as possible and low quantum spin number. Apparently, these opposite trends make hard to find a long-range ordered system that simultaneously exhibits evident quantum effects. Investigation of quantum spin systems, which can exhibit a non-trivial criticality, thus remains among the most challenging tasks in the statistical and solid-state physics.
Over the last few decades, there has been increasing interest in the study of the effect of different
anisotropies (single-ion, Dzyaloshinskii-Moriya, exchange) on the critical behaviour of the spin-1
quantum Heisenberg ferromagnet. The main interest in studying this model system arises since Stanley
and Kaplan \cite{sta66,sta67} proved the existence of a phase transition in the two- and three-dimensional Heisenberg ferromagnets. In addition, the ferromagnetic quantum Heisenberg model
with the spin-1 has relevant connection with several nickel-based coordination compounds,
which provide excellent experimental realization of this model system \cite{def,djm}. Up to now,
the spin-1 quantum Heisenberg model has been explored within the standard mean-field approximation \cite{tag74,buz88}, random phase approximation \cite{mic77} or linked-cluster expansion \cite{pan93,pan95}. By making use of the pair approximation \cite{ury80,iwa97,lu06,sun06},
several further studies have been concerned with the critical behaviour of the anisotropic spin-1
XXZ Heisenberg ferromagnet with bilinear and biquadratic interactions \cite{ury80,iwa97}, the isotropic spin-1 Heisenberg ferromagnet with the bilinear and biquadratic interactions and the single-ion anisotropy \cite{lu06}, as well as, the anisotropic spin-1 XXZ Heisenberg ferromagnet with an antisymmetric Dzyaloshinskii-Moriya interaction \cite{sun06}.
To the best of our knowledge, the critical properties of the spin-1 XXZ Heisenberg ferromagnet
with the uniaxial single-ion anisotropy have not been dealt with in the literature yet. Therefore,
the primary goal of present work is to examine this model system which represents another eligible candidate for displaying an interesting criticality affected by quantum fluctuations.
The rest of this paper is organized as follows. In Section \ref{model}, we briefly describe
the model system and basic steps of the variational procedure which gives results equivalent to Oguchi's pair approximation \cite{ogu55}. Section \ref{result} deals with the most interesting numerical results obtained for the ground-state and finite-temperature phase diagrams. Magnetization dependences on the temperature, for several values of exchange and single-ion anisotropies, are also displayed in Section \ref{result}. Finally, some concluding remarks are drawn in Section \ref{conclusion}.
\section{Model and method}
\label{model}
Let us consider the Hamiltonian of the spin-1 quantum Heisenberg model:
\begin{eqnarray}
{\cal H} = - J \sum_{(i,j)}^{Nq/2} [\Delta (S_i^x S_j^x + S_i^y S_j^y) + S_i^z S_j^z]
- D \sum_{i=1}^{N} (S_i^z)^2 - H \sum_{i=1}^{N} S_i^z,
\label{ham}
\end{eqnarray}
where $S_i^{\alpha}$ ($\alpha=x,y,z$) denotes spatial components of the spin-1 operator at the lattice site $i$, the first summation runs over nearest-neighbour pairs on a lattice with a coordination
number $q$ and the other two summations are carried out over all $N$ lattice sites. The first
term in Hamiltonian (\ref{ham}) labels the ferromagnetic XXZ Heisenberg exchange interaction with the coupling constant $J>0$, $\Delta$ is the exchange anisotropy in this interaction, the parameter $D$ stands for the uniaxial single-ion anisotropy and the last term incorporates the effect of external magnetic field $H$.
The model system described by means of the Hamiltonian (\ref{ham}) will be treated within
the pair approximation formulated as a variational procedure based on the Gibbs-Bogoliubov
inequality \cite{bog47,fey55,bog62,fal70}:
\begin{eqnarray}
G \leq G_0 + \langle {\cal H} - {\cal H}_0 \rangle_0.
\label{gbf}
\end{eqnarray}
Above, $G$ is the Gibbs free energy of the system described by the Hamiltonian (\ref{ham}), $G_0$
is the Gibbs free energy of a simplified model system given by a trial Hamiltonian ${\cal H}_0$,
and $\langle \ldots \rangle_0$ indicates a canonical ensemble averaging performed within this
simplified model system. Notice that the choice of the trial Hamiltonian ${\cal H}_0$
is arbitrary, however, its form directly determines an accuracy of the obtained results.
If only single-site interaction terms are included in the trial Hamiltonian, i.e. single-spin
cluster terms are used as the trial Hamiltonian, then, one obtains results equivalent to the
mean-field approximation. Similarly, if a two-spin cluster Hamiltonian is chosen as the trial Hamiltonian, the obtained results will be equivalent to Oguchi's pair approximation, which
is superior to the mean-field approach.
In the present work, we shall employ the two-spin cluster approach for the considered model system
in order to obtain results equivalent to Oguchi's pair approximation \cite{ogu55}.
The two-spin cluster trial Hamiltonian can be written in this compact form:
\begin{eqnarray}
{\cal H}_0 &=& \sum_{k=1}^{N/2} {\cal H}_k, \label{trial1} \\
{\cal H}_k &=& - \lambda [\delta (S_{k1}^x S_{k2}^x + S_{k1}^y S_{k2}^y) + S_{k1}^z S_{k2}^z]
\nonumber\\ &&- \eta [(S_{k1}^z)^2 + (S_{k2}^z)^2] - \gamma (S_{k1}^z + S_{k2}^z),
\label{trial2}
\end{eqnarray}
where the first summation is carried out over $N/2$ spin pairs and $\lambda$, $\delta$, $\eta$, and
$\gamma$ denote variational parameters which have obvious physical meaning. It is noteworthy
that an explicit expression of the variational parameters can be obtained by minimizing the
right-hand-side of Eq. (\ref{gbf}), i.e. by obtaining the best estimate of the true Gibbs free
energy. Following the standard procedure one easily derives:
\begin{eqnarray}
\lambda = J, \quad \delta = \Delta, \quad \eta = D, \quad \gamma = (q - 1) J m_0 + H,
\label{para}
\end{eqnarray}
where $m_0 \equiv \langle S_i^z \rangle_0$ denotes the magnetization per site of the set of independent spin-1 dimers described by means of the Hamiltonian ${\cal H}_0$. By substituting
optimized values of the variational parameters (\ref{para}) into the inequality (\ref{gbf}) one consequently yields the best upper estimate of the true Gibbs free energy within the
pair-approximation method:
\begin{eqnarray}
G = \frac{N}{2} G_k + \frac{N J}{2} (q-1) m_0^2.
\label{gfe}
\end{eqnarray}
Above, $G_k$ labels the Gibbs free energy of the spin-1 Heisenberg dimer given by the Hamiltonian (\ref{trial2}). With the help of Eq. (\ref{gfe}), one can straightforwardly verify that the magnetization of the original model directly equals the magnetization of the corresponding
dimer model, i.e. $m \equiv \langle S_i^z \rangle = \langle S_i^z \rangle_0 \equiv m_0$.
Of course, similar relations can be established for another quantities, as well.
To complete solution of the model under investigation, it is further necessary to calculate
the Gibbs free energy, magnetization and other relevant quantities of the corresponding spin-1
dimer model given by the Hamiltonian (\ref{trial2}). Fortunately, an explicit form of all relevant quantities (Gibbs free energy, magnetization, correlation functions, quadrupolar moment) can be
found for this model system elsewhere \cite{str05}. Referring to these results, the solution
of the considered model system is formally completed. For the sake of brevity, we just merely
quote final expressions for the Gibbs free energy $G_k$ and the magnetization $m_0$, both entering
into Eq. (\ref{gfe}):
\begin{eqnarray}
G_k &=& - \beta^{-1} \ln Z_k, \label{fin1} \\
Z_k &=& 2 \exp[\beta (\lambda + 2 \eta)] \cosh(2 \beta \gamma)
+ 4 \exp(\beta \eta) \cosh(\beta \gamma) \cosh(\beta \lambda \delta) \nonumber\\
&+& \exp[\beta (2\eta - \lambda)] + 2 \exp[\beta (\eta- \lambda / 2)] \cosh(\beta W), \label{fin2} \\
m_0 &=& \frac{1}{Z_k} \{ 2 \exp[\beta(\lambda + 2 \eta)] \sinh(2 \beta \gamma)
+ 2 \exp(\beta \eta) \sinh(\beta \gamma) \cosh(\beta \lambda \delta) \}, \label{fin3}
\end{eqnarray}
where $W = \sqrt{(\eta- \lambda/2)^2 + 2 (\lambda \delta)^2}$, $\beta = 1/(k_{\rm B} T)$, $k_{\rm B}$
is the Boltzmann's constant, $T$ labels the absolute temperature, and the variational parameters $\lambda$, $\delta$, $\eta$, and $\gamma$ take their optimized values (\ref{para}). It is quite
evident that the magnetization $m_0$ must obey the self-consistent transcendental Eq. (\ref{fin3}) (recall that it enters into the variational parameter $\gamma$ given by Eq. (\ref{para})), which
might possibly have more than one solution. Accordingly, the stable solution for the magnetization
$m_0$ is the one that minimizes the overall Gibbs free energy (\ref{gfe}).
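In practice, the self-consistency condition can be solved by direct iteration of Eq.\ (\ref{fin3}). The following Python sketch is given for illustration only (it sets $J = k_{\rm B} = 1$ and $H = 0$ and uses the optimized variational parameters of Eq.\ (\ref{para})); when several fixed points exist, the stable one must still be selected by comparing the Gibbs free energies of Eq.\ (\ref{gfe}):
\begin{verbatim}
import numpy as np

def dimer_m0(m, T, D, Delta, q, J=1.0):
    """Dimer magnetization with the optimized variational parameters (H = 0)."""
    beta = 1.0 / T
    lam, delta, eta, gam = J, Delta, D, (q - 1) * J * m
    W = np.sqrt((eta - lam / 2.0) ** 2 + 2.0 * (lam * delta) ** 2)
    Z = (2.0 * np.exp(beta * (lam + 2.0 * eta)) * np.cosh(2.0 * beta * gam)
         + 4.0 * np.exp(beta * eta) * np.cosh(beta * gam) * np.cosh(beta * lam * delta)
         + np.exp(beta * (2.0 * eta - lam))
         + 2.0 * np.exp(beta * (eta - lam / 2.0)) * np.cosh(beta * W))
    num = (2.0 * np.exp(beta * (lam + 2.0 * eta)) * np.sinh(2.0 * beta * gam)
           + 2.0 * np.exp(beta * eta) * np.sinh(beta * gam) * np.cosh(beta * lam * delta))
    return num / Z

def magnetization(T, D, Delta, q, m=0.99, tol=1e-12, it_max=100000):
    """Solve m = m0(m) by direct iteration, starting near saturation so that
    the ordered branch is reached whenever it exists."""
    for _ in range(it_max):
        m_new = dimer_m0(m, T, D, Delta, q)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
\end{verbatim}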
In an absence of the external magnetic field ($H=0$), the magnetization tends gradually to zero
in the vicinity of a continuous (second-order) phase transition from the ordered phase ($m=m_0 \neq 0$)
towards the disordered phase ($m=m_0 = 0$). According to this, the magnetization (\ref{fin3})
close to the second-order phase transition can be expanded into the series:
\begin{eqnarray}
m = a m + b m^3 + c m^5 + \ldots.
\end{eqnarray}
Notice that the coefficients $a$, $b$, and $c$ depend on the temperature and all parameters involved
in the model Hamiltonian (\ref{ham}). Then, the power expansion of the magnetization $m$ can be straightforwardly used to locate second-order transition lines and tricritical points by following
the standard procedure described in several previous works \cite{ben85,kan86,tuc89,cha92,jia93}.
The critical temperatures corresponding to the second-order transitions must obey the condition
$a = 1$, $b < 0$, while the tricritical points can be located from the constraint $a = 1$, $b = 0$,
and $c < 0$. Finally, the critical temperatures of discontinuous (first-order) transitions must
be obtained from a comparison of Gibbs free energy of the lowest energy ordered phase
with the Gibbs free energy of the disordered phase.
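Numerically, the coefficient $a$ is simply the derivative of the right-hand side of Eq.\ (\ref{fin3}) with respect to $m$ taken at $m = 0$, so the critical temperature of a continuous transition can be located by a one-dimensional root search. The sketch below (again our illustration only; it reuses the function \texttt{dimer\_m0} defined above) reproduces, for instance, $t_c \simeq 3.92$ for $\Delta = 0$, $d = 0$, and $q = 6$:
\begin{verbatim}
from scipy.optimize import brentq

def coeff_a(T, D, Delta, q, eps=1e-7):
    """Linear coefficient a of the expansion m = a*m + b*m^3 + ..., i.e. the
    derivative of the dimer magnetization with respect to m at m = 0."""
    return (dimer_m0(eps, T, D, Delta, q)
            - dimer_m0(-eps, T, D, Delta, q)) / (2.0 * eps)

def t_second_order(D, Delta, q, t_lo=0.2, t_hi=30.0):
    """Second-order critical temperature from a(t_c) = 1 (valid where b < 0)."""
    return brentq(lambda T: coeff_a(T, D, Delta, q) - 1.0, t_lo, t_hi)

# Example: t_second_order(0.0, 0.0, 6) gives t_c close to 3.92,
# in agreement with the Ising-limit value quoted in the text.
\end{verbatim}
The tricritical point could be bracketed analogously by monitoring the sign of the coefficient $b$, estimated from higher-order finite differences of the same function.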
\section{Results and discussion}
\label{result}
Before proceeding to a discussion of the most interesting numerical results, let us first mention
that some particular results for the considered model system have already been reported by the present authors elsewhere \cite{del06}. Note that in the former preliminary report we used an alternative approach based on the original Oguchi pair approximation to study a particular case with the coordination number $q=4$ corresponding to the square and diamond lattices. In the present article, we shall focus our attention on another particular case with the coordination number $q=6$, which corresponds to the triangular and simple-cubic lattices. A brief comparison with the results obtained previously will be made in the conclusion.
Now, let us take a closer look at the ground-state behaviour. A detailed analysis of our numerical results shows that the ground-state phase boundary between the ferromagnetically ordered and
the disordered phases can be allocated with the aid of following condition:
\begin{eqnarray}
\frac{D_{\rm b}}{J} = - \frac{q}{2} + \frac{\Delta^2}{q+1}.
\label{gs}
\end{eqnarray}
It is quite obvious from the Eq. (\ref{gs}) that the ground-state phase boundary between
the ordered and disordered phases shifts to the more positive (weaker) single-ion anisotropies
when the parameter $\Delta$ is raised from zero. As a matter of fact, the order-disorder
transition moves towards the weaker single-ion anisotropies for any $\Delta \neq0 $ in comparison
with the result $D/J = - q/2$ attained in the semi-classical Ising limit ($\Delta = 0$).
This result is taken to mean that a destabilization of the ferromagnetic order originates from
raising quantum fluctuations, which work in conjunction with the single-ion anisotropy in the
view of destroying of the ferromagnetic long-range order at zero temperature. It is worthwhile
to remark that an appearance of the planar (XY) long-range ordering cannot be definitely ruled out
in the parameter space with predominant easy-plane interactions ($D<0$ and/or $\Delta>1$), where
we have found the disordered phase only. It should be stressed, however, that the present form of two-spin cluster mean-field treatment cannot resolve a presence of the ferromagnetic long-range
order inherent to XY-type models \cite{lie62,dys78,ken88,kub88} unlike the conventional Ising-like ferromagnetic long-range order with only one non-zero component of the spontaneous magnetization.
Next, let us turn our attention to the finite-temperature phase diagram, which is shown in Fig. 1
in the reduced units $d = D/J$ and $t = k_{\rm B}T/J$ for the simple-cubic (triangular) lattice and different values of the exchange anisotropy $\Delta$. In this figure, the solid and dashed lines represent second- and first-order phase transitions between the ferromagnetic and paramagnetic phases, respectively, while the black circles denote positions of the tricritical points. It is quite obvious from this figure that the considered model system exhibits the highest values of critical temperature
\begin{figure}[h]
\begin{center}
\includegraphics[width=100mm]{fig1.eps}
\end{center}
\vspace{-15mm}
\caption{The phase diagram of the spin-1 Heisenberg model for the simple-cubic (triangular) lattice
and several values of the exchange anisotropy $\Delta$. The solid and dashed lines represent second- and first-order phase transitions, respectively. The black circles denote positions of the tricritical points.}
\end{figure}
in the Ising limit ($\Delta = 0$). The gradual increase of the exchange anisotropy $\Delta$ reduces
the transition temperature as a result of raising quantum fluctuations. It is worthwhile to
remark that all the lines of second-order phase transitions, for arbitrary but finite $\Delta$,
have the same asymptotic behaviour in the limit $d \to \infty$. Actually, the critical temperature
of the continuous transitions does not depend on the exchange anisotropy in this limiting case and
it is equal to $t^* = 5.847$. Moreover, it should be also mentioned that our approach yields for
the Ising case without the single-ion anisotropy ($\Delta = 0$, $d = 0$) the critical temperature
$t_c = 3.922$, which is consistent with the result of other pair-approximation methods \cite{sun06}
and is simultaneously superior to the result $t_c = 4.0$ obtained from the standard mean-field approximation \cite{fit92}. In addition, it can be clearly seen from Fig. 1 that the transition temperature of the continuous phase transition monotonically decreases by decreasing the single-ion anisotropy $d$ until the tricritical point (TCP) is reached. Further decrease of the anisotropy parameter $d$ changes the second-order phase transitions towards the first-order ones. It should
be realized, nevertheless, that the first-order phase transitions occur merely in a narrow region
of single-ion anisotropies close to the boundary value (\ref{gs}) at which both completely ordered phases with $m = \pm 1$ have the identical energy (coexist together) with the disordered phase
with $m=0$ and one asymptotically reaches the first-order phase transition between them in the zero-temperature limit. An origin of discontinuous phase transitions could be therefore related
to the fact that the ordered and disordered phases have very close energies near the boundary single-ion anisotropy (\ref{gs}) (the former ones are being slightly lower in energy) and the small
temperature change might possibly induce a phase coexistence (energy equivalence) between them,
what consequently leads to the discontinuous phase transition. In Fig. 2, we depict more clearly
the position of TCPs in dependence on the single-ion and exchange anisotropies by the
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{fig2.eps}
\end{center}
\vspace{-15mm}
\caption{The phase diagram of the spin-1 Heisenberg model for $q = 6$ and
$\Delta = 0.0, 1.0, 2.0, 3.0,$ and $4.0$. The solid and dashed lines represent second-
and first-order phase transitions, respectively. The black circles denote positions of
the tricritical points. The dot-and-dash line represents the location of
tricritical points in dependence on the exchange anisotropy $\Delta$.}
\end{figure}
dot-and-dash line in order to clarify how the type of phase transition changes with
the anisotropy parameters. As one can see from this figure, the $d$-coordinate of TCPs ($d_t$)
shifts to more positive values upon strengthening of $\Delta$, while the $t$-coordinate of
TCPs behaves as a non-monotonic function of the exchange anisotropy $\Delta$ with a minimum
at $\Delta_{min} = 3.459$.
To illustrate the effect of the uniaxial single-ion anisotropy on the phase transitions, the thermal variation of the magnetization $m$ is shown in Fig. 3 for the case of the isotropic spin-1 Heisenberg
model ($\Delta = 1.0$) and several values of $d$. It can be clearly seen from this figure that the reduction of the single-ion anisotropy lowers the critical temperature $t_c$. Furthermore,
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{fig3.eps}
\end{center}
\vspace{-15mm}
\caption{The temperature dependence of the magnetization $m$ for the isotropic spin-1 Heisenberg model ($\Delta$ = 1.0) on the simple-cubic (triangular) lattice, when the value of the single-ion anisotropy parameter $d$ changes. The dashed lines represent the discontinuities of the magnetization at the first-order phase transitions.}
\end{figure}
it is also evident that the magnetization varies smoothly to zero for $d = 0.0$, $-2.0$, and $-2.6$ as the temperature approaches its critical value. This behaviour of the magnetization, which is typical
of second-order (continuous) phase transitions, persists as long as $d > d_t$ ($d_t = -2.656$ for $\Delta = 1.0$ and $q=6$). On the other hand, the magnetization jumps discontinuously to zero for $d<d_t$ (e.g. see the curves for $d=-2.7$ and $-2.8$), which is a characteristic feature of first-order (discontinuous) phase transitions. As one can see, this discontinuity in the magnetization
increases rather abruptly as the single-ion anisotropy moves to more negative values with respect
to the $d_t$ value. Finally, it should be pointed out that the similar variations of magnetization curves occur for any value of the exchange anisotropy $\Delta$.
\section{Conclusion}
\label{conclusion}
In the present paper, the phase diagram of the anisotropic spin-1 XXZ Heisenberg model with the uniaxial
single-ion anisotropy is examined within the variational procedure based on the Gibbs-Bogoliubov inequality,
which gives results equivalent to Oguchi's pair approximation \cite{ogu55}. A comparison between the results obtained in the present study and those attained within the standard Oguchi approximation indeed confirms the equivalence between both methods. The most important benefit of the variational approach based on the Gibbs-Bogoliubov inequality is that the method adapted in this way yields all thermodynamic quantities in a self-consistent manner and, moreover, it is also well suited to discern continuous phase transitions from discontinuous ones by distinguishing the stable, metastable and unstable solutions inherent to the approximation used.
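For the reader's convenience, we recall the standard form of the variational bound on which this procedure rests (the notation here is generic): for any trial Hamiltonian $\mathcal{H}_0$ with free energy $F_0$ and thermal average $\langle \cdots \rangle_0$ taken with respect to the corresponding trial density matrix, the Gibbs-Bogoliubov inequality reads
\begin{equation*}
F \leq F_0 + \langle \mathcal{H} - \mathcal{H}_0 \rangle_0,
\end{equation*}
and the approximate free energy is obtained by minimizing the right-hand side with respect to the variational parameters entering $\mathcal{H}_0$; in the spirit of the Oguchi approximation, $\mathcal{H}_0$ comprises a single exactly treated spin pair embedded in an effective field.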
In the spirit of the applied pair-approximation method we have demonstrated that the single-ion anisotropy as well as the exchange anisotropy have a significant influence on the critical behaviour, and both these anisotropy parameters can cause a tricritical phenomenon, i.e. the change of a continuous phase transition into a discontinuous one. Our results serve as evidence that the tricritical phenomenon may occur in the investigated model system if at least one of the anisotropy parameters provides a sufficiently strong source of easy-plane anisotropy. Note furthermore
that the obtained results are rather general in that they are qualitatively independent of the
lattice coordination number. The comparison between the results presented in this work and
those reported previously for another particular case \cite{del06} indeed implies that the
model under investigation shows qualitatively the same features irrespective of the lattice coordination number.
\begin{center}
\begin{acknowledgments}
This work was supported under the grants Nos. VVGS 11/2006, VEGA 1/2009/05 and APVT 20-005204.
\end{acknowledgments}
\end{center}
\section{Introduction}
Markov Decision Processes (MDPs) may be viewed as discrete-time stochastic control problems for sequential decision making in situations where costs are partly random and partly under the control of a decision maker. Classical MDP theory is concerned with minimizing the expected discounted total cost and, in many cases, the minimization problem is solved by establishing a Dynamic Programming Principle (DPP). Results on the vast area of MDPs may be found in several textbooks, e.g., \cite{Bertsekas1996book,HernandezLerma1996book,Bauerle2011book}. The classical expected performance criterion is, however, limited in its application and, in many cases, it is prudent to incorporate risk assessment into decision making.
One popular criterion is based on coherent risk measures \cite{Artzner1999Coherent,Delbaen2002Coherent}. A naive combination of coherent risk measures and discounted total costs, however, lacks time consistency, hindering the derivation of a corresponding DPP. Roughly speaking, time consistency refers to the property that smaller scores at future epochs guarantee a smaller score at the current epoch. We refer to \cite{Bielecki2017Survey} for a survey on various definitions of time consistency. There is a stream of literature (see, e.g., \cite{Gianin2006Risk,Riedel2007Dynamic, Ruszczynski2010Risk, Pflug2016Time,Chow2015Risk,Bauerle2021Minimizing}) that studies time consistency from multiple angles and/or attempts to integrate coherent risk measures and their variations into MDP. While in this work, we are not concerned with model uncertainty, we would like to point out \cite{Bielecki2021Risk} and the references therein for a framework that handles model uncertainty in MDP.
In this paper, we focus on the framework proposed in \cite{Ruszczynski2010Risk} which considers deterministic costs. \cite{Ruszczynski2010Risk} introduces the notion of risk transition mappings and uses them to construct, in a recursive manner, a class of (discounted) dynamic risk measures. He proceeds to derive both finite and infinite (with bounded costs) time horizon DPPs for such dynamic risk measures. We also refer to \cite{Ruszczynski2014Erratum} for the assumptions needed. \cite{Shen2014Risk} extends the infinite horizon DPP to unbounded costs as well as for average dynamic risk measures. The risk transition mappings involved are assumed to exhibit an analogue of a strong Feller property. \cite{Chu2014Markov} studies a similar infinite horizon DPP with unbounded costs but under
arguably more accessible assumptions. Recently, \cite{Bauerle2021Markov} considers unbounded latent costs and establishes the corresponding finite and infinite horizon DPP. The authors also prove sufficiency of Markovian actions against history dependent actions. They construct dynamic risk measures, for finite time horizon problems, from iterations of static risk measures that are Fatou and law invariant. The infinite horizon problems require in addition the coherent property. They also require the underlying MDP to exhibit a certain strongly continuous/semi-continuous transition mechanism. \cite{coache2021reinforcement} develops a computational approach for optimization with dynamic convex risk measures using deep learning techniques. Finally, it is noteworthy that the concept of risk form is introduced in \cite{Dentcheva2020Risk} and is applied to handle two-stage MDP with partial information and decision-dependent observation distribution.
The main goal of this paper is to study infinite horizon risk averse MDPs in a similar framework as above, but with latent costs and randomized actions, under a weakly continuous transition mechanism.
We note that, typically, MDP theory with Polish action spaces implicitly encompasses randomized actions. In order to compare deterministic and randomized actions, however, we must characterize the randomization explicitly. In certain risk averse settings, when randomness in action is accounted for along with the random outcome, deterministic actions do not necessarily yield the optimal outcome; we refer to Appendix \ref{app:Example} for an insightful example. To this end, we propose to study risk averse MDPs under the notion of Kusuoka-type conditional risk mappings\footnote{The term, conditional risk mapping, follows from \cite[Section 6.5.2]{Shapiro2021book}.}, which is inspired by the Kusuoka representation for law invariant coherent risk measures (cf. \cite{Kusuoka2001Law} and \cite[Section 6.3.5]{Shapiro2021book}). Kusuoka-type conditional risk mappings, in principle, cover a large class of conditional risk mappings of interest. To the best of our knowledge, the counterpart of the Kusuoka representation for conditional risk mappings has not yet been established. As conditional average value at risk in general lacks joint measurability in the risk level and the random event, we introduce techniques to make the Kusuoka-type conditional risk mappings rigorous, and this treatment may be of interest on its own.
For simplicity, we consider bounded costs, which allows for conditional risk mappings that contain conditional essential supremum as a major ingredient -- a feature that is often omitted otherwise. The Kusuoka-type conditional risk mappings and bounded costs together allow us access to a stronger set of regularities for the related operators, and allow us to establish the DPP with mild assumptions on the remainder of the components in our setup. To be more precise, we obtain the semi-continuity of the value function jointly in state and action, avoiding the need to impose strong continuity on the transition kernel or the resulting risk transition mapping. We believe the conditional law invariant property of Kusuoka-type conditional risk mappings is essential for obtaining regularity, while the assumption on bounded costs may possibly be weakened. In the static case, semi-continuity of a coherent risk measure typically requires more than weak continuity of the input; we refer to \cite[Section 6.3]{Shapiro2021book} for detailed statements. This is possibly due to the lack of law invariance. Imposing the law invariant property, as we do here, resolves such issues. In the dynamic case, we choose to avoid the assumptions of strong continuity by implicitly leveraging a similar argument through Kusuoka-type conditional risk mappings.
The main contributions of this paper can be summarized as follows:
\begin{enumerate}
\item We introduce and investigate the notion of Kusuoka-type conditional risk mappings. We first study conditional average value at risk and develop an appropriate formulation for integrating over the quantile level.
A Kusuoka-type conditional risk mapping is defined as the essential supremum of a family of integrations with random integrand and integrator.
We then establish a representation in terms of regular conditional expectations and show that the Kusuoka-type conditional risk mapping is conditionally law invariant and state dependent. The results are presented in Section \ref{subsec:DRM}.
\item Under mild conditions, we derive an infinite horizon DPP, for MDP with latent costs and randomized actions, subject to dynamic risk measure defined recursively via Kusuoka-type conditional risk mapping. We prove the existence of an optimal policy that is Markovian. We also derive a corresponding Q-learning version of the DPP which lends itself naturally to numerical implementation. We refer to Theorem \ref{thm:DPP} and Corollary \ref{cor:QDPP} for detailed statements.
\item In Proposition \ref{prop:MarkovControl}, we argue that certain Markovian actions are no worse than any other history-dependent actions. We also formulate in Proposition \ref{prop:wpSingleton} a sufficient condition on the optimality of deterministic actions. Further, we provide a related heuristic discussion from the perspective of a two-player game in Remark \ref{rem:RandHeur}.
\end{enumerate}
The remainder of the paper is structured as follows. In Section \ref{sec:Setup}, we first introduce our notation, then recall definitions and basic properties of various important concepts, and establish preliminary results on Kusuoka-type conditional risk mappings. Formulations and assumptions for the risk-averse MDP are collectively organized at the end of the section. Section \ref{sec:Aux} is devoted to auxiliary results. We introduce some useful operators related to Markovian policies and investigate their regularities. Properties of value functions are studied. In Section \ref{sec:MainResults}, we present the main results. We derive the DPP for Markovian actions and argue that Markovian actions can achieve the optimum. We also establish a sufficient condition on the optimality of deterministic actions. We accommodate an example, a brief review on pointwise supremum and essential supremum, some technical lemmas and supplemental proofs for Section \ref{subsec:DRM} in Appendix \ref{app:Example}, \ref{app:SupReview}, \ref{app:Lemmas} and \ref{app:Proofs}, respectively. Finally, for reference, Appendix \ref{app:notations} contains a glossary of notation.
\section{Setup and preliminaries}\label{sec:Setup}
To formulate our problem, we first specify the spaces related to the underlying process, action process, probability, among others. We use the following notations for various spaces throughout the paper. We also provide in Appendix \ref{app:notations} a glossary of notations that will be introduced later.
\begin{itemize}
\item[-] We write $\mathbb{N}:=\set{1,2,...}$ and $\mathbb{N}_0:=\set{0}\cup\mathbb{N}$. We let $\mathbb{R}$ denote the real line. We endow $\mathbb{R}$ with Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$. We let $\overline\mathbb{R}:=\mathbb{R}\cup\set{+\infty}\cup\set{-\infty}$ and $\mathcal{B}(\overline\mathbb{R}):=\sigma(\mathcal{B}(\mathbb{R})\cup\set{\set{+\infty},\set{-\infty}})$.
\item[-] For any measurable space $(\mathbb{Y},\mathscr{Y})$, we write $\ell^\infty(\mathbb{Y},\mathscr{Y})$ for the set of bounded real-valued $\mathscr{Y}$-$\mathcal{B}(\mathbb{R})$ measurable functions. We let $\ell^\infty(\mathbb{N};\mathbb{Y},\mathcal{B}(\mathbb{Y}))$
denote the set of $\mathfrak{v} = (v_t)_{t\in\mathbb{N}}\subseteq \ell^\infty(\mathbb{Y},\mathcal{B}(\mathbb{Y}))$, and equip it with norm $\|\mathfrak{v}\|_\infty:=\sup_{t\in\mathbb{N}, y\in\mathbb{Y}}|v_t(y)|$, which makes\\ $\ell^\infty(\mathbb{N};\mathbb{Y},\mathcal{B}(\mathbb{Y}))$ complete. For any $y\in\mathbb{Y}$, the Dirac probability measure at $y$, denoted by $\delta_y$, is defined as $\delta_y(A) := \1_{A}(y)$ for $A\in\mathscr{Y}$.
\item[-] Let $(\Omega, \mathscr{H}, \mathbb{P})$ be a complete probability space. We write $L^\infty(\Omega,\mathscr{H},\mathbb{P})$ for the set of real-valued $\mathscr{H}$-$\mathcal{B}(\mathbb{R})$ random variables that are $\mathbb{P}$-almost surely bounded.
\item[-] We let $\mathbb{F}:=(\mathscr{F}_t)_{t\in\mathbb{N}}$ and $\mathbb{G}:=(\mathscr{G}_t)_{t\in\mathbb{N}}$ be filtrations of $\mathscr{H}$ such that $\mathscr{F}_t\subseteq\mathscr{G}_t$ for all $t\in\mathbb{N}$ and $\mathscr{F}_1$ contains all $\mathbb{P}$-negligible set. We set $\mathscr{F}_0:=\mathscr{G}_0:=\set{\emptyset,\Omega}$. We also define $\mathscr{U}_t:=\mathscr{G}_{t-1}\vee\mathscr{F}_t$ for $t\in\mathbb{N}$. It follows that $\mathbb{U}:=(\mathscr{U}_t)_{t\in\mathbb{N}}$ is also a filtration. We also set $\mathscr{U}_0:=\set{\emptyset,\Omega}$.
\item[-] Let $\mathbb{X}$ be a complete separable metric space equipped with Borel $\sigma$-algebra $\mathcal{B}(\mathbb{X})$. Let $\Xi$ be the set of probability measures on $\mathcal{B}(\mathbb{X})$. We endow $\Xi$ with the weak topology, which is the coarsest topology on $\Xi$ containing the sets $\left\{\xi\in\Xi: \int_{\mathbb{X}}f(x)\xi(\dif x)\in U\right\}$ with $f\in C_b(\mathbb{X})$ and $U\subseteq\mathbb{R}$ open. The corresponding Borel $\sigma$-algebra is denoted by $\mathcal{B}(\Xi)$. The evaluation $\sigma$-algebra on $\Xi$, denoted by $\mathcal{E}(\Xi)$, is generated by
sets $\set{\xi\in\Xi:\xi(A)\in B}$ with $A\in\mathcal{B}(\mathbb{X})$ and $B\in\mathcal{B}([0,1])$. Equivalently, $\mathcal{E}(\Xi)$ is the $\sigma$-algebra generated by sets $\set{\xi\in\Xi:\int_{\mathbb{X}}f(x)\xi(\dif x) \in B}\,$ for all $f\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ and $B\in\mathcal{B}(\mathbb{R})$. In view of Lemma \ref{lem:sigmaAlgBE}, we have $\mathcal{B}(\Xi)=\mathcal{E}(\Xi)$.
\item[-] Let $\mathbb{A}$ be another complete separable metric space equipped with Borel $\sigma$-algebra $\mathcal{B}(\mathbb{A})$. Let $\Lambda$ be the set of probability measures on $\mathcal{B}(\mathbb{A})$. We endow $\Lambda$ with the weak topology and the corresponding Borel $\sigma$-algebra $\mathcal{B}(\Lambda)$. The evaluation $\sigma$-algebra on $\Lambda$ is denoted by $\mathcal{E}(\Lambda)$. By Lemma \ref{lem:sigmaAlgBE} again, we have $\mathcal{B}(\Lambda)=\mathcal{E}(\Lambda)$.
\item[-] For $t\in\mathbb{N}$, the domain of admissible actions $\mathcal{A}_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to 2^{\mathbb{A}}$ is nonempty closed valued and weakly measurable, i.e., $\set{x\in\mathbb{X}:\mathcal{A}_t(x)\cap U\neq\emptyset}\in\mathcal{B}(\mathbb{X})$ for any open $U\subseteq\mathbb{A}$. For each $t\in\mathbb{N}$ and $x\in\mathbb{X}$, we let $\varpi_t(x)$ be the set of probability measures on $\mathcal{B}(\mathbb{A})$ such that $\pi(\mathcal{A}_t(x))=1$ for any $\pi\in\varpi_t(x)$,
and let $\Pi_t$ consist of $\pi_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\Lambda,\mathcal{E}(\Lambda))$ such that $\pi_t(x)\in\varpi_t(x)$ for $x\in\mathbb{X}$.\footnote{$\Pi_t$ is not empty. To see this, note that by the Kuratowski and Ryll-Nardzewski measurable selection theorem (cf. \cite[Theorem 18.13]{Aliprantis2006book}), there exists $\alpha:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\mathbb{A},\mathcal{B}(\mathbb{A}))$ such that $\alpha(x)\in\mathcal{A}_t(x)$ for $x\in\mathbb{X}$. It follows that if $\pi(x):=\delta_{\alpha(x)}$, then $\pi\in\Pi_t$.} $\Pi$ is the set of $\mathfrak{p}:=(\pi_t)_{t\in\mathbb{N}}$ such that $\pi_t\in\Pi_t$ for all $t\in\mathbb{N}$.
\item[-] Let $\mathbb{M}$ be the set of probability measures on $\mathcal{B}([0,1])$, endowed with the weak topology. Note $\mathbb{M}$ under the weak topology is compact (cf. \cite[Section 15.6, Theorem 15.22]{Aliprantis2006book}). The corresponding Borel $\sigma$-algebra and evaluation $\sigma$-algebra are denoted by $\mathcal{B}(\mathbb{M})$ and $\mathcal{E}(\mathbb{M})$, respectively. Since $\mathbb{M}$ is separable and metrizable (cf. \cite[Section 15.3, Theorem 15.15]{Aliprantis2006book}), invoking Lemma \ref{lem:sigmaAlgBE} again, we have $\mathcal{B}(\mathbb{M})=\mathcal{E}(\mathbb{M})$. For $t\in\mathbb{N}$, we let $\mathcal{M}_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to 2^{\mathbb{M}}$ be non-empty, closed-valued, and weakly measurable, that is, $\set{x\in\mathbb{X}:\mathcal{M}_t(x)\cap U\neq\emptyset}\in\mathcal{B}(\mathbb{X})$ for any open $U\subseteq\mathbb{M}$. Finally, we let $\mathcal{M}_0$ denote a closed subset of $\mathbb{M}$.
\end{itemize}
\subsection{Regular Conditional Distribution}\label{subsec:RegCondDist}
In this section, we recall the definition of regular conditional distributions, as it plays a crucial role in many aspects of the paper.
Consider $Y:(\Omega,\mathscr{H})\to(\mathbb{Y},\mathscr{Y})$ and $\mathscr{G}\subseteq\mathscr{H}$. The conditional distribution of $Y$ given $\mathscr{G}$, defined as $\set{\mathbb{P}(Y\in B|\mathscr{G}):=\mathbb{E}(\1_B(Y)|\mathscr{G})}_{B\in\mathscr{Y}}$, can be viewed as a function of $\omega\in\Omega$ and $B\in\mathscr{Y}$. For each $B\in\mathscr{Y}$, however, $\mathbb{P}(Y\in B|\mathscr{G})$ is defined only almost surely, which hinders us from using $\mathbb{P}(Y\in \,\cdot\,|\mathscr{G})$ as a probability measure depending on $\omega\in\Omega$ (countable additivity may not hold). To resolve such issues, we recall the notion of a regular conditional distribution.
\begin{definition}\label{def:RegCondDist}
$P^{Y|\mathscr{G}}:\Omega\times\mathscr{Y}\to[0,1]$
is a regular version of $\mathbb{P}(Y\in\,\cdot\,|\mathscr{G})$ if
\begin{itemize}
\item[(i)] for each $A\in\mathscr{Y}$, $\omega\mapsto P^{Y|\mathscr{G}}(\omega,A)$ is $\mathscr{G}$-$\mathcal{B}(\mathbb{R})$ measurable;
\item[(ii)] for each $\omega\in\Omega$, $P^{Y|\mathscr{G}}(\omega,\,\cdot\,)$ is a probability measure on $\mathscr{Y}$;
\item[(iii)] for each $A\in\mathscr{Y}$, $P^{Y|\mathscr{G}}(\omega, A) = \mathbb{P}(Y\in A|\mathscr{G})(\omega)$ for $\mathbb{P}-a.e.$ $\omega\in\Omega$.
\end{itemize}
\end{definition}
If $\mathscr{G}$ is the $\sigma$-algebra generated by a random variable, say $X$, we will write $P^{Y|X}$ instead of $P^{Y|\sigma(X)}$.
Let the set of probability measures on $\mathscr{Y}$ be denoted by $\mathcal{P}$ and endowed with $\sigma$-algebra $\mathcal{E}(\mathcal{P})$. Because for $A\in\mathscr{Y},\,B\in\mathcal{B}(\mathbb{R})$,
\begin{align*}
\set{\omega\in\Omega:P^{Y|\mathscr{G}}(\omega,\,\cdot\,)\in\set{\zeta\in\mathcal{P}:\zeta(A) \in B} } = \set{\omega\in\Omega:P^{Y|\mathscr{G}}(\omega,A)\in B} \in \mathscr{G},
\end{align*}
by \cite[Section 4.5, Corollary 4.24]{Aliprantis2006book}, $\omega\mapsto P^{Y|\mathscr{G}}(\omega,\,\cdot\,)$ is a measure-valued $\mathscr{G}$-$\mathcal{E}(\mathcal{P})$ random variable.
By \cite[Chapter I Section 3, Theorem 3]{Gikhman1974book} (see also \cite[Theorem 10.4.8 and Example 10.4.9]{Bogachev2007book}), if $\mathbb{Y}$ is a complete separable metric space and $\mathscr{Y}=\mathcal{B}(\mathbb{Y})$ is the corresponding Borel $\sigma$-algebra, then $\mathbb{P}(Y\in\,\cdot\,|\mathscr{G})$ always has a regular version. Moreover, $P^{Y|\mathscr{G}}(\omega,\,\cdot\,)$ as a probability measure is unique up to a $\mathbb{P}$-negligible set of $\omega\in\Omega$ (cf. \cite[Lemma 10.4.3]{Bogachev2007book}). In view of Lemma \ref{lem:sigmaAlgBE}, it is true in this case that $P^{Y|\mathscr{G}}$ is also $\mathscr{G}$-$\mathcal{B}(\mathcal{P})$ measurable.
For a nonnegative $\mathscr{Y}$-$\mathcal{B}(\mathbb{R})$ measurable $f$, for each $\omega\in\Omega$, we consider
the Lebesgue integral $\int_{\mathbb{Y}}f(y)P^{Y|\mathscr{G}}(\omega,\dif y)$. When no confusion arises, we will omit $\omega$ and write $P^{Y|\mathscr{G}}(B)$ and $\int_{\mathbb{Y}}f(y)P^{Y|\mathscr{G}}(\dif y)$ instead. Clearly, for any $A\in\mathscr{Y}$,
\begin{align}\label{eq:RegCondProbInt}
\int_{\mathbb{Y}}\1_A(y)P^{Y|\mathscr{G}}(\dif y) = P^{Y|\mathscr{G}}(A) = \mathbb{P}(Y\in A|\mathscr{G}),\quad\mathbb{P}-a.s..
\end{align}
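As a simple illustration of Definition \ref{def:RegCondDist} (kept informal), suppose $\mathscr{G}=\sigma(X)$ for a random variable $X$ taking at most countably many values. A regular version is then obtained by elementary conditioning on the atoms of positive probability,
\begin{align*}
P^{Y|X}(\omega,A) = \frac{\mathbb{P}(Y\in A,\, X=X(\omega))}{\mathbb{P}(X=X(\omega))},\qquad A\in\mathscr{Y},
\end{align*}
with an arbitrary fixed probability measure assigned to those $\omega$ for which $\mathbb{P}(X=X(\omega))=0$; the latter set is $\mathbb{P}$-negligible, so conditions (i)-(iii) of Definition \ref{def:RegCondDist} are satisfied.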
\subsection{Controlled process $(\mathfrak{X},\mathfrak{A})$}\label{subsec:controlledP}
We let $\mathfrak{X}:=(X_t)_{t\in\mathbb{N}}$ be an $\mathbb{X}$-valued $\mathbb{F}$-adapted process, i.e., $X_t$ is $\mathscr{F}_t$-$\mathcal{B}(\mathbb{X})$ measurable for $t\in\mathbb{N}$. We also let $\mathfrak{A}:=(A_t)_{t\in\mathbb{N}}$ be an $\mathbb{A}$-valued $\mathbb{G}$-adapted process. $\mathfrak{X}$ and $\mathfrak{A}$ represent the underlying process and the action process, respectively. Heuristically, letting $\mathfrak{A}$ be $\mathbb{G}$-adapted allows us to have randomized actions with the property $A_t\sim\pi_t(X_t)$, where $\pi_t:\mathbb{X}\to\Lambda$. Below we recall the concepts of a transition kernel from $(X_t,A_t)$ to $X_{t+1}$ and of a Markovian action.
Suppose that for $t\in\mathbb{N}$, we have
\begin{align}\label{eq:GMarkov}
\mathbb{P}(X_{t+1}\in B\,|\,\mathscr{G}_t) = \mathbb{P}(X_{t+1}\in B\,\big|\,\sigma(X_t)\vee\sigma(A_t)),\quad B\in\mathcal{B}(\mathbb{X}).
\end{align}
Let $P^{X_{t+1}|(X_t,A_t)}$
be the corresponding regular conditional distribution. By Definition \ref{def:RegCondDist} (i), for each $B\in \mathcal{B}(\mathbb{X})$, $\omega\mapsto P^{X_{t+1}|(X_t,A_t)}(\omega,B)$ is $\sigma(X_t)\vee\sigma(A_t)$-$\mathcal{B}(\mathbb{R})$ measurable. It follows from \cite[Section 4.8, Theorem 4.11]{Aliprantis2006book} that there is an $h^t_B:(\mathbb{X}\times\mathbb{A},\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A}))\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ such that $h^t_B(X_t(\omega),A_t(\omega)) = P^{X_{t+1}|(X_t,A_t)}(\omega,B)$ for all $\omega\in\Omega$. Then by Definition \ref{def:RegCondDist} (ii), for $(x,a)\in\mathbb{X}\times\mathbb{A}$ such that $(X_t(\omega),A_t(\omega))=(x,a)$ for some $\omega\in\Omega$, we have $h^t_\cdot(x,a)$
is a probability measure on $\mathcal{B}(\mathbb{X})$. For $(x,a)$ that does not belong to the pointwise range of $(X_t,A_t)$, we may set $h^t_B(x,a)=\delta_{x_0}(B)$ for some $x_0\in\mathbb{X}$ so that $h^t_\cdot(x,a)$ is a probability measure on $\mathcal{B}(\mathbb{X})$ and $h^t_B$ is still measurable. By writing $P(t,x,a,B)=h^t_B(x,a)$, we have
\begin{align}\label{eq:transkernel}
\mathbb{P}(X_{t+1}\in B\,\big|\,\sigma(X_t)\vee\sigma(A_t)) = P(t,X_t, A_t, B) ,\quad\mathbb{P}-a.s.,\,t\in\mathbb{N},\,B\in \mathcal{B}(\mathbb{X}).
\end{align}
Note that for any $A\in\mathcal{B}(\mathbb{X})$ and $B\in\mathcal{B}(\mathbb{R})$ we have
\begin{align*}
&\left\{(x,a)\in\mathbb{X}\times\mathbb{A}:P(t,x,a,\,\cdot\,)\in\left\{\xi\in\Xi:\xi(A)\in B\right\} \right\} \\
&\quad = \left\{(x,a)\in\mathbb{X}\times\mathbb{A}:P(t,x,a,A) \in B \right\} = \left\{(x,a)\in\mathbb{X}\times\mathbb{A}:h^t_A(x,a) \in B \right\} \in \mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A}),
\end{align*}
thus by \cite[Section 4.5, Corollary 4.24]{Aliprantis2006book}, $(x,a)\mapsto P(t,x,a,\cdot)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A})$-$\mathcal{E}(\Xi)$ measurable.
Finally, we say $\mathfrak{A}$ is a Markovian action if
\begin{align}\label{eq:MarkovControl}
\mathbb{P}(A_t\in B\,|\,\mathscr{U}_t) = \mathbb{P}(A_t\in B\,|\,\sigma(X_t)),\quad B\in\mathcal{B}(\mathbb{A}),\,t\in\mathbb{N}.
\end{align}
Then, with similar reasoning as before, for any $t\in\mathbb{N}$ there is $\pi_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\Lambda,\mathcal{E}(\Lambda))$ such that $\omega\mapsto\pi_t(X_t(\omega))$ is a regular version of $\mathbb{P}(A_t\in \,\cdot\,|\,\sigma(X_t))$, and
\begin{align}\label{eq:MarkovControlkernel}
\mathbb{P}(A_t\in B\,|\,\sigma(X_t)) = [\pi_t(X_t)](B),\quad\mathbb{P}-a.s.,\,B\in\mathcal{B}(\mathbb{A}).
\end{align}
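For intuition only, the data-generating mechanism described by \eqref{eq:transkernel} and \eqref{eq:MarkovControlkernel} can be illustrated by the following toy simulation on finite state and action spaces; the arrays \texttt{pi} and \texttt{P} below are hypothetical stand-ins for a Markovian randomized policy and a (time-homogeneous) transition kernel, and the snippet is not part of the formal development.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nX, nA, T = 3, 2, 10                 # toy finite state/action spaces

# pi[x, a]: probability of choosing action a in state x (Markovian randomized policy)
pi = np.array([[0.7, 0.3],
               [0.5, 0.5],
               [0.2, 0.8]])
# P[x, a, y]: transition probability to state y given (x, a), time-homogeneous here
P = rng.dirichlet(np.ones(nX), size=(nX, nA))

x = rng.integers(nX)                 # X_1 drawn from a (here uniform) initial law mu
for t in range(1, T + 1):
    a = rng.choice(nA, p=pi[x])      # A_t ~ pi_t(X_t)
    x = rng.choice(nX, p=P[x, a])    # X_{t+1} ~ P(t, X_t, A_t, .)
\end{verbatim}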
\subsection{Kusuoka-type dynamic risk measure}\label{subsec:DRM}
In this section we introduce Kusuoka-type conditional risk mappings and then define dynamic risk measures as nested compositions of Kusuoka-type conditional risk mappings. The definitions and properties established in this section rely on the notions of pointwise supremum and essential supremum, and we refer to Appendix \ref{app:SupReview} for a brief review. To avoid confusion, we write $\essub Y := \inf\set{r\in\mathbb{R}:\mathbb{P}(Y>r)=0}$ and $\esslb Y := \sup\set{r\in\mathbb{R}:\mathbb{P}(Y<r)=0}$.
\subsubsection{Kusuoka-type conditional risk mapping}\label{subsec:KusuokaCRM}
We define the average value at risk conditioned on $\mathscr{U}_t$ as follows
\begin{align}\label{eq:cvarDef}
\avar^{\mathscr{U}_t}_{\kappa}(Z) := \begin{cases}
\essinf_{W\in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})}\left\{ W + \kappa^{-1}\mathbb{E}\left(\left(Z-W\right)_+\Big|\mathscr{U}_t\right) \right\}, &\kappa\in(0,1],\\
\essinf_{W\in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P}),\, \mathbb{P}(W>Z)=1} W, &\kappa = 0.
\end{cases}
\end{align}
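Purely as an illustration of the unconditional analogue of \eqref{eq:cvarDef}, we note that for $\kappa\in(0,1]$ the quantity $\inf_{q\in\mathbb{R}}\set{q+\kappa^{-1}\mathbb{E}(Z-q)_+}$ can be estimated from i.i.d. samples of the cost $Z$ as in the following sketch; the helper \texttt{avar\_from\_samples} is ours and is not part of the formal development (the quantile-based choice of $q$ is a standard property of this variational representation of the average value at risk).
\begin{verbatim}
import numpy as np

def avar_from_samples(z, kappa):
    """Sample-based estimate of AVaR_kappa(Z) = inf_q { q + E[(Z-q)_+]/kappa }."""
    z = np.sort(np.asarray(z, dtype=float))
    if kappa == 0.0:
        return z[-1]                    # sample analogue of the essential supremum
    q = np.quantile(z, 1.0 - kappa)     # an (approximate) minimizer
    return q + np.mean(np.maximum(z - q, 0.0)) / kappa

rng = np.random.default_rng(0)
print(avar_from_samples(rng.normal(size=100_000), 0.05))  # roughly 2.06 for N(0,1)
\end{verbatim}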
Let us fix the underlying process $\mathfrak{X}$ for the rest of this subsection. For any $t\in\mathbb{N}_0$, we let $\Upsilon^\mathfrak{X}_t$ be a subset of random probability measures $M:(\Omega,\mathscr{U}_t)\to(\mathbb{M},\mathcal{E}(\mathbb{M}))$ such that $M(\omega)\in\mathcal{M}_t(X_t(\omega))$ for $\mathbb{P}$-almost every $\omega\in\Omega$. \footnote{To see that $\Upsilon^\mathfrak{X}_t$ is not empty, in view of the Kuratowski and Ryll-Nardzewski measurable selection theorem (cf. \cite[Section 18.3, Theorem 18.13]{Aliprantis2006book}), there is $m:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\mathbb{M},\mathcal{B}(\mathbb{M}))$ such that $m(x)\in\mathcal{M}_t(x)$ for $x\in\mathbb{X}$. By Lemma \ref{lem:sigmaAlgBE}, $m$ is also $\mathcal{B}(\mathbb{X})$-$\mathcal{E}(\mathbb{M})$ measurable. Then, $M(\omega):=m(X_t(\omega))$ defines an element of $\Upsilon^\mathfrak{X}_t$.}
To introduce the Kusuoka-type conditional risk mapping, we need to consider the following integration,
\begin{align*}
\esssup_{M\in\Upsilon^\mathfrak{X}_t}\int_{[0,1]} \avar^{\mathscr{U}_t}_{\kappa}(Z)\,M(\dif\kappa).
\end{align*}
For a fixed $\omega\in\Omega$, however, the measurability of $\kappa\mapsto \avar^{\mathscr{U}_t}_{\kappa}(Z)$ is elusive, thus the integral above may not be well-defined even if $M$ is deterministic. On the other hand, in the unconditional case, it is known that $\avar_\kappa(Z)$ is continuous in $\kappa\in[0,1]$ (cf. Lemma \ref{lem:avarReg}). Inspired by this observation, we consider the definition below.
Let $\mathscr{A}\subseteq\mathscr{H}$ contain all $\mathbb{P}$-negligible sets. Let $(Y_\kappa)_{\kappa\in[0,1]}\subset L^\infty(\Omega,\mathscr{A},\mathbb{P})$ be essentially bounded from below, uniformly in $\kappa\in[0,1]$, i.e., $\inf_{\kappa\in[0,1]}\esslb Y_\kappa>-\infty$. Next, define $C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$ as the set of $(\widetilde Y_\kappa)_{\kappa\in[0,1]}$
such that
\begin{itemize}
\item[(i)] $\omega\mapsto\widetilde Y_\kappa(\omega)$ is $\mathscr{A}$-$\mathcal{B}(\mathbb{R})$ measurable for each $\kappa\in[0,1]$;
\item[(ii)] $\kappa\mapsto\widetilde Y_\kappa(\omega)$ is continuous for each $\omega\in\Omega$;
\item[(iii)] $\widetilde Y_\kappa\le Y_\kappa,\,\mathbb{P}-a.s.$ for all $\kappa\in[0,1]$.
\end{itemize}
For any $(\widetilde Y_\kappa)_{\kappa\in[0,1]}\in C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$, we have that $(\omega,\kappa)\mapsto\widetilde Y_\kappa(\omega)$ is $\mathscr{A}\otimes\mathcal{B}([0,1])$-measurable (cf. \cite[Section 4.10, Lemma 4.51]{Aliprantis2006book}), and $C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$ contains at least the constant function with value $\inf_{\kappa\in[0,1]}\esslb Y_\kappa$.
For $M:(\Omega,\mathscr{A})\mapsto(\mathbb{M},\mathcal{E}(\mathbb{M}))$, we define the operator $\diamond^\mathscr{A}_{\kappa}$ via
\begin{align*}
(Y_\kappa\diamond^\mathscr{A}_{\kappa}M)(\omega) := \esssup_{(\widetilde Y_\kappa)_{\kappa\in[0,1]} \in C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})} \int_{[0,1]}\widetilde Y_\kappa(\omega)\;[M(\omega)](\dif\kappa),
\end{align*}
where the integral on the right hand side is understood as integrating over $\kappa$ for each fixed $\omega\in\Omega$. In the sequel, we omit $\omega$ when no confusion arises. The following lemma concerns the measurability of such integrals and of the operator $\diamond^\mathscr{A}_{\kappa}$.
\begin{lemma}\label{lem:diamondMeasurable}
Let $\widetilde Y_\kappa\in C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$. Both $\int_{[0,1]}\widetilde Y_\kappa\;M(\dif\kappa)$ and $Y_\kappa\diamond^\mathscr{A}_{\kappa}M$ are $\mathscr{A}$-$\mathcal{B}(\mathbb{R})$ measurable.
\end{lemma}
\begin{proof}
The measurability of $\int_{[0,1]}\widetilde Y_\kappa M(\dif\kappa)$ is an immediate consequence of Lemma \ref{lem:IntfMeasurability}.
As for the measurability of $Y_\kappa\diamond^\mathscr{A}_{\kappa}M$, we observe that by \eqref{eq:esssupRep}, there is a countable subset $\mathfrak{c} \subseteq C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$ such that
\begin{align*}
Y_\kappa\diamond^\mathscr{A}_{\kappa}M = \sup_{(\widetilde Y_\kappa)_{\kappa\in[0,1]} \in \mathfrak{c}} \int_{[0,1]}\widetilde Y_\kappa\,M(\dif\kappa) = \lim_{n\to\infty} \max_{(\widetilde Y_\kappa)_{\kappa\in[0,1]} \in \mathfrak{c}_n} \int_{[0,1]}\widetilde Y_\kappa\,M(\dif\kappa),\quad\mathbb{P}-a.s.,
\end{align*}
where $\mathfrak{c}_n$ consists of the first $n$ elements in $\mathfrak{c}$. Note the maximum on the right hand side above is $\mathscr{A}$-$\mathcal{B}(\mathbb{R})$ measurable. Then, by Lemma \ref{lem:asconvMeasurable}, $Y_\kappa\diamond^\mathscr{A}_{\kappa}M$ is $\mathscr{A}$-measurable.
\end{proof}
The lemma below illustrates that the operator $\diamond_\kappa^\mathscr{A}$ is essentially an integration of the continuous version of the integrand, if such a version exists.
\begin{lemma}\label{lem:diamondContMod}
If $(\widetilde Y_\kappa)_{\kappa\in[0,1]} \in C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$ and $\widetilde Y_\kappa = Y_\kappa,\,\mathbb{P}-a.s.$ for any $\kappa\in[0,1]$, then $\int_{[0,1]} \widetilde Y_\kappa \,M(\dif\kappa) = Y_\kappa\diamond^{\mathscr{A}}_\kappa M,\,\mathbb{P}-a.s.$ for any $M:(\Omega,\mathscr{A})\to(\mathbb{M},\mathcal{E}(\mathbb{M}))$.
\end{lemma}
\begin{proof}
It follows from Definition \ref{def:esssup} (i) that $\int_{[0,1]} \widetilde Y_\kappa\, M(\dif\kappa) \le Y_\kappa\diamond^{\mathscr{A}}_\kappa M,\,\mathbb{P}-a.s.$. Next, notice that for any other $(\widetilde Y'_\kappa)_{\kappa\in[0,1]} \in C(\Omega,\mathscr{A},\mathbb{P};(Y_\kappa)_{\kappa\in[0,1]})$, we have $\mathbb{P}(\widetilde Y'_\kappa\le \widetilde Y_\kappa,\,\kappa\in\mathbb{Q}\cap[0,1]) = 1$. Since $\widetilde Y_\kappa,\widetilde Y'_\kappa$ are pointwise continuous in $\kappa\in[0,1]$, we have $\mathbb{P}(\widetilde Y'_\kappa\le \widetilde Y_\kappa,\,\kappa\in[0,1]) = 1$, and thus $\int_{[0,1]} \widetilde Y_\kappa M(\dif\kappa)\ge\int_{[0,1]} \widetilde Y'_\kappa M(\dif\kappa),\,\mathbb{P}-a.s.$. In view of Definition \ref{def:esssup} (ii), the proof is complete.
\end{proof}
We are now in a position to define what we term a Kusuoka-type conditional risk mapping. We define $\rho^\mathfrak{X}_t: L^\infty(\Omega,\mathscr{H},\mathbb{P})\to L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$ as
\begin{align}\label{eq:rhotDef}
\rho^\mathfrak{X}_{t}(Z) := \esssup_{M\in\Upsilon^\mathfrak{X}_t}\avar^{\mathscr{U}_t}_{\kappa}(Z)\diamond^{\mathscr{U}_t}_{\kappa} M,\quad t\in\mathbb{N}_0.
\end{align}
Notice that $\rho^\mathfrak{X}_t$ depends on $X_t$ through $\Upsilon^\mathfrak{X}_t$ (see below \eqref{eq:cvarDef}).
\begin{remark}\label{rem:rhotL} $\rho^\mathfrak{X}_t(Z)$ indeed belongs to $L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$. It can be shown that for $\kappa\in[0,1]$, $\mathbb{P}(\avar^{\mathscr{U}_t}_{\kappa}(Z)\in[\esslb Z,\essub Z])=1$. The essential boundedness of $\rho^\mathfrak{X}_t(Z)$ follows automatically. As for the measurability, by \eqref{eq:esssupRep}, for $\kappa\in(0,1]$ there is a countable $\mathfrak{l}\subset L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$ such that
$$\avar^{\mathscr{U}_t}_{\kappa}(Z) = \inf_{W\in\mathfrak{l}}\left\{ W + \kappa^{-1}\mathbb{E}\left(\left(Z-W\right)_+\Big|\mathscr{U}_t\right) \right\} = \lim_{n\to\infty}\min_{W\in\mathfrak{l}_n}\left\{ W + \kappa^{-1}\mathbb{E}\left(\left(Z-W\right)_+\Big|\mathscr{U}_t\right) \right\},$$
where $\mathfrak{l}_n$ consists of the first $n$ elements of $\mathfrak{l}$. It follows from Lemma \ref{lem:asconvMeasurable} that $\avar^{\mathscr{U}_t}_{\kappa}(Z)$ is $\mathscr{U}_t$-$\mathcal{B}(\mathbb{R})$ measurable for $\kappa\in(0,1]$. A similar argument holds for $\avar^{\mathscr{U}_t}_0(Z)$. Then, by Lemma \ref{lem:diamondMeasurable}, for $M\in\Upsilon^\mathfrak{X}_t$, $\avar^{\mathscr{U}_t}_{\kappa}(Z)\diamond^{\mathscr{U}_t}_{\kappa} M$ is also $\mathscr{U}_t$-measurable. Finally, with a similar argument as before, we obtain the $\mathscr{U}_t$-$\mathcal{B}(\mathbb{R})$ measurability of $\rho^\mathfrak{X}_t(Z)$.
\end{remark}
The proposition below states that $\rho^\mathfrak{X}_t$ is a bona fide conditional risk mapping; see, e.g., \cite[Section 6.5.2]{Shapiro2021book} for the definition. We defer the proof to Appendix \ref{app:Proofs}.
\begin{proposition}\label{prop:CondRiskMeasure}
For any $t\in\mathbb{N}_0$, $\rho^\mathfrak{X}_t$ is a conditional risk mapping from $L^\infty(\Omega,\mathscr{H},\mathbb{P})$ to $L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$. More precisely, $\rho^\mathfrak{X}_t$ satisfies the following conditions:
\begin{itemize}
\item[(a)][Monotonicity] for any $Z^1,Z^2\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$ such that $Z^1\le Z^2,\mathbb{P}-a.s.$, $$\rho^\mathfrak{X}_t(Z^1)\le\rho^\mathfrak{X}_t(Z^2),\quad\mathbb{P}-a.s.;$$
\item[(b)][Translation equivariance] for any $Y\in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$ and $Z\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$,
$$\rho^\mathfrak{X}_t(Y+Z) = Y + \rho^\mathfrak{X}_t(Z),\quad\mathbb{P}-a.s.;$$
\item[(c)][Positive homogeneity] for any $\beta\ge 0$ and $Z\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$,
$$\rho^\mathfrak{X}_t(\beta Z) = \beta\; \rho^\mathfrak{X}_t(Z),\quad\mathbb{P}-a.s.;$$
\item[(d)][Convexity] for any $Z^1,Z^2\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$ and $\beta\in[0,1]$,
\begin{align*}
\rho^\mathfrak{X}_t(\beta\; Z^1+(1-\beta)\;Z^2) \le \beta\;\rho^\mathfrak{X}_t(Z^1) + (1-\beta)\;\rho^\mathfrak{X}_t(Z^2),\quad\mathbb{P}-a.s..
\end{align*}
\end{itemize}
\end{proposition}
Let $P^{Z|\mathscr{U}_t}$ be the regular version of $\mathbb{P}(Z\in\,\cdot\,|\mathscr{U}_t)$. The proposition below provides a useful representation for $\rho^\mathfrak{X}_t$, the proof of which is provided in Appendix \ref{app:Proofs}.
\begin{proposition}\label{prop:rhoMod}
The following is true for any $t\in\mathbb{N}_0$ and $Z\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$:
\begin{enumerate}
\item[(a)] for any $\kappa\in(0,1]$, $\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1} \int_{\mathbb{R}} (z-q)_+\,P^{Z|\mathscr{U}_t}(\dif z) \right\} \in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$, and
\begin{align}\label{eq:CvarInfDef}
\avar_{\kappa}^{\mathscr{U}_t}(Z) = \inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1} \int_{\mathbb{R}} (z-q)_+\,P^{Z|\mathscr{U}_t}(\dif z) \right\},\quad\mathbb{P}-a.s.;
\end{align}
\item[(b)] for each $\omega\in\Omega$, $\kappa\mapsto\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1} \int_{\mathbb{R}} (z-q)_+\,P^{Z|\mathscr{U}_t}(\dif z) \right\}$ is continuous on $(0,1]$, and $\lim_{\kappa\to0+}\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1} \int_{\mathbb{R}} (z-q)_+\,P^{Z|\mathscr{U}_t}(\dif z) \right\} = \inf\set{r\in\mathbb{R}:P^{Z|\mathscr{U}_t}((r,\infty))=0};$
\item[(c)] $\inf\set{r\in\mathbb{R}:P^{Z|\mathscr{U}_t}((r,\infty))=0} \in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$, and
\begin{align*}
\avar_{0}^{\mathscr{U}_t}(Z) = \inf\set{r\in\mathbb{R}:P^{Z|\mathscr{U}_t}((r,\infty))=0},\quad\mathbb{P}-a.s.;
\end{align*}
\item[(d)] the right hand side below belongs to $L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$, and
\begin{multline}\label{eq:rhotRep}
\rho^\mathfrak{X}_t(Z) = \sup_{\eta\in\mathcal{M}_t(X_t)} \left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1} \int_{\mathbb{R}} (z-q)_+\,P^{Z|\mathscr{U}_t}(\dif z) \right\}\,\eta(\dif\kappa) \right.
\\
+ \eta(0) \; \inf\set{r\in\mathbb{R}:P^{Z|\mathscr{U}_t}((r,\infty))=0} \bigg\},\quad\mathbb{P}-a.s..
\end{multline}
\end{enumerate}
\end{proposition}
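To fix ideas, we record a few special cases, all of which follow directly from Proposition \ref{prop:rhoMod}; the identities below hold $\mathbb{P}$-almost surely. If $\mathcal{M}_t(x)=\set{\delta_1}$ for all $x\in\mathbb{X}$, then, since $\avar^{\mathscr{U}_t}_{1}(Z)=\mathbb{E}(Z|\mathscr{U}_t)$ (take $W$ constant and below $\esslb Z$ in \eqref{eq:cvarDef}), we recover the conditional expectation $\rho^\mathfrak{X}_t(Z)=\mathbb{E}(Z|\mathscr{U}_t)$. Similarly, $\mathcal{M}_t(x)=\set{\delta_\kappa}$ with $\kappa\in(0,1)$ yields $\rho^\mathfrak{X}_t=\avar^{\mathscr{U}_t}_{\kappa}$, while $\mathcal{M}_t(x)=\set{\delta_0}$ yields $\avar^{\mathscr{U}_t}_{0}$, i.e., a conditional essential supremum. Finally, the choice $\mathcal{M}_t(x)=\set{(1-\beta)\delta_1+\beta\,\delta_\kappa}$ with $\beta\in[0,1]$ and $\kappa\in(0,1)$ produces the conditional mean-$\avar$ combination
\begin{align*}
\rho^\mathfrak{X}_t(Z) = (1-\beta)\,\mathbb{E}(Z|\mathscr{U}_t) + \beta\,\avar^{\mathscr{U}_t}_{\kappa}(Z),\quad\mathbb{P}-a.s..
\end{align*}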
\subsubsection{Dynamic risk measure}
Next, we introduce a (discounted) dynamic risk measure similar to \cite[Section 6]{Ruszczynski2010Risk}, but for Kusuoka-type conditional risk.
Let $\gamma\in(0,1)$. For any $\mathfrak{Z}:=(Z_n)_{n\in\mathbb{N}_0}\subset L^\infty(\Omega,\mathscr{H},\mathbb{P})$ such that $\sup_{n\in\mathbb{N}_0}\essub |Z_n|<\infty$, we define
\begin{align}\label{eq:rhotTDef}
\rho^\mathfrak{X}_{t,T}(\mathfrak{Z}) :=
\begin{cases}
\rho^\mathfrak{X}_{t}\left(Z_t + \gamma\;\rho^\mathfrak{X}_{t+1,T}(\mathfrak{Z})\right), & t<T,\\
\rho^\mathfrak{X}_{T}(Z_T), & t=T.
\end{cases}
\end{align}
Equivalently,
\begin{align*
\rho^\mathfrak{X}_{t,T}(\mathfrak{Z}) := \rho^\mathfrak{X}_t( Z_t + \gamma \rho^\mathfrak{X}_{t+1}( Z_{t+1} + \gamma \rho^\mathfrak{X}_{t+2}( Z_{t+2} +... + \gamma \rho^\mathfrak{X}_{T-1}( Z_{T-1} + \gamma\rho^\mathfrak{X}_{T}( Z_T) ) ) ) ).
\end{align*}
The lemma below allows us to define the infinite horizon version of this risk measure, $\rho^\mathfrak{X}_{t,\infty}(\mathfrak{Z}):=\lim_{T\to\infty}\rho^\mathfrak{X}_{t,T}(\mathfrak{Z})$, as a $\mathbb{P}$-almost sure limit. We refer to Appendix \ref{app:Proofs} for the proof.
\begin{lemma}\label{lem:rhotTasConv}
For any $\mathfrak{Z}$ such that $\sup_{n\in\mathbb{N}_0}\essub |Z_n|<\infty$ and $t\in\mathbb{N}_0$, $\rho^\mathfrak{X}_{t,T}(\mathfrak{Z})$ converges $\mathbb{P}$-almost surely as $T\to\infty$.
\end{lemma}
Additionally, in view of Lemma \ref{lem:asconvMeasurable} and Remark \ref{rem:rhotL}, we have $\rho^\mathfrak{X}_{t,\infty}(\mathfrak{Z})\in L^\infty(\Omega,\mathscr{U}_t,\mathbb{P})$.
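For intuition, we note that monotonicity and translation equivariance (Proposition \ref{prop:CondRiskMeasure} (a) and (b)) imply the nonexpansiveness property $|\rho^\mathfrak{X}_s(Y^1)-\rho^\mathfrak{X}_s(Y^2)|\le \essub|Y^1-Y^2|$, $\mathbb{P}$-a.s., for $Y^1,Y^2\in L^\infty(\Omega,\mathscr{H},\mathbb{P})$. Iterating this estimate through the recursion \eqref{eq:rhotTDef} gives
\begin{align*}
\big|\rho^\mathfrak{X}_{t,T+1}(\mathfrak{Z})-\rho^\mathfrak{X}_{t,T}(\mathfrak{Z})\big| \le \gamma^{T+1-t}\sup_{n\in\mathbb{N}_0}\essub|Z_n|,\quad\mathbb{P}-a.s.,
\end{align*}
so the sequence $(\rho^\mathfrak{X}_{t,T}(\mathfrak{Z}))_{T\ge t}$ is $\mathbb{P}$-almost surely Cauchy, which is consistent with Lemma \ref{lem:rhotTasConv}.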
\subsection{Problem formulation}\label{subsec:Problem}
Here, we provide some standing assumptions and remarks on the key problem that we address: how to optimize the Kusuoka-type dynamic risk measure over actions?
We let $\Psi$ be a set of pairs $(\mathfrak{X},\mathfrak{A})$ satisfying \eqref{eq:GMarkov} and $\mathbb{P}(A_t\in\mathcal{A}_t(X_t),\,t\in\mathbb{N})=1$ (note that by \cite[Section 18.1, Theorem 18.6]{Aliprantis2006book}, $\set{(x,a)\in\mathbb{X}\times\mathbb{A}:a\in\mathcal{A}_t(x)}\in\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A})$). Throughout the rest of the paper, we make the following standing assumption.\footnote{For a nontrivial example, one can set $\mathbb{X}=\mathbb{A}=\mathbb{R}$ and construct $(\mathfrak{X},\mathfrak{A})$ from any given $P$ and $\mathfrak{p}$ on a complete probability space that supports a countable family of mutually independent $U([0,1])$ random variables. The construction can be done by utilizing the fact that $F^{-1}(U)$ has distribution function $F$, where $U\sim U[0,1]$, $F$ is arbitrary and $F^{-1}(y):=\inf\set{x\in\mathbb{R}:F(x)\ge y}$. For examples of $\mathbb{X}$ and $\mathbb{A}$ that are complete separable metric spaces, we refer to \cite{Blackwell1983Extension}.}
\begin{assumption}\label{asmp:PlaceHolder}
The family $\set{(\Omega,\mathscr{H},\mathbb{P}),\mathbb{F},\mathbb{G},\Psi}$ satisfies the conditions below:
\begin{itemize}
\item[(i)] there exist $\mu$ and $P$ such that for any $(\mathfrak{X},\mathfrak{A})\in\Psi$, $X_1\sim\mu$ and \eqref{eq:transkernel} holds true for any $t\in\mathbb{N}$, where $\mu$ is a probability measure on $\mathcal{B}(\mathbb{X})$ and $P$ satisfies
\begin{itemize}
\item $P(t,x,a,\cdot)$ is a probability measure on $\mathcal{B}(\mathbb{X})$ for any $(t,x,a)\in\mathbb{N}\times\mathbb{X}\times\mathbb{A}$;
\item $(x,a)\mapsto P(t,x,a,B)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A})$-measurable for any $t\in\mathbb{N}$ and $B\in\mathcal{B}(\mathbb{X})$;
\end{itemize}
\item[(ii)] for any $\mathfrak{p}\in\Pi$, there is $(\mathfrak{X},\mathfrak{A})\in\Psi$ such that \eqref{eq:MarkovControlkernel} holds true for $t\in\mathbb{N}$.
\end{itemize}
\end{assumption}
When $\mathfrak{A}$ is associated with some $\mathfrak{p}\in\Pi$ via \eqref{eq:MarkovControlkernel}, we will write $(\mathfrak{X}^{\mathfrak{p}}, \mathfrak{A}^{\mathfrak{p}}) = \set{(X^{\mathfrak{p}}_t, A^{\mathfrak{p}}_t)}_{t\in\mathbb{N}}$ to emphasize the dependence on $\mathfrak{p}$.
At each $t\in\mathbb{N}$, we are given a cost function $C_t:(\mathbb{X}\times\mathbb{A}\times\mathbb{X},\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A})\otimes\mathcal{B}(\mathbb{X}))\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ and we stipulate $C_0\equiv 0$. Let us define $\mathfrak{C}(\mathfrak{X},\mathfrak{A}):=(C_t(X_t,A_t,X_{t+1}))_{t\in\mathbb{N}_0}$. Below is our main goal.
\begin{align}\tag{P}
\text{Find \quad $\inf_{(\mathfrak{X},\mathfrak{A})\in\Psi}\rho^\mathfrak{X}_{0,\infty}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))$, and the optimal policy if exists.}
\end{align}
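As a sanity check, if $\mathcal{M}_0=\set{\delta_1}$ and $\mathcal{M}_t(x)=\set{\delta_1}$ for all $t\in\mathbb{N}$ and $x\in\mathbb{X}$, then every $\rho^\mathfrak{X}_t$ reduces to the conditional expectation $\mathbb{E}(\,\cdot\,|\mathscr{U}_t)$, and, by the tower property, the boundedness of the costs and $C_0\equiv 0$, problem (P) collapses to the classical risk-neutral discounted criterion
\begin{align*}
\inf_{(\mathfrak{X},\mathfrak{A})\in\Psi}\;\mathbb{E}\bigg[\sum_{t=1}^{\infty}\gamma^{t}\,C_t(X_t,A_t,X_{t+1})\bigg].
\end{align*}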
Apart from Assumption \ref{asmp:PlaceHolder}, we need some additional technical assumptions for the derivation of the DPP to be rigorous.
\begin{assumption}\label{asmp:Main} The following is true for any $t\in\mathbb{N}$:
\begin{itemize}
\item[(i)] $(x,a)\mapsto P(t,x,a,\cdot)$ is weakly continuous, that is, for any $(x^n)_{n\in\mathbb{N}}\subseteq\mathbb{X}$ and $(a^n)_{n\in\mathbb{N}}\subseteq\mathbb{A}$ such that $\lim_{n\to\infty}x^n=x^0$ and $\lim_{n\to\infty}a^n=a^0$, we have
\begin{align*}
\lim_{n\to\infty}\int_{\mathbb{X}}f(y)P(t,x^n,a^n,\dif y) = \int_{\mathbb{X}}f(y)P(t,x^0,a^0,\dif y),\quad f\in C_b(\mathbb{X}).
\end{align*}
\item[(ii)]
$\bigcup_{x\in\mathbb{X}}\mathcal{A}_t(x)$ is compact and $\mathcal{A}_t$ is upper hemi-continuous, that is, at any $x\in\mathbb{X}$, for every open $U_\mathbb{A}\supseteq\mathcal{A}_t(x)$ there is an open $U_\mathbb{X}\ni x$ such that $z\in U_\mathbb{X}$ implies $\mathcal{A}_t(z)\subseteq U_\mathbb{A}$;
\item[(iii)] $\mathcal{M}_t$ is lower hemi-continuous, that is, at any $x\in\mathbb{X}$ for every open $U_\mathbb{M}\subset\mathbb{M}$ such that $U_\mathbb{M}\cap\mathcal{M}_t(x)\neq\emptyset$ there is an open $U_\mathbb{X}\subseteq\mathbb{X}$ such that $z\in U_\mathbb{X}$ implies $U_\mathbb{M}\cap\mathcal{M}_t(z)\neq\emptyset$;
\item[(iv)] the cost function $C_t$ is lower semi-continuous and $\|C_t\|_\infty \le b$ for some $b>0$.
\end{itemize}
\end{assumption}
Note Assumption \ref{asmp:Main} (ii) implies that $\bigcup_{x\in\mathbb{X}}\varpi_t(x)$ is compact (cf. \cite[Section 15.6, Theorem 15.22]{Aliprantis2006book}) and $\varpi_t$ is upper hemi-continuous (cf. \cite[Section 17.2, Theorem 17.13]{Aliprantis2006book}).
\begin{remark}
By Assumption \ref{asmp:Main} (i), $(x,a)\mapsto P(t,x,a,\,\cdot\,)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{A})$-$\mathcal{B}(\Xi)$ measurable. This together with Lemma \ref{lem:sigmaAlgBE} implies that any $P$ satisfying Assumption \ref{asmp:Main} (i) also satisfies the conditions on $P$ mentioned in Assumption \ref{asmp:PlaceHolder} (i). Next, in our setting where the input space $\mathbb{X}$ is endowed with $\mathcal{B}(\mathbb{X})$, upper/lower hemi-continuity implies weak measurability (cf. \cite[Section 17.2, Lemma 17.4, Lemma 17.5 and Section 18.1, Lemma 18.2]{Aliprantis2006book}). Therefore, Assumption \ref{asmp:Main} (ii) and (iii) do not contradict any previous conditions on $\mathcal{M}_t$ and $\mathcal{A}_t$.
\end{remark}
\begin{remark}
As an alternative formulation, we may study instead the problem from the first time-step, i.e., $\inf_{(\mathfrak{X},\mathfrak{A})\in\Psi}\rho^\mathfrak{X}_{1,\infty}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))$. Under suitable conditions this, however, will lead to the same set of Markovian actions. We refer to Theorem \ref{thm:DPP} for the detailed statement.
\end{remark}
\section{Auxiliaries}\label{sec:Aux}
Momentarily, let us restrict our attention to Markovian actions associated with $\mathfrak{p}$ and investigate $\rho^{\mathfrak{X}^\mathfrak{p}}_{t,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$. The Markovian nature of the controlled process together with Proposition \ref{prop:rhoMod} provides a way to express $\rho^{\mathfrak{X}^\mathfrak{p}}_{t,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$ as a composition of certain operators, which in turn provides a way to analyze $\rho^{\mathfrak{X}^\mathfrak{p}}_{t,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$. The various operators involved in this program are introduced below.
For $t=0$ we define a functional $H_0$ on $\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ as follows, for any $v\in \ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$
\begin{align}\label{eq:H0Def}
H_0 v &:= \sup_{\eta\in\mathcal{M}_0}\left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{X}\left(\gamma \;v(y)-q\right)_+ \mu(\dif y) \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\qquad\qquad\qquad\left. + \eta(0)\cdot \inf\left\{r\in\mathbb{R}:\int_{\mathbb{X}}\1_{(r,\infty)}(\gamma \;v(y))\,\mu(\dif y) = 0\right\} \right\}.
\end{align}
For $t\in\mathbb{N}$ we define operators $G^{\lambda}_t$ and $H^{\mathfrak{p}}_t$ on $\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ as
\begin{align}\label{eq:Gdef}
&G^{\lambda}_t v(x) := \nonumber\\
&\quad \sup_{\eta\in\mathcal{M}_t(x)} \left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ \resizebox{0.6\hsize}{!}{$ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) $} \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\qquad\qquad + \eta(0) \; \left. \inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\} \right\}
\end{align}
and $H^{\mathfrak{p}}_t v(x) := G^{\pi_t(x)}_t v(x)$, respectively.
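Before turning to their regularity properties, and purely for illustration, we note that on finite state and action spaces the operator $G^{\lambda}_t$ in \eqref{eq:Gdef} admits a direct evaluation. The following sketch is not part of the formal development; the helper \texttt{G\_op} is ours, it assumes that each $\mathcal{M}_t(x)$ is represented by a finite list of measures, each specified by finitely many atoms in $(0,1]$ together with a weight on $\set{0}$, and it exploits the fact that, for a finitely supported distribution, the infimum over $q$ may be restricted to the support points (the objective is piecewise linear and convex in $q$).
\begin{verbatim}
import numpy as np

def G_op(x, lam, v, C, P, Ms, gamma):
    """Evaluate G^{lambda}_t v(x) on finite spaces.

    lam : array over actions (the randomized action lambda)
    v   : array over states (current value function)
    C   : cost array C[x, a, y];   P : kernel array P[x, a, y]
    Ms  : list of measures eta, each given as (kappas, weights, eta0),
          where the kappas lie in (0, 1] and eta0 is the mass on {0}
    """
    # distribution of C_t(x, A, Y) + gamma * v(Y) under lambda (x) P(t, x, A, .)
    vals = (C[x] + gamma * v[None, :]).ravel()
    prob = (lam[:, None] * P[x]).ravel()
    ess_sup = vals[prob > 0].max()

    def avar(kappa):   # infimum over q, restricted to support points
        return min(q + np.sum(prob * np.maximum(vals - q, 0.0)) / kappa
                   for q in vals)

    return max(sum(w * avar(k) for k, w in zip(kappas, weights)) + eta0 * ess_sup
               for kappas, weights, eta0 in Ms)
\end{verbatim}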
\subsection{Regularities}
We first establish the measurability of $(x,\lambda)\mapsto G^{\lambda}_t v(x)$.
\begin{lemma}\label{lem:GMeasurability}
For any $v\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$, the mapping $(x,\lambda)\mapsto G^{\lambda}_t v(x)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)$-$\mathcal{B}(\mathbb{R})$ measurable.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:IntfMeasurability}, $(x,q,a)\mapsto\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{R})\otimes\mathcal{B}(\mathbb{A})$-$\mathcal{B}(\mathbb{R})$ measurable. Let $g:(\mathbb{X}\times\mathbb{R}\times\mathbb{A}, \mathcal{B}(\mathbb{X})\otimes\mathcal{B}(\mathbb{R})\otimes\mathcal{B}(\mathbb{A}))\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ be non-negative. By Lemma \ref{lem:IntfMeasurability} again (with $f(x,\lambda,q,a)=g(x,q,a)$ and $M(x,\lambda,q)=\lambda$), $(x,\lambda,q)\mapsto\int_\mathbb{A} g(x,q,a)\lambda(\dif a)$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)\otimes\mathcal{B}(\mathbb{R})$-$\mathcal{B}(\mathbb{R})$ measurable. Consequently,
\begin{align*}
(x,\lambda,q)\mapsto\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a)
\end{align*}
is $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)\otimes\mathcal{B}(\mathbb{R})$-$\mathcal{B}(\mathbb{R})$ measurable. Let $\mathbb{Q}_n$ consist of the first $n$ rational numbers under a fixed enumeration of $\mathbb{Q}$. Because
\begin{align}\label{eq:xinfq}
(x,\lambda)\mapsto &\inf_{q\in\mathbb{R}}\left\{ q+\kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\}\nonumber\\
&= \lim_{n\to\infty}\min_{q\in\mathbb{Q}_n}\left\{ q+\kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\},\quad\kappa\in(0,1],
\end{align}
we obtain $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)$-$\mathcal{B}(\mathbb{R})$ measurability (cf. \cite[Section 4.6, Lemma 4.29]{Aliprantis2006book}). A similar reasoning as before implies the $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)$-$\mathcal{B}(\mathbb{R})$ measurability of
\begin{align}\label{eq:xinfr}
(x,\lambda)\mapsto\inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\}.
\end{align}
In view of Lemma \ref{lem:avarReg}, the function below is continuous in $[0,1]$ for each $(x,\lambda)$:
\begin{align*}
\kappa\mapsto\begin{cases}
\displaystyle \inf_{q\in\mathbb{R}}\left\{ q+\kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\},&\kappa\in(0,1],
\\
\displaystyle \inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\},&\kappa=0.
\end{cases}
\end{align*}
It follows from \cite[Section 4.10, Lemma 4.51]{Aliprantis2006book} that the right-hand side above, as a function of $(x,\lambda,\kappa)$, is $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)\otimes\mathcal{B}([0,1])$-$\mathcal{B}(\mathbb{R})$ measurable. Finally,
\begin{align*}
((x,\lambda),\eta)\mapsto&\left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ \resizebox{0.6\hsize}{!}{$ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) $} \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\quad + \eta(0) \cdot \left. \inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\} \right\}
\end{align*}
is a continuous function of $\eta\in\mathbb{M}$ for $(x,\lambda)\in\mathbb{X}\times\Lambda$, and is a $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)$-$\mathcal{B}(\mathbb{R})$ measurable function of $(x,\lambda)\in\mathbb{X}\times\Lambda$ for $\eta\in\mathbb{M}$ (cf. \cite[Corollary 3.4.6]{Bogachev2006book}), thus Carath\'eodory (cf. \cite[Section 4.10, Definition 4.50]{Aliprantis2006book}).
Invoking the measurable maximal theorem (cf. \cite[Section 18.3, Theorem 18.19]{Aliprantis2006book}) completes the proof.
\end{proof}
Lemma \ref{lem:GBasic} and Lemma \ref{lem:GContr} below reveal some basic properties of $G^\lambda_t$.
\begin{lemma}\label{lem:GBasic}
The following is true for any $(t,\lambda)\in\mathbb{N}\times\Lambda$:
\begin{enumerate}
\item If $v^1,v^2\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ satisfy $v^1\le v^2$, then $G^{\lambda}_t v^1(x) \le G^{\lambda}_t v^2(x)$ for $x\in\mathbb{X}$. Moreover, under Assumption \ref{asmp:Main} (iv), $G^{\lambda}_t v \in [-b-\gamma\|v\|_\infty, b+\gamma\|v\|_\infty]$.
\item For any $a\in\mathbb{R}$ and $v\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$, we have $G^{\lambda}_t (a+v) = a + G^{\lambda}_tv$.
\end{enumerate}
Analogous properties also hold for $H_0$.
\end{lemma}
\begin{proof}
These are immediate consequences of the definition of $G^{\lambda}_t$.
\end{proof}
\begin{lemma}\label{lem:GContr}
Suppose Assumption \ref{asmp:Main} (iv) holds.
For any $t\in\mathbb{N}$, $\lambda\in\Lambda$ and $v^1,v^2\in\ell^\infty(\mathbb{X},\allowbreak\mathcal{B}(\mathbb{X}))$, we have
\begin{align*}
\left\|G^{\lambda}_t v^1 - G^{\lambda}_t v^2\right\|_\infty \le \gamma\|v^1-v^2\|_\infty
\end{align*}
and
\begin{align*}
\left|H_0 v^1 - H_0 v^2\right| \le \gamma\|v^1-v^2\|_\infty.
\end{align*}
\end{lemma}
\begin{proof}
Fix $t\in\mathbb{N}$ and $x\in\mathbb{X}$ for the remainder of the proof.
Then, for $i=1,2$ define
\begin{align*}
I^i_\kappa &:= \inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v^i(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\},\quad \kappa\in(0,1],
\\[0.5em]
I^i_0 &= \inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v^i(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\}.
\end{align*}
Notice that $|I^1_\kappa-I^2_\kappa|\le\gamma\|v^1-v^2\|_\infty$
because
\begin{align*}
I^1_\kappa
&\le \inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v^2(y) + \gamma\|v^1-v^2\|_\infty -q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\}\\
&= I^2_{\kappa} + \gamma\|v^1-v^2\|_\infty
\end{align*}
and vice versa. As for $|I^1_0-I^2_0|$, without loss of generality, suppose $I^1_0-I^2_0>\gamma\|v^1-v^2\|_\infty$, then there are $r^1,r^2$ such that $r^1<I^1_0$, $r^2>I^2_0$, $r^1-r^2>\gamma\|v^1-v^2\|_\infty$ and
\begin{align*}
\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r^1,\infty)}(C_t(x,a,y)+\gamma v^1(y))\,P(t,x,a,\dif y)\lambda(\dif a) > 0,\\
\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r^2,\infty)}(C_t(x,a,y)+\gamma v^2(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0.
\end{align*}
This, however, leads to the contradiction below:
\begin{align*}
0&<\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r^1,\infty)}(C_t(x,a,y)+\gamma v^1(y))\,P(t,x,a,\dif y)\lambda(\dif a)\\
&= \int_\mathbb{A}\int_{\mathbb{X}}\1_{(r^2,\infty)}(C_t(x,a,y)+\gamma v^1(y)-(r^1-r^2))\,P(t,x,a,\dif y)\lambda(\dif a)\\
&\le \int_\mathbb{A}\int_{\mathbb{X}}\1_{(r^2,\infty)}(C_t(x,a,y)+\gamma v^2(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0.
\end{align*}
Therefore, we must have $|I^1_0-I^2_0|\le\gamma\|v^1-v^2\|_\infty$ as well.
Consequently,
\begin{align*}
&\left|G^{\lambda}_t v^1(x) - G^{\lambda}_t v^2(x)\right| = \left|\sup_{\eta\in\mathcal{M}_t(x)}\left\{ \int_{(0,1]}I^1_\kappa \,\eta(\dif\kappa) + \eta(0)\cdot I^1_0 \right\} - \sup_{\eta\in\mathcal{M}_t(x)}\left\{ \int_{(0,1]}I^2_\kappa \,\eta(\dif\kappa) + \eta(0)\cdot I^2_0 \right\}\right|\\
&\quad \le \sup_{\eta\in\mathbb{M}}\left| \int_{(0,1]} \left(I^1_\kappa - I^2_\kappa\right) \, \eta(\dif\kappa) + \eta(0)(I^1_0 - I^2_0) \right| \le \gamma\|v^1-v^2\|_\infty.
\end{align*}
This proves the contraction property of $G^{\lambda}_t$. A similar argument proves the contraction property of $H_0$.
\end{proof}
Below is a result regarding the lower semi-continuity of $(x,\lambda)\mapsto G^{\lambda}_t v(x)$.
\begin{lemma}\label{lem:GvLSC}
Suppose Assumption \ref{asmp:Main} (i), (iii) and (iv) hold. Let $(x^n)_{n\in\mathbb{N}}\subset\mathbb{X}$ and $(\lambda^n)_{n\in\mathbb{N}}\subset\Lambda$ converge to $x^0\in\mathbb{X}$ and $\lambda^0\in\Lambda$, respectively. If $v\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ is lower semi-continuous, then
\begin{align*}
\liminf_{n\to\infty} G^{\lambda^n}_t v (x^n) \ge G^{\lambda^0}_t v(x^0).
\end{align*}
\end{lemma}
\begin{proof}
First, define
\begin{align*}
h(x,a,q):=\int_{\mathbb{X}}\left(C_t(x,a,z) + \gamma v(z)-q\right)_+ P(t,x,a,\dif z).
\end{align*}
As $y\mapsto(y - q)_+$ is non-decreasing and $(x,a,z)\mapsto C_t(x,a,z)+\gamma v(z)$ is lower semi-continuous (due to Assumption \ref{asmp:Main} (iv)), we have $(x,a,z)\mapsto\left(C_t(x,a,z) + \gamma v(z)-q\right)_+$ is also lower semi-continuous. By the boundedness of $C_t$ and $v$, we have that $(x,a,z)\mapsto\left(C_t(x,a,z) + \gamma v(z)-q\right)_+$ is also bounded. Let $(x^n)_{n\in\mathbb{N}}\subseteq\mathbb{X}$ and $(a^n)_{n\in\mathbb{N}}\subseteq\mathbb{A}$ converge to $x^0$ and $a^0$, respectively. By Assumption \ref{asmp:Main} (i) and Lemma \ref{lem:ConvVaryingMeas}, we have
\begin{align*}
\liminf_{n\to\infty} h(x^n,a^n,q) \ge h(x^0,a^0,q).
\end{align*}
This implies that $(x,a)\mapsto h(x,a,q)$ is lower semi-continuous for each $q\in\mathbb{R}$. Then, by Assumption \ref{asmp:Main} (i) and Lemma \ref{lem:ConvVaryingMeas} again, we have
\begin{align*}
\liminf_{n\to\infty} \int_\mathbb{A} h(x^n,a,q)\lambda^n(\dif a) \ge \int_\mathbb{A} h(x^0,a,q)\lambda^0(\dif a),\quad q\in\mathbb{R},
\end{align*}
which implies that $(x,\lambda)\mapsto \int_\mathbb{A} h(x,a,q)\lambda(\dif a) $ is lower semi-continuous for any $q\in\mathbb{R}$. Next, for $\kappa\in(0,1]$ define
\begin{align*}
f(x,\lambda,q,\kappa) &:= q+\kappa^{-1}\int_\mathbb{A} h(x,a,q)\lambda(\dif a) \\
&= q+\kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a).
\end{align*}
As $q\mapsto f(x,\lambda,q,\kappa)$ is $(1+\kappa^{-1})$-Lipschitz continuous for any $(x,\lambda)\in\mathbb{X}\times\Lambda$ and $\kappa\in(0,1]$, an application of the triangle inequality shows that $(x,\lambda,q)\mapsto f(x,\lambda,q,\kappa)$ is lower semi-continuous for each $\kappa\in(0,1]$:
\begin{align*}
\liminf_{n\to\infty} f(x^n,\lambda^n,q^n,\kappa) &\ge \liminf_{n\to\infty} f(x^n,\lambda^n,q^0,\kappa) + \liminf_{n\to\infty} \left(f(x^n,\lambda^n,q^n,\kappa)-f(x^n,\lambda^n,q^0,\kappa)\right)\\
&\ge f(x^0,\lambda^0,q^0,\kappa),
\end{align*}
where $\lim_{n\to\infty}q^n = q^0$. Due to the boundedness of $C$ and $v$, there is some constant $K>0$ such that
\begin{align*}
\inf_{q\in\mathbb{R}}f(x,\lambda,q,\kappa) = \inf_{q\in[-K,K]}f(x,\lambda,q,\kappa),\qquad (x,\lambda,\kappa)\in\mathbb{X}\times\Lambda\times(0,1].
\end{align*}
By Lemma \ref{lem:InffLSC}, we obtain the lower semi-continuity of $(x,\lambda)\mapsto\inf_{q\in\mathbb{R}}f(x,\lambda,q,\kappa)$ for each $\kappa\in(0,1]$.
Next, we consider $\kappa=0$. For $n\in\mathbb{N}$ define
\begin{align*}
\mu^n(B):= \int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\1_{B}\big(C_t(x^n,a,u)+\gamma v(u)\big)\,P(t,x^n,a,\dif u)\lambda^n(\dif a), \quad B\in\mathcal{B}(\mathbb{R}),
\end{align*}
and $s^n:=\inf\left\{r\in\mathbb{R}: \mu^n((r,\infty))= 0\right\}$. Further, define $s^\diamond:=\liminf_{n\to\infty} s^n$ and let $(n_k)_{k\in\mathbb{N}}$ be such that $s^\diamond=\lim_{k\to\infty}s^{n_k}$. Since $(x,a,u)\mapsto C_t(x,a,u)+\gamma v(u)$ is lower semi-continuous, $(r,x,a,u)\mapsto\1_{(r,\infty)}(C_t(x,a,u)+\gamma v(u))$ is also lower semi-continuous.\footnote{Let $(x^i)_{i\in\mathbb{N}},(a^i)_{i\in\mathbb{N}},(u^i)_{i\in\mathbb{N}},(r^i)_{i\in\mathbb{N}}$ converge to $x^0,a^0,u^0,r^0$, respectively. Since the indicator only takes values in $\set{0,1}$, it is sufficient to consider only the case of $\liminf_{i\to\infty} \1_{(r^i,\infty)}(C_t(x^i,a^i,u^i)+\gamma v(u^i)) = 0$. To this end observe that there is a subsequence $(x^{i_j})_{j\in\mathbb{N}},(a^{i_j})_{j\in\mathbb{N}},(u^{i_j})_{j\in\mathbb{N}},(r^{i_j})_{j\in\mathbb{N}}$ such that $C_t(x^{i_j},a^{i_j},u^{i_j})+\gamma v(u^{i_j})\le r^{i_j}$. Taking $\liminf_{j\to\infty}$, due to the lower semi-continuity of $(x,a,u)\mapsto C_t(x,a,u)+\gamma v(u)$, we have $C_t(x^0,a^0,u^0)+\gamma v(u^0)\le r^0$, i.e., $\1_{(r^0,\infty)}(C_t(x^0,a^0,u^0)+\gamma v(u^0))=0$.} Then, by Lemma \ref{lem:ConvVaryingMeas} (with $(r,x,a)$ and $u$ playing the roles of $y$ and $z$), $(r,x,a)\mapsto \int_{\mathbb{R}\times\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,u)+\gamma v(u))\,P(t,x,a,\dif u)$ is lower semi-continuous. By Lemma \ref{lem:ConvVaryingMeas} again (with $(r,x)$ and $a$ playing the roles of $y$ and $z$), we obtain
\begin{multline*}
\liminf_{k\to\infty}\int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\1_{(s^{n_k},\infty)}\big(C_t(x^{n_k},a,u)+\gamma v(u)\big)\,P(t,x^{n_k},a,\dif u)\lambda^{n_k}(\dif a) \\
\ge \int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\1_{(s^{\diamond},\infty)}\big(C_t(x^0,a,u)+\gamma v(u)\big)\,P(t,x^{0},a,\dif u)\lambda^{0}(\dif a),
\end{multline*}
which may be written succinctly as $\liminf_{k\to\infty}\mu^{n_k}((s^{n_k},\infty)) \ge \mu^{0}((s^\diamond,\infty))$. Note $\mu^n((s^n,\infty))=0$ for all $n\in\mathbb{N}$.\footnote{Indeed, there is $(r^{n,i})_{i\in\mathbb{N}}\subset(s^n,\infty)$ such that $\bigcup_i(r^{n,i},\infty)=(s^n,\infty)$ and $\mu^n((r^{n,i},\infty))=0$. Thus, $\mu^n((s^n,\infty)) = \mu^{n}(\bigcup_i(r^{n,i},\infty)) = \lim_{j\to\infty}\mu^n(\bigcup_{i=1}^j(r^{n,i},\infty)) = 0$.} It follows that $\mu^{0}((s^\diamond,\infty))=0$ and thus $s^\diamond\ge s^0$. In other words,
\begin{align*}
(x,\lambda)\mapsto\inf\left\{r\in\mathbb{R}: \int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\1_{(r,\infty)}\big(C_t(x,a,y)+\gamma v(y)\big)\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\}
\end{align*}
is lower semi-continuous.
Finally,
in view of Lemma \ref{lem:avarReg}, the function below is continuous in $[0,1]$ for each $(x,\lambda)$:
\begin{align*}
\kappa\mapsto\begin{cases}
\displaystyle
\inf_{q\in\mathbb{R}}\left\{ q+\kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda(\dif a) \right\},&\kappa\in(0,1],
\\
\displaystyle
\inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda(\dif a) = 0\right\},&\kappa=0.
\end{cases}
\end{align*}
Moreover, it is jointly measurable in $(x,\lambda,\kappa)$ (cf. \cite[Section 4.10, Lemma 4.51]{Aliprantis2006book}).
The above together with Assumption \ref{asmp:Main} (iii) allows the application of Lemma \ref{lem:SupIntLSC} and this completes the proof.
\end{proof}
\subsection{Connecting $\rho^{\mathfrak{X}^\mathfrak{p}}_{t,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$ and $H^\mathfrak{p}_t$}
In this subsection, we reformulate the dynamic risk measure $\rho^{\mathfrak{X}^\mathfrak{p}}_{t,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$ in terms of compositions of the operators $H^\mathfrak{p}_t$. To this end, let $O(x):=0$ for $x\in\mathbb{X}$.
\begin{lemma}\label{lem:rhotTH}
Under Assumption \ref{asmp:Main} (iv), for any $0<t\le T<\infty$,
\begin{align*}
\rho^{\mathfrak{X}^\mathfrak{p}}_{t,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = H^{\mathfrak{p}}_t \circ\cdots\circ H^{\mathfrak{p}}_T O (X^{\mathfrak{p}}_t), \quad \mathbb{P}-a.s.,
\end{align*}
and
\begin{align*}
\rho^{\mathfrak{X}^\mathfrak{p}}_{0,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = H_0\circ H^{\mathfrak{p}}_1 \circ\cdots\circ H^{\mathfrak{p}}_T O.
\end{align*}
\end{lemma}
\begin{proof}
Let $B\in\mathcal{B}(\mathbb{R})$, then by \eqref{eq:GMarkov} and \eqref{eq:transkernel},
\begin{align*}
\mathbb{P}\left(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1})\in B\big|\mathscr{U}_T\right)
&=
\mathbb{E}\left(\mathbb{P}\left(C_T\big(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1}\big)\in B|\mathscr{G}_T\right)\big|\mathscr{U}_T\right)
\\
& = \mathbb{E}\left(\mathbb{E}\left(\1_{B}\big(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1})\big)\,\big|\,\sigma(X^\mathfrak{p}_{T})\vee\sigma(A^\mathfrak{p}_{T})\right)\,\big|\,\mathscr{U}_T\right)
\\
& = \mathbb{E}\left(\int_{\mathbb{X}}\1_{B}(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,y))P(T,X^\mathfrak{p}_T,A^\mathfrak{p}_T,\dif y)\bigg|\mathscr{U}_T\right),\quad\mathbb{P}-a.s.,
\end{align*}
where we have used Lemma \ref{lem:CondExpnXY} in the last equality. It follows from \eqref{eq:MarkovControlkernel}, Lemma \ref{lem:IntfMeasurability} and Lemma \ref{lem:CondExpnXY} that
\begin{multline}\label{eq:PAXinD}
\mathbb{P}\left(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1})\in B\big|\mathscr{U}_T\right)
\\
= \int_{\mathbb{A}}\int_{\mathbb{X}}\1_{B}\big(C_T(X^\mathfrak{p}_T,a,y)\big)P(T,X^\mathfrak{p}_T,a,\dif y)[\pi_T(X^\mathfrak{p}_T)](\dif a),\quad\mathbb{P}-a.s..
\end{multline}
It is not difficult to verify that $\int_{\mathbb{A}}\int_{\mathbb{X}}\1_{\,\cdot\,}\big(C_T(x,a,y)\big)P(T,x,a,\dif y)[\pi_T(x)](\dif a)$ is a probability measure on $\mathcal{B}(\mathbb{R})$ for $x\in\mathbb{X}$. This together with \eqref{eq:PAXinD} implies that
\begin{align*}
\int_{\mathbb{A}}\int_{\mathbb{X}}\1_{\,\cdot\,}\big(C_T(X^\mathfrak{p}_T,a,y)\big)\,P(T,X^\mathfrak{p}_T,a,\dif y)[\pi_T(X^\mathfrak{p}_T)](\dif a)
\end{align*}
is a regular version of $\mathbb{P}(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1})\in\,\cdot\,|\mathscr{U}_T)$. For each $\omega\in\Omega$, by simple function approximation from below (cf. \cite[Section 4.7, Theorem 4.36]{Aliprantis2006book}) and monotone convergence, the corresponding Lebesgue integral for nonnegative $f:(\mathbb{R},\mathcal{B}(\mathbb{R}))\to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ equals
\begin{align*}
\int_{\mathbb{A}}\int_{\mathbb{X}}f(C_T(X^\mathfrak{p}_T,a,y)) \,P(T,X^\mathfrak{p}_T,a,\dif y)\,[\pi_T(X^\mathfrak{p}_T)](\dif a).
\end{align*}
It follows from Proposition \ref{prop:rhoMod} (d) and \eqref{eq:Gdef} that
\begin{align*}
\rho^{\mathfrak{X}^\mathfrak{p}}_{T,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = \rho^{\mathfrak{X}^\mathfrak{p}}_T\left(C_T(X^\mathfrak{p}_T,A^\mathfrak{p}_T,X^\mathfrak{p}_{T+1})\right) = H^{\mathfrak{p}}_T O (X^{\mathfrak{p}}_T), \quad \mathbb{P}-a.s..
\end{align*}
In view of \eqref{eq:rhotTDef} and Lemma \ref{prop:CondRiskMeasure} (a), we have
\begin{align*}
\rho^{\mathfrak{X}^\mathfrak{p}}_{T-1,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = \rho^{\mathfrak{X}^\mathfrak{p}}_{T-1}\left(C_{T-1}(X^{\mathfrak{p}}_{T-1},A^{\mathfrak{p}}_{T-1},X^{\mathfrak{p}}_{T}) + H^{\mathfrak{p}}_T O (X^{\mathfrak{p}}_T)\right),\quad\mathbb{P}-a.s..
\end{align*}
Proceeding backward by induction with similar reasoning as above completes the proof.
\end{proof}
\subsection{Value functions}
In this section, we study the
value functions associated with a policy $\mathfrak{p}$.
In what follows, for $t\in\mathbb{N}$ and $T\ge t$ we define the value functions
\begin{align}\label{eq:Jdef}
J^{\mathfrak{p}}_{t,T} := \begin{cases}
O, & t > T,\\
H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,T}, & t \le T,
\end{cases}
\end{align}
and $J^{\mathfrak{p}}_{0,T}:= H_0 J^{\mathfrak{p}}_{1,T}$.
Recall from Lemma \ref{lem:GMeasurability} that $x\mapsto H^{\mathfrak{p}}_tv(x)$ is $\mathcal{B}(\mathbb{X})$-$\mathcal{B}(\mathbb{R})$ measurable. In view of Lemma \ref{lem:rhotTH}, we have the relationship between the dynamic risk measure and the value functions
\begin{align*}
\rho^{\mathfrak{X}^\mathfrak{p}}_{0,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = J^{\mathfrak{p}}_{0,T},\quad\text{and}\quad \rho^{\mathfrak{X}^\mathfrak{p}}_{t,T}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = J^{\mathfrak{p}}_{t,T}(X^{\mathfrak{p}}_t),\quad\mathbb{P}-a.s.,\, t,T\in\mathbb{N},\, t<T.
\end{align*}
Lemma \ref{lem:JpInftyBasic} below justifies the definition of $J^{\mathfrak{p}}_{t,\infty}$ as the infinite horizon version of $J^{\mathfrak{p}}_{t,T}$,
and reveals the relationship between $J^{\mathfrak{p}}_{t,\infty}$ and $J^{\mathfrak{p}}_{t+1,\infty}$. Moreover, Lemma \ref{lem:JpInftyBasic} together with Lemma \ref{lem:rhotTasConv} implies that
\begin{align}\label{eq:rhoJEquiv}
\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = J^{\mathfrak{p}}_{0,\infty}\quad\text{and}\quad\rho^{\mathfrak{X}^\mathfrak{p}}_{t,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = J^{\mathfrak{p}}_{t,\infty}(X^{\mathfrak{p}}_t),\quad\mathbb{P}-a.s.,\, t\in\mathbb{N}.
\end{align}
\begin{lemma}\label{lem:JpInftyBasic}
Under Assumption \ref{asmp:Main} (iv), $(J^{\mathfrak{p}}_{t,T})_{T\in\mathbb{N}}$ converges uniformly, and
\begin{align}\label{eq:JHJ}
J^{\mathfrak{p}}_{t,\infty} := \lim_{T\to\infty}J^{\mathfrak{p}}_{t,T} = \begin{cases}
H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,\infty}, & t\in\mathbb{N},\\
H_0 J^{\mathfrak{p}}_{1,\infty}, & t = 0.
\end{cases}
\end{align}
Moreover, $\sup_{t\in\mathbb{N}}\|J^\mathfrak{p}_{t,\infty}\|_\infty\le\frac{b}{1-\gamma}$.
\end{lemma}
\begin{proof}
Let $r\in\mathbb{N}$. First, by Assumption \ref{asmp:Main} (iv) and Lemma \ref{lem:GBasic} (a), we have
\begin{align*}
H^\mathfrak{p}_{T+r}O(x) = G^{\pi_{T+r}(x)}_{T+r}O(x) \in [-b,b],\quad x\in\mathbb{X},
\end{align*}
i.e., $\|H^{\mathfrak{p}}_{T+r} O\|_\infty \le b$. Then, by Assumption \ref{asmp:Main} (iv) and Lemma \ref{lem:GBasic} (a),
\begin{align*}
H^{\mathfrak{p}}_{T+r-1}\circ H^{\mathfrak{p}}_{T+r} O(x) = G^{\pi_{T+r-1}(x)}_{T+r-1}\left[ H^{\mathfrak{p}}_{T+r} O \right] (x) \in [-b-\gamma b, \;b+\gamma b],
\end{align*}
i.e., $\|H^{\mathfrak{p}}_{T+r-1}\circ H^{\mathfrak{p}}_{T+r} O\|_\infty \le b + \gamma b$. By induction, we have $\|H^{\mathfrak{p}}_{T+1}\circ\cdots\circ H^{\mathfrak{p}}_{T+r} O\|_\infty \le (1-\gamma)^{-1} b$. Then, by Lemma \ref{lem:GContr} we obtain
\begin{align*}
\left|H^{\mathfrak{p}}_T O(x) - H^{\mathfrak{p}}_{T}\circ \cdots \circ H^{\mathfrak{p}}_{T+r} O(x)\right| &= \left|G^{\pi_T(x)}_TO(x) - G^{\pi_T(x)}_T\left[H^{\mathfrak{p}}_{T+1}\circ \cdots \circ H^{\mathfrak{p}}_{T+r} O\right](x)\right|
\\
&\le \gamma\|O-H^{\mathfrak{p}}_{T+1}\circ \cdots \circ H^{\mathfrak{p}}_{T+r}O\|_\infty
\\
&\le \frac{\gamma}{1-\gamma} b, \quad x\in\mathbb{X},
\end{align*}
which implies that $\left\| H^{\mathfrak{p}}_T O - H^{\mathfrak{p}}_{T}\circ \cdots \circ H^{\mathfrak{p}}_{T+r} O\right\|_\infty \le \frac{\gamma}{1-\gamma} b$. By backward induction we obtain
\begin{align*}
\left\|H^{\mathfrak{p}}_t \circ\cdots\circ H^{\mathfrak{p}}_T O - H^{\mathfrak{p}}_t \circ\cdots\circ H^{\mathfrak{p}}_{T+r} O\right\|_\infty \le \frac{\gamma^{T-t}}{1-\gamma} b.
\end{align*}
The above proves that $(J^{\mathfrak{p}}_{t,T})_{T\in\mathbb{N}}$ converges uniformly and that $\sup_{t\in\mathbb{N}}\|J^\mathfrak{p}_{t,\infty}\|_\infty\le\frac{b}{1-\gamma}$. Next, we prove \eqref{eq:JHJ}.
For this, consider $t\in\mathbb{N}$. In view of \eqref{eq:Jdef}, Lemma \ref{lem:GContr} and the uniform convergence proved above, we have
\begin{align*}
\|J^{\mathfrak{p}}_{t,\infty} - H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,\infty}\|_\infty &\le \|J^{\mathfrak{p}}_{t,\infty} - J^{\mathfrak{p}}_{t,T}\|_\infty + \|J^{\mathfrak{p}}_{t,T}-H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,T}\|_\infty + \|H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,T}-H^{\mathfrak{p}}_t J^{\mathfrak{p}}_{t+1,\infty}\|_\infty\\
& \le (1+\gamma) \|J^{\mathfrak{p}}_{t,\infty}-J^{\mathfrak{p}}_{t,T}\|_\infty \xrightarrow[T\to\infty]{} 0.
\end{align*}
This proves the case of $t\in\mathbb{N}$ in \eqref{eq:JHJ}. The case of $t=0$ can be proved similarly.
\end{proof}
We next establish that the operator $\mathfrak{H}^\mathfrak{p}$ is a contraction mapping whose fixed point is given by the value functions. To this end, define
$\mathfrak{H}^{\mathfrak{p}} \mathfrak{v} := (H^{\mathfrak{p}}_t v_{t+1})_{t\in\mathbb{N}}$; then, as a consequence of Lemma \ref{lem:GContr} and Lemma \ref{lem:JpInftyBasic}, we obtain the following proposition regarding the value of a policy $\mathfrak{p}$.
\begin{proposition}\label{prop:PolicyEval}
Under Assumption \ref{asmp:Main} (iv), for any $\mathfrak{p}\in\Pi$, $\mathfrak{H}^{\mathfrak{p}}$ is a contraction mapping on $\ell^\infty(\mathbb{N};\mathbb{X},\mathcal{B}(\mathbb{X}))$ and $(J^{\mathfrak{p}}_{t,\infty})_{t\in\mathbb{N}}$ is the unique fixed point of $\mathfrak{H}^{\mathfrak{p}}$.
\end{proposition}
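For readers who prefer a computational view, Proposition \ref{prop:PolicyEval} suggests evaluating a policy by simply iterating $\mathfrak{H}^{\mathfrak{p}}$ until the sup-norm increments are negligible. The Python sketch below illustrates this for a finite state and action space under a stationary deterministic Markov policy, in the special case where every $\mathcal{M}_t(x)$ is a single Dirac measure at a fixed level $\kappa\in(0,1]$, so that the one-step mapping reduces to an average value-at-risk computed through the Rockafellar--Uryasev formula. All numerical data (transition matrix, cost table, risk level, discount factor) are illustrative placeholders rather than quantities taken from this paper.
\begin{verbatim}
import numpy as np

def avar(values, probs, kappa):
    # inf_q { q + E[(Z - q)_+] / kappa }; for a discrete Z the infimum
    # is attained at one of the atoms, so an exact search suffices.
    return min(q + np.sum(probs * np.maximum(values - q, 0.0)) / kappa
               for q in values)

def evaluate_policy(P, C, policy, kappa=0.1, gamma=0.95, tol=1e-10):
    """P[x, a, y]: transition probabilities; C[x, a, y]: one-step costs;
    policy[x]: the action chosen in state x (a Dirac lambda)."""
    n_states = P.shape[0]
    J = np.zeros(n_states)                    # plays the role of J^p
    while True:
        J_new = np.empty(n_states)
        for x in range(n_states):
            a = policy[x]
            z = C[x, a, :] + gamma * J        # C(x,a,Y) + gamma * J(Y)
            J_new[x] = avar(z, P[x, a, :], kappa)
        if np.max(np.abs(J_new - J)) < tol:   # gamma-contraction => convergence
            return J_new
        J = J_new

# Illustrative 2-state, 2-action example with placeholder numbers.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
C = np.random.default_rng(0).uniform(0.0, 1.0, size=(2, 2, 2))
print(evaluate_policy(P, C, policy=[0, 1]))
\end{verbatim}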
\section{Main results}\label{sec:MainResults}
Before presenting our main result concerning the DPP for Kusuoka type dynamic risk measures, we must introduce some new notation and some technical lemmas regarding the regularity of various components.
We define operator $S_t$ acting on $v\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ as
\begin{align*}
S_t v(x) := \inf_{\lambda\in\varpi_t(x)}G^{\lambda}_t v(x),\qquad t\in\mathbb{N}.
\end{align*}
Next, for $t\in\mathbb{N},\,T\ge t$, we define
\begin{align}\label{eq:JstarDef}
J^{*}_{t,T} :=
\begin{cases}
O, & t > T\\
S_{t} J^{*}_{t+1,T}, & t \le T
\end{cases}
\end{align}
In order for the definition above to be well defined, we must establish the measurability of $J_{t+1,T}^*$
(the boundedness is obvious from Lemma \ref{lem:GBasic} (a)). The lemma below resolves this issue by establishing its lower semicontinuity.
\begin{lemma}\label{lem:JstarLSC}
Under Assumption \ref{asmp:Main}, for any $t,T\in\mathbb{N},\, t<T$, $x\mapsto J^{*}_{t,T}(x)$ is lower semi-continuous on $\mathbb{X}$.
\end{lemma}
\begin{proof}
Note that, due to Assumption \ref{asmp:Main} (i) and the Prokhorov theorem (cf. \cite[Section 15.6, Theorem 15.22]{Aliprantis2006book}), $\varpi_t(x)$ is compact. If $t=T$, then $J^{*}_{t,T} = S_tO$ by \eqref{eq:JstarDef}, and the lower semi-continuity in $x\in\mathbb{X}$ follows from Assumption \ref{asmp:Main} (ii), Lemma \ref{lem:GvLSC}, and Lemma \ref{lem:InffLSC}. We proceed by backward induction for $t<T$. Since $J^*_{t,T}=S_tJ^{*}_{t+1,T}$, invoking again Assumption \ref{asmp:Main} (ii), Lemma \ref{lem:GvLSC}, and Lemma \ref{lem:InffLSC} completes the proof.
\end{proof}
For $t\in\mathbb{N}$, we define the infinite horizon version of the value function as $J^{*}_{t,\infty}(x):=\lim_{T\to\infty}J^{*}_{t,T}(x)$, which is justified by the following lemma.
\begin{lemma}\label{lem:JstarInftyUnifConv}
Under Assumption \ref{asmp:Main}, for any $t\in\mathbb{N}$, the sequence $(J^{*}_{t,T})_{T> t}$ converges uniformly and $\sup_{t\in\mathbb{N}}\left\|J^{*}_{t,\infty}\right\|_\infty \le \frac{b}{1-\gamma}$.
\end{lemma}
\begin{proof}
Fix $t\in\mathbb{N}$ for the remainder of this proof. First, we show that for any $t<T$,
\begin{align}\label{eq:JBound}
\left\|J^{*}_{t,T}\right\|_\infty \le \frac{b}{1-\gamma}.
\end{align}
To this end, observe that $\left\|J^{*}_{T-1,T}\right\|_\infty\le b$ because $\|G^\lambda_{T-1}O\|_\infty\le b$ due to Assumption \ref{asmp:Main} (iv) and Lemma \ref{lem:GBasic} (a). By Assumption \ref{asmp:Main} (iv) and Lemma \ref{lem:GBasic} (a) again, $\left\|G^\lambda_{T-2}J^{*}_{T-1,T}\right\|_\infty\le b+\gamma b$ and thus $\left\|J^{*}_{T-2,T}\right\|_\infty \le b + \gamma b$. Then, \eqref{eq:JBound} follows by induction. Next, note for any $t\le T$, by Lemma \ref{lem:GContr},
\begin{align}\label{eq:DiffJIndc}
\left|J^{*}_{t,T}(x) - J^{*}_{t,T+r}(x)\right| &= \left|\inf_{\lambda\in\varpi_t(x)}G^{\lambda}_{t} J^{*}_{t+1,T}(x) - \inf_{\lambda\in\varpi_t(x)}G^{\lambda}_{t}J^{*}_{t+1,T+r}(x)\right|
\nonumber\\
&\le \sup_{\lambda\in\Lambda}\left|G^{\lambda}_{t} J^{*}_{t+1,T}(x) - G^{\lambda}_{t} J^{*}_{t+1,T+r}(x)\right| \le \gamma\left\|J^{*}_{t+1,T} - J^{*}_{t+1,T+r}\right\|_\infty.
\end{align}
In view of \eqref{eq:JBound} and \eqref{eq:DiffJIndc}, by induction, we conclude
\begin{align*}
\left|J^{*}_{t,T}(x) - J^{*}_{t,T+r}(x)\right| \le \gamma^{T+1-t}\left\|J^{*}_{T+1,T} - J^{*}_{T+1,T+r}\right\|_\infty = \gamma^{T+1-t}\left\|J^{*}_{T+1,T+r}\right\|_\infty \le \frac{\gamma^{T+1-t}b}{1-\gamma},
\end{align*}
which vanishes as $T\to\infty$, uniformly in $x\in\mathbb{X}$ and $r\in\mathbb{N}$. Hence $(J^{*}_{t,T})_{T>t}$ converges uniformly, and the bound on $J^{*}_{t,\infty}$ follows from \eqref{eq:JBound}.
\end{proof}
By combining Lemma \ref{lem:JstarLSC} and Lemma \ref{lem:JstarInftyUnifConv}, we obtain the lower semi-continuity of $x\mapsto J^*_{t,\infty}(x)$.
\begin{lemma}\label{lem:JstarInftyLSC}
Under Assumption \ref{asmp:Main}, for any $t\in\mathbb{N}$, $x\mapsto J^{*}_{t,\infty}(x)$ is lower semi-continuous on $\mathbb{X}$.
\end{lemma}
We are now in a position to present our main results. The theorem below regards the dynamic programming principle for the optimization problem $\inf_{(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})\in\Psi}\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$. In view of \eqref{eq:rhoJEquiv}, it is sufficient to solve $\inf_{\mathfrak{p}\in\Pi}J^\mathfrak{p}_{0,\infty}$.
\begin{theorem}\label{thm:DPP}
Under Assumption \ref{asmp:Main}, the following is true.
\begin{enumerate}
\item[(a)] $(J^{*}_{t,\infty})_{t\in\mathbb{N}}$ satisfies $J^*_{t,\infty}=S_t J^*_{t+1,\infty}$ for $t\in\mathbb{N}$. Moreover, for any $(J'_{t})_{t\in\mathbb{N}}\in\ell^\infty(\mathbb{N};\mathbb{X},\mathcal{B}(\mathbb{X}))$ satisfying $\sup_{t\in\mathbb{N}}\|J'_t\|_\infty<\infty$ and $J'_{t}=S_t J'_{t+1}$ for $t\in\mathbb{N}$,
we have $(J'_{t})_{t\in\mathbb{N}}=(J^*_{t,\infty})_{t\in\mathbb{N}}$.
\item[(b)] For $t\in\mathbb{N}$ and $x\in\mathbb{X}$, $\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$ is not empty and closed, and there is a measurable $\pi^*_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\Lambda,\mathcal{E}(\Lambda))$ such that
\begin{align}\label{eq:pistar}
\pi^*_t(x)\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x),\quad x\in\mathbb{X}.
\end{align}
\item[(c)]
For $\mathfrak{p}^*=(\pi^*_t)_{t\in\mathbb{N}}$ satisfying \eqref{eq:pistar} for all $t\in\mathbb{N}$, we have $J^{\mathfrak{p}^*}_{t,\infty} = J^{*}_{t,\infty} = \inf_{\mathfrak{p}\in\Pi} J^{\mathfrak{p}}_{t,\infty}$ for $t\in\mathbb{N}$, and $J^{\mathfrak{p}^*}_{0,\infty} = H_0 J^{*}_{1,\infty} = \inf_{(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})\in\Psi}\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{(a)} First, observe that by \eqref{eq:JstarDef}, Lemma \ref{lem:JstarInftyUnifConv} and \eqref{eq:SContr} below, we have
\begin{align*}
\left\|J^{*}_{t,\infty} - S_t J^{*}_{t+1,\infty}\right\|_\infty \le \left\|J^{*}_{t,\infty} - J^{*}_{t,T}\right\|_\infty + 0 + \left\|S_tJ^{*}_{t+1,T} - S_t J^{*}_{t+1,\infty}\right\|_\infty \xrightarrow[T\to\infty]{} 0.
\end{align*}
Next, by Lemma \ref{lem:GContr}, we have
\begin{align}\label{eq:SContr}
\left|J^*_{t}(x) - J'_{t}(x)\right| &= \left|S_t J^*_{t+1}(x) - S_t J'_{t+1}(x)\right| = \left|\inf_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^*_{t+1}(x) - \inf_{\lambda\in\varpi_t(x)}G^{\lambda}_t J'_{t+1}(x)\right|\nonumber\\
&\le \sup_{\lambda\in\Lambda}\left|G^{\lambda}_t J^*_{t+1}(x) - G^{\lambda}_t J'_{t+1}(x)\right| \le \gamma\|J^*_{t+1}-J'_{t+1}\|_\infty,\quad (t,x)\in\mathbb{N}\times\mathbb{X}.
\end{align}
Therefore,
\begin{align*}
\sup_{t\in\mathbb{N}}\|J^*_{t}-J'_t\|_\infty \le \gamma\sup_{t\in\mathbb{N}}\|J^*_{t+1}-J'_{t+1}\|_\infty \le \gamma\sup_{t\in\mathbb{N}}\|J^*_{t}-J'_t\|_\infty.
\end{align*}
This together with Lemma \ref{lem:JstarInftyUnifConv} and the hypothesis that $\sup_{t\in\mathbb{N}}\|J'_t\|_\infty<\infty$ completes the proof.
\textbf{(b)}
We fix $t\in\mathbb{N}$ for the rest of the proof. By Lemma \ref{lem:GvLSC} and Lemma \ref{lem:JstarInftyLSC}, $(x,\lambda) \mapsto G^{\lambda}_t J^{*}_{t+1}(x)$ is lower semi-continuous. Due to Assumption \ref{asmp:Main} (ii), $\varpi_t(x)$ is compact. It follows that for $x\in\mathbb{X}$, $\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$ is not empty and closed. We claim that the lower semi-continuity of $(x,\lambda) \mapsto G^{\lambda}_t J^{*}_{t+1}(x)$ implies that for any closed $F\subseteq\bigcup_{x\in\mathbb{X}}\varpi_t(x)$,
\begin{align*}
B_F := \left\{x\in\mathbb{X}: \argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)\cap F \ne \emptyset \right\} \in \mathcal{B}(\mathbb{X}),
\end{align*}
i.e., the set-valued mapping $x\mapsto\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$ is $\mathcal{B}(\mathbb{X})$-measurable (cf. \cite[Section 18.1, Definition 18.1]{Aliprantis2006book}). To this end note that by statement (a) we have
\begin{align*}
B_{F} = \left\{x\in\mathbb{X}: J^{*}_t(x) = \min_{\lambda\in F}G^{\lambda}_t J^{*}_{t+1,\infty}(x)\right\},
\end{align*}
where $x\mapsto\min_{\lambda\in F}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$ is well defined and lower semi-continuous in $x\in\mathbb{X}$ due to Lemma \ref{lem:GvLSC}, Lemma \ref{lem:JstarInftyLSC}, Lemma \ref{lem:InffLSC} and the fact that $F\subseteq\bigcup_{x\in\mathbb{X}}\varpi_t(x)$ is compact and upper hemi-continuous. Recall from Lemma \ref{lem:JstarInftyLSC} that $J^{*}_t$ is lower semi-continuous. It follows that both $\min_{\lambda\in F}G^{\lambda}_t J^{*}_{t+1,\infty}$ and $J^*_t$ are $\mathcal{B}(\mathbb{X})$-$\mathcal{B}(\mathbb{R})$ measurable. Consequently, $B_{F}\in\mathcal{B}(\mathbb{X})$. Note that a measurable set-valued function is also weakly measurable (cf. \cite[Section 18.1, Lemma 18.2]{Aliprantis2006book}). Then, by applying the Kuratowski and Ryll-Nardzewski measurable selection theorem (cf. \cite[Section 18.3, Theorem 18.13]{Aliprantis2006book}), the set-valued function $x\mapsto\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x) $ has a $\mathcal{B}(\mathbb{X})$-$\mathcal{B}(\Lambda)$ measurable selector. In view of Lemma \ref{lem:sigmaAlgBE}, such a selector is also $\mathcal{B}(\mathbb{X})$-$\mathcal{E}(\Lambda)$ measurable.
\textbf{(c)} Combining Lemma \ref{lem:GBasic} (a) with \eqref{eq:Jdef} and \eqref{eq:JstarDef}, we obtain $J^{*}_{t,T}(x) \le J^{\mathfrak{p}}_{t,T}(x)$ for any $t\le T$, $x\in\mathbb{X}$ and $\mathfrak{p}\in\Pi$. Then by Lemma \ref{lem:JpInftyBasic} and Lemma \ref{lem:JstarInftyUnifConv}, we have $J^{*}_{t,\infty}(x) \le J^{\mathfrak{p}}_{t,\infty}(x)$ for any $t\in\mathbb{N}$, $x\in\mathbb{X}$ and $\mathfrak{p}\in\Pi$. On the other hand, note that by (a) and (b) we have
\begin{align*}
J^*_{t,\infty}(x) = S_{t} J^{*}_{t+1,\infty}(x) = G^{\pi^*_t(x)}_t J^{*}_{t+1,\infty}(x),\quad x\in\mathbb{X}.
\end{align*}
It follows from Proposition \ref{prop:PolicyEval} that $J^{\mathfrak{p}^*}_{t,\infty} = J^{*}_{t,\infty}$ for $t\in\mathbb{N}$. Consequently, we have that $J^{\mathfrak{p}^*}_{t,\infty} = J^{*}_{t,\infty} = \inf_{\mathfrak{p}\in\Pi} J^{\mathfrak{p}}_{t,\infty}$ for $t\in\mathbb{N}$. Finally, this together with \eqref{eq:H0Def}, \eqref{eq:rhoJEquiv} and \eqref{eq:JHJ} implies $J^{\mathfrak{p}^*}_{0,\infty} = H_0 J^{*}_{1,\infty} = \inf_{\mathfrak{p}\in\Pi}\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p}))$.
\end{proof}
Next, we provide a $Q$-learning version of Theorem \ref{thm:DPP}. Let $Q_t(x,\lambda):= G^{\lambda}_t J^*_{t+1,\infty}(x)$ for $(x,\lambda)\in\mathbb{X}\times\Lambda$. Note that $Q_t$ is $\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda)$-$\mathcal{B}(\mathbb{R})$ measurable due to Lemma \ref{lem:GMeasurability}. For bounded $u:(\mathbb{X}\times\Lambda,\mathcal{B}(\mathbb{X})\otimes\mathcal{E}(\Lambda))\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ such that $x\mapsto\inf_{\lambda\in\varpi_t(x)} u(x,\lambda)\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$, we define $T_t u(x,\lambda) := [G^{\lambda}_t \inf_{\zeta\in\varpi_t(\cdot)} u(\,\cdot\,,\zeta)](x)$.
\begin{corollary}\label{cor:QDPP}
Under Assumption \ref{asmp:Main}, the following hold:
\begin{itemize}
\item[(a)] For any $t\in\mathbb{N}$, $\inf_{\lambda\in\varpi_t(x)} Q_t(x,\lambda) = J^*_{t,\infty}(x)$ for $x\in\mathbb{X}$ and $Q_t=T_tQ_{t+1}$. Moreover, for any $(Q'_t)_{t\in\mathbb{N}}$ satisfying $\sup_{t\in\mathbb{N}}\|Q'_t\|_\infty<\infty$, $\inf_{\zeta\in\varpi_t(\cdot)} Q'_{t+1}(\,\cdot\,,\zeta)\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ and $Q'_t=T_tQ'_{t+1}$ for $t\in\mathbb{N}$, we have $(Q'_t)_{t\in\mathbb{N}}=(Q_t)_{t\in\mathbb{N}}$.
\item[(b)] For any $t\in\mathbb{N}$, there is a $\pi^*_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\Lambda,\mathcal{E}(\Lambda))$ such that
\begin{align*}
\pi^*_t(x) \in \argmin_{\lambda\in\varpi_t(x)} Q_t(x,\lambda),\quad x\in\mathbb{X}.
\end{align*}
\item[(c)] If $\mathfrak{p}^*=(\pi^*_t)_{t\in\mathbb{N}}$ satisfies $\pi^*_t(x) \in \argmin_{\lambda\in\varpi_t(x)} Q_t(x,\lambda)$ for $x\in\mathbb{X}$ and $t\in\mathbb{N}$, then $\mathfrak{p}^*$ is an optimal Markovian policy.
\end{itemize}
\end{corollary}
\begin{proof}
\textbf{(a)} It follows immediately from the definition of $Q_t$ and Theorem \ref{thm:DPP} (a) that $\inf_{\zeta\in\varpi_t(x)} Q_t(x,\zeta) = S_t J^*_{t+1,\infty}(x) = J^*_{t,\infty}(x)$ for $x\in\mathbb{X}$. In addition, $J^*_{t,\infty}\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$ due to Lemma \ref{lem:JstarInftyLSC}. Consequently, we have
\begin{align*}
Q_t(x,\lambda) = \left[G^{\lambda}_t \inf_{\zeta\in\varpi_t(\cdot)} Q_{t+1}(\,\cdot\,,\zeta)\right](x) = T_t Q_{t+1} (x,\lambda).
\end{align*}
Next, note that by Lemma \ref{lem:GContr},
\begin{align*}
\left\|Q_{t}-Q'_{t}\right\|_\infty
&= \left\|T_t Q_{t+1} - T_t Q'_{t+1}\right\|_\infty \\
&= \sup_{(x,\lambda)\in\mathbb{X}\times\Lambda}\left|\left[G^{\lambda}_t \inf_{\zeta\in\varpi_t(\cdot)} Q_{t+1}(\,\cdot\,,\zeta)\right](x)-\left[G^{\lambda}_t \inf_{\zeta\in\varpi_t(\cdot)} Q'_{t+1}(\,\cdot\,,\zeta)\right](x)\right|\\
&\le \gamma \sup_{x\in\mathbb{X}} \left|\inf_{\zeta\in\varpi_t(x)} Q_{t+1}(x,\zeta)-\inf_{\zeta\in\varpi_t(x)} Q'_{t+1}(x,\zeta)\right| \le \gamma\|Q_{t+1}-Q'_{t+1}\|_\infty.
\end{align*}
Therefore, $\sup_{t\in\mathbb{N}}\left\|Q_{t}-Q'_{t}\right\|_\infty \le \gamma\sup_{t\in\mathbb{N}}\|Q_{t+1}-Q'_{t+1}\|_\infty \le \gamma \sup_{t\in\mathbb{N}}\|Q_{t}-Q'_{t}\|_\infty$. Notice that $\sup_{t\in\mathbb{N}}\|Q_{t}-Q'_{t}\|_\infty<\infty$ due to Lemma \ref{lem:JstarInftyUnifConv} and the assumed boundedness of $(Q'_t)_{t\in\mathbb{N}}$. We conclude that $(Q'_t)_{t\in\mathbb{N}}=(Q_t)_{t\in\mathbb{N}}$. \\
\textbf{(b)} This follows from Theorem \ref{thm:DPP} (b) and the definition of $Q_t$.\\
\textbf{(c)} Note that for $t\in\mathbb{N}$, $\pi^*_t$ satisfies \eqref{eq:pistar} due to the definition of $Q_t$. Invoking Theorem \ref{thm:DPP} (c) completes the proof.
\end{proof}
\begin{remark}
When the transition kernel $P$, the action domain $\mathcal{A}_t$ and the family of probability measures $\mathcal{M}_t$ are constant in $t\in\mathbb{N}$, it follows immediately from \eqref{eq:JstarDef} that $S_t$ is constant in $t\in\mathbb{N}$ and $J^*_{t,T}=J^*_{t+1,T+1}$ for $t\in\mathbb{N}$ and $T\ge t$. Thus, by Lemma \ref{lem:JstarInftyUnifConv}, $J^*_{t,\infty}$ is also constant in $t$. The stationary versions of Theorem \ref{thm:DPP} and Corollary \ref{cor:QDPP} follow immediately.
\end{remark}
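In the stationary setting of the preceding remark, Theorem \ref{thm:DPP} and Corollary \ref{cor:QDPP} translate into a standard value-iteration scheme: repeatedly apply the (now time-invariant) operator $S$ and read off a minimizing action in every state. The Python sketch below does this for a finite model in which $\varpi(x)$ consists of the Dirac measures on a finite action set and $\mathcal{M}(x)$ is a single Dirac measure at a fixed level $\kappa$; the model data are again arbitrary placeholders, and the sketch is only meant to illustrate the structure of the iteration.
\begin{verbatim}
import numpy as np

def avar(values, probs, kappa):
    return min(q + np.sum(probs * np.maximum(values - q, 0.0)) / kappa
               for q in values)

def value_iteration(P, C, kappa=0.1, gamma=0.95, tol=1e-10):
    n_states, n_actions = P.shape[0], P.shape[1]
    J = np.zeros(n_states)                     # approximates J^*
    while True:
        Q = np.empty((n_states, n_actions))    # Q(x, delta_a) = G^{delta_a} J (x)
        for x in range(n_states):
            for a in range(n_actions):
                Q[x, a] = avar(C[x, a, :] + gamma * J, P[x, a, :], kappa)
        J_new = Q.min(axis=1)                  # S J: infimum over Dirac actions
        if np.max(np.abs(J_new - J)) < tol:
            return J_new, Q.argmin(axis=1)     # value and a greedy policy pi^*
        J = J_new

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(3), size=(3, 2))     # P[x, a, :] sums to one
C = rng.uniform(0.0, 1.0, size=(3, 2, 3))
J_star, pi_star = value_iteration(P, C)
print(J_star, pi_star)
\end{verbatim}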
Below we argue that the optimal Markovian action is no worse than any other $\mathbb{G}$-adapted action. First, recall from Theorem \ref{thm:DPP} that the optimal Markovian policy is attainable.
\begin{proposition}\label{prop:MarkovControl}
Under Assumption \ref{asmp:Main}, we have that
$$\inf_{(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})\in\Psi}\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = \inf_{(\mathfrak{X},\mathfrak{A})\in\Psi}\rho^\mathfrak{X}_{0,\infty}(\mathfrak{C}(\mathfrak{X},\mathfrak{A})).$$
\end{proposition}
\begin{proof}
Fix $(\mathfrak{X},\mathfrak{A})\in\Psi$ for the remainder of the proof. Let $P^{A_t|\mathscr{U}_t}$ be the regular version of $\mathbb{P}(A_t\in\,\cdot\,|\mathscr{U}_t)$. By \eqref{eq:GMarkov}, \eqref{eq:transkernel}, Lemma \ref{lem:IntfMeasurability} and Lemma \ref{lem:CondExpnXY}, for any $t\in\mathbb{N}$ and $B\in\mathcal{B}(\mathbb{R})$,
\begin{align}\label{eq:RegCondDistCX}
\mathbb{E}\big(\1_B(C_t(X_t,A_t,X_{t+1}))|\mathscr{U}_t\big) &= \mathbb{E}\left(\left.\mathbb{E}\big(\left.\1_B(C_t(X_t,A_t,X_{t+1}))\,\right|\,\sigma(X_t)\vee\sigma(A_t)\big)\,\right|\,\mathscr{U}_t\right)
\nonumber\\
&= \mathbb{E}\left(\int_{\mathbb{X}} \1_B(C_t(X_t,A_t,y)) P(t,X_t,A_t,\dif y)\,\big|\,\mathscr{U}_t\right)
\nonumber\\
&= \int_{\mathbb{A}} \int_{\mathbb{X}} \1_B(C_t(X_t,a,y)) P(t,X_t,a,\dif y)\,P^{A_t|\mathscr{U}_t}(\dif a),\quad\mathbb{P}-a.s..
\end{align}
Hence, $\int_{\mathbb{A}} \int_{\mathbb{X}} \1_\cdot(C_t(X_t,a,y)) P(t,X_t,a,\dif y)\,P^{A_t|\mathscr{U}_t}(\dif a)$ is a regular version of $\mathbb{P}\big(C_t(X_t,A_t,X_{t+1})\in\,\cdot\,|\mathscr{U}_t\big)$. For each $\omega\in\Omega$, by simple function approximation from below (cf. \cite[Section 4.7, Theorem 4.36]{Aliprantis2006book}) and monotone convergence, the corresponding Lebesgue integral for nonnegative $f:(\mathbb{R},\mathcal{B}(\mathbb{R}))\to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ equals
\begin{align*}
\int_{\mathbb{A}} \int_{\mathbb{X}} f(C_t(X_t,a,y)) P(t,X_t,a,\dif y)\,P^{A_t|\mathscr{U}_t}(\dif a).
\end{align*}
This together with \eqref{eq:rhotTDef} and Proposition \ref{prop:rhoMod} (d) implies that, for any $T\in\mathbb{N}$,
\begin{align*}
&\rho^\mathfrak{X}_{T,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A})) = \nonumber\\
&\quad \sup_{\eta\in\mathcal{M}_T(X_T)} \left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_T(X_T,a,y)-q\right)_+ P(T,X_T,a,\dif y)P^{A_T|\mathscr{U}_T}(\dif a) \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\qquad\qquad + \eta(0) \cdot \left. \inf\left\{r\in\mathbb{R}:\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_T(X_T,a,y))\,P(T,X_T,a,\dif y)P^{A_T|\mathscr{U}_T}(\dif a) = 0\right\} \right\}.
\end{align*}
Note that $P^{A_T|\mathscr{U}_T}(B)|_{B=\mathcal{A}_T(X_T)}=\int_{\mathbb{A}}\1_{\mathcal{A}_T(X_T)}(a)\, P^{A_T|\mathscr{U}_T}(\dif a)=1,\,\mathbb{P}-a.s.$ due to the definition of $\Psi$ and Lemma \ref{lem:CondExpnXY} (we invoke \cite[Section 18.1, Theorem 18.6]{Aliprantis2006book} for the joint measurability of $\1_{\mathcal{A}_T(x)}(a)$ as a function of $(x,a)$). It follows from \eqref{eq:JstarDef} that $\rho^\mathfrak{X}_{T,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge J^*_{T,T}(X_T),\,\mathbb{P}-a.s.$. We next proceed to pull back the time index by induction. Suppose for some $0<t<T$ we have $\rho^\mathfrak{X}_{t+1,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge J^*_{t+1,T}(X_{t+1}),\,\mathbb{P}-a.s.$. Then, by \eqref{eq:rhotTDef} and Proposition \ref{prop:CondRiskMeasure} (a), $\rho^\mathfrak{X}_{t,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge\rho^\mathfrak{X}_{t}\left(C_{t}(X_t,A_t,X_{t+1})+\gamma J^*_{t+1,T}(X_{t+1})\right)$. With similar reasoning as before, we obtain
\begin{align*}
&\rho^\mathfrak{X}_{t,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A})) \ge \nonumber\\
&\quad \sup_{\eta\in\mathcal{M}_t(X_t)} \left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ \resizebox{0.5\hsize}{!}{$ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{X}}\left(C_t(X_t,a,y) + \gamma J^*_{t+1,T}(y)-q\right)_+ P(t,X_t,a,\dif y)P^{A_{t}|\mathscr{U}_{t}}(\dif a) $} \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\qquad\qquad + \eta(0) \cdot \left. \inf\left\{r\in\mathbb{R}: \resizebox{0.5\hsize}{!}{$ \int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(X_t,a,y) + \gamma J^*_{t+1,T}(y))\,P(t,X_t,a,\dif y)P^{A_{t}|\mathscr{U}_{t}}(\dif a) = 0 $} \right\} \right\},\,\mathbb{P}-a.s..
\end{align*}
By \eqref{eq:JstarDef} and the fact that $P^{A_t|\mathscr{U}_t}(B)|_{B=\mathcal{A}_t(X_t)}=1,\,\mathbb{P}-a.s.$ again, we have $\rho^\mathfrak{X}_{t,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge J^*_{t,T}(X_{t})$, $\mathbb{P}-a.s.$. Consequently, $\rho^\mathfrak{X}_{1,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge J^*_{1,T}(X_{1}),\,\mathbb{P}-a.s.$. With similar reasoning as before, we obtain $\rho^\mathfrak{X}_{0,T}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))\ge J^*_{0,T}$. Finally, in view of Lemma \ref{lem:rhotTasConv}, \eqref{eq:rhoJEquiv} and Theorem \ref{thm:DPP} (c), we have $\inf_{(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})\in\Psi}\rho^{\mathfrak{X}^\mathfrak{p}}_{0,\infty}(\mathfrak{C}(\mathfrak{X}^\mathfrak{p},\mathfrak{A}^\mathfrak{p})) = J^*_{0,\infty} \le \rho^\mathfrak{X}_{0,\infty}(\mathfrak{C}(\mathfrak{X},\mathfrak{A}))$.
\end{proof}
As a final key result, we provide a sufficient condition for when deterministic actions can attain the optimum. We first introduce a technical lemma.
\begin{lemma}\label{lem:SpetralG}
Let $v\in\ell^\infty(\mathbb{X},\mathcal{B}(\mathbb{X}))$, $\lambda^1,\lambda^2\in\Lambda$ and $\beta\in(0,1)$. If at a point $x\in\mathbb{X}$, $\mathcal{M}_t(x)$ is a singleton, then
\begin{align*}
G^{\beta\lambda^1+(1-\beta)\lambda^2}_tv(x) \ge \beta G^{\lambda^1}_tv(x) + (1-\beta) G^{\lambda^2}_tv(x).
\end{align*}
\end{lemma}
\begin{proof}
The statement follows immediately from the observations below:
\begin{align*}
&\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)(\beta\lambda^1+(1-\beta)\lambda^2)(\dif a) \right\}\\
&\quad\ge \beta\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda^1(\dif a) \right\}\\
&\qquad + (1-\beta)\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_\mathbb{A}\int_{\mathbb{R}\times\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y)\lambda^2(\dif a) \right\},
\end{align*}
and
$\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)(\beta\lambda^1+(1-\beta)\lambda^2)(\dif a) = 0$
implies that
\begin{equation*}
\int_\mathbb{A}\int_{\mathbb{X}}\1_{(r,\infty)}(C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y)\lambda^i(\dif a) = 0, \quad i=1,2.
\end{equation*}
\end{proof}
\begin{proposition}\label{prop:wpSingleton}
Under Assumption \ref{asmp:Main}, if $\mathcal{M}_t(x)$ is a singleton for all $x\in\mathbb{X}$, then there is $\pi^\delta_t:(\mathbb{X},\mathcal{B}(\mathbb{X}))\to(\Lambda,\mathcal{E}(\Lambda))$ such that $\pi^\delta_t(x)$ is a Dirac measure for all $x\in\mathbb{X}$ and $$\pi^\delta_t(x)\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x).$$
\end{proposition}
\begin{proof}
We will fix $t\in\mathbb{N}$ for the rest of the proof. We first argue that for any $x\in\mathbb{X}$ the set $D(x):=\set{a\in\mathcal{A}_t(x):G^{\delta_a}_t J^{*}_{t+1,\infty}(x)=\min_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)}$ is not empty and closed. To this end, notice that $\mathcal{A}_t(x)$ is compact (due to Assumption \ref{asmp:Main} (ii)) thus totally bounded (cf. \cite[Section 3.7, Theorem 3.28]{Aliprantis2006book}), i.e., for any $\varepsilon>0$ there is $n\in\mathbb{N}$ and $(A^\varepsilon_k)_{k=1}^n$ such that $A^\varepsilon_k\subseteq\mathbb{A}$ is an $\varepsilon$-open ball and $\mathcal{A}_t(x)\subseteq\bigcup_{k=1}^n A^\varepsilon_k$. Let $\lambda^*\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$.
We claim that if $\lambda^*(A^\varepsilon_k)>0$, then
\begin{align}\label{eq:lambdaAargmin}
\lambda^*_{A^\varepsilon_k}:=\frac{\lambda^*(A^\varepsilon_k\cap\,\cdot\,)}{\lambda^*(A^\varepsilon_k)}\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x).
\end{align}
To see this, notice that if $\lambda^*((A^\varepsilon_k)^c)=0$, then $\lambda^*=\lambda^*_{A^\varepsilon_k}$ and \eqref{eq:lambdaAargmin} follows immediately. If $\lambda^*((A^\varepsilon_k)^c)>0$, then $\lambda^*_{(A^\varepsilon_k)^c}$ is well defined and $\lambda^*=\lambda^*(A^\varepsilon_k)\,\lambda^*_{A^\varepsilon_k}+\lambda^*((A^\varepsilon_k)^c)\,\lambda^*_{(A^\varepsilon_k)^c}$, and \eqref{eq:lambdaAargmin} follows from Lemma \ref{lem:SpetralG} and the hypothesis that $\lambda^*\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$. Since $(A^\varepsilon_k)_{k=1}^n$ covers $\mathcal{A}_t(x)$, there exists at least one $k\in\set{1,...,n}$ such that $\lambda^*(A^\varepsilon_k)>0$. This together with \eqref{eq:lambdaAargmin} implies that for any $m\in\mathbb{N}$, there is $a^m\in\mathcal{A}_t(x)$ and $\lambda^m\in\varpi_t(x)$ such that $\supp\lambda^m\subseteq \mathcal{A}_t(x)\cap \overline B_{\frac1m}(a^m)$ and $\lambda^m\in\argmin_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x)$, where $\overline B_{\frac1m}(a^m)$ is the closed $\frac1m$-ball centered at $a^m$. As $\mathcal{A}_t(x)$ is compact, without loss of generality (passing to a subsequence), we set $a^0:=\lim_{m\to\infty}a^m$. Let $f\in C_b(\mathbb{A})$; note $f$ is uniformly continuous on $\mathcal{A}_t(x)$. Then, we have
\begin{align*}
\left|\int_\mathbb{A} f(a)\lambda^m(\dif a) - f(a^0)\right| &= \left|\int_{\mathcal{A}_t(x)\cap\overline B_{\frac1m}(a^m)} f(a)\lambda^m(\dif a) - f(a^0)\right|
\\
&\le \sup_{a\in \mathcal{A}_t(x)\cap\overline B_{\frac1m}(a^m)}|f(a)-f(a^0)|
\\
&\le \sup_{a\in \mathcal{A}_t(x)\cap\overline B_{\frac1m}(a^m)}|f(a)-f(a^m)| + |f(a^m)-f(a^0)| \xrightarrow[m\to\infty]{} 0,
\end{align*}
i.e., $(\lambda^m)_{m\in\mathbb{N}}$ converges to $\delta_{a^0}$ weakly. By Lemma \ref{lem:GvLSC} and Lemma \ref{lem:JstarInftyLSC}, we obtain
\begin{align*}
\min_{\lambda\in\varpi_t(x)}G^{\lambda}_t J^{*}_{t+1,\infty}(x) = \liminf_{m\to\infty} G^{\lambda^m}_t J^*_{t+1,\infty}(x) \ge G^{\delta_{a^0}}_t J^*_{t+1,\infty}(x),
\end{align*}
and thus $\delta_{a^0}\in D(x)$. The closedness of $D(x)$ follows from Lemma \ref{lem:GvLSC} and Lemma \ref{lem:JstarInftyLSC} and the fact that $(\delta_{a^\ell})_{\ell\in\mathbb{N}}$ converges weakly if $(a^\ell)_{\ell\in\mathbb{N}}\subset\mathbb{A}$ converges.
Now that we have shown $D(x)$ is non-empty and closed for any $x\in\mathbb{X}$, the existence of $\pi^\delta_t$ follows from an analogous argument as in the proof of Theorem \ref{thm:DPP} (b).
\end{proof}
\begin{remark}\label{rem:RandHeur}
Let us fix $t\in\mathbb{N}$ and $x\in\mathbb{X}$. Here we provide some discussion on the case when $\mathcal{M}_t(x)$ is not a singleton from the perspective of two-player games. We may treat $\eta\in\mathcal{M}_t(x)$ as an action controlled by another player who aims to maximize the score. Let us name the players as player-$\lambda$ and player-$\eta$, respectively. From the definition of $G^\lambda_t v$, the decision of player-$\eta$ is made without the realization of player-$\lambda$'s action. This incites player-$\lambda$ to use a randomized policy to take advantage of player-$\eta$'s reduced information.
On the contrary, if player-$\eta$ was allowed to make decisions with the additional information on the realization of player-$\lambda$'s action, the corresponding score would be
\begin{align*}
&\int_\mathbb{A}\sup_{\eta\in\mathcal{M}_t(x)} \left\{ \int_{(0,1]}\inf_{q\in\mathbb{R}}\left\{ q + \kappa^{-1}\int_{\mathbb{X}}\left(C_t(x,a,y) + \gamma v(y)-q\right)_+ P(t,x,a,\dif y) \right\} \, \eta(\dif\kappa) \right.\nonumber\\
&\qquad\qquad + \eta(0) \cdot \left. \inf\left\{r\in\mathbb{R}:\int_{\mathbb{X}}\1_{(r,\infty)}( C_t(x,a,y)+\gamma v(y))\,P(t,x,a,\dif y) = 0\right\} \right\}\,\lambda(\dif a),
\end{align*}
which, with a similar argument leading to Proposition \ref{prop:wpSingleton}, guarantees the sufficiency of deterministic actions. Finally, unlike player-$\lambda$, player-$\eta$ does not benefit from further randomization over the set $\mathcal{M}_t(x)$, again due to a similar argument leading to Proposition \ref{prop:wpSingleton}.
\end{remark}
\bibliographystyle{siamplain}
\subsection{Theory}
In what follows, we perform the calculation of the depletion, considering that the STED laser has a noisy component. We use a two-step fluorescence mechanism that only takes into account the ground electronic state $\mathrm{S_{0}}$ and the first excited electronic state $\mathrm{S_{1}}$ of the system (see Fig.~\ref{Jablonski}).
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figure1.pdf}
\caption{Simplified Jablonski energy diagrams showing the processes of excitation, fluorescence and stimulated emission. The fluorophores are excited by an excitation laser. Then, the excited molecules relax to the ground state either by spontaneous emission or by stimulated emission induced by a second laser.}
\label{Jablonski}
\end{figure}
The fluorescence signal is proportional to $\mathrm{N_{1} (t)}$, which is obtained by solving the following set of stochastic differential equations (SDEs), where $\mathrm{N_{0} (t)}$ and $\mathrm{N_{1} (t)}$ are the corresponding ground- and excited-state normalized populations, respectively.
\begin{equation}
\begin{split}
\mathrm{ \frac{dN_0(t)}{dt}}=-\mathrm{k_{e} \cdot N_0(t)+k_{s}(t)\cdot N_1(t)+k_{f} \cdot N_1 (t) } \\
\mathrm{ \frac{dN_1(t)}{dt}}=-\mathrm{ k_{f} \cdot N_1(t)-k_{s}(t)\cdot N_1(t)+k_{e} \cdot N_0 (t) }
\end{split}
\label{eqsys}
\end{equation}
The parameter $\mathrm{ k_{e}}$ represents the excitation rate from $\mathrm{S_0}$ to $\mathrm{S_1}$, while $\mathrm{k_{f}}$ represents the rate of photon emission in the relaxation process from $\mathrm{S_1}$. For simplicity, these two quantities are assumed to be independent of time. The time-dependent function $\mathrm{ k_{s}(t)}$ is the rate of stimulated emission induced by the depletion laser. Given the linear relationship between $\mathrm{ k_s(t)}$ and the intensity of the depletion laser, $\mathrm{ I_s(t)} $ ($\mathrm{ k_s(t)}=\sigma_\mathrm{s} \mathrm{I_s(t)}$), they share the same stochastic properties. The proportionality constant $\sigma_\mathrm{s}$ is the stimulated-emission cross-section. The intensity of the STED laser can be decomposed into a fluctuating component $\delta\mathrm{I_{s}}(t)$ around the time-independent average intensity $\mathrm{\langle I_{s} \rangle} $. The same decomposition holds for $\mathrm{ k_{s}(t)}$, namely $\mathrm{ k_{s}(t)} = \langle \mathrm{ k_{s} } \rangle + \delta \mathrm{k_{s}}(\mathrm{t})$. This, together with the fact that the sum of the populations of $\mathrm{S_{0}}$ and $\mathrm{S_{1}}$ is normalized ($\mathrm{N_0(t)+N_1(t)=1}$), allows the SDE to be simplified as:
\begin{equation}
\mathrm{\frac{dN_1(t)}{dt}}= -\left[ \mathrm{ k_{f}+ \left\langle k_{s} \right\rangle +k_{e} } \right] \cdot \mathrm{ N_1(t) + k_{e} } -\delta \mathrm{k_{s}(t) } \cdot \mathrm{N_1(t)}
\label{eqs02}
\end{equation}
This equation is a multiplicative-noise stochastic differential equation \cite{van1997nonequilibrium,garcia2012noise}, and its solution is the cornerstone of this study. There are mainly two different ways to proceed, based on the interpretations of Ito and Stratonovich. The two are entirely equivalent, the main difference being that the Stratonovich integrals are defined so that the chain rule of ordinary calculus holds. Therefore, we use the Stratonovich interpretation in what follows.
Equation \eqref{eqs02} can be rewritten as
\begin{equation}
\mathrm{ \frac{dN_1(\tilde{t})}{d\tilde{t}} }= -\left[1+ \frac{\delta \mathrm{ k_{s} (\tilde{t}) } }{ \mathrm{ K } }\right] \mathrm{ N_1(\tilde{t}) } + \mathrm{ \frac{k_{e}}{K} }
\label{eqa}
\end{equation}
where $ \mathrm{ \tilde{t}= K \cdot t }$ is a dimensionless variable and $ \mathrm{ K = k_{f}+\langle k_{s}\rangle+k_{e} } $ represents the sum of all the time-independent rates. From here on, we will omit the tilde mark over "$\mathrm{ t }$" to avoid cumbersome notation.
To proceed, we need to specify the nature of the noise $\delta \mathrm{k_{s}}$, and in our case, we will consider the well known colored Ornstein-Uhlenbeck noise. This choice enables us to simultaneously study the effects of noise variance and correlation time on the depletion value. The noise $\delta \mathrm{k_{s}}$ can be generated through the Ornstein–Uhlenbeck equation
\begin{equation}
\mathrm{ \dfrac{d}{dt} } \delta \mathrm{ k_{s}(t) } = -\lambda \cdot \delta \mathrm{ k_{s} \left( t \right) } + \lambda \cdot \beta \mathrm{ \left( t \right) }
\label{eqa18}
\end{equation}
where $\beta \mathrm{ (t) } $ is a white noise with auto-correlation function $\left<\beta \mathrm{ (t)}\beta \mathrm{(t')}\right>=\Delta \cdot \delta \mathrm{(t-t')}$ and $\Delta$ its variance. The parameter $\lambda = \tau^{-1}$ represents the inverse of the characteristic correlation time. In addition, we simply consider $\delta \mathrm{ k_{s} }(0)=0$ as the initial condition.
This particular model proposed for $\delta\mathrm{k_{s}(t)}$ guarantees that in the limit $t\rightarrow\infty$ and $\lambda\rightarrow\infty$ (the Gaussian white-noise limit), $\left<\delta\mathrm{k_{s}(t)}\delta\mathrm{k_{s}(t')}\right>=\Delta \cdot \delta \mathrm{ (t-t')}$.
This means that, by varying the characteristic correlation time in Eq.~\eqref{eqa18}, we can interpolate between the Gaussian white-noise limit and a highly time-correlated Ornstein-Uhlenbeck noise. For the latter process, a direct calculation \cite{gardiner1985handbook} allows us to obtain the auto-correlation function:
\begin{eqnarray}
\nonumber
\left\langle \delta \mathrm{k_{s}} \mathrm{ \left( t \right) } \cdot \delta \mathrm{ k_{s}} \mathrm{ \left( t' \right) } \right\rangle &=& \frac{\Delta \cdot \lambda}{2} \mathrm{ exp } \left[ - \lambda \cdot \vert \mathrm{t-t' } \vert \right] \\
&-& \frac{\Delta \cdot \lambda}{2} \mathrm{ exp } \left[ - \lambda \cdot \left( \mathrm{t+t' } \right) \right]
\label{eqa20}
\end{eqnarray}
which in the long time limit ($\mathrm{ t, t'} \gg \lambda^{-1}$) behave as:
\begin{equation}
\left\langle \delta \mathrm{ k_{s}} \mathrm{ \left( t \right) } \cdot \delta \mathrm{ k_{s}} \mathrm{ \left( t' \right) } \right\rangle = \frac{\Delta \lambda}{2} \cdot \mathrm{ exp } \left[- \lambda \cdot \vert \mathrm{t-t' } \vert \right].
\label{eqa21}
\end{equation}
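As a quick sanity check of \eqref{eqa21}, one can sample the Ornstein-Uhlenbeck process with its exact one-step update and compare the empirical stationary autocovariance with $\frac{\Delta\lambda}{2}e^{-\lambda|\mathrm{t-t'}|}$. The Python sketch below does so; all parameter values are arbitrary and chosen only for illustration.
\begin{verbatim}
import numpy as np

lam, Delta, dt, n_steps = 2.0, 0.5, 1e-3, 500_000
rng = np.random.default_rng(42)

# Exact update of the OU process d(dk) = -lam*dk*dt + lam*sqrt(Delta)*dW
decay = np.exp(-lam * dt)
step_std = np.sqrt(0.5 * Delta * lam * (1.0 - decay**2))
dk = np.empty(n_steps)
dk[0] = 0.0                                   # initial condition dk_s(0) = 0
for n in range(n_steps - 1):
    dk[n + 1] = decay * dk[n] + step_std * rng.standard_normal()

# Discard the transient, then compare autocovariances at a few lags
x = dk[n_steps // 10:]
for lag in (0, 100, 500, 1000):
    emp = np.mean(x[: -lag or None] * x[lag:])
    theory = 0.5 * Delta * lam * np.exp(-lam * lag * dt)
    print(f"lag {lag*dt:.3f}: empirical {emp:.4f}, theory {theory:.4f}")
\end{verbatim}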
The knowledge of the statistical properties of $\delta \mathrm{ k_s(t)}$ is crucial for the determination of the average population of our model, $\langle \mathrm{ N_1(t)}\rangle$. This quantity will be used to calculate the depletion, which is defined as:
\begin{equation}
\eta =\frac{\langle \mathrm{N}_1(+\infty,\langle k_s\rangle)\rangle}{\langle \mathrm{N}_1(+\infty,0)\rangle }.
\label{eqa18a}
\end{equation}
where $\langle \mathrm{N}_1(+\infty,\langle k_s\rangle)\rangle$ corresponds to the average population at $\mathrm{t}\rightarrow\infty$, after taking the statistical average over realisations of $\delta k_s(t)$, and the normalization factor $\langle \mathrm{N}_1(+\infty,0)\rangle$ corresponds to the equilibrium population obtained by setting $\mathrm{k_{s}=0}$. The long time limit $ \mathrm{t} \rightarrow \infty$ is considered because, experimentally, the depletion is calculated assuming a steady-state condition. Physically, $\eta$ quantifies the probability that a fluorophore remains in the excited on-state despite the STED beam; the lower $\eta$, the more efficiently fluorophores are forced to their ground off-state for a given STED laser power.
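Before turning to the analytical treatment, we note that the definition \eqref{eqa18a} can also be estimated directly by Monte Carlo: simulate \eqref{eqs02} driven by Ornstein-Uhlenbeck realizations of $\delta \mathrm{k_{s}}(t)$, average $\mathrm{N_1}$ over trajectories at a late time, and divide by the noise-free steady-state value $\mathrm{k_{e}/(k_{e}+k_{f})}$. A minimal Python sketch is given below; the rate constants are arbitrary illustrative values, and a simple Euler scheme is adequate here because the noise enters through the continuous process $\delta \mathrm{k_{s}}(t)$ rather than directly through a Wiener increment.
\begin{verbatim}
import numpy as np

# Illustrative rate constants (arbitrary units), not fitted to any experiment.
k_e, k_f, ks_mean = 1.0, 4.0, 40.0      # excitation, fluorescence, mean STED rate
lam, alpha = 50.0, 0.4                  # inverse correlation time, relative std
Delta = 2.0 * (alpha * ks_mean) ** 2 / lam   # stationary std of dk_s = alpha*<k_s>

dt, t_end, n_traj = 1e-4, 5.0, 2000
rng = np.random.default_rng(0)
decay = np.exp(-lam * dt)
ou_std = np.sqrt(0.5 * Delta * lam * (1.0 - decay**2))

N1 = np.zeros(n_traj)
dk = np.zeros(n_traj)
for _ in range(int(t_end / dt)):
    dk = decay * dk + ou_std * rng.standard_normal(n_traj)
    total_rate = k_f + ks_mean + k_e + dk        # k_f + k_s(t) + k_e
    N1 += dt * (k_e - total_rate * N1)           # Euler step of the population equation

eta = N1.mean() / (k_e / (k_e + k_f))            # noise-free steady state k_e/(k_e+k_f)
print("estimated depletion eta =", eta)
\end{verbatim}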
To calculate the average value of $\langle \mathrm{ N_1(t)}\rangle$, we proceed first with the solution of \eqref{eqa}, using standard methods for solving linear differential equations \cite{LevElsgolts}. In a second step, we take the statistical average of the solution, which yields
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(t) \right\rangle } &=& \mathrm{ N_{1} \left( 0 \right) \cdot exp \left(- t \right) } \mathrm{I(t,0)} \\
&+& \mathrm{ \dfrac{k_{e}}{K} }\int_{0}^{\mathrm{t}} \mathrm{dt'} \mathrm{exp \left[ - (t-t') \right] } \mathrm{ I(t,t')},
\label{eqa24}
\end{eqnarray}
where $\mathrm{ N_{1} \left( 0 \right)}$ is the initial population and the function $\mathrm{ I(t,t')}$ is given by the general expression
\begin{equation}
\mathrm{ I(t,t') } = \left\langle \mathrm{ exp } \left[ - \int_{\mathrm{t'}}^{\mathrm{t}} \dfrac{\delta \mathrm{ k_{s} (t_{2}) }}{\mathrm{ K }} \mathrm{ dt_{2}} \right] \right\rangle
\end{equation}
To continue with the determination of $\mathrm{ I(t,t')}$, we can now expand the exponential into its power series and
then reduce the corresponding averages to the $\mathrm{2n}$-point correlation functions of $\delta \mathrm{k_s(t)}$, evaluated at different times. Since we assume that $\delta \mathrm{k_s(t)}$ is a Gaussian random variable, we can always write the $\mathrm{2n}$-point correlation functions in terms of the two-point correlation function $\langle\delta \mathrm{k_s(t_1)}\delta \mathrm{k_s(t_2)}\rangle$. The result obtained then allows a resummation of the series \cite{kardar2007statistical} after the formal integration over the time variables, yielding
\begin{eqnarray}
\mathrm{ I(t,t') } &=& \mathrm{exp} \left[ \dfrac{1}{\mathrm{2K^{2}}} \int_{\mathrm{t'}}^{\mathrm{t}} \int_{\mathrm{t'}}^{\mathrm{t}} \left\langle \delta \mathrm{ k_{s}} \mathrm{ \left( t_{1} \right) }\delta \mathrm{ k_{s}} \mathrm{ \left( t_{2} \right) } \right\rangle \mathrm{ dt_{1}dt_{2}} \right].
\label{eqa26}
\end{eqnarray}
If we now take into account the specific form of $\langle \delta \mathrm{ k_{s} } \mathrm{ \left(t\right) } \delta \mathrm{ k_{s}} \mathrm{ \left(t' \right)} \rangle$ given in \eqref{eqa20}, we can calculate exactly the form of $\mathrm{ I(t,t')}$ and consequently the value of $\langle \mathrm{N_1(t)}\rangle$ in the long time limit ($\mathrm{ t} \rightarrow\infty$). In this limit, we can verify that the contribution from the initial conditions from both $\mathrm{N_1(0)}$ and $\delta\mathrm{k_s(0)}$ are negligible for weak enough noise intensities ($\Delta<\mathrm{ 2K^2}$). The described procedure lead us to
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(+\infty) \right\rangle } &=& \mathrm{ \dfrac{k_{e}}{K} } \cdot \mathrm{exp} \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right) \int_{0}^{\infty} \mathrm{exp} \left[ - \left( \mathrm{u} - \dfrac{\Delta}{2\mathrm{K^{2}}} \mathrm{u} \right) \right]\\
& \cdot & \mathrm{exp} \left( \dfrac{\Delta}{2\mathrm{K^{2}} \lambda} e^{-\lambda \mathrm{u}} \right) \mathrm{du}.
\label{eqa30}
\end{eqnarray}
The integral above can be written in terms of Gamma and incomplete Gamma functions in the form:
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(+\infty) \right\rangle } &=& \mathrm{ \dfrac{k_{e}}{K} } \cdot \mathrm{exp} \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right) \cdot \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right)^{\dfrac{-1+\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda}} \\ \nonumber
&&\dfrac{1}{\lambda} \left[ \Gamma \left( \dfrac{1-\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda} \right) - \Gamma \left( \dfrac{1-\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda}, - \dfrac{\Delta}{2\mathrm{K}^{2} \lambda} \right) \right]. \\
\label{eqa31}
\end{eqnarray}
Although this expression is not easy to interpret physically, it allows a straightforward calculation of the depletion, as defined in \eqref{eqa18a}. Additionally, we can analyse the limit cases corresponding to $\lambda\rightarrow0$ or $\Delta\rightarrow0$ and $\lambda\rightarrow\infty$. The first scenario corresponds to the ideal case, in which noise does not play any role, and the second to the case in which the colored noise becomes a white noise due to the reduction of the correlation time. For those cases, simple analytical expressions for the depletion can be obtained, allowing a direct physical interpretation of the results.
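Expression \eqref{eqa30} is also straightforward to evaluate by numerical quadrature, which provides a direct check of the two limiting cases discussed below. A possible implementation (with all rates measured in units of $\mathrm{K}$, i.e. using the dimensionless time of the rescaled equation) is sketched here; the chosen parameter values are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def N1_infinity(k_e, K, Delta, lam):
    # Numerical evaluation of the steady-state average population:
    # (k_e/K) * exp(-a/lam) * Int_0^inf exp(-(1-a)u) * exp((a/lam) e^{-lam u}) du,
    # with a = Delta/(2 K^2) < 1 required for convergence.
    a = Delta / (2.0 * K**2)
    integrand = lambda u: np.exp(-(1.0 - a) * u) * np.exp((a / lam) * np.exp(-lam * u))
    value, _ = quad(integrand, 0.0, np.inf)
    return (k_e / K) * np.exp(-a / lam) * value

k_e, K = 0.1, 1.0
print(N1_infinity(k_e, K, Delta=1e-8, lam=1.0))    # ~ k_e/K (ideal case)
print(N1_infinity(k_e, K, Delta=0.4, lam=200.0))   # ~ white-noise limit
print((k_e / K) / (1.0 - 0.4 / (2.0 * K**2)))      # white-noise prediction
\end{verbatim}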
\subsubsection{Ideal case, $ \lambda\rightarrow0 $ or $\Delta\rightarrow0$}
In this limit, we recover $\langle\mathrm{N_{1}}(+\infty)\rangle=\mathrm{k_{e}/K}$, which leads us to
\begin{equation}
\eta=\left[ 1+\frac{\langle \mathrm{ k_{s}}\rangle}{\mathrm{ k_{f}}+\mathrm{k_{e}}} \right]^{-1}.
\end{equation}
This result can be expressed in terms of the saturation intensity $\mathrm{I_{sat}}=\frac{\mathrm{k_{f}}+\mathrm{k_{e}}}{\sigma_\mathrm{s}}$, a variable that only depends on the fluorescent properties of the molecules. In this way we reach the expression:
\begin{equation}
\eta=\left[ 1+\frac{\langle \mathrm{ I_{s}} \rangle}{\mathrm{ I_{sat}}} \right]^{-1}.
\label{eqafree}
\end{equation}
The above expression is well known in the super-resolution imaging community and has been used in theoretical and experimental contexts to assess the performance of STED microscopy.
\subsubsection{White noise case, $\lambda\rightarrow\infty$ }
Using \eqref{eqa31}, we obtain that the steady-state average population is given by:
\begin{eqnarray}
\mathrm{ \left\langle N_{1}(+\infty) \right\rangle } &=& \mathrm{ \dfrac{k_{e}}{K} } \cdot \left[ 1-\frac{\Delta}{2\mathrm{K}^{2}} \right]^{-1}
\label{eqa33}
\end{eqnarray}
Although this expression was obtained using, as a premise, the Gaussian character of the coloured noise, it can be shown that in the limit of white noise such a result holds even if the noise has a non-Gaussian local distribution. We can now proceed with the calculation of the depletion in this scenario, which yields
\begin{equation}
\eta= \left[ 1+\frac{\langle \mathrm{ k_{s} }\rangle-\frac{1}{2}\frac{\Delta}{ \mathrm{ k_{f}+\langle k_{s}\rangle+k_{e}}}}{ \mathrm{k_{f}+k_{e}}} \right]^{-1}.
\end{equation}
This expression shows some limitations of our mathematical model. We can notice that for $\langle \mathrm{ k_{s}}\rangle<\frac{1}{2}\frac{\Delta}{\mathrm{ k_{f}+\langle k_{s}\rangle+k_{e}}}$, the calculated depletion would be greater than one. This is a non-physical result produced by the fact that our mathematical model for $\mathrm{k_s(t)}$ does not rule out the possibility of negative values for this quantity at large enough noise amplitudes once we have fixed $\langle \mathrm{k_{s}}\rangle$. In this nonphysical scenario, sufficiently strong fluctuations in $\mathrm{ k_s(t)} $ can enhance $\langle \mathrm{ N }_1(+\infty)\rangle$ above its corresponding value in the absence of the STED laser.
Rewriting the depletion in terms of $\langle \mathrm{ I_{s}}\rangle/\mathrm{I_{sat}}$ we get our working expression for the depletion in the white noise limit:
\begin{equation}
\eta=\left[1+\frac{\langle \mathrm{ I_{s}}\rangle}{\mathrm{I_{sat}}}-\frac{\alpha^2}{2}\frac{\left(\frac{\langle \mathrm{ I_{s}}\rangle}{\mathrm{I_{sat}}}\right)^2}{1+\frac{\langle \mathrm{ I_{s}}\rangle}{\mathrm{I_{sat}}}}\right]^{-1},
\label{dep2}
\end{equation}
where $\alpha=\frac{\sqrt{\Delta}} {\langle \mathrm{ k_{s}}\rangle}$ is a quantity that characterizes the laser noise distribution, giving the relative standard deviation (rsd) of the STED laser. This equation links the rsd of the laser to the final depletion efficiency obtained at a given STED power.
This model is expected to better describe the CW-STED depletion curve as it depends not only on the sample properties but also on the noise properties of the depletion laser. In the absence of laser noise $(\Delta=0)$, Eq.~\ref{dep2} reduces to the well-known noise-free depletion expression for CW-STED microscopy.
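As a quick numerical illustration (not part of the original analysis), both closed-form limits are easy to evaluate; the short Python sketch below implements the ideal expression \eqref{eqafree} and the white-noise expression \eqref{dep2} as plain functions of the saturation factor and of the rsd $\alpha$.
\begin{verbatim}
import numpy as np

def eta_ideal(xi):
    # noise-free depletion (label eqafree): eta = 1/(1 + xi)
    return 1.0 / (1.0 + xi)

def eta_white(xi, alpha):
    # white-noise limit (label dep2); alpha = rsd of the STED laser
    return 1.0 / (1.0 + xi - 0.5 * alpha**2 * xi**2 / (1.0 + xi))

xi = 10.0
for alpha in (0.0, 0.2, 0.4, 0.6):
    print(alpha, eta_ideal(xi), eta_white(xi, alpha))
\end{verbatim}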
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{figure2.pdf}\caption{Theoretical calculation of the laser intensity noise effect on the performance of the CW-STED microscopy. (a) Depletion curves corresponding to coloured noises with different correlation times, the corresponding values of the characteristic correlation time are indicated in the inset of the figure. (b) Depletion curves for different noise strengths (0.2 rsd, 0.4 rsd and 0.6 rsd) fixing the correlation time of the coloured noise. (c) and (d) Behavior of the depletion efficiency for a given saturation factor ($\xi$=10) varying the noise strength and the correlation time, respectively.}
\label{Figure 2}
\end{figure}
In Fig.~\ref{Figure 2} we study the behaviour of the depletion, varying the strength of the noise ($\Delta$) and the inverse of the characteristic correlation time ($\lambda$), in different scenarios. The depletion as a function of the saturation factor $\frac{\langle \mathrm{I_{s}}\rangle} {\mathrm{I_{sat}}}$, denoted $\xi$ in what follows, was studied numerically. Fig.~\ref{Figure 2}~(a) shows the behaviour of the depletion ($\eta$) in three different scenarios: the ideal case ($\alpha=0$), coloured noise ($\alpha\neq0$ and $0<\lambda<+\infty$) and white noise ($\lambda\rightarrow\infty$). As expected, the optimal depletion curve corresponds to the ideal case, in which the laser noise is absent~\cite{coto2014influence}. On the other hand, when the noise is present our theory confirms that the higher the intensity noise, the lower the depletion efficiency for a given average intensity. For instance, a noise intensity of $0.6$ rsd needs a saturation factor of roughly $20$ to reach the same fluorescence quenching obtained with a saturation factor of $14$ in the case of a noise intensity of $0.2$ rsd (Fig.~\ref{Figure 2}~(a)). The presence of fluctuations deteriorates the depletion, i.e.\ the higher the laser stability, the higher the depletion efficiency, see Fig.~\ref{Figure 2}~(b). In the low-intensity noise regime $(\mathrm{rsd<0.2})$, the depletion efficiency is not significantly affected. However, when the strength of the noise is high enough ($\mathrm{rsd>0.6}$), the decrease in efficiency can no longer be neglected. An increase of the variance of the noise results in a suboptimal depletion efficiency, and an increase of the intensity of the STED beam is needed to recover the depletion efficiency of the low-noise scenario. In Fig.~\ref{Figure 2}~(c) and (d), we observe that at a given saturation factor the depletion is strongly affected by an increase of $\alpha$ and of the inverse of the correlation time of the noise ($\lambda$), in a way that systems with a higher $\alpha$ ($\lambda$) are more affected by an increase of $\lambda$ ($\alpha$).
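The coloured-noise curves of Fig.~\ref{Figure 2} can be reproduced, up to plotting, by direct numerical quadrature of \eqref{eqa30}. The sketch below is our own illustration and assumes that $\lambda$ is expressed in units of the total rate $\mathrm{K}$, i.e.\ in the dimensionless time of \eqref{eqa}; for large $\lambda$ it reduces to \eqref{dep2} and for $\alpha=0$ to the ideal curve.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def eta_coloured(xi, alpha, lam):
    # depletion from the integral representation (label eqa30)
    # xi    : saturation factor <I_s>/I_sat
    # alpha : relative standard deviation sqrt(Delta)/<k_s>
    # lam   : inverse correlation time, assumed here in units of K
    D = 0.5 * alpha**2 * xi**2 / (1.0 + xi)**2      # Delta/(2 K^2)
    f = lambda u: np.exp(-(1.0 - D) * u + (D / lam) * np.exp(-lam * u))
    integral, _ = quad(f, 0.0, np.inf)
    return np.exp(-D / lam) * integral / (1.0 + xi)

xi, alpha = 10.0, 0.4
print(eta_coloured(xi, alpha, lam=1e4))                         # white-noise limit
print(1.0 / (1.0 + xi - 0.5 * alpha**2 * xi**2 / (1.0 + xi)))   # label dep2
\end{verbatim}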
\subsubsection{Simulation}
The analytical predictions obtained so far were verified through the numerical solution of Eq.~\ref{eqa} using well-established routines developed in the Mathematica 11 software \cite{Mathematica}. The stationary population average, and consequently the numerical depletion, was estimated considering a large number of noise realizations $\beta\mathrm{(t)}$. The number of realizations was chosen so that the difference between the analytical prediction and the corresponding numerical average is always less than $5\%$. Fig.~\ref{Figure 3}~(b-d) shows the comparison between the analytical and numerical results for the general case of colored noise at different noise strengths. As can be seen, a good agreement is obtained, which validates the analytical predictions. In addition, we performed simulations for the limiting case of white noise (results not shown), obtaining the same level of agreement.
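For readers without access to the original Mathematica routines, a minimal Monte-Carlo sketch of the same type of check is given below (our own illustration, not the code used for Fig.~\ref{Figure 3}): the Ornstein-Uhlenbeck noise of \eqref{eqa18} is generated with its exact one-step update and \eqref{eqs02} is integrated with an explicit Euler step; the chosen rates are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def eta_mc(kf, ke, ks_mean, lam, Delta, T=20.0, dt=2e-3, n_real=2000):
    # Monte-Carlo estimate of the depletion (label eqa18a) for the model
    # defined by the rate equation (label eqs02) and the noise (label eqa18)
    a = np.exp(-lam * dt)                         # exact OU update factor
    b = np.sqrt(Delta * lam / 2.0 * (1.0 - a * a))
    x  = np.zeros(n_real)                         # delta k_s(0) = 0
    N1 = np.zeros(n_real)                         # all population in S0 at t = 0
    for _ in range(int(T / dt)):
        x  = a * x + b * rng.standard_normal(n_real)
        ks = ks_mean + x                          # may go negative, as in the model
        N1 += dt * (ke - (ke + kf + ks) * N1)     # explicit Euler step
    return N1.mean() / (ke / (kf + ke))           # normalise by the k_s = 0 value

# noiseless sanity check: should approach 1/(1+xi) with xi = ks/(kf+ke)
print(eta_mc(kf=1.0, ke=0.01, ks_mean=10.0, lam=5.0, Delta=0.0))
print(1.0 / (1.0 + 10.0 / 1.01))
\end{verbatim}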
\subsubsection{Comparison of theory with experimental results}
Having validated our analytical model, we fit experimental depletion curves, previously published \cite{coto2014influence}, to the analytical expressions. For simplicity, we used only those obtained for the white noise limit, Eq.~\ref{dep2}. Two CW depletion lasers with the same average intensity but with different intensity noise profiles are investigated, see Fig.~\ref{expdata}. The Low Noise Laser (LNL) has a normal distribution (0.01 rsd), while the High Noise Laser (HNL) has an unknown distribution (0.34 rsd). The measurement time was long enough to assume that the equilibrium was reached. From the fit of the experimental curves, we extracted the saturation intensity ($\mathrm{I_{sat}}$) of the fluorophore and the noise (rsd) of the STED laser. No significant changes were found for low-noise scenarios when fitting the experimental curve with our model and the noise-free depletion curve, Eq.~\ref{eqafree}. These two models are statistically consistent. Previous work~\cite{coto2014influence} empirically introduced a constant offset $\alpha$ in the noise-free depletion curve model, i.e., $\eta_{\mathrm{noise}}(\mathrm{I_{s}})=(1-\alpha)\eta(\mathrm{I_{s}})+\alpha$ to explain reduction of depletion efficiency for high-noise scenarios. It should be noted that the model obtained here offers an analytical expression for such scenarios. The experimental depletion curves are well described by the model with fitted parameters $I_{\mathrm{sat}} = 5.09 \mathrm{MW cm^{-2}}$ and $\alpha= 1.45$ for the HNL and $I_{\mathrm{sat}} = 5.96 \mathrm{MW cm^{-2}}$ and $\alpha= 0.45$ for the LNL. On the other hand, at high intensities, there is a small deviation between our theoretical curves and those obtained experimentally. A possible explanation for this effect is related to the incomplete decay of the depletion curves due to the signal background caused by the excitation from the STED beam (anti-Stokes emission) \cite{coto2014new}. Overall, the model works well for both scenarios, high and low noise, with an adjusted R-squared value of 0.98, giving a good agreement with the previously published experimental data \cite{coto2014influence}. Finally, since the colored noise model has the Gaussian white noise model as a limiting case, it is expected that the fits using the colored noise will also work. At least we will always have the trivial solution in which the correlation time resulting from the fit is a small quantity. On the other hand, given that the experimental data have a non-negligible noise level, it will be impossible to establish which of the two theoretical models is more appropriate to describe the experiments. The chi-square test in both cases yields similar values.
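One way to set up such a fit is sketched below; the data points are purely hypothetical placeholders standing in for the published curves, and $\mathrm{I_{sat}}$ and $\alpha$ are the two free parameters.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def eta_white(I, I_sat, alpha):
    # white-noise depletion (label dep2) as a function of the STED intensity I
    x = I / I_sat
    return 1.0 / (1.0 + x - 0.5 * alpha**2 * x**2 / (1.0 + x))

# hypothetical (intensity, depletion) pairs -- replace by the measured curve
I_data   = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # MW/cm^2
eta_data = np.array([0.92, 0.86, 0.75, 0.60, 0.44, 0.29, 0.18])

popt, pcov = curve_fit(eta_white, I_data, eta_data, p0=(5.0, 0.3),
                       bounds=((0.0, 0.0), (np.inf, 2.0)))
print("I_sat = %.2f MW/cm^2, alpha = %.2f" % tuple(popt))
\end{verbatim}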
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figure3.pdf}
\caption{Influence of colored noise intensity on depletion efficiency of CW-STED microscopy. (a) Theoretical depletion curves at different noise strengths (0.2 rsd, 0.4 rsd, and 0.6 rsd). (b-d) Comparison between analytical (full line) and computational (empty circles) simulation results for depletion as a function of saturation factor. The relative difference between theoretical and computational results for the different noise strengths 0.2 rsd, 0.4 rsd and 0.6 rsd was less than $2\%$, $3\%$ and $5\%$, respectively.}
\label{Figure 3}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figure4.pdf}
\caption{Effect of laser fluctuations in CW-STED microscopy. (a) Characterization of the intensity of the two lasers based on representative time traces of 200 µs length (sampling 1 ns). The right panel displays the normalized Probability Density Functions of the two lasers. (b-c) Depletion curves measured with Alexa 488-labeled antibody (empty dots) and their corresponding fit (lines) according to the white-noise model proposed in Eq. \ref{dep2}.}
\label{expdata}
\end{figure}
\subsubsection{Discussion and Conclusions}
This letter theoretically demonstrated the importance of using stable lasers to reduce the sample illumination in CW-STED implementations. The use of a noise-eater is strongly recommended to stabilize the amplitude of a high-noise depletion laser. On the other hand, stable laser powers lower the intensity required to reach a certain resolution \cite{Leutenegger2010, Moffitt2011, Vicidomini2013}, and thus reduce potential photodamage effects and re-excitation caused by the depletion laser \cite{coto2014new}.
As we have shown, intensity fluctuations play a negative role in the performance of CW-STED microscopy. However, controlled variations of the STED intensity induce spatially encoded variations of the fluorescence emission that can, in principle, be decoded to further improve the effective spatial resolution of the STED image \cite{sarmento2018exploiting, lanzano2015encoding}. As a result, if these fluctuations are adequately detected, one can exploit the 'natural' changes of STED intensity during the image acquisition and separate photons based on the depletion dynamics in the phasor plot.
In conclusion, this work introduces an analytical formulation capable of accurately describing the impact of intensity fluctuations and intensity correlation time on the performance (depletion efficiency) of a CW-STED microscope. The effects of noise intensity on image resolution can be understood by considering the linear proportionality relationship of this quantity with the depletion efficiency \cite{vicidomini2014importance}. Comparison with numerical simulations and previously published experimental data validated the analytical results. The analytical approach followed here can easily be extended to other imaging modalities, such as ground-state depletion and RESOLFT microscopy \cite{hell1995ground, testa2012nanoscopy}. In future works, we will investigate the effects of time jitter and donut variability (shape and polarization) on the efficiency of the STED microscope \cite{neupane2013tuning}.\\
\textbf{Acknowledgment.}
The Berthiaume Family Foundation supported this study. In addition, the authors thank Giuseppe Vicidomini (Istituto Italiano di Tecnologia) and Luca Lanzano ( University of Catania) for helpful comments on the article. We also thank Nate Jowett for proofreading the manuscript. Finally, A.M.C. acknowledges financial support from Funda\c{c}\~ao de Amparo \`a Pesquisa de Santa Catarina FAPESC. \\
\textbf{Disclosures.} The authors declare no conflicts of interest.
\\
\textbf{Data Availability.} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\subsection{Theory}
In what follows, we perform the calculation of the depletion, considering that the STED laser has a noisy component. We used a two-step fluorescence mechanism that only takes into account the ground electronic state $\mathrm{S_{0}}$ and the first electronic excited state $\mathrm{S_{1}}$ of the system (see Fig.~\ref{Jablonski}).
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figure1.pdf}
\caption{Simplified Jablonski energy diagrams showing the processes of excitation, fluorescence and stimulated emission. The fluorophores are excited by an excitation laser. Then, the excited molecules relax to the ground state via spontaneous or stimulated emission induced by a second laser.}
\label{Jablonski}
\end{figure}
The fluorescence signal is proportional to $\mathrm{N_{1} (t)}$, which is obtained by solving the following set of stochastic differential equations (SDE) where $\mathrm{N_{0} (t)}$ and $\mathrm{N_{1} (t)}$ are the corresponding ground and excited-state normalized populations, respectively.
\begin{equation}
\begin{split}
\mathrm{ \frac{dN_0(t)}{dt}}=-\mathrm{k_{e} \cdot N_0(t)+k_{s}(t)\cdot N_1(t)+k_{f} \cdot N_1 (t) } \\
\mathrm{ \frac{dN_1(t)}{dt}}=-\mathrm{ k_{f} \cdot N_1(t)-k_{s}(t)\cdot N_1(t)+k_{e} \cdot N_0 (t) }
\end{split}
\label{eqsys}
\end{equation}
The parameter $\mathrm{ k_{e}}$ represents the excitation rate from $\mathrm{S_0}$ to $\mathrm{S_1}$ while $\mathrm{k_{f}}$ represents the probability of emitting a photon in the relaxation process from $\mathrm{S_1}$. For simplicity, these two quantities are assumed to be independent of time. The time-dependent function $\mathrm{ k_{s}(t)}$ accounts for the rate of stimulated emission induced by the depletion laser. Given the linear relationship between $\mathrm{ k_s(t)}$ and the intensity of the depletion laser, $\mathrm{ I_s(t)}$ ($\mathrm{ k_s(t)}=\sigma_\mathrm{s} \mathrm{I_s(t)}$), they will share the same stochastic properties. The proportionality constant $\sigma_\mathrm{s}$ is the cross-section of the stimulated emission. The intensity of the STED laser can be written as the time-independent average intensity $\mathrm{\langle I_{s} \rangle}$ plus a fluctuation $\delta\mathrm{I_{s}}(t)$ around it. The same decomposition holds for $\mathrm{ k_{s}(t)}$, such that $\mathrm{ k_{s}(t)} = \langle \mathrm{ k_{s} } \rangle + \delta \mathrm{k_{s}}(\mathrm{t})$. This, together with the fact that the sum of the populations of $\mathrm{S_{0}}$ and $\mathrm{S_{1}}$ is normalized ($\mathrm{N_0(t)+N_1(t)=1}$), allows for the simplification of the SDE as:
\begin{equation}
\mathrm{\frac{dN_1(t)}{dt}}= -\left[ \mathrm{ k_{f}+ \left\langle k_{s} \right\rangle +k_{e} } \right] \cdot \mathrm{ N_1(t) + k_{e} } -\delta \mathrm{k_{s}(t) } \cdot \mathrm{N_1(t)}
\label{eqs02}
\end{equation}
This equation is a multiplicative noise stochastic differential equation \cite{van1997nonequilibrium,garcia2012noise}, and its solution is the cornerstone of this study. There are mainly two different ways to proceed, based on the interpretations of Ito and Stratonovich. The two are entirely equivalent, the main difference being that the Stratonovich integrals are defined so that the chain rule of ordinary calculus holds. Therefore, we use the Stratonovich interpretation in what follows.
Equation \eqref{eqs02} can be rewritten as
\begin{equation}
\mathrm{ \frac{dN_1(\tilde{t})}{d\tilde{t}} }= -\left[1+ \frac{\delta \mathrm{ k_{s} (\tilde{t}) } }{ \mathrm{ K } }\right] \mathrm{ N_1(\tilde{t}) } + \mathrm{ \frac{k_{e}}{K} }
\label{eqa}
\end{equation}
where $ \mathrm{ \tilde{t}= K \cdot t }$ is a dimensionless variable and $ \mathrm{ K = k_{f}+\langle k_{s}\rangle+k_{e} } $ represents the sum of all the time-independent rates. From here on, we will omit the tilde mark over "$\mathrm{ t }$" to avoid cumbersome notation.
To proceed, we need to specify the nature of the noise $\delta \mathrm{k_{s}}$, and in our case, we will consider the well known colored Ornstein-Uhlenbeck noise. This choice enables us to simultaneously study the effects of noise variance and correlation time on the depletion value. The noise $\delta \mathrm{k_{s}}$ can be generated through the Ornstein–Uhlenbeck equation
\begin{equation}
\mathrm{ \dfrac{d}{dt} } \delta \mathrm{ k_{s}(t) } = -\lambda \cdot \delta \mathrm{ k_{s} \left( t \right) } + \lambda \cdot \beta \mathrm{ \left( t \right) }
\label{eqa18}
\end{equation}
where $\beta \mathrm{ (t) } $ is a white noise of auto-correlation function $\left<\beta \mathrm{ (t)}\beta \mathrm{(t')}\right>=\Delta \cdot \delta \mathrm{(t-t')}$ and $\Delta$ its variance. The parameter $\lambda = \tau^{-1}$ represents the inverse of the characteristic correlation time. In addition, we simply consider $\delta \mathrm{ k_{s} }(0)=0$ as the initial condition.
This particular model proposed for $\delta\mathrm{k_{s}(t)}$ guarantees that in the limit $t\rightarrow\infty$ and $\lambda\rightarrow\infty$ ( Gaussian white noise limits), $\left<\delta\mathrm{k_{s}(t)}\delta\mathrm{k_{s}(t')}\right>=\Delta \cdot \delta \mathrm{ (t-t')}$.
It means that by varying the characteristic correlation time in Eq.~\ref{eqa18} we can interpolate between the Gaussian white noise limit and a highly time-correlated Ornstein-Uhlenbeck noise. For the latter process, a direct calculation \cite{gardiner1985handbook} allows us to obtain the auto-correlation function as:
\begin{eqnarray}
\nonumber
\left\langle \delta \mathrm{k_{s}} \mathrm{ \left( t \right) } \cdot \delta \mathrm{ k_{s}} \mathrm{ \left( t' \right) } \right\rangle &=& \frac{\Delta \cdot \lambda}{2} \mathrm{ exp } \left[ - \lambda \cdot \vert \mathrm{t-t' } \vert \right] \\
&-& \frac{\Delta \cdot \lambda}{2} \mathrm{ exp } \left[ - \lambda \cdot \left( \mathrm{t+t' } \right) \right]
\label{eqa20}
\end{eqnarray}
which in the long time limit ($\mathrm{ t, t'} \gg \lambda^{-1}$) behave as:
\begin{equation}
\left\langle \delta \mathrm{ k_{s}} \mathrm{ \left( t \right) } \cdot \delta \mathrm{ k_{s}} \mathrm{ \left( t' \right) } \right\rangle = \frac{\Delta \lambda}{2} \cdot \mathrm{ exp } \left[- \lambda \cdot \vert \mathrm{t-t' } \vert \right].
\label{eqa21}
\end{equation}
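As a side remark (our own illustration), the noise defined by \eqref{eqa18} is straightforward to generate numerically using the exact one-step update of the Ornstein-Uhlenbeck process, and the empirical stationary variance can be compared with the value $\Delta\lambda/2$ implied by \eqref{eqa21}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, Delta = 2.0, 0.5        # parameters of the noise equation, arbitrary units
dt, n = 1e-3, 200000

a = np.exp(-lam * dt)                          # exact one-step OU update
b = np.sqrt(Delta * lam / 2.0 * (1.0 - a * a))
x = np.zeros(n)                                # delta k_s(0) = 0
for i in range(1, n):
    x[i] = a * x[i - 1] + b * rng.standard_normal()

burn = n // 10                                 # discard the initial transient
print(np.var(x[burn:]), Delta * lam / 2.0)     # compare with Delta*lam/2
\end{verbatim}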
The knowledge of the statistical properties of $\delta \mathrm{ k_s(t)}$ is crucial for the determination of the average population of our model $\langle \mathrm{ N_1(t)}\rangle$. This quantity will be used to calculate the depletion, which is defined as:
\begin{equation}
\eta =\frac{\langle \mathrm{N}_1(+\infty,\langle k_s\rangle)\rangle}{\langle \mathrm{N}_1(+\infty,0)\rangle }.
\label{eqa18a}
\end{equation}
where $\langle \mathrm{N}_1(+\infty,\langle k_s\rangle)\rangle$ corresponds to the average population at $\mathrm{t}\rightarrow\infty$, after taking the statistical average over realisations of $\delta k_s(t)$, and the normalization factor $\langle \mathrm{N}_1(+\infty,0)\rangle$ corresponds to the equilibrium population obtained by setting $\mathrm{k_{s}=0}$. The long time limit $ \mathrm{t} \rightarrow \infty$ is considered because, experimentally, the depletion is calculated assuming a steady-state condition. Physically, $\eta$ gives the probability of forcing fluorophores to their ground off-state for a given STED laser power.
To calculate the average value of $\langle \mathrm{ N_1(t)}\rangle$, we proceed first with the solution of \eqref{eqa}, using standard methods for solving linear differential equations \cite{LevElsgolts}. In a second step, we take the statistical average of the solution, which yields
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(t) \right\rangle } &=& \mathrm{ N_{1} \left( 0 \right) \cdot exp \left(- t \right) } \mathrm{I(t,0)} \\
&+& \mathrm{ \dfrac{k_{e}}{K} }\int_{0}^{\mathrm{t}} \mathrm{dt'} \mathrm{exp \left[ - (t-t') \right] } \mathrm{ I(t,t')},
\label{eqa24}
\end{eqnarray}
where $\mathrm{ N_{1} \left( 0 \right)}$ is the initial population and the function $\mathrm{ I(t,t')}$ is given by the general expression
\begin{equation}
\mathrm{ I(t,t') } = \left\langle \mathrm{ exp } \left[ - \int_{\mathrm{t'}}^{\mathrm{t}} \dfrac{\delta \mathrm{ k_{s} (t_{2}) }}{\mathrm{ K }} \mathrm{ dt_{2}} \right] \right\rangle
\end{equation}
To continue with the determination of $\mathrm{ I(t,t')}$, we can now expand the exponential into its power series and
then reduce the corresponding averages to the $\mathrm{2n}$-point correlation functions of $\delta \mathrm{k_s(t)}$, evaluated at different times. Since we assume that $\delta \mathrm{k_s(t)}$ is a Gaussian random variable we can always write the $\mathrm{2n}$-point correlation functions in terms of the two-point correlation function $\langle\delta \mathrm{k_s(t_1)}\delta \mathrm{k_s(t_2)}\rangle$. The result obtained then allows a resummation of the series \cite{kardar2007statistical} after the formal integration over the time variables, yielding
\begin{eqnarray}
\mathrm{ I(t,t') } &=& \mathrm{exp} \left[ \dfrac{1}{\mathrm{2K^{2}}} \int_{\mathrm{t'}}^{\mathrm{t}} \int_{\mathrm{t'}}^{\mathrm{t}} \left\langle \delta \mathrm{ k_{s}} \mathrm{ \left( t_{1} \right) }\delta \mathrm{ k_{s}} \mathrm{ \left( t_{2} \right) } \right\rangle \mathrm{ dt_{1}dt_{2}} \right].
\label{eqa26}
\end{eqnarray}
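For later reference we spell out the double time integral explicitly: inserting the stationary correlation \eqref{eqa21} into \eqref{eqa26} and integrating over both time variables gives, for $\mathrm{t, t'} \gg \lambda^{-1}$,
\begin{equation}
\mathrm{ I(t,t') } = \mathrm{exp} \left\{ \dfrac{\Delta}{2\mathrm{K^{2}}} \left[ \left( \mathrm{t-t'} \right) - \dfrac{1-e^{-\lambda \left( \mathrm{t-t'} \right)}}{\lambda} \right] \right\},
\end{equation}
so that the substitution $\mathrm{u=t-t'}$ in \eqref{eqa24} leads directly to the expression obtained below.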
If we now take into account the specific form of $\langle \delta \mathrm{ k_{s} } \mathrm{ \left(t\right) } \delta \mathrm{ k_{s}} \mathrm{ \left(t' \right)} \rangle$ given in \eqref{eqa20}, we can calculate exactly the form of $\mathrm{ I(t,t')}$ and consequently the value of $\langle \mathrm{N_1(t)}\rangle$ in the long time limit ($\mathrm{ t} \rightarrow\infty$). In this limit, we can verify that the contributions from the initial conditions $\mathrm{N_1(0)}$ and $\delta\mathrm{k_s(0)}$ are negligible for weak enough noise intensities ($\Delta<\mathrm{ 2K^2}$). The described procedure leads us to
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(+\infty) \right\rangle } &=& \mathrm{ \dfrac{k_{e}}{K} } \cdot \mathrm{exp} \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right) \int_{0}^{\infty} \mathrm{exp} \left[ - \left( \mathrm{u} - \dfrac{\Delta}{2\mathrm{K^{2}}} \mathrm{u} \right) \right]\\
& \cdot & \mathrm{exp} \left( \dfrac{\Delta}{2\mathrm{K^{2}} \lambda} e^{-\lambda \mathrm{u}} \right) \mathrm{du}.
\label{eqa30}
\end{eqnarray}
The integral above can be written in terms of Gamma and incomplete Gamma functions in the form:
\begin{eqnarray}
\nonumber
\mathrm{ \left\langle N_{1}(+\infty) \right\rangle } &=& \mathrm{ \dfrac{k_{e}}{K} } \cdot \mathrm{exp} \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right) \cdot \left( -\dfrac{\Delta}{2\lambda \mathrm{K^{2}}} \right)^{\dfrac{-1+\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda}} \\ \nonumber
&&\dfrac{1}{\lambda} \left[ \Gamma \left( \dfrac{1-\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda} \right) - \Gamma \left( \dfrac{1-\frac{\Delta}{2\mathrm{K}^{2}}}{\lambda}, - \dfrac{\Delta}{2\mathrm{K}^{2} \lambda} \right) \right]. \\
\label{eqa31}
\end{eqnarray}
|
2,869,038,156,744 | arxiv | \section{SUSY Seesaw Type I and Slepton Mass Matrix}\label{sec:1}
The observed neutrino oscillations imply the existence of neutrino
masses and flavor mixing, giving a hint towards physics beyond the
Standard Model. For example, the seesaw mechanism involving heavy
right handed Majorana neutrinos, which explains well the smallness
of the neutrino masses, allows for leptogenesis and induces
sizeable lepton flavor violation (LFV) in a supersymmetric
extension of the Standard Model.
If three right handed neutrino singlet fields $\nu_R$ are added to
the MSSM particle content, one gets additional terms in the
superpotential \cite{Casas:2001sr}:
\begin{equation}
\label{suppot4}
W_\nu = -\frac{1}{2}\nu_R^{cT} M \nu_R^c + \nu_R^{cT} Y_\nu L
\cdot H_2.
\end{equation}
Here, \(Y_\nu\) is the matrix of neutrino Yukawa couplings, $M$ is
the right handed neutrino Majorana mass matrix, and $L$ and $H_2$
denote the left handed lepton and hypercharge +1/2 Higgs doublets,
respectively. If the mass scale $M_R$ of the matrix $M$ is much
greater than the electroweak scale, and consequently much greater
than the scale of the Dirac mass matrix \(m_D=Y_\nu \langle H_2^0
\rangle\) (where \(\langle H_2^0 \rangle=v\sin\beta\) is the
appropriate Higgs v.e.v., with \(v=174\)~GeV and \(\tan\beta
=\langle H_2^0\rangle/\langle H_1^0\rangle\)), the effective left
handed neutrino mass matrix $M_\nu$ will be naturally obtained,
\begin{equation}
\label{eqn:SeeSawFormula}
M_\nu = m_D^T M^{-1} m_D = Y_\nu^T M^{-1} Y_\nu (v \sin\beta )^2.
\end{equation}
The matrix $M_\nu$ is diagonalized by the unitary matrix
\(U_{MNS}\), yielding the three light neutrino masses:
\begin{equation}
\label{eqn:NeutrinoDiag}
U_{MNS}^T M_\nu U_{MNS} = \textrm{diag}(m_1,m_2,m_3).
\end{equation}
The other three neutrino mass eigenstates are too heavy to be
observed directly, but, through virtual corrections, induce small
off-diagonal terms in the evolved MSSM slepton mass matrix,
\begin{eqnarray}
m_{\tilde l}^2=\left(
\begin{array}{cc}
m_L^2 & (m_{LR}^{2})^\dagger \\
m_{LR}^2 & m_R^2
\end{array}
\right)_{\rm MSSM}\!\!\!\!\!\!+\left(
\begin{array}{cc}
\delta m_L^2 & (\delta m_{LR}^{2})^\dagger \\
\delta m_{LR}^2 & 0
\end{array}
\right)\!\!,
\end{eqnarray}
leading to observable LFV processes. These corrections in leading
log approximation are \cite{Hisano:1999fj}
\begin{eqnarray}
\label{left_handed_SSB2}
\delta m_{L}^2 &=& -\frac{1}{8 \pi^2}(3m_0^2+A_0^2)(Y_\nu^\dag L Y_\nu),\\
\delta m_{LR}^2 &=& -\frac{3 A_0 v \cos\beta}{16\pi^2}(Y_l Y_\nu^\dag L Y_\nu),
\end{eqnarray}
where $L_{ij} = \ln(M_{GUT}/M_i)\delta_{ij}$, and \(m_0\) and
\(A_0\) are the common scalar mass and trilinear coupling,
respectively, of the minimal supergravity (mSUGRA) scheme. The
product of the neutrino Yukawa couplings $Y_\nu^\dagger L Y_\nu$
entering these corrections can be determined by inverting
(\ref{eqn:SeeSawFormula}),
\begin{equation}
\label{eqn:yy}
Y_\nu =
\frac{1}{v\sin\beta}\textrm{diag}(\sqrt{M_i})
\!\cdot\!R\!\cdot\!\textrm{diag}(\sqrt{m_i})\!\cdot\! U_{MNS}^\dagger,
\end{equation}
using neutrino data as input for the masses \(m_i\) and
\(U_{MNS}\), and evolving the result to the unification scale
$M_{GUT}$. The unknown complex orthogonal matrix $R$ may be
parametrized in terms of 3 complex angles $\theta_i=x_i +i y_i$.
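For illustration, the reconstruction of $Y_\nu$ via (\ref{eqn:yy}) and the leading-log correction (\ref{left_handed_SSB2}) can be evaluated numerically in a few lines. The Python sketch below uses placeholder values for the light and heavy neutrino masses, a real stand-in for $U_{MNS}$ and the trivial choice of $R$ equal to the unit matrix; it is meant only to display the bookkeeping, not to reproduce the scans discussed below.
\begin{verbatim}
import numpy as np

# placeholder low-energy inputs (not the scan values used in the figures)
v, tan_beta = 174.0, 10.0                                  # GeV
sin_beta = tan_beta / np.sqrt(1.0 + tan_beta**2)
m_light = np.array([0.0, 8.7e-12, 5.0e-11])                # GeV
M_heavy = np.array([1.0e12, 1.0e13, 1.0e14])               # GeV
M_GUT   = 2.0e16                                           # GeV

t12, t23, t13 = np.radians(33.0), np.radians(45.0), 0.0    # stand-in mixing angles
c12, s12 = np.cos(t12), np.sin(t12)
c23, s23 = np.cos(t23), np.sin(t23)
c13, s13 = np.cos(t13), np.sin(t13)
U = np.array([[c12*c13, s12*c13, s13],
              [-s12*c23 - c12*s23*s13, c12*c23 - s12*s23*s13, s23*c13],
              [s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13, c23*c13]])
R = np.eye(3)                                              # trivial orthogonal matrix

# Y_nu from the inversion of the seesaw formula
Y_nu = np.diag(np.sqrt(M_heavy)) @ R @ np.diag(np.sqrt(m_light)) @ U.conj().T
Y_nu /= v * sin_beta

# leading-log correction to the left-handed slepton mass matrix
m0, A0 = 100.0, 0.0                                        # GeV
L = np.diag(np.log(M_GUT / M_heavy))
delta_mL2 = -(3.0 * m0**2 + A0**2) / (8.0 * np.pi**2) * (Y_nu.conj().T @ L @ Y_nu)
print(np.abs(delta_mL2))          # GeV^2; the off-diagonal entries drive LFV
\end{verbatim}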
\section{LFV Rare Decays and LHC Processes}\label{sec:2}
At the LHC, a feasible test of LFV is provided by production of
squarks and gluinos, followed by cascade decays via neutralinos
and sleptons \cite{Agashe:1999bm,Andreev:2006sd}:
\begin{eqnarray}\label{eqn:LHCProcesses}
pp &\to& \tilde q_\alpha \tilde q_\beta, \tilde g \tilde q_\alpha, \tilde g
\tilde g,\nonumber\\
\tilde q_\alpha(\tilde g)&\to& \tilde\chi^0_2 q_\alpha(g),\nonumber\\
\tilde\chi^0_2 &\to& \tilde l_a l_i,\nonumber\\
\tilde l_a &\to& \tilde\chi^0_1 l_j,
\end{eqnarray}
where the indices run over the corresponding mass eigenstates, including
antiparticles. LFV can occur in the decay of the second lightest
neutralino and/or the slepton, resulting in different lepton
flavors, \(\alpha\neq\beta\). The total cross section for the
signature \(l^+_\alpha l^-_\beta + X\) can then be written as
\begin{eqnarray}\label{eqn:LHCProcess}
&&\sigma(pp\to l^+_\alpha l^-_\beta+X) = \nonumber\\
&& \Bigl\{
\quad \sum_{a,b}\sigma(pp\to\tilde q_a\tilde q_b)\times Br(\tilde q_a\to\tilde\chi^0_2
q_a)\nonumber\\
&&\quad+\sum_{a}\sigma(pp\to\tilde q_a\tilde g) \times Br(\tilde q_a\to\tilde\chi^0_2 q_a)\nonumber\\
&&\quad+\quad\,\,\,\,\, \sigma(pp\to\tilde g\tilde g)\times Br(\tilde g\to\tilde\chi^0_2
g) \nonumber\\
&& \Bigr\}\times Br(\tilde\chi^0_2\to l_\alpha^+l_\beta^-\tilde\chi^0_1),
\end{eqnarray}
where \(X\) can involve jets, leptons and LSPs produced by lepton
flavor conserving decays of squarks and glui\-nos, as well as low
energy proton remnants. The cross section is calculated at the LO
level \cite{Dawson:1983fw} with 5 active quark flavors, using
CTEQ6M PDFs \cite{Pumplin:2002vw}. Possible signatures of this
inclusive process are:
\begin{itemize}
\item $ l_i l_j \quad\,\,\, + 2\textrm{jets} + E_{miss}$
\item $ l_i l_j \quad\,\,\, + 3\textrm{jets} + E_{miss}$
\item $ l_i l_j l_k l_k + 2\textrm{jets} + E_{miss}$,
\end{itemize}
with at least two leptons \(l_i, l_j\) of unequal flavor.
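The bookkeeping in (\ref{eqn:LHCProcess}) is elementary once the production cross sections and branching ratios of a given mSUGRA point are known; the snippet below shows the combination with purely hypothetical numbers.
\begin{verbatim}
# all numbers below are placeholders; the real inputs are the LO cross sections
# and branching ratios of the chosen mSUGRA point
sigma = {"qq": 1.0e3, "qg": 2.0e3, "gg": 5.0e2}   # production cross sections in fb
br_sq_chi2  = 0.25                                 # Br(squark -> chi2 q)
br_gl_chi2  = 0.10                                 # Br(gluino -> chi2 g)
br_chi2_emu = 1.0e-3                               # Br(chi2 -> e mu chi1)

sigma_emu = (sigma["qq"] * br_sq_chi2
             + sigma["qg"] * br_sq_chi2
             + sigma["gg"] * br_gl_chi2) * br_chi2_emu

lumi = 100.0                                       # fb^-1
print("sigma(e mu + X) =", sigma_emu, "fb ->", sigma_emu * lumi, "events")
\end{verbatim}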
The LFV branching ratio \(Br(\tilde\chi^0_2\to
l_\alpha^+l_\beta^-\tilde\chi^0_1)\) is for example calculated in
\cite{Bartl:2005yy} in the framework of model-independent MSSM
slepton mixing. In general, it involves a coherent summation over
all intermediate slepton states.
As a sensitivity comparison it is useful to correlate the expected
LFV event rates at the LHC with LFV rare decays (see
\cite{Deppisch:RareDecays} and references therein for a discussion
of LFV rare decays in SUSY Seesaw Type I scenarios). This is shown
in Figures~\ref{fig:br12_N2} and \ref{fig:br23_N2} for the event
rates \(N(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)\) and
\(N(\tilde\chi_2^0\to\tau^+\mu^-\tilde\chi_1^0)\), respectively,
originating from the cascade reactions (\ref{eqn:LHCProcesses}).
Both are correlated with \(Br(\mu\to e\gamma)\), yielding maximum
rates of around \(10^{2-3}\) per year for an integrated luminosity
of \(100\textrm{fb}^{-1}\) in the mSUGRA scenario C'
\cite{Battaglia:2001zp}, consistent with the current limit
\(Br(\mu\to e\gamma)<10^{-11}\). The MEG experiment at PSI is
expected to reach a sensitivity of \(Br(\mu\to e\gamma)\approx
10^{-13}\).
The correlation is approximately independent of the neutrino
parameters, but highly dependent on the mSUGRA parameters. This is
contemplated further in Figure~\ref{fig:scan_lhc_seesaw},
comparing the sensitivity of the signature
\(N(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)\) at the LHC with
\(Br(\mu\to e\gamma)\) in the mSUGRA \(m_0-m_{1/2}\) parameter
plane. LHC searches can be competitive to the rare decay
experiments for small \(m_0\approx200\)~GeV. Tests in the
large-\(m_0\) region are severely limited by collider kinematics.
\begin{figure}[t]
\centering
\includegraphics[clip,width=0.49\textwidth]{br12vN212Cprime.eps}
\caption{Correlation of the number of
\(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0\) events per year at the
LHC and \(Br(\mu\to e\gamma)\) in mSUGRA scenario C'
(\(m_0=85\)~GeV, \(m_{1/2}=400\)~GeV, \(A_0=0\)~GeV,
\(\tan\beta=10\), \(\textrm{sign}\mu=+\)) for the case of
hier.\ $\nu_{R/L}$ (blue stars), deg.\ $\nu_R$/hier.\ $\nu_L$ (red
boxes) and deg.\ $\nu_{R/L}$ (green triangles). The neutrino
parameters are scattered within their experimentally allowed
ranges~\cite{Maltoni:2003sr}. For degenerate heavy neutrino
masses, both hierarchical (green diamonds) and degenerate (blue
stars) light neutrino masses are considered with real $R$ and
$10^{11}\ {\rm GeV}<M_R < 10^{14.5}\ {\rm GeV}$. In the case of
hierarchical heavy and light neutrino masses (red triangles),
$x_i$ is scattered over $0<x_i <2\pi$ while $y_i$ and $M_i$ are
scattered in the ranges allowed by leptogenesis and perturbativity
\cite{Deppisch:2005rv}. An integrated LHC luminosity of
\(100\textrm{fb}^{-1}\) is assumed. The current limit on
\(Br(\mu\to e\gamma)\) is displayed by the vertical line.}
\label{fig:br12_N2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[clip,width=0.49\textwidth]{br12vN223Cprime.eps}
\caption{Same as Figure~\ref{fig:br12_N2}, but correlating
\(\tilde\chi_2^0\to\tau^+\mu^-\tilde\chi_1^0\) with \(Br(\mu\to
e\gamma)\).} \label{fig:br23_N2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[clip,width=0.45\textwidth]{emuprod_seesaw.eps}
\caption{Contours of the number of
\(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0\) events at the LHC with
an integrated luminosity of \(100\textrm{fb}^{-1}\) (solid) and of
\(Br(\mu\to e\gamma)\) in the \(m_0-m_{1/2}\) plane . The
remaining mSUGRA parameters are as in Figure~\ref{fig:br12_N2}.
The neutrino parameters are at their best fit values
\cite{Maltoni:2003sr}, with \(m_{\nu_1}=0\) and a degenerate r.h.
neutrino mass \(M_R=10^{14}\)~GeV. The shaded (red) areas are
already excluded by mass bounds from various experimental
sparticle searches.} \label{fig:scan_lhc_seesaw}
\end{figure}
Up to now we have considered LFV in the class of type I SUSY
seesaw model described in Section~\ref{sec:1}, which is
representative of models of flavor mixing in the left-handed
slepton sector only. However, it is instructive to analyze general
mixing in the left- and right-handed slepton sector, independent
of any underlying model for slepton flavor violation. The easiest
way to achieve this is by assuming mixing between two flavors
only, which can be parametrized by a mixing angle \(\theta_{L/R}\)
and a mass difference \((\Delta m)_{L/R}\) between the sleptons,
in the case of left-/right-handed slepton mixing,
respectively\footnote{This is different to the approach in
\cite{Bartl:2005yy}, where the slepton mass matrix elements are
scattered randomly.}. In particular, the left-/right-handed
selectron and smuon sector is then diagonalized by
\begin{equation}\label{eqn:TwoFlavorDiag}
\left(
\begin{array}{c}
\tilde l_1 \\
\tilde l_2 \\
\end{array}
\right)
= U\cdot
\left(
\begin{array}{c}
\tilde e_{L/R} \\
\tilde \mu_{L/R} \\
\end{array}\right)
\end{equation}
with
\begin{equation}
U=\left(\begin{array}{cc}
\cos\theta_{L/R} & \sin\theta_{L/R} \\
-\sin\theta_{L/R} & \cos\theta_{L/R} \\
\end{array}\right),
\end{equation}
and a mass difference \(m_{\tilde l_2}-m_{\tilde l_1}=(\Delta
m)_{L/R}\) between the slepton mass eigenvalues\footnote{For
left-handed slepton mixing, \(\theta_L\) and \((\Delta m)_L\) are
also used to describe the sneutrino sector.}. The LFV branching
ratio \(Br(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)\) can then be
written in terms of the mixing parameters and the flavor
conserving branching ratio \(Br(\tilde\chi_2^0\to
e^+e^-\tilde\chi_1^0)\) as
\begin{eqnarray}\label{eqn:TwoFlavorMixing}
Br(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)&=&
2\sin^2\theta_{L/R}\cos^2\theta_{L/R} \nonumber\\
&\times&\frac{(\Delta m)^2_{L/R}}{(\Delta m)^2_{L/R}+\Gamma^2_{\tilde
l}} \nonumber\\
&\times&Br(\tilde\chi_2^0\to e^+e^-\tilde\chi_1^0),
\end{eqnarray}
with the average width \(\Gamma_{\tilde l}\) of the two sleptons
involved. Maximal LFV is thus achieved by choosing
\(\theta_{L/R}=\pi/4\) and \((\Delta m)_{L/R}\gg\Gamma_{\tilde
l}\). For definiteness, we use \((\Delta m)_{L/R}\) \(=0.5\)~GeV.
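The corresponding branching ratio follows from (\ref{eqn:TwoFlavorMixing}) by simple arithmetic; the snippet below evaluates it for maximal mixing, with the average slepton width and the flavour-conserving branching ratio entered as placeholder values.
\begin{verbatim}
import numpy as np

def br_lfv(theta, delta_m, gamma, br_flavour_conserving):
    # two-flavour mixing formula for Br(chi2 -> mu e chi1)
    mix = 2.0 * np.sin(theta)**2 * np.cos(theta)**2
    return mix * delta_m**2 / (delta_m**2 + gamma**2) * br_flavour_conserving

# maximal mixing, (Delta m) = 0.5 GeV; width and Br(e e chi1) are placeholders
print(br_lfv(np.pi / 4.0, 0.5, 0.2, 0.06))
\end{verbatim}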
The results of this calculation can be seen in
Figures~\ref{fig:scan_lhc maxmix_L} and \ref{fig:scan_lhc
maxmix_R}, which show contour plots of
\(N(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)\) in the
\(m_0-m_{1/2}\) plane for maximal left- and right-handed slepton
mixing, respectively. Also displayed are the corresponding
contours of \(Br(\mu\to e\gamma)\). We see that the present bound
\(Br(\mu\to e\gamma)=10^{-11}\) still permits sizeable LFV signal
rates at the LHC. However, \(Br(\mu\to e\gamma)<10^{-13}\) would
largely exclude the observation of such an LFV signal at the LHC.
\begin{figure}[t!]
\centering
\includegraphics[clip, width=0.45\textwidth]{emuprod_sel2.eps}
\caption{Contours of the events per year
\(N(\tilde\chi_2^0\to\mu^+e^-\tilde\chi_1^0)\) for maximal
\(\tilde e_L\tilde\mu_L\) mixing at the LHC with an integrated
luminosity of \(100\textrm{fb}^{-1}\) in the \(m_0-m_{1/2}\) plane
(solid lines). The remaining mSUGRA parameters are:
\(A_0=-100\)~GeV, \(\tan\beta=10\), \(\textrm{sign}(\mu)=+\).
Contours of \(Br(\mu\to e\gamma)\) are shown by dashed lines. The
shaded (red) areas are forbidden by mass bounds from various
experimental sparticle searches.} \label{fig:scan_lhc maxmix_L}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[clip, width=0.45\textwidth]{emuprod_sel1.eps}
\caption{As in Figure~\ref{fig:scan_lhc maxmix_L} but for maximal
\(\tilde e_R\tilde\mu_R\) mixing.}\label{fig:scan_lhc maxmix_R}
\end{figure}
\section*{Acknowledgments}
The author would like to thank S. Albino, D. Ghosh and R. R\"uckl
for the collaboration on which the presentation is based.
|
2,869,038,156,745 | arxiv | \section{Introduction}
Let us first speak about the {\it principle of exclusion} (POE), introduced by the author in 2008, and then relate the POE to more familiar inclusion-exclusion (IE). That will involve, in the first paragraph, slightly more technicalities than usually encountered in an introduction.
Let $\Gamma'_1, \cdots, \Gamma_m'$ be properties, each one of which applying to certain subsets\footnote{With POE the objects having (or not having) properties are always {\it sets}, whereas with IE the objects can be anything (permutations, numbers, sets etc). We use the terms ``constraint'' and ``property'' interchangeably.} of $[h]=\{1, \cdots, h\}$. Thus formally each $\Gamma'_i$ is a subset of the powerset ${\cal P}[h]$, and $U \in {\cal P}[h]$ {\it has} property $\Gamma'_i$ if $U \in \Gamma'_i$.
In order to determine those sets $U \in {\cal P}[h]$ that simultaneously satisfy all properties, i.e. to determine the {\it model set} Mod $: = \Gamma'_1 \cap \cdots \cap \Gamma'_m$, the POE proceeds as follows. Starting with $\mbox{Mod}_0 : = {\cal P}[h]$ one generates
$$\mbox{Mod}_{i+1} : = \{X \in \, \mbox{Mod}_i: \ X \ \mbox{satisfies} \ \Gamma'_{i+1} \}$$
by {\it excluding} all duds $X$ (i.e. violating $\Gamma'_{i+1}$) from the family $\mbox{Mod}_i$ of partial models (i.e. satisfying the first $i$ properties). In the end $\mbox{Mod}_m$ equals Mod. All of that is only efficient when Mod$_i$ can be compactly represented. This is done by the use of certain multivalued rows. For instance, suppose that $\Gamma_1, \ldots,\Gamma_m$ are any fixed subsets of $[h]$, and $\Gamma'_i : = \{U \in {\cal P}[h]: \Gamma_i \not\subseteq U \}$. Thus a subset of $[h]$ has property $\Gamma'_i$ if it is a {\it noncover} of $\Gamma_i$.
It follows that Mod is a set ideal ${\cal S}{\cal C}$. In fact ${\cal S}{\cal C} = {\cal P}[h] \backslash {\cal S}{\cal F}$ where ${\cal S}{\cal F}$ is the set filter ${\cal S}{\cal F} : = \{X \in {\cal P}[h]: (\exists i) \, \Gamma_i \subseteq X\}$. This specific realization of POE is discussed in [W1], and will be applied in the present article.
The POE often outperforms IE when both apply, but there are plenty of situations where IE cannot be replaced by POE. This article argues that sometimes the two can join hands. That is, POE doesn't replace IE but rather accelerates the otherwise infeasible IE calculations.
We now turn to IE in more detail. Say $C(1), C(2), \ldots, C(h)$ are any constraints potentially applying to any one of $N_0$ fixed objects. Let $N$ be the number of objects satisfying {\it all} constraints and e.g. $N(\underline{2}, \underline{4})$ the number of objects {\it violating} $C(2)$ and $C(4)$. Note that the number of objects violating nothing equals $N_0$; for systematic reasons we prefer to write it as $N(\underline{\phi})$. Then one version of (IE) states that
(1) \qquad $N = N(\underline{\phi}) - \Sigma N(\underline{i}) + \Sigma N(\underline{i}_1, \underline{i}_2) - \Sigma N(\underline{i}_1, \underline{i}_2, \underline{i}_3) + \cdots \pm N(\underline{1}, \underline{2}, \cdots, \underline{h})$
where say the third sum is taken over all ${h \choose 3}$ triplets $(i_1, i_2, i_3)$ with $1 \leq i_1< i_2 < i_3 \leq h$. Often many terms $N(\underline{i}_1,\underline{i}_2, \cdots )$ are zero and it is desirable to exclude them from consideration {\it beforehand}. Observe that the sets $\{i_1, i_2, \cdots \}$ corresponding to the zero terms constitute a set filter ${\cal S}{\cal F} \subseteq {\cal P}[h]$. We call it the {\it irrelevant IE set filter}.
Fortunately, as indicated above, ${\cal S}{\cal F}$ can be discarded using POE in so far that ${\cal S}{\cal C} = {\cal P}[h] \backslash {\cal S}{\cal F}$ can be represented in a compact way. Throughout the article the theory is illustrated by examples of quite a wide variety. Next comes the section break up.
Taking as objects permutations, and as constraints forbidden blocks, Section 2 explains what is meant by an {\it upgrade $B$} of inclusion-exclusion: In essence ${\cal S}{\cal C}$ can be found fast, but its members must be processed one by one.
Section 3 introduces an {\it upgrade $A$} of IE. Its prerequisite is that $N(\underline{i}_1, \cdots, \underline{i}_k)=g(k)$ is an invariant of $k$. Then, provided the numbers $f(k)$ of $k$-element sets of ${\cal S}{\cal C}$ can be found quickly, the evaluation of (1) speeds up significantly. Again permutations $\pi$ provide our first illustration. Namely, after reviewing the familiar forbidden position problem (rook polynomials etc.) we proceed to generalize it. Instead of each constraint being {\it one} forbidden position (say $\pi (6) \neq d$), it suffices (under mild conditions) that it be a {\it disjunction} of the kind $(\pi (6) \neq d$ or $\pi (8) \neq f$ or $\pi (10) \neq g)$. The upgrade $A$ examples in Section 3 are small enough for the numbers $f(k)$ to be found by inspection.
Section 4 shows how $f(k)$ is to be calculated in general, i.e. from a representation of ${\cal S}{\cal C}$ as disjoint union of multivalued rows. That involves collecting the coefficients of products of formal polynomials. The whole procedure carries over to a setting where $N(\underline{i}_1, \cdots, \underline{i}_k)$ need not be an invariant of the cardinality of $U = \{i_1, \ldots, i_k\}$, but merely an invariant of a suitable ``weight'' or ``value'' val$(U)$ coupled to each $U \in{\cal S}{\cal C}$.
A concrete example of such a weighted upgrade $A$ of IE features in Section 5. The objects there are integer partitions. One of the glimpsed alternatives is based on the Fast Fourier Transform.
Section 6 counts the models of a Boolean function in disjunctive normal form, both with the Mathematica command {\tt SatisfiabilityCount} and with some upgrade $B$ of IE. Even though the latter was programmed in high level Mathematica code, it beat the former for many types of random instances (Table 5). Those choices of parameters for which {\tt SatisfiabilityCount} was superior succumb when instead of all models only the models of some prescribed cardinality must be counted.
\section{Upgrade $B$ for inclusion-exclusion}
To fix ideas, say our objects are all $N (\underline{\phi}) = 9!$ permutations $\pi$ of $[9]$. Consider these $h =6$ constraints $C(1)$ to $C(6)$:
(2) \qquad $\neg 123, \quad \neg 923, \quad \neg 9541, \quad \neg 3716, \quad \neg 379, \quad \neg 649$
Here, say, $C(2)$ states that the contiguous block 923 must not occur. Obviously $N(\underline{1}, \underline{2}) = 0$ since $123$ and $923$ cannot\footnote{If say $\neg 923$ merely meant that $9, 2, 3$ must not occur in this {\it order} (and similarly for $\neg 123$), then $N(\underline{1}, \underline{2}) > 0$ since say $\pi = 894152673$ would violate both $C(1)$ and $C(2)$.} occur simultaneously. On the other hand $N(\underline{1}, \underline{3})> 0$ since say $\pi = 895412367$ is counted by $N(\underline{1},\underline{3})$. One checks that all three $N(\underline{1}, \underline{3}), N(\underline{1}, \underline{5}), N(\underline{3}, \underline{5})> 0$. However $N(\underline{1}, \underline{3}, \underline{5})=0$ since the simultaneous occurrence of $379, 9541$ (thus $379541$) and $123$ is impossible. Albeit a little tedious, it is easy to see that the minimal sets $\{i_1, \cdots, i_k \} \subseteq [h]$ for which $N(\underline{i}_1, \cdots, \underline{i}_k)=0$ are:
(3) \qquad $\{1, 2\}, \{1, 4\}, \{2, 3\}, \{2, 5\}, \{3, 4\}, \{3, 6\}, \{4, 5\}, \{5, 6\}, \{1, 3, 5\}, \{2, 4, 6\}$
Thus the ten sets in (3) are the {\it generators} $\Gamma_i$ of the irrelevant IE set filter ${\cal S}{\cal F} \subseteq {\cal P}[h]$. Coupled\footnote{The relevant IE set ideal is called the {\it nerve}
in [D, p.10]. Dohmen mainly considers an IE version dual to (1), but we shall stick to (1) with the exception of Section 6. The speed-up of IE considered in [D] proceeds along lines different from ours. It is e.g. shown that IE works fast whenever $N(i) \cap N(j) \subseteq N(i \vee j) \ (i, j \in [h])$ where $\vee$ is any semilattice operation on [h].} to ${\cal S}{\cal F}$ is the {\it relevant IE set ideal} ${\cal S}{\cal C} : = {\cal P}[h] \backslash {\cal S}{\cal F}$. We choose the acronym ${\cal S}{\cal C}$ since another name for set ideal is ({\it abstract}) {\it simplicial complex}. In particular the empty set $\emptyset$ belongs to ${\cal S}{\cal C}$. The sets $U \in{\cal S}{\cal C}$ are the {\it faces} of ${\cal S}{\cal C}$, and the maximal faces $F$ are the {\it facets} of ${\cal S}{\cal C}$.
Feeding the generators of ${\cal S}{\cal F}$ to the $n$-algorithm of [W1] one obtains ${\cal S}{\cal C} = r_1 \uplus r_2 \uplus r_3 \uplus r_4$ as a disjoint union of set systems $r_1$ to $r_4$ which are defined as:
\begin{tabular}{l|c|c|c|c|c|c|c}
& 1 & 2 & 3 & 4 & 5 & 6 & \\ \hline
$r_1 =$ & 1 & 0 & 0 & 0 & 0 & 2 & $\rightarrow 2$ \\ \hline
$r_2 =$ & 0 & $n$ & 0 & $n$ & 0 & $n$ & $\rightarrow 7$ \\ \hline
$r_3=$ & 2 & 0 & 1 & 0 & 0 & 0 & $\rightarrow 2$ \\ \hline
$r_4=$ & $n$ & 0 & $n$ & 0 & 1 & 0 & $\rightarrow 3$ \\ \hline
\end{tabular}
Table 1
Each $\{0,1,2,n\}${\it -valued} row $r_i$ comprises a bunch of bitstrings $u$ whose supports $U \subseteq [6]$ are faces of ${\cal S}{\cal C}$.
Besides the {\it don't care} symbol $2$ which can be freely chosen to be either 0 or 1, we use the wildcard $n n \cdots n$ which means ``at least one $0$''. In other words, only $1 1 \cdots 1$ is forbidden. (The letter $n$ stems from $n$oncover.) Thus say $r_2$ contains $2^3 -1 = 7$ bitstrings, one of them being $(0,1,0,1,0,0)$ which matches the face $\{2,4\}$. Also $\emptyset$ is in $r_2$. One feature of the $n$-algorithm is that the $\{0,1,2,n\}$-{\it valued} rows that it produces are, viewed as set systems, mutually disjoint. In our case it follows that
$$|{\cal S}{\cal C}| = 2 + 7 + 2 +3 = 14.$$
Scanning the faces of ${\cal S}{\cal C}$ row-wise, indicated by the square brackets [$\cdots$] below, the order of the terms in (1) gets scrambled but this does no harm:
(4) \quad $N = [N(\underline{1}, \underline{6}) - N(\underline{1})] \ + \ [N(\underline{2}, \underline{4}) + N(\underline{2}, \underline{6}) + N(\underline{4}, \underline{6})-N(\underline{2}) - N(\underline{4})-N(\underline{6}) + N(\underline{\emptyset})]$
\hspace*{1.6cm} $+[N(\underline{1}, \underline{3})- N(\underline{3})] \ + \ [N(\underline{1},\underline{5}) +N(\underline{3},\underline{5}) - N(\underline{5})]$.
For instance the permutations $\pi$ of [9] that satisfy $\neg C(1) \wedge \neg C(6)$ match the permutations of the blocks $123, 649, 5, 7, 8$, and so $N(\underline{1}, \underline{6}) = 5!$. Further $N(\underline{4}, \underline{6}) = 4!$ is the number of permutations of $371649, 2, 5, 8$.
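As an elementary cross-check (purely illustrative, and not part of the upgrade itself), the value delivered by (4) can be reproduced by brute force; the little Python sketch below, with ad hoc variable names, enumerates all $9!$ permutations and discards those containing one of the blocks in (2).
\begin{verbatim}
from itertools import permutations

blocks = ["123", "923", "9541", "3716", "379", "649"]   # the forbidden blocks from (2)
N = 0
for p in permutations("123456789"):
    s = "".join(p)
    if not any(b in s for b in blocks):
        N += 1
print(N)   # should agree with the evaluation of (4)
\end{verbatim}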
{\bf 2.1} If all generators $\Gamma_i$ of an irrelevant set filter ${\cal S}{\cal F}$ are 2-element, i.e. matching the edges of a graph $G$, then ${\cal S}{\cal C}$ consists of all anticliques (independent sets) of $G$. Instead of feeding all edges $\Gamma_i$ of $G$ to the $n$-algorithm, it would be more economic if the fewer {\it vertices} of $G$ could be processed, somehow. In a nutshell, this is how to do it. Say $3 \in V(G)$ with set of neighbours $N(3) = \{1, 4, 7\}$. If $X \subseteq V(G)$ is an anticlique that happens to contain 3, then $N(3) \cap X = \emptyset$. In other words, each anticlique $X$ satisfies the ``anti-implication'' $3 \rightarrow \overline{1} \wedge \overline{4} \wedge \overline{7}$ (and similarly for the other $h-1$ vertices $\neq 3$). Conversely, any set $X \subseteq V(G)$ satisfying these $h$ {\it anti-implications} necessarily is an anticlique. A symbolic notation for the family of all sets $Y \subseteq V(G)$ satisfying $3 \rightarrow \overline{1} \wedge \overline{4} \wedge \overline{7}$ is $( \overline{b}, 2, a, \overline{b}, 2,2, \overline{b})$, assuming that $h=7$. Formally
(5) \quad $(\overline{b}, 2, a, \overline{b}, 2,2, \overline{b}): = (2, 2, {\bf 0}, 2,2,2,2) \uplus (0, 2, {\bf 1}, 0, 2, 2, 0)$
This leads to the $(a, \overline{b})$-algorithm [W2] which represents the anticliques of any graph as a disjoint union of $\{0,1,2, a, \overline{b}\}$-valued rows.
{\bf 2.2} In summary, the upgrade $B$ of inclusion-exclusion is as follows.
\begin{enumerate}
\item [(B)] Provided the generators of the irrelevant set filter ${\cal S}{\cal F}\subseteq {\cal P}[h]$ can be found with moderate effort, they are used to represent the relevant set ideal ${\cal S}{\cal C}$ as disjoint union of multivalued rows $r$ of length $h$. Proceeding row by row calculate $N(\underline{i}_1, \ldots, \underline{i}_k)$ for each face $\{i_1, \cdots, i_k\} \in r$ and get $N$ as
$$N = \Sigma \{(-1)^k N(\underline{i}_1, \cdots, \underline{i}_k ): \ \{i_1, \cdots, i_k \} \in {\cal S}{\cal C} \}.$$
\end{enumerate}
Here ``multivalued'' means $\{0,1,2,n\}$-valued or $\{0,1,2,a, \overline{b}\}$-valued.
Embarking on an upgrade $B$ of IE is the more tempting the smaller $|{\cal S}{\cal C}|$ is compared to $2^h$, and the easier it is to find the generators $\Gamma_1,\ldots, \Gamma_m$ of ${\cal S}{\cal F}$. In the previous example $|{\cal S}{\cal C}|< 2^h$ reads $14 < 64$.
Conveniently, once the $\Gamma_i$'s are available, the cardinality of ${\cal S}{\cal C}$ can be predicted fast with a single call of the Mathematica command {\tt SatisfiabilityCount} (which is based on binary decision diagrams). Specifically, for each $U \in {\cal P}[h]$ it holds that
$$U \in {\cal S}{\cal C} \ \ \Leftrightarrow \ \ (\forall i) \ \ \Gamma_i \ \not\subseteq \ U \ \ \Leftrightarrow \ \ (\forall i) \ \ \Gamma_i \cap ([h] \backslash U) \neq \emptyset,$$
and so $|{\cal S}{\cal C}|$ equals the number $\tau$ of transversals of the set system $\{\Gamma_1, \cdots, \Gamma_m\}$. With each $i \in [h]$ associate a Boolean variable $x_i$. If say $\Gamma_j = \{3, 7, 8\}$, match it with the clause $C_j = (x_3 \vee x_7 \vee x_8)$. If $b(x)=b(x_1, \cdots, x_h) : = C_1 \wedge C_2 \wedge \cdots \wedge C_m$ then $\tau = $ {\tt SatisfiabilityCount}$[b(x)]$.
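For the toy example above this count can also be confirmed directly; the following Python sketch (illustrative only, and not the BDD-based method) enumerates ${\cal P}[6]$ and keeps the sets containing no generator from (3).
\begin{verbatim}
from itertools import combinations

gens = [{1, 2}, {1, 4}, {2, 3}, {2, 5}, {3, 4}, {3, 6}, {4, 5}, {5, 6},
        {1, 3, 5}, {2, 4, 6}]                      # the generators from (3)
faces = [U for k in range(7) for U in combinations(range(1, 7), k)
         if not any(g.issubset(U) for g in gens)]  # U is a face iff it contains no generator
print(len(faces))                                  # 14, matching the count above
\end{verbatim}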
Upon knowing $|{\cal S}{\cal C}|$ one can decide whether or not to proceed\footnote{For $2^h$ small the cost of running the $n$-algorithm doesn't pay off and one can do IE the old-fashioned way. Assume $2^h$ is large. If $|{\cal S}{\cal C}|$ is too large as well, drop the IE endeavour altogether (unless upgrade A applies), otherwise run the $n$-algorithm.} with the $n$-algorithm respectively $(a, \overline{b})$-algorithm. If yes, the $n$-algorithm displays ${\cal S}{\cal C}$ as a disjoint union of $R$ multivalued rows $r_1, r_2, \cdots, r_R$ in time $O(Rm^2h^2)$ according to [W1] respectively [W2].
\section{Upgrade A for inclusion-exclusion}
Having the relevant IE set ideal ${\cal S}{\cal C} \subseteq {\cal P}[h]$ nicely packaged in $\{0,1,2,n\}$-valued rows $r_j$ is all well and good, but ${\cal S}{\cal C}$ may still be too large to be scanned one by one. If we are lucky we can cope as follows. Suppose for each face $U = \{i_1, \cdots, i_k\} \in {\cal S}{\cal C}$ the number $N(\underline{i}_1, \cdots, \underline{i}_k)$ is an invariant of $k$, thus $N(\underline{i}_1, \cdots, \underline{i}_k) = g(k)$ for some function $g(k)$. Then it pays to calculate the {\it face numbers}
$$f(k): = |\{U \in {\cal S}{\cal C}: \ |U|=k\}| \quad (0 \leq k \leq h)$$
and to calculate $N$ as
(6) \qquad $N = \displaystyle\sum_{k=0}^h (-1)^k f(k) g(k)$.
In the remainder of Section 3 we show how the {\it upgrade} $A$ of inclusion-exclusion provided by (6) applies to the count of permutations (and also non-bijective maps) that are constrained in novel ways. Specifically, in Subsection 3.1 we review the classic problem of ``forbidden positions''. In 3.2 this gets relaxed to a situation where the constraints are {\it disjunctions} of forbidden positions. The examples in 3.1 and 3.2 are small enough for the face numbers $f(k)$ required in (6) to be found by inspection. How this is done in general is the subject of Section 4 which also features the analogon (A) to the summary (B) in Section 2.
{\bf 3.1} Suppose we were to count all permutations $\pi: [10] \rightarrow \{a,b,c,d, e, f, g, h, i, j \}$ that satisfy
(7) \qquad $\pi (1) \neq c \quad \wedge \quad \pi (1) \neq e \quad \wedge \quad \pi (2) \neq a \quad \wedge \quad \pi (2) \neq g \quad \wedge \quad \pi (3) \neq a \quad \wedge \quad \pi (6) \neq d$
If we write permutations as strings, say {\it hidebagcfj}, this amounts to the familiar problem of counting permutations with forbidden positions, thus $c$ must not be on position 1, $a$ not on position 2 or $3$, and so forth. The six constraints in (7), call them $\Gamma (1)$ to $\Gamma (6)$, match the black squares\footnote{In many textbooks permutations of $[n]$ with forbidden positions similarly lead to chessboards with black squares (and to associated rook polynomials). However, instead of the $4 \times 5$ board in Figure 1 we would then be dealing with a $10 \times 10$ board. Our approach more easily adapts to the generalization in 3.2.} in the ``chessboard'' in Fig. 1 (the order of rows or columns doesn't matter).
\includegraphics{IEFig1}
{\bf 3.2} Now consider
a case where all constraints $C(i)$ are not just inequalities but {\it disjunctions} of inequalities $\pi (x) \neq y$:
$\begin{array}{llllll}
C(1): & \pi (3) \neq a & \vee & \pi (4) \neq b & \vee & \pi (5) \neq c \\
\\
C(2): & \pi (1) \neq e & \vee & \pi (4) \neq f & \vee & \pi (5) \neq d\\
\\
C(3) : & \pi (2) \neq a & \vee & \pi (6) \neq g & \vee & \pi (7) \neq j \\
\\
C(4) : & \pi (6) \neq d & \vee & \pi (8) \neq f & \vee & \pi (10) \neq g \\
\\
C(5) : & \pi (1) \neq c & \vee & \pi (8) \neq e & \vee & \pi (9) \neq h\\
\\
C(6) : & \pi (2) \neq g & \vee & \pi (7) \neq b & \vee & \pi (10) \neq j \end{array}$ \hfill (8)
The six constraints $\Gamma (i)$ from Figure 1 match the first column in the display (8). Thus the conjunction $\Gamma (1) \wedge \cdots \wedge \Gamma (6)$ implies $C(1) \wedge \cdots \wedge C(6)$, but not conversely.
Of course say
$$\neg C(1) \quad \mbox{means} \quad \pi (3) = a \quad \wedge \quad \pi (4) = b \quad \wedge \quad \pi (5) = c.$$
So for instance $\neg C(4) \ \wedge \ \neg C(6)$ entails $(\pi (10) = g \ \wedge \ \pi (10) = j)$ which is impossible. Hence $N(\underline{4}, \underline{6}) =0$, and so $\{4,6\} \in {\cal S}{\cal F}$. Similarly $\{1, 3\} \in {\cal S}{\cal F}$ since $\neg C(1) \ \wedge \ \neg C(3)$ entails $(\pi (3) = a \ \wedge \ \pi (2) = a)$ which contradicts the injectivity of $\pi$.
In fact we claim that {\it all} generators of ${\cal S}{\cal F}$ are 2-element, i.e. ${\cal S}{\cal C}$ consists of the anticliques of some graph. What's more, $N(\underline{i}_1, \cdots, \underline{i}_k)$ is an invariant $g(k)$.
To see why, observe that within any negated constraint $\neg C(i)$ no two equalities {\it clash} (in the sense illustrated above), and that each equality $\pi (x) = y$ occurs at most in {\it one} of $\neg C(1), \cdots, \neg C(6)$. Say $\pi (5) = d$ only occurs in $\neg C(2)$. This implies that the satisfiability of $\neg C(i_1) \wedge \cdots \wedge \neg C(i_k)$ (equivalently: $N(\underline{i}_1, \cdots, \underline{i}_k)> 0)$ amounts to $\neg C(i_1)$ up to $\neg C(i_k)$ being mutually non-clashing. Thus, if $N(\underline{i}_1, \cdots, \underline{i}_k) > 0$ then $3k$ non-clashing values of $\pi$ are fixed, and so
$$N(\underline{i}_1, \cdots, \underline{i}_k) = g(k) = (10-3k)!.$$
For instance, there are $N(\underline{5},\underline{6}) = (10-6)!$ permutations $\pi : [10] \rightarrow \{a,b,c,d,e,f,g,h,i,j\}$ violating $C(5)$ and $C(6)$, i.e. satisfying
$$\pi (1) = c \ \wedge \ \pi (8) = e \ \wedge \ \pi (9) = h \ \wedge \ \pi (2) = g \ \wedge \ \pi (7) = b \ \wedge \ \pi (10) = j.$$
Conversely, if $N(\underline{i}_1, \cdots, \underline{i}_k)=0$ then this can {\it only} be caused by two clashing equalities, thus $N(\underline{j}, \underline{j}') =0$ for some $j, j'\in \{i_1, \cdots, i_k\}$, and so each generator of ${\cal S}{\cal F}$ has cardinality 2. Consequently ${\cal S}{\cal C} = {\cal P}[6] \backslash {\cal S}{\cal F}$ is the set ideal of all anticliques of a graph $G$, and the face numbers $f(k)$ are the numbers of $k$-element anticliques of $G$. For the constraints in (8) the graph $G$ appears in Figure 3 (including the dashed edges). Two negated constraints in (8) are more likely to clash than two negated constraints in (7), and so the graph in Fig.3 has more edges than the one in Fig.2.
\begin{center}
\includegraphics[scale=0.5]{IEFig2und3}
\end{center}
One finds by inspection
$f(1) = 6, \ f(2) = 7, \ f(3) = 1, \ f(4) = f(5) = f(6) =0$, and so by upgrade $A$ inclusion-exclusion
$$N = \displaystyle\sum_{k=0}^6 (-1)^k f(k) g(k) = 10! - 6 \cdot 7! + 7 \cdot 4! - 1 \cdot 1! =3598727.$$
Instead of permutations let us count {\it arbitrary} maps $\pi : [10] \rightarrow \{a, b, \cdots, j\}$ that satisfy the six constraints\footnote{We note that say the disjunction $C(1)$ can also be viewed as {\it implication} $(\pi (3) = a \wedge \pi (4) = b) \rightarrow (\pi (5) \neq c)$. Another example, where IE upgrade $A$ applies to $h$ more complicated implications, features in an earlier version of this article (arXiv). Albeit logically equivalent, on a psychological level (this kind of) implications are often more appealing than disjunctions.} in (8). Observe that now $N (\underline{1}, \underline{3}) > 0$ since $(\pi (3) = a$ and $\pi (2) = a$) is allowed. In terms of the graph in Fig.3, there is no longer an edge between $C(1)$ and $C(3)$. Similarly the edge between $C(2)$ and $C(4)$ disappears. Hence the IE set ideal is the simplicial complex ${\cal S}{\cal C} \subseteq {\cal P}[6]$ of all anticliques of the adjusted graph without the dashed edges. By inspection one finds that its face numbers are $f(1) =6, \ f(2) = 9, \ f(3) = 2, \ f(4) = f(5) = f(6) = 0$. As opposed to $g(k) = (10-3k)!$ here
$$g(k) = 10^{10-3k},$$
and so
$$N = \displaystyle\sum_{k=0}^6 (-1)^k f(k) g(k) = 10^{10} - 6 \cdot 10^7 + 9 \cdot 10^4 - 2\cdot 10= 9940089980.$$
In a similar fashion constrained injective or surjective maps (using Stirling numbers) can be dealt with.
\section{Calculating the face numbers $f(k)$ and variations thereof}
As stated in (6), whenever IE upgrade A applies, the number $N$ of objects satisfying all constraints $C(1)$ to $C(h)$ equals
$(6')$ \quad $N = \displaystyle\sum_{k=0}^h (-1)^k f(k) g(k) \quad = \quad \displaystyle\sum_{k\in \mathbb{Z}^+} f_{\rm even} (k) g(k) - \displaystyle\sum_{k\in \mathbb{Z}^+} f_{\rm odd} (k) g(k).$
Here $f_{\rm even} (k) : = f(k)$ when $k$ is even and $k\leq h$. Otherwise $f_{\rm even} (k) :=0$. Similarly $f_{\rm odd}$ is defined. It remains to see how generally the face numbers $f(k)$ are calculated.
This will be done in 4.1. Two generalizations feature in 4.2 and 4.3.
{\bf 4.1} \ If ${\cal S}{\cal C} = r_1 \uplus r_2 \uplus \cdots \uplus r_R$ with multivalued rows $r_i$ then $f(k) = \, \mbox{Card}(r_1, k) + \cdots + \, \mbox{Card}(r_R, k)$ where
$$\mbox{Card}(r, k) : = |\{U \in r: |U| = k \}|.$$
In the sequel ``multivalued'' means $\{0,1,2,n\}$-valued. For $\{0,1,2,a,\overline{b}\}$-valued rows as in (5) matters are mutatis mutandis the same. Extending the example in Table 1, a $\{0,1,2, n\}$-valued row $r$ can have {\it several} $n$-bubbles, which are then distinguished by subscripts. In order to calculate all numbers Card$(r, k)$ for say
$$r: = (0,1,2,2,2,\, n_1, n_1,\, n_2, n_2,\, n_3, n_3, n_3, n_3, n_3)$$
we associate with each component 1 the polynomial $x$, with each component $2$ the polynomial $1+x$, and with each $n$-bubble $(n,n, \cdots, n)$ of length $t$ the polynomial $(1+x)^t-x^t=1+tx + {t \choose 2} x^2 + \cdots + {t \choose t-1} x^{t-1}$, and multiply out. Here this results in
$$\begin{array}{lll}
p(x) & = & x \cdot (1+x)^3 \cdot (1+2x)^2 \cdot (1+5x + 10x^2 + 10x^3+5x^4) \\
\\
& =& x + 12x^2 + 64x^3 + 200x^4 + 406x^5+ 559x^6 + 525x^7+ 325x^8 + 120x^9 + 20x^{10} \end{array}$$
It is not hard to see that the coefficients of the expanded polynomial always yield the sought numbers Card$(r,k)$; say Card$(r,5) = 406$.
The Mathematica command {\tt Expand$[p[x]]$} readily does the job. As to the formal cost of expanding products of polynomials, see also Section 5.
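For illustration, the same bookkeeping can also be sketched in a few lines of Python (a naive product of coefficient lists, not the Mathematica implementation):
\begin{verbatim}
from math import comb

def mul(p, q):
    # multiply two polynomials given by their coefficient lists
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def bubble(t):
    # coefficient list of (1+x)^t - x^t
    return [comb(t, k) for k in range(t)]

poly = [0, 1]                      # the component 1 contributes x
for _ in range(3):
    poly = mul(poly, [1, 1])       # the three components 2 contribute (1+x)^3
for t in (2, 2, 5):
    poly = mul(poly, bubble(t))    # the n-bubbles of lengths 2, 2 and 5
print(poly)                        # the k-th entry is Card(r,k); e.g. poly[5] = 406
\end{verbatim}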
{\bf 4.2} \ Let us generalize formula (6) in a natural way; a concrete application will follow in Section 5. The main idea is that $N(\underline{i}_1, \cdots, \underline{i}_k)$ need not be a function $g(k)$ of the face's cardinality $k$; it suffices that it be a function $g(v)$ of any ``weight'' or ``value'' $v$ coupled to $\{i_1, \cdots, i_k\}$.
Specifically, for faces $U = \{i_1, \cdots, i_k\}$ of ${\cal S}{\cal C}$ we shall write $N(\underline{u}: \ u \in U)$ instead of $N (\underline{i}_1, \ldots, \underline{i}_k)$. Suppose $V$ is any set and {\it val}: ${\cal S}{\cal C} \rightarrow V$ is {\it IE-invariant} in the sense that for all $U_1, U_2 \in {\cal S}{\cal C}$ one has:
(9) \quad $val(U_1) = val (U_2)\ (=: v ) \quad \Rightarrow \quad N(\underline{u}: u \in U_1) = N(\underline{u}: u \in U_2) \ (=: g(v))$
In words, $g$ maps $V$ to $\mathbb{Z}^+$, and is such that $g(v)$ is the number $N(\underline{u}: \, u \in U)$ triggered by every $U \in \, \mbox{val}^{-1}(v)$. Whenever $val$ is IE-invariant we are compelled to put
$${\cal S}{\cal C}_{\rm even} : = \{U \in {\cal S}{\cal C} : |U| \ \mbox{is even} \}, \quad {\cal S}{\cal C}_{\rm odd}: = \{U \in {\cal S}{\cal C} : |U| \ \mbox{is odd} \},$$
and define
$\begin{array}{lll}
f_{\rm even}(v) & : = & |val^{-1}(v) \cap {\cal S}{\cal C}_{\rm even}|, \\
\\
f_{\rm odd}(v)& : = & |val^{-1}(v) \cap {\cal S}{\cal C}_{\rm odd}|. \end{array}$
Thus say $f_{\rm odd}(v)$ is the number of faces with odd cardinality and value $v$. It is evident that
(10) \quad $N = \displaystyle\sum_{v\in V} f_{\rm even} (v) g(v) - \displaystyle\sum_{v\in V} f_{\rm odd} (v) g(v)$
The special case 4.1 fits in nicely. Then $V = \mathbb{Z}^+$ and (10) matches the right hand side of $(6')$.
In principle $val: {\cal S}{\cal C} \rightarrow V$ can be any function (as long as (9) is satisfied), but it is most handy if $val: \, {\cal S}{\cal C} \rightarrow \mathbb{Z}^+$ can be chosen as
(11) \quad $val (\{i_1, \cdots, i_k\}) : = a_{i_1} + \cdots + a_{i_k} \quad (=v)$
for some suitable numbers $a_1, \cdots, a_h \in \mathbb{Z}^+$. (Obviously 4.1 is the special case $a_1 = \cdots = a_h =1$.) For weight functions $val$ of type (11) the $n$-algorithm framework from above adapts as follows. Along with each $\{0,1,2,n\}$-valued final row, say
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|}
& 1 & 2 & 3 & 4 & 5& 6 & 7 & 8 & 9 & 10 & 11 & 12\\ \hline
$r=$ & 2 & 2 & 2 & 2 & $n_1$ & $n_1$ & $n_1$ & $n_2$ & $n_2$ & $n_2$ & $n_2$ & $n_2$ \\ \hline \end{tabular}
comes a {\it bivariate} polynomial $p_r(x,y)$. Whereas in 4.1 the powers of $x$ matched {\it cardinality}, we now just have $x^0 =1$ and $x^1=x$ which take care of even and odd {\it parity}. Specifically,
$$p_r(x,y) = p_0(x,y) \cdot p_1 (x,y) \cdot p_2 (x,y).$$
The three factors match twos$(r)$, the $n_1$-bubble, and the $n_2$-bubble respectively. All factors are defined in like fashion. Focussing on the $n_2$-bubble, if say $a_8 = a_9=a_{10}=2$ and $a_{11} = a_{12} = 5$, then
$$p_2(x,y) = 1 + 3y^4+6y^7 + y^{10} + 2y^{11}+3y^{14} + x(3y^2+2y^5 + y^6 + 6y^9 + 3y^{12})$$
This is because from the $2^5-1$ sets represented by the $n_2$-bubble,
1 set (namely $\phi$) has even cardinality and weight zero, whence the term $1x^0y^0 =1$; furthermore, 3 sets have even cardinality and weight 4 (namely $\{8,9\}, \{8, 10\}, \{9, 10\}$), whence $3x^0y^4=3y^4$; similarly $6y^7$ up to $3y^{14}$ are explained. The term $6xy^9$ (say) signifies that 6 sets in the $n_2$-bubble have odd cardinality and weight 9.
Upon reducing the powers of $x$ modulo 2 the coefficients of the product $p_r(x,y)$ count (parity-wise and weight-wise) the sets represented by the {\it whole} row $r$. A concrete calculation takes place in Section 5.
Let
(12) \quad $p(x,y) = (c_0 + c_1y + c_2y^2+ \cdots )+ x(d_0+d_1y+d_2y^2 + \cdots)$
be the {\it sum} of all polynomials $p_r(x,y)$ where $r$ ranges over all final $\{0,1,2,n\}$-valued rows.
It is in {\it standard form} when, as above, the factor $x$ is taken out. The occurring weights $v$ in (11) are the occurring $y$-exponents. The values $f_{\rm even}(v)$ and $f_{\rm odd}(v)$ needed in (10) can be read off as
(13) \quad $f_{\rm even}(v) = c_v \quad \mbox{and} \quad f_{\rm odd}(v) = d_v $
As to the numbers $g(v)$ in (10), if the weighted version of upgrade $A$ applies at all, then one reason is that calculating $g(v)$ is easy. See also Section 5.
In summary the upgrade (A) of inclusion-exclusion works as follows:
\begin{enumerate}
\item [(A)] Provided the generators of the irrelevant set filter ${\cal S}{\cal F}\subseteq P[h]$ can be found with moderate effort, they are used to represent the relevant set ideal ${\cal S}{\cal C}$ as disjoint union of $\{0,1,2,n\}$-valued rows of length $h$. By the IE invariance of val (or card) we can process {\it each row as a whole} and calculate $N$ either by formula (6) (using the face numbers $f(k)$), or by formula (10) (using the numbers $f_{\rm even}(v)$ and $f_{\rm odd}(v)$).
\end{enumerate}
\section{One more upgrade A example: Integer partitions}
Let us count the number $N$ of (non-negative integer) solutions of the upper bounded problem
(14) \quad $u_1+u_2+u_3 + u_4 + u_5 + u_6 = 9$, \\
\hspace*{1.1cm} subject to \ $u_i < a_i$ \ with $a_1 = 7, \, a_2 =4, \, a_3 = a_4 = 3, \ a_5 = a_6 =2$.
We indicate three lines of attack, the second one being IE upgrade A.
The first approach (Table 2) is to recursively count the number $N(t, \cdots, 6; k)$ of solutions to
$$u_t + \cdots + u_6 =k,$$
where $1\leq t \leq 6$ and $0 \leq k \leq 9$.
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
$k=$ & 0 & 1& 2 & 3 & 4 & 5& 6 & 7 & 8 & 9 \\ \hline
$N(6;k)=$ & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
$N(5,6;k)=$ & 1 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
$N(4,5,6;k)=$ & 1 & 3 & 4 & 3 & 1 & 0 & 0 & 0 & 0 & 0 \\ \hline
$N(3,4,5,6;k)=$ & 1 & 4 & 8 & 10 & 8 & 4 & 1 & 0 & 0 & 0 \\ \hline
$N(2,3,4,5,6;k)=$ & 1 & 5 & 13 & 23 & 30 & 30 & 23 & 13 & 5 & 1\\ \hline
$N(1,2,3,4,5,6;k)=$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & $\ast$ & 125 \\ \hline
\end{tabular}
Table 2
For instance, as one verifies ad hoc, the number $N(4,5,6;k)$ of upper-bound solutions of $u_4 + u_5+u_6=k$ equals $1,3,4,3,1,0,0,0,0,0$ for $k = 0,1, \cdots, 9$ respectively. Hence say $N(3,4,5,6;4) = 1+3+4=8$, where $1,3,4$ match the cases $u_3=0, u_3=1, u_3 =2$, i.e. the cases
$$u_4 + u_5+u_6=4-0, \ u_4+u_5+u_6=4-1, \ u_4+u_5+u_6 = 4-2.$$
In the end $N = N(1, \cdots, 6;9) = 1+5+ \cdots + 30+23 = 125$. (The other values $\ast$ in the last row of Table 2 are irrelevant.)
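Incidentally, the recursion behind Table 2 is a three-line dynamic program; the Python sketch below (for illustration only) returns 125 for the data in (14).
\begin{verbatim}
def bounded_count(bounds, total):
    # number of solutions of u_1 + ... + u_h = total with 0 <= u_i < a_i
    counts = [1] + [0] * total                  # zero variables: only the empty sum
    for a in bounds:
        counts = [sum(counts[k - u] for u in range(min(a, k + 1)))
                  for k in range(total + 1)]
    return counts[total]

print(bounded_count([7, 4, 3, 3, 2, 2], 9))     # 125
\end{verbatim}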
The second proposal to get $N$ is by inclusion-exclusion. As a prerequisite we recall from enumerative combinatorics [B, p.36] that the number of integer solutions $(u_1, \cdots, u_m)$ of the {\it lower} bounded problem
$$u_1 +u_2+ \cdots + u_m =n, \ \mbox{subject to} \ u_1 \geq \lambda_1, \, u_2 \geq \lambda_2, \cdots, u_m \geq \lambda_m$$
with $\lambda_i \geq 0$ fixed, equals
(15) \quad $\left\{ \begin{array}{cll}
\left( \begin{array}{cc} n-\lambda_1 - \cdots - \lambda_m +m-1 \\
m-1 \end{array}\right) & , & \mbox{if} \ \lambda_1 + \cdots + \lambda_m \leq n\\
\\
0 & , & \mbox{otherwise} \end{array}\right.$
Thus $N(\underline{\phi}) = \left( \begin{array}{cc} 9+6-1\\
6-1 \end{array} \right) = \displaystyle{14 \choose 5}$ is the number of solutions $u=(u_1, \cdots, u_6)$ to $u_1 + \cdots + u_6 =9$. By definition $u$ satisfies the constraint $C(i)$ if $u_i < a_i$. For $\{i_1, \cdots, i_k\} \subseteq [6]$ let $N(\underline{i}_1, \cdots, \underline{i}_k )$ be the number of $u$'s with $u_{i_1} \geq a_{i_1}, \cdots, u_{i_k} \geq a_{i_k}$. While $N(\underline{i}_1, \cdots, \underline{i}_k)$ isn't determined by $k$ as in 4.1, it is determined by the {\it value} of $\{i_1, \cdots, i_k\}$ if this is defined as
$$val(\{i_1, \cdots, i_k\}) : = \ a_{i_1} + \cdots + a_{i_k}.$$
Namely, putting $v: = a_{i_1} + \cdots +a_{i_k}$ it follows from (15) that
(16) \quad $N(\underline{i}_1, \cdots, \underline{i}_k) = \displaystyle{9-v+5 \choose 5} = {14-v \choose 5} =: g(v)$.
We can therefore employ the weighted version of upgrade $A$ (Section 4.2) to calculate $N$.
First, ${\cal S}{\cal C} = {\cal P}[6]\backslash {\cal S}{\cal F}$ is to be found with the $n$-algorithm. Specifically, the generators $\Gamma_i$ of the irrelevant set filter ${\cal S}{\cal F}$ are readily seen to be
(17) \quad $\{1,2\}, \ \{1,3\}, \ \{1,4\}, \ \{1,5,6\}, \ \{2,3,4\}, \ \{2,3,5,6\}, \ \{2,4,5,6\}, \ \{3,4,5,6\}$.
For instance $\{2,3,5,6\}$ qualifies because $a_2 + a_3+a_5+a_6 = 4+3+2+2 > 9$ and each proper subsum is $\leq 9$. Feeding these generators to the $n$-algorithm yields ${\cal S}{\cal C} = r_1 \uplus r_2 \uplus r_3 \uplus r_4$ with $\{0,1,2,n\}$-valued rows $r_i$ as shown in Table 3:
\begin{tabular}{l|c|c|c|c|c|c|}
$i=$ & 1 & 2 & 3 & 4 & 5 & 6\\ \hline
$a_i=$ & 7 & 4 & 3 & 3 & 2 & 2\\ \hline \hline
$r_1=$ & 0 & $n_1$ & $n_1$ & $n_1$ & $n_2$ & $n_2$\\ \hline
$r_2=$ & 1 & 0 & 0 & 0 & $n$ & $n$ \\ \hline
$r_3=$ & 0 & $n$ & 0 & $n$ & 1 & 1 \\ \hline
$r_4=$ & 0 & 0 & 1 & 0 & 1 & 1 \\ \hline \end{tabular}
Table 3
The bivariate polynomials from 4.2 that are attached to these $\{0,1,2,n\}$-valued rows are
$\begin{array}{lll}
p_{r_1}(x,y) & = & [(1+y^6+2y^7)+x(2y^3+y^4)] \cdot [1+2xy^2] \\ \\
& = & 1 + y^6 + 2y^7 + x(2y^2+2y^8+ 4y^9) + x(2y^3+y^4) + x^2(4y^5+2y^6)\\ \\
& \equiv & 1+ 4y^5 + 3y^6+2y^7 + x(2y^2 + 2y^3 + y^4 + 2 y^8 + 4y^9)\\
\\
p_{r_2} (x,y) & = & xy^7 + 2y^9\\
\\
p_{r_3}(x,y) & =& y^4 + xy^7 + xy^8\\
\\
p_{r_4}(x,y) & =& xy^7 \end{array}$
Here $\equiv$ indicates that we reduced the powers of $x$ modulo 2. The sum $p(x,y) = p_{r_1}(x,y) + \cdots + p_{r_4}(x,y)$ in standard form is
$$p(x,y) = 1+y^4+4y^5 + 3y^6 + 2y^7 + 2y^9 + x(2y^2 + 2y^3 + y^4 + 3y^7 + 3y^8+4y^9).$$
This together with (13) and (16) yields Table 4.
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|}
$v=$ & 0 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ \hline
$f_{\rm even}(v)=$ & 1 & 0 & 0 & 1& 4 & 3 & 2 & 0 & 2\\ \hline
$f_{\rm odd}(v)=$ & 0 &2 & 2 & 1 & 0 & 0 & 3 & 3& 4\\ \hline
$g(v)=$ & 2002 & 792 & 462 & 252 & 126 & 56 & 21 & 6 & 1 \\ \hline \end{tabular}
Table 4
Substituting from Table 4 into (10) yields
$$\begin{array}{lll}
N &= & (f_{\rm even}(0) g(0) + \cdots + f_{\rm even} (9) g(9)) \ - \ (f_{\rm odd}(0) g(0) + \cdots + f_{\rm odd} (9) g(9)) \\
\\
& =& (1 \cdot 2002 + 1 \cdot 252 + \cdots +2 \cdot 1) - (2 \cdot 792+2 \cdot 462 + \cdots + 4 \cdot 1 )\\
\\
& = & 2970 - 2845 = 125 \end{array}$$
which matches the number obtained by the recursive method.
Generalizing the toy example (14) consider the problem to count the number $N$ of solutions to
(18) \quad $u_1 + u_2 + \cdots + u_h =t, \quad \mbox{subject to} \quad u_1 < a_1, \, u_2 < a_2, \cdots, u_h < a_h.$
How does the recursive way (Table 2) compare to IE upgrade A. The former requires work proportional to $ht$ because $h$ and $t$ are the dimensions of the table that matches Table 2. In contrast IE upgrade $A$ merely depends on $h$. Thus it is applicable\footnote{As to the polynomials $p_r(x,y)$ accompanying the final rows it may raise eyebrows that for twos$(r)$, and for each $n$-bubble with support $S \subseteq [h]$, one has to scan the whole of ${\cal P}(S)$ to compute the factor of $p_r(x,y)$ that matches $S$. There are at least two set-ups where this is benign.
First, if all {\it generators $\Gamma_i$ are small} then by the nature of the $n$-algorithm no $n$-bubble can be larger than the largest $\Gamma_i$.
Second, if {\it many $a_i$ are equal} (say all $a_i \in \{1, 2, 3\}$), the calculations simplify a lot, independent of the sizes of the $n$-bubbles.} to Gargantuan values of $t$ provided $h$ is moderate (but potentially much higher than with ordinary IE).
Due to a lack of expertise in this area our third approach to calculating $N$ is rudimentary. It is clear that $N$ equals the coefficient of $x^t$ in the product $q(x)$ of the $h$ polynomials $1+x+x^2+\cdots + x^{a_1-1}$ up to $1+x+x^2+\cdots + x^{a_h-1}$, see (18). While the fastest way to multiply general polynomials is by means of the Fast Fourier Transform [CLR, chapter 32], the fastest way to multiply a polynomial $p(x)$ with the very special polynomial $1+x+\cdots + x^{a-1}$ arguably is to multiply $p(x)$ with $x^a-1$ and then to divide the result by $x-1$; that requires about $2a$ additions. Trouble is, in this way one gets the {\it whole} of $q(x)$, not just the coefficient $N$ at $x^t$.
The observation that $N$ equals the $t$-th derivative of $q(x)$ evaluated at $0$, divided by some binomial coefficient, doesn't look helpful in this situation either.
\section{One more upgrade B example: Boolean DNF}
Consider Boolean functions $b: \{0, 1\}^n \rightarrow \{0,1\}$ in DNF, i.e. they are disjunctions of $h$ conjunctions, henceforth called {\it terms}. Say for $n=6$ and $h=3$:
(19) \qquad $b(x) = (x_2 \wedge \overline{x}_5 \wedge x_6 ) \ \vee \ (x_1 \wedge x_2 \wedge \overline{x}_3) \ \vee \ (x_2 \wedge \overline{x}_3 \wedge \overline{x}_4)$
Here $x = (x_1, \cdots, x_6)$ is the vector of Boolean variables and we identify a bitstring $y \in \{0,1\}^n$ with its support $\{i| \, y_i =1\} \in {\cal P}[n]$. A bitstring $y$ is a {\it model} of $b$ if $b(y) = 1$. Let Mod$(b) \subseteq {\cal P}[n]$ be the set of all models. We shall compare two methods to calculate the cardinality $|\mbox{Mod}(b)|$.
First, this can be done with binary decision diagrams (BDDs). Specifically we shall use the Mathematica command {\tt SatisfiabilityCount} which is based on BDDs. For $b(x)$ in (19) one gets
$$|\mbox{Mod}(b)| = {\tt SatisfiabilityCount}[b(x)] = 17.$$
In the second approach, based on inclusion-exclusion, we combine the individual model sets $\rho_i \subseteq {\cal P}[n]$ of the terms $T_i$ of the DNF. Each $\rho_i$ is an interval in the Boolean lattice ${\cal P}[n]$ and can be written succinctly as a $\{0,1,2\}$-valued row. See (20) below for the $b(x)$ in (19). What's more, each intersection of sets $\rho_i$ can be written as $\{0,1,2\}$-valued row $\rho$ as well.
(20) \hspace*{1cm} $\begin{array}{rll}
\rho_1 & =& (2,1,2,2,0,1) \\
\rho_2 & =& (1,1,0,2,2,2) \\
\rho_3 & =& (2,1,0,0,2,2) \\
\rho_1 \cap \rho_2 & =& (1,1,0,2,0,1) \\
\rho_1 \cap \rho_3 & =& (2,1,0,0,0,1) \\
\rho_2 \cap \rho_3 & =& (1,1,0,0,2,2)\\
\rho_1 \cap \rho_2 \cap \rho_3 & =& (1,1,0,0,0,1) \end{array}$
Because the cardinality of $\rho$ is $2^t$ where $t$ is the number of components 2 in $\rho$, inclusion-exclusion\footnote{This is an IE version dual to (1), thus while say $N(\underline{1}, \underline{3})$, is counted positive in (1), now $N(1,3):= |\rho_1 \cap \rho_3|$ counts negative in (21).} readily yields
(21) \quad $\begin{array}{lll}
|\mbox{Mod}(b)| & =& |\rho_1| +|\rho_2| + |\rho_3| - |\rho_1 \cap \rho_2| - |\rho_1 \cap \rho_3| - |\rho_2 \cap \rho_3| + |\rho_1 \cap \rho_2 \cap \rho_3|\\
\\
&=& N(1) + N(2) + N(3) - N(1,2) - N(1,3) - N(2,3) + N(1,2,3) \\
\\
& = & 8 + 8 + 8 - 2-2-4+1=17 \end{array}$
An intersection of any number of $\{0,1,2\}$-valued rows is empty if and only if some {\it two} among them (say $\rho_i$ and $\rho_j$) have empty intersection; and this amounts to either ones$(\rho_i) \cap \ \mbox{zeros}(\rho_j) \neq \emptyset$ or $\mbox{zeros}(\rho_i) \cap \ \mbox{ones}(\rho_j) \neq \emptyset$. Thus let $G$ be the graph whose $h$ vertices are the terms $T_i$ of the DNF, and which has $T_i, T_j$ adjacent if and only if $\rho_i \cap \rho_j = \emptyset$. Then the relevant IE set ideal ${\cal S}{\cal C}$ is the family of all anticliques of $G$. It can be calculated as ${\cal S}{\cal C} = r_1 \uplus \cdots \uplus r_R$ where the $r_j$ are $\{0,1,2,a, \overline{b}\}$-valued rows of length $h$ (see Section 2.1). Within each row $r_j$ the faces $U \in r_j$ are processed as follows. If $U = \{i_1, \cdots, i_k\} \subseteq [h]$, then as illustrated in (20) and (21):
$$N(i_1, \cdots, i_k) = 2^{|Z(U)|}, \quad \mbox{where} \quad Z(U) = \ \mbox{twos}(\rho_{i_1}) \cap \ \mbox{twos}(\rho_{i_2}) \cap \cdots \cap \ \mbox{twos}(\rho_{i_k})$$
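For the toy DNF (19) everything can of course be verified by brute force; the Python sketch below (purely illustrative) enumerates the $2^6$ assignments and recovers $|\mbox{Mod}(b)| = 17$.
\begin{verbatim}
from itertools import product

def b(x1, x2, x3, x4, x5, x6):
    return ((x2 and not x5 and x6) or (x1 and x2 and not x3)
            or (x2 and not x3 and not x4))

print(sum(1 for y in product((0, 1), repeat=6) if b(*y)))   # 17
\end{verbatim}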
We pitted the IE upgrade $B$ method, programmed in high level Mathematica code, against the hardwired command {\tt SatisfiabilityCount}.
Specifically, for each of 15 choices of parameters $(n, n_1, n_0, h)$ we sampled about four\footnote{For times larger than 200 sec it may be only two or three. No time entry at {\tt SatisfiabilityCount} means that the algorithm was aborted after at least two hours. Concerning the number of anticliques (\# AC) for instance 9.5 in 307752 (9.5) means that the average size of the maximum anticliques in four trials was 9.5.} random DNF's as follows. Each DNF is based on $n$ variables, and consists of $h$ terms. Each term $\rho_i$ is a conjunction of $n_1$ randomly chosen positive literals $x_j$, and $n_0$ randomly chosen (but non-overlapping) negative literals $\overline{x}_j$. Thus $\rho_i$ can be identified with a $\{0,1,2\}$-valued row of length $n$ that has $|\mbox{ones}(\rho_i)| =n_1$ and $|\mbox{zeros}(\rho_i)| = n_0$. For fixed $n$ any two terms $\rho_i$ and $\rho_j$ are the more likely to be adjacent in $G$ (that is, $\rho_i \cap \rho_j = \emptyset$) the larger $n_1$ and $n_0$ are. Roughly speaking, the denser $G$, the fewer anticliques it has, and the better our IE upgrade $B$ performs as compared to {\tt SatisfiabilityCount}. More specifically, it is apparent that IE-time is a sublinear function of \#AC. As illustrated by the examples having \#AC equal to 1499, 1826, 1621, when keeping \#AC fixed, IE-time only mildly increases with increasing problem size. As to the roughly 1.25 billion anticliques in the $h=70$ example, they were encoded within 19 million multivalued rows. {\tt SatisfiabilityCount} is more discontinuous. For instance the times for the $h=54$ example ranged from 27 sec to 144 sec. For $h=55$ we had once 52 sec but three times aborted after more than 2 hours. For both algorithms the actual number of models (sometimes more than $10^{1000}$) does not influence the running time.
\begin{tabular}{c|c|c|c|c}
$h$ & $n(n_1+n_0)$ & \# AC & IE (upgrade B) & {\tt SatisfiabilityCount} \\ \hline
50 & $50(5+4)$ & 3950 (7) & 0.9 & 1.7 \\ \hline
50 & $50(3+3)$ & $232 \, 760 \, (13)$ & $9.7$ & $34.9$ \\ \hline
50 & $50(2+2)$ & $53 \, 575 \, 873 \, (19)$ & 836.5 & 39.4\\ \hline
54 & $50(2+2)$ & $123 \, 127 \, 905 \, (21)$ & 1936 & 69.3\\ \hline
60 & $50 (2+2)$ & $313 \, 514 \, 739 \, (20.7)$ & 4928 & \\ \hline
70 & $50(2+2)$ & $1 \, 252 \, 459 \, 795 \, (21.5)$ & 22583 & \\ \hline
50 & $200(14+14)$ & 270 \,(4) & 0.4 & 38.2 \\ \hline
50 & $200 (12+11)$ & 764 \,(4.5) & 0.4 & 89.3\\ \hline
50 & $200 (11+11)$ & 830 \, (5) & 0.4 & \\ \hline
200 & $50(8+8)$ & 1499 \,(4) & 5.7 & 4.0\\ \hline
200 & $50 (6+7)$ & 9850 \, (5.7) & 13.0 & 39.5\\ \hline
200 & $50 (5+5)$ & 307 752 \, (9.5) & 181.5 & \\ \hline
200 & $50 (4+4)$ & 19 995 836 (14.5) & 5071 & \\ \hline
200 & $1000(37+36)$ & 1826 \, (4) & 20.9 & \\ \hline
200 & $2000(51+51)$ & 1621 \, (4) & 39.4 & \\ \hline
\end{tabular}
Table 5
{\bf 6.1} Let us glimpse how the two methods compare when only the models of $b(x)$ of {\it fixed cardinality} $k$ need to be counted. If BDD's are to be (ab-)used for this task then the DNF of $b(x)$ must be extended by ${n \choose k}$ terms that spell out the constraint in a clumsy way. For instance, if only the 3-element models of $b(x)$ in (19) are to be counted then {\tt SatisfiabilityCount} needs to be applied to the formula
$$b'(x) =b(x) \wedge ((x_1 \wedge x_2 \wedge x_3 \wedge \overline{x}_4 \wedge \overline{x}_5 \wedge \overline{x}_6) \vee (x_1 \wedge x_2 \wedge \overline{x}_3 \wedge x_4 \wedge \overline{x}_5 \wedge \overline{x}_6) \vee \cdots )$$
where $x_1 \wedge x_2 \wedge \overline{x}_3 \wedge x_4 \wedge \overline{x}_5 \wedge \overline{x}_6$ is one of ${6 \choose 3}$ terms. It is evident that already small values of $n$ and $k$ blow $b'(x)$ out of proportion, let alone evaluating $b'(x)$.
In contrast IE-upgrade $B$ adapts better to the cardinality constraint because for each intersection $\rho$ of $\{0,1,2\}$-valued rows $\rho_i$ of the type appearing in (21) it is easy to calculate the number Card$(\rho, k)$ of $k$-element models contained in $\rho$. If $\beta: = |\mbox{ones}(\rho)|$ and $\gamma: = |\mbox{twos}(\rho)|$ then
$$\mbox{Card}(\rho, k)= \left\{ \begin{array}{lll} {\gamma \choose k-\beta}, & \mbox{if} & k \geq \beta \\
\\
0, & \mbox{if} & k < \beta \end{array}\right.$$
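For instance, for $\rho_2 \cap \rho_3 = (1,1,0,0,2,2)$ from (20) one has $\beta = 2$ and $\gamma = 2$, whence Card$(\rho_2 \cap \rho_3, 3) = {2 \choose 1} = 2$; the two $3$-element models are $(1,1,0,0,1,0)$ and $(1,1,0,0,0,1)$.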
However, in the new setting the generators of ${\cal S}{\cal F}$ can have cardinality $> 2$. Perhaps there is a clever way to find them, in order to continue with the $n$-algorithm. The author didn't see that way and opted to keep the $(a, \overline{b})$-algorithm but to discard infeasible $\{0,1,2,a,\overline{b}\}$-valued rows $\rho$ as soon as they emerged. Here ``infeasible'' means that either $|\mbox{ones}(\rho )| > k$ or $|\mbox{ones}(\rho) \cup \, \mbox{twos}(\rho)| < k$. The results will be published in [W3].
Different from Section 6 of the present article the issue in [W3] won't be counting but rather {\it generating} the models. We mention that for generating models it matters whether the Boolean function is given in DNF or CNF. For merely counting models this doesn't matter. To see why, for any Boolean function $b(x)$ in CNF or DNF let $\overline{b}(x)$ be the function obtained by switching $\wedge, \vee$ and literals $x_i, \overline{x}_i$ throughout. Thus, if $b(x)$ was in CNF then $\overline{b}(x)$ is in DNF, and vice versa. Furthermore,
$$(\forall y \in \{0,1\}^n) \quad b(y) =1 \ \Leftrightarrow \ \overline{b}(y) =0.$$
Consider any algorithm which processes Boolean functions $b(x)$ in (say) DNF-format and outputs $N_1: = |\{y \in \{0,1\}^n: b(y) = 1\} |$ or, just as easily, $N_0: = |\{y\in\{0,1\}^n: b(y) =0\}|=2^n-N_1$. In view of (21) that immediately yields a procedure for counting the models of Boolean functions $\beta (x)$ in CNF.
\section{A third type of upgrade A}
We consider IE-invariant functions $val : \ {\cal S}{\cal C} \rightarrow V$ of a type different from (11). Namely $V$ is any lattice and $val : \ {\cal S}{\cal C} \rightarrow V$ is such that
(22) \quad $val (U_1 \cup U_2) = val (U_1) \vee val (U_2)$
for all $U_1, U_2 \in {\cal S}{\cal C}$ contained in a common facet. A face $U$ is {\it val-maximal} if $U \varsubsetneqq U'$ implies $val (U) < val (U')$. Let $F$ be any facet. Using standard lattice theory\footnote{The set $L: = \{U \in {\cal S}{\cal C} : \ U \subseteq F\}$ is a (Boolean) lattice with $O_L = \emptyset$ and $1_L = F$. Put $L':= \{v \in V: \, val (O_L) \leq v\}$. By [CLM, Prop.3.42] the restriction $f$ of $val$ to $L \rightarrow L'$ is residuated. Hence by [CLM, Thm.3.37] the family ${\cal C}\ell (F) \ (=g(L')$ with $g$ as in CLM) is a closure system.} one shows that the family ${\cal C}l(F)$ of all $val$-maximal faces $U \subseteq F$ is a closure system with top element $F$. Suppose the sets $X \in {\cal C}l (F)$ can be identified efficiently, and along with $X$ all its lower covers $X_1, \cdots, X_s$ within ${\cal C}l(F)$. Put $v: = val (X)$. In view of (9) we wish to encode all set families ${\cal U}(X): = \{U \in {\cal P}(F): \ val (U) = v\}$ in a compact way $(X \in {\cal C}l(F))$. It is clear that ${\cal U}(X) = \{U \in {\cal P}(X): U \not\subseteq X_1, \cdots, U \not\subseteq X_s\}$. Furthermore, such a kind of set family can be represented as disjoint union of few $\{0,1,2,e\}$-valued sets. Here $e$ is the dual twin of $n$; see [W1] for details.
We mention that our example in Section 6 fits the setting of (22). Namely, let $V$ be the dual of the powerset lattice ${\cal P}[h]$ (i.e. with $\cap$ and $\cup$ switched) and define $val: \ {\cal S}{\cal C} \rightarrow V$ by $val (U) = Z(U)$. Since here ${\cal C}l(F)$ doesn't strike one to be significantly smaller than ${\cal P}(F)$, and since the plain upgrade $B$ version already beat BDD's in many cases, the author didn't implement this third type of upgrade A.
\section*{References}
\begin{enumerate}
\item [{[B]}] V.K. Balakrishnan, Schaum's outline of theory and problems of combinatorics, McGraw-Hill 1995.
\item[{[CLM]}] N. Caspard, B. Leclerc, B. Monjardet, Finite Ordered Sets, Encyclopedia of Mathematics and its Applications 144, Cambridge University Press 2012.
\item[{[CLR]}] T. Cormen, C. Leiserson, R. Rivest, Introduction to Algorithms, The MIT Press 1992.
\item[{[D]}] K. Dohmen, Improved Bonferroni Inequalities via Abstract Tubes, Lecture Notes in Mathematics 1826, Springer 2003.
\item[{[W1]}] M. Wild, How to partition or count an abstract simplicial complex, given its facets, arXiv:1302.1039, submitted.
\item[{[W2]}] M. Wild, Enumerating all maximum anticliques, Part 2. In preparation. A preliminary version is arXiv:0901.4417.
\item[{[W3]}] M. Wild, Disjoint sums of products - an approach using wildcards. In preparation.
\end{enumerate}
\end{document}
\section{Introduction}
In this paper, our main motivation is to suggest a probabilistic construction of interactions in Euclidean Quantum Field Theory (QFT). QFT is a physical theory which combines field theory, already used in Classical Mechanics to describe Electromagnetism, and quantum mechanical principles, believed to govern the behavior of microscopic systems. The most computable approach of QFT is the path integral formalism where free Euclidean QFT is assumed to be a formal Gaussian probability law on a space of fields and interacting QFT extends the formal Gaussian law by adding an additional term, the interacting term, to the quadratic term of the free QFT measure.
However, in this setting, interacting QFTs suffer from divergence problems because most of the relevant quantities computed in them are divergent. Although there exists a physical theory, renormalization theory, able to solve divergence problems, introducing the interacting term as an additional term on top of the free term is unjustified from a probabilistic viewpoint because the resulting theory is not necessarily a probability law. This paper indicates a construction of interacting theories which are --\emph{a priori}-- probability laws.
From a probability law considered as a free theory, an interacting theory is constructed by a convolution product between the free theory and an interacting term, where the latter also defines a probability law. In this case, the interacting term does not depend on the free term and two different free theories can be implemented with the same interaction, as in usual theories of interaction such as Gauge theory. When the free and interacting terms are Gaussian laws, we work out some natural conditions on the convolution product and use the exponential map to provide a general example. A calculation of the two-point function of our theory exhibits some properties analogous to those already present in the usual path integral formalism. The direct use of Gaussian measures, bypassing Lebesgue measures, allows one to generalize the present construction to infinite dimensional spaces equipped with Gaussian measures such as often encountered in QFT \cite{jgaj81}.
\section{Partition functions in QFT}
Partition functions are the main object of the path integral approach in QFT. They allow one to compute correlation functions which are then used to derive the S-matrix of a physical process described by an interacting QFT. For such QFTs, fundamental interactions between matters are usually explained by Gauge Theory. Its main feature is that a free Lagrangian is not invariant under some local transformations on matter fields unless one introduces a supplementary term containing a new 'field', the \emph{gauge potential}, which mediates the interaction between matter fields; this is the minimal coupling procedure. In addition, one must also construct a free Lagrangian term for the gauge potential.
When one works within the path integral formalism, one defines formally the Euclidean partition function Z as
\begin{eqnarray}
Z &:=& \int_{Fields} D\phi \int_{GP} DA \; e^{-S_{m,free}(\phi ) - S_{int}(\phi,A) }e^{-S_{g,free}(A) - S_{g,self-int}(A)},
\end{eqnarray}
with the normalization condition fixed by free theory
\begin{eqnarray}
Z_{free} := \int_{Fields} D\phi\; e^{-S_{m,free}(\phi)} = 1,
\label{norcon}
\end{eqnarray}
where:
\begin{itemize}
\item S$_{m,free}$ and S$_{g,free}$ are respectively free actions of the matter field $\phi\in$Fields and the gauge potential A$\in$GP. Usually, free actions are nondegenerate positive sesquilinear forms for matter fields but they are initially degenerate for gauge potentials. However, final free gauge actions, which are nondegenerate, are obtained from initial ones by adding a gauge fixing term.
\item S$_{int}$($\phi$,A) is the interacting term which describes fundamental interactions between matters. In the minimal coupling procedure, it is of the form
\begin{eqnarray}
S_{int}(\phi,A) = B(\phi,\Sigma(A)\phi) \quad \phi\in Fields,\: A\in GP,
\end{eqnarray}
where B is a sesquilinear form on Fields and $\Sigma$(A) an hermitian operator on Fields for any gauge potential A. The hermiticity of $\Sigma$(A) is equivalent to the unitarity of gauge transformations on Fields.
\item S$_{g,self-int}$(A) is a self-interaction term of non-abelian gauge potentials which are present when describing some fundamental interactions such as strong interaction.
\end{itemize}
\begin{note} The above partition function is a formal object because first, the two measures D$\phi$ and DA each on infinite dimensional spaces are formal and second, it is divergent when one tries to evaluate (some part of) it. However, after some nontrivial procedure on Z called \emph{renormalization} \cite{pdel96}, one obtains a finite quantity Z$_{ren}$ and for the probability convenience of QFT, one may normalize it. According to the normalization condition (\ref{norcon}), one may try to define a QFT as a probability law on the space of fields (after performing the integration on GP). In order to manipulate well-defined functional integrals, our strategy is to consider only functional integrals constructed from Gaussian measures.
\end{note}
\section{Sequence construction of interaction}
As seen in the first section, one may assume the existence of a Gaussian measure $\mu_{free}$ on the space of fields, and usual interacting QFTs are obtained by adding a supplementary (interacting) term to the free action. However, this last step leads to divergence problems. To obtain an interacting theory which is again a probability law, our idea is to introduce the interacting term by means of a convolution product, as is done in some constructions in probability theory when one deals with sequences of dependent random variables.
\subsection{Interacting sequences}
In probability theory, theorems on the weak convergence to a normal law, such as the Lindeberg-Feller theorem \cite{ribap01}, works essentially for sequences of independent random variables. More precisely, one considers a sequence of independent random variables (X$_n$)$_{n\in\textbf{N}}$ and its partial sum process ($\displaystyle\sum_{i=0}^nX_i$)$_{n\in\textbf{N}}$; under some additional conditions on the mean and the variance of (X$_n$)$_{n\in\textbf{N}}$, the partial sum process ($\displaystyle\sum_{i=0}^nX_i$)$_{n\in\textbf{N}}$ converges weakly to a normal random variable. These conditions on the mean and variance of the sequence are not so important in the sense that they do not depend on the values of these two quantities. Roughly speaking, the partial sum process of a sequence of independent random variables is inclined to follow a normal law.
\\[10pt]
On the other hand, free physical systems such as free QFTs are often described by a quadratic action, i.e. by normal laws in the path integral formalism. Therefore, one may suggest:
\begin{ass} A free physical system can be represented by the partial sum process of an independent random variables sequence. More generally, an interacting physical system can be represented by sequence of dependent random variables.
\end{ass}
It is well-known that the probability law of a sum of independent (not necessarily equally distributed) random variables is given by the convolution product of random variable's laws. One deduces from the above explanation that a sequence of convolutions of probability laws converges weakly to a normal law when its mean and variance satisfy some technical conditions.
Now, we will show that the probability law of an interacting sequence can also be obtained by a convolution product of its free probability law. To introduce the subject, it suffices to first consider discrete probability laws.
\subsubsection{Pointwise product construction.}
For discrete probability laws, this method consists in introducing the interacting term by a pointwise product with the free probability. For probabilities having densities, it amounts to taking the pointwise product of the interacting term with the free probability density. Therefore, from a discrete probability law p$_{free}$ representing a free theory, we define a new probability law p by:
\begin{eqnarray}
p = p_{free}.p_{int},
\end{eqnarray}
where . is the pointwise product of real-valued functions, and p$_{int}$ is a real function such that:
\begin{eqnarray}
0 \leq p_{free}.p_{int} \leq 1 \quad \textrm{and} \quad
\sum_{k\in\textbf{N}} p_{free}(k)p_{int}(k) = 1.
\end{eqnarray}
Clearly, the construction of the interacting term p$_{int}$ amounts to finding a nonnegative random variable on $(\textbf{N}, p_{free})$ with mean one.
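For instance, taking $p_{free}(k) = 1/4$ for $k \in \{0,1,2,3\}$ and $p_{int}(k) = 2k/3$, one gets $\sum_{k} p_{free}(k)p_{int}(k) = 1$ and $0 \leq p_{free}(k)p_{int}(k) \leq 1$, so that $p(k) = k/6$ is again a probability law on $\{0,1,2,3\}$.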
\begin{note}
In the pointwise product construction, even if different interacting terms can be associated to a given free probability, their constructions depend implicitly on the free term. Moreover, it is not difficult to show that two different free probability laws cannot have the same interacting term. These two properties are not convenient for Particle Physics where interactions are constructed independently of the free term and different particles may have the same interaction, as suggested by Gauge Theory \cite{pdel96}.
\end{note}
One obtains analogous results for probabilities having densities, so let us move on to the next construction, which develops more appropriate properties.
\subsubsection{Convolution product construction.}
Another way to introduce the interacting term is to multiply it with the free term by means of a convolution product. For probabilities having densities, it amounts to considering the convolution of the free probability density with the interacting term. From a discrete probability law p$_{free}$, we define a probability law \^p of an interacting sequence by:
\begin{eqnarray}
\hat{p} = p_{free}*\hat{p}_{int},
\end{eqnarray}
where for f,g$\in$ Map($\textbf{N},\textbf{C}$), the associative convolution product is defined by:
\begin{eqnarray}
f*g := m_{\textbf{C}}\circ (f\otimes g)\circ\Delta^+ ,\quad \Delta^+(k) := \sum_{\substack{a+b = k\\a,b\in \textbf{N}}}a\oplus b, \: k\in \textbf{N} \quad \text{(m$_{\textbf{C}}$ is the multiplication on $\textbf{C}$)}
\label{stacovpr}
\end{eqnarray}
and analogous conditions to those of the pointwise construction for the real-valued function p$_{int}$, i.e.
\begin{eqnarray}
0 \leq p_{free}*\hat{p}_{int} \leq 1 \quad \textrm{and} \quad \sum_{k\in \textbf{N}} p_{free}*\hat{p}_{int}(k) = 1.
\label{codcovpr}
\end{eqnarray}
\begin{note} The interacting term \^p$_{int}$ is necessarily a probability law when one uses the discrete convolution product (\ref{stacovpr}). Moreover, it does not depend on the free probability law and two different free probability laws can have the same interacting term. Such properties are present in some constructions of interaction such as Gauge theory in Particle physics.
\end{note}
Analogous results are obtained for probabilities having densities when one uses the standard convolution product on the space L$^1$(\textbf{R}) of integrable functions defined on $\textbf{R}$. It is then promising to extract more features of the above construction for measurable vector spaces.
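As a toy numerical illustration of the discrete case (the laws below are arbitrary and serve only as an example), the following Python lines convolve two finitely supported laws according to (\ref{stacovpr}) and confirm that the result is again a probability law.
\begin{verbatim}
def convolve(p, q):
    # (p*q)(k) = sum over a+b=k of p(a)q(b), for laws with finite support
    out = [0.0] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

p_free = [0.5, 0.25, 0.25]     # a free law supported on {0, 1, 2}
p_int = [0.1, 0.6, 0.3]        # an interacting law supported on {0, 1, 2}
p_hat = convolve(p_free, p_int)
print(p_hat, sum(p_hat))       # nonnegative entries summing to 1 (up to rounding)
\end{verbatim}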
\section{Convolution product construction on finite dimensional vector spaces}
After these discussions concerning mainly discrete laws, it is natural to consider the usual generalization of the convolution product of measures defined on a measurable vector space. In this case, the (usual) convolution product of two measures is simply given by the pushforward of their product measure under the addition map of the vector space.
\begin{dfn} Let V be a measurable vector space, $\mu_1$, $\mu_2$ two measures on V, then the \emph{convolution product} of $\mu_1$ by $\mu_2$ is the measure $\mu_1*\mu_2$ on V defined by:
\begin{eqnarray*}
\int_V d(\mu_1*\mu_2)(x)\,f(x) &:=& \int_V d\mu_1(x)\int_V d\mu_2(y)\,f(x+y) \quad \forall \text{ f an integrable map on V},\\
&=& \int_V d\mu_2(y)\int_V d\mu_1(x-y)\,f(x),\\
\textrm{ or } \; \mu_1*\mu_2 &:=& (\mu_1\times \mu_2)\circ C(\Sigma),
\end{eqnarray*}
where $\mu_1\times \mu_2$ is the product measure of $\mu_1$ by $\mu_2$ defined on V$\oplus$V, and
\begin{eqnarray*}
\begin{aligned}
\Sigma: V\oplus V &\rightarrow V\\
x\oplus y &\mapsto x + y
\end{aligned}
\quad , \quad
\begin{aligned}
C(\Sigma): C(V) &\rightarrow C(V\oplus V)\\
f &\mapsto f\circ \Sigma.
\end{aligned}
\end{eqnarray*}
\label{defconv}
\end{dfn}
However, it is not difficult to show that the convolution product of two Gaussian measures on $\textbf{R}$ with variances $\sigma^2$ and $\sigma'^2$ is again a Gaussian measure with variance $\sigma^2$ + $\sigma'^2$. In other words, the usual convolution product is not convenient for the construction of an interacting (i.e. non Gaussian) probability law from Gaussian laws on a measurable vector space. Moreover, in a standard interacting QFT, the partition function is given by an iterated functional integral over two different domains. Therefore, we are interested in the construction of an interacting measure from two (Gaussian) measures defined respectively on two different vector spaces. For two measures $\mu_F$ and $\mu_P$ defined on two measurable vector spaces F and P respectively, we will use the following definition:
\begin{eqnarray*}
\int d(\mu_F*_{\zeta}\mu_P)(u)\,f(u) &:=& \int d\mu_F(u)\int d\mu_P(A)\,f(\zeta(A)^{-1}u) \quad \forall \text{ f an integrable map on F},\\
&=& \int d\mu_P(A)\int d\mu_F(\zeta(A)u)\,f(u),\\
\textrm{or } \mu_F*_{\zeta}\mu_P &=& (\mu_F\times \mu_P)\circ C(\Theta),
\end{eqnarray*}
where $\zeta$ is a map from P to Aut(F) and
\begin{eqnarray*}
\begin{aligned}
\Theta: F\oplus P &\rightarrow F,\\
u\oplus A &\mapsto \zeta(A)^{-1}u.
\end{aligned}
\end{eqnarray*}
Of course, one may obtain many possible convolution products, depending on the choice of the map $\zeta$.
The next subsection will select generalized convolution products which may lead to good physical interpretations.
\subsection{Interaction for probabilities on finite dimensional vector spaces}
Here, we will select convolution products, by choosing the map $\zeta$, which are well-behaved when we are dealing with Gaussian measures on finite dimensional vector spaces.
Let F,P be two finite dimensional complex vector spaces representing a space of 'matter fields' and a space of 'gauge potentials' respectively\footnote{However, one can recover the self-interaction case by considering F = P.}. We agree to consider positive definite sesquilinear forms B$_m$ and B$_g$ on F and P respectively as free actions, which then define Gaussian measures on these spaces.
In the minimal coupling procedure, the interacting term is of the form
\begin{eqnarray}
S_{int}(u,A) = B(u,\Sigma(A)u) \quad u\in F,\: A\in P,
\end{eqnarray}
where B is a bilinear form on F and $\Sigma$(A) an hermitian operator on F for any gauge potential A. The hermiticity of $\Sigma$(A) is equivalent to the unitarity of gauge transformations on F.
On the other hand, our construction of interaction consists to define the partition function and the probability law of an interacting physical theory on F by means of the following convolution product:
\begin{eqnarray}
&& Z_{m*_{\zeta}g} := N(\zeta)\int_{F} du \int_{P} dA \; e^{-\frac{1}{2}B_m(\zeta(A)u,\zeta(A)u)}e^{-\frac{1}{2}B_g(A,A)},\\
\label{scpart1}
&\text{and}& d\mu_{m*_{\zeta}g}(u) := N(\zeta)\,du\int_PdA\; e^{-\frac{1}{2}B_m(\zeta(A)u,\zeta(A)u)}e^{-\frac{1}{2}B_g(A,A)},\\
&& \mu_{m*_{\zeta}g} := (\mu_m\times\mu_g)\circ C(\Theta),\quad \Theta: u\oplus A \mapsto \zeta(A)^{-1}u,
\label{measdef}
\end{eqnarray}
where:
\begin{enumerate}
\item measures du and dA are respectively Lebesgue measures on F and P such that
\begin{eqnarray*}
\int_{F} du \; e^{-\frac{1}{2}B_m(u,u)} := \int_Fd\mu_m(u) = \int_{P} dA \; e^{-\frac{1}{2}B_g(A,A)} := \int_P d\mu_g(A) = 1.
\end{eqnarray*}
It is important to note that the definition (\ref{measdef}) of $\mu_{m*_{\zeta}g}$ uses directly Gaussian measures $\mu_m$ and $\mu_g$ and then may admit a suitable generalization in infinite dimensional vector spaces equipped with Gaussian measures such as the dual Schwartz space $\mathcal{S}^*$($\textbf{R}^4$) \cite{jbas04}.
\item for all A$\in$P, $\zeta$(A)$\in$ End(F). The map $\zeta$ characterizes the nature of the convolution product, in other words, that of the interaction;
\item for all A$\in$P, B$_{m,\zeta(A)}$ := B$_m$($\zeta(A)\cdot, \zeta(A)\cdot$) is positive definite and sesquilinear. This is equivalent to considering invertible maps $\zeta$(A) for all A$\in$P. The main reason for this condition is to facilitate the normalization of Z$_{m*_{\zeta}g}$. Indeed, when we perform only the integration on F, we obtain:
\begin{eqnarray*}
Z_{m*_{\zeta}g} = N(\zeta)\int_{P} dA \;det(\zeta(A)^*\zeta(A))^{-1/2} e^{-\frac{1}{2}B_g(A,A)},
\end{eqnarray*}
where $\zeta$(A)* is the hermitian conjugate of $\zeta$(A) with respect to B$_m$.
\item the determinant det($\zeta(A)^*\zeta(A)$) =: N($\zeta$)$^2$, N($\zeta$)$\geq$0, does not depend on A. With this supplementary condition, the integration on the r-dimensional vector space P is easily achieved and one obtains a normalized partition function:
\begin{eqnarray}
Z_{m*_{\zeta}g} = N(\zeta)det(\zeta(A)^*\zeta(A))^{-1/2} = 1.
\end{eqnarray}
\end{enumerate}
The third condition is equivalent to the fact that $\zeta(A)^*\zeta(A)$ is a positive definite operator and there is a converse property \cite{npla98} which says that every positive definite operator is of this form. Hence, one can formulate an equivalent definition of the partition function given by:
\begin{eqnarray}
Z_{m*_{\zeta}g} := N(\Xi)\int_{F} du \int_{P} dA\; e^{-\frac{1}{2}B_m(u,\Xi(A)u)}e^{-\frac{1}{2}B_g(A,A)},
\label{scpart2}
\end{eqnarray}
where:
\begin{enumerate}
\item for all A$\in$P, $\Xi$(A)$\in$ End(F) is positive definite. This implies that det($\Xi$(A)) $\neq$ 0 for all A$\in$P;
\item the determinant det($\Xi$(A)) =: N($\Xi$)$^{2}$, N($\Xi$)$\geq$0, does not depend on A. This implies that $\Xi$ is not linear. Indeed, suppose $\Xi$(zA) = z$\Xi$(A) for some z$\in$$\textbf{C}$ with z$^{\dim F}\neq$ 1, then det($\Xi$(A)) = det($\Xi$(zA)) = det(z$\Xi$(A)) = z$^{\dim F}$det($\Xi$(A)), a contradiction.
\end{enumerate}
\textbf{Example}: A general example is provided by maps $\zeta$(A) defined by means of the exponential map on Aut(F)
\begin{eqnarray*}
e : Lie(Aut(F)) &\rightarrow& Aut(F),\\
X &\mapsto& e^X,
\end{eqnarray*}
where Lie(Aut(F)) is the Lie algebra of the group of automorphisms on F. \\
Hence, for traceless elements T$^a$, a=1,...,r, of Lie(Aut(F)), and A$\in$P, the operator e$^{iA_aT^a}\in$Aut(F) has determinant one and satisfies all above conditions relative to $\zeta$(A).
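As a minimal concrete illustration (the choice of F and of the generators below is ours, made only for definiteness), take F = $\textbf{C}^2$ and let T$^a$, a = 1,2,3, be the traceless Pauli matrices. Since det(e$^X$) = e$^{\textrm{tr}(X)}$ for any endomorphism X of F, one gets
\begin{eqnarray*}
det\big(\zeta(A)\big) = det\big(e^{iA_aT^a}\big) = e^{iA_a\,\textrm{tr}(T^a)} = 1 \quad \forall\, A\in P,
\end{eqnarray*}
and one checks that det($\zeta(A)^*\zeta(A)$) = $|det(\zeta(A))|^{2}$ = 1, so that N($\zeta$) = 1 and the partition function Z$_{m*_{\zeta}g}$ is normalized without any A-dependent factor.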
\begin{note} When T$^a$ = 0, so $\zeta$(A) = Id, then the measure $\mu_{m*_{\zeta}g}$ reduces to the Gaussian measure with covariance B$^{-1}_m$ and this means that there is no interaction. When $\zeta$(A) =: $\tau$ does not depend on A, then $\mu_{m*_{\zeta}g}$ reduces to the Gaussian measure with covariance B$^{-1}_{m,\tau}$.
\end{note}
\begin{note} When dim(P) = dim(F) = 1, then e$^{iA}\in\textbf{C}$ and $\mu_{m*_{\zeta}g}$ reduces to the Gaussian measure with covariance B$^{-1}_{m,e^{iA}}$ = B$^{-1}_m$. This means that our construction is nontrivial for essentially higher dimensional spaces F and P.
\end{note}
Now, it is time to calculate important quantities for probability laws, namely the correlation functions.
\subsection{Correlation functions of interacting probabilities}
Of course, one defines correlation functions and their generating functionals in a manner analogous to the case of usual probability laws.
\begin{dfn} Let B$^{-1}_m$, B$^{-1}_g$ be the covariances of two Gaussian measures defined on F and P respectively, $\mu_{m*_{\zeta}g}$ be their interacting probability law with interaction $\zeta$, then the \emph{correlation function} of the interacting theory is defined by:
\begin{eqnarray*}
<f_1...f_N> := \int_F d\mu_{m*_{\zeta}g}(u)\; f_1(u)...f_N(u), \quad f_i \in F^*,\; i=1,...,N,\: N\in \textbf{N}.
\end{eqnarray*}
\end{dfn}
\textbf{The two-point correlator}: The two-point correlation function is given by:
\begin{eqnarray*}
<f_1.f_2> &:=& \int_F d\mu_{m*_{\zeta}g}(u)\; f_1(u)f_2(u), \quad f_1,f_2 \in F^*,\\
&=& N(\zeta)\int_{P} dA\; e^{-\frac{1}{2}B_g(A,A)} \int_{F} du \; e^{-\frac{1}{2}B_m(\zeta(A)u,\zeta(A)u)}\; f_1(u)f_2(u),\\
&=& \int_P d\mu_g(A) \int_F d\mu_{m,\zeta(A)}(u)\; f_1(u)f_2(u), \\
&&(\mu_{m,\zeta(A)} \text{ is the Gaussian law with covariance }B^{-1}_{m,\zeta(A)}) \\
&=& \int_P d\mu_g(A)\,B^{-1}_{m,\zeta(A)}(f_1,f_2), \quad \text{(Wick theorem)}
\end{eqnarray*}
Noticing that
\begin{eqnarray*}
B^{-1}_{m,\zeta(A)}(f_1,f_2) = B^{-1}_{m}(f_1\circ\zeta(A)^{-1},f_2\circ\zeta(A)^{-1}), \quad f_1,f_2 \in F^*,
\end{eqnarray*}
it is not difficult to show that the two-point correlator $<f_1.f_2>$ is a Gaussian integral over P with a \emph{nonquadratic} integrand. Its exact calculation does not seem straightforward, but one may use a standard perturbative approach by approximating the inverse $\zeta(A)^{-1}$ with polynomials in A.
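To indicate how such a perturbative computation proceeds, here is a first-order sketch, written for the exponential interaction $\zeta(A) = e^{iA_aT^a}$ of the example above and treating B$^{-1}_m$ as a bilinear pairing on F$^*$ for simplicity. From $\zeta(A)^{-1} = \textrm{Id} - iA_aT^a + O(A^2)$ we get $f\circ\zeta(A)^{-1} = f - iA_a\,(f\circ T^a) + O(A^2)$ for $f\in F^*$, hence
\begin{eqnarray*}
B^{-1}_{m,\zeta(A)}(f_1,f_2) = B^{-1}_{m}(f_1,f_2) - iA_a\Big(B^{-1}_{m}(f_1\circ T^a,f_2)+B^{-1}_{m}(f_1,f_2\circ T^a)\Big) + O(A^2).
\end{eqnarray*}
Integrating against $d\mu_g(A)$, the terms linear in A vanish since $\mu_g$ is a centered Gaussian measure, so that $<f_1.f_2> = B^{-1}_m(f_1,f_2)$ up to corrections of second order in A, the latter being weighted by the covariance $B^{-1}_g$.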
\section{Conclusion}
We have seen some insights of probability theory in the formulation of QFT within the path integral formalism. This led us to a probabilistic construction of interacting theories, obtained by means of conditions compatible with important features of present-day interacting QFTs, such as the path integral description of free theories and the independence of the interaction with respect to the free theory. The advantage of our construction is that interacting theories are again represented by probability laws and may therefore be rigorously defined. Our future work will be concerned with further developments of the convolution product construction of interactions, including perturbative calculations of correlation functions of concrete physical systems.
|
2,869,038,156,747 | arxiv | \section{Introduction}\label{intro}
It is well known that any closed orientable 3-manifold admits a Heegaard decomposition along a splitting
surface $\Sigma$, that is, a decomposition with $M=H^1\cup H^2$, $H^1$ and $H^2$ being homeomorphic handlebodies and $H^1\cap H^2=\partial
H^1=\partial H^2=\Sigma$.
For a Heegaard splitting as above and $j=1,2$, we consider a subset $\cal D_j(\Sigma)\subset \cal
C^1(\Sigma)$ in the curve graph of $\Sigma$ consisting of all essential closed
curves that bound discs in $H^j$, which we call meridians.
Masur and Minsky \cite{MaM2} proved that these subsets are quasi-convex in the curve graph.
The Hempel distance or the Heegaard distance of the splitting $M=H^1 \cup H^2$ is defined to
be
$$d(H^1,H^2)=\min\{d(c_1,c_2)|c_i\in \cal D_i(\Sigma)\}.$$
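For instance, for the genus $g$ splitting of $\sharp_g S^1 \times S^2$ obtained by doubling a handlebody, every meridian of $H^1$ is also a meridian of $H^2$, so that $\mathcal D_1(\Sigma)=\mathcal D_2(\Sigma)$ and $d(H^1,H^2)=0$; this is consistent with Haken's result quoted below, since $\sharp_g S^1\times S^2$ is reducible.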
The Hempel distance reflects some properties of 3-manifolds.
Haken \cite{Haken} showed that if $d(H^1, H^2) \geq 1$, then $M$ is irreducible.
Hempel \cite{Hempel} proved that if $M$ contains
an incompressible torus or is a Seifert fibred manifold, then $d(H^1,H^2)\leq 2$ for any splitting
$(H^1,H^2;\Sigma)$.
Combined with the Geometrisation Theorem due to Thurston and Perelman, this implies that if $d(H^1, H^2) \geq 3$, then $M$ is hyperbolic.
Recently Minsky \cite{Mi2} introduced a notion of primitive
stability for $\pslc$ representations of free groups. The main interest of primitive stable representations is that they form an open subset of the character variety which is bigger than the set of convex cocompact representations and on which the group of outer automorphisms acts properly discontinuously.
Let us briefly recall the definition of primitive stable representations.
In a free group, an element is called \emph{primitive} if it can be a member of a free generating set. A representation $\rho:F\rightarrow \pslc$ of a free group is {\em primitive stable} if it has the following property. Any $\rho$-equivariant map from a Cayley graph of $F$ to $\hyperbolic^3$ maps the geodesics defined by primitive elements to uniform quasi-geodesics.
In \cite{MiM}, Minsky and Moriah constructed lattices which are the images of primitive stable representations and asked whether any lattice is the image of such a primitive stable representation. In this note, we shall give further examples of lattices which are images of primitive stable representations by giving some sufficient conditions for primitive stability for closed hyperbolic 3-manifolds.
To be more concrete, we shall show that under some conditions, every Heegaard splitting with Hempel distance large enough and with some boundedness condition (boundedness with regard to subsurfaces) gives two primitive stable representations of the free groups corresponding to the two handlebodies constituting the splitting.
As an application of our main result, we shall also show that any boundary point of a Schottky space is a limit of primitive stable representations corresponding to closed hyperbolic 3-manifolds.
\section{Statement of Main Theorem}
Throughout this paper we assume all manifolds to be closed and orientable, and all Heegaard surfaces to have genus at least $2$.
Let $\Sigma$ be a closed surface.
An essential simple closed curve in $\Sigma$ is simply called a curve on $\Sigma$.
A subsurface $Y$ of $\Sigma$ is said to be essential when every frontier component of $Y$ is essential in $\Sigma$.
For a curve $c$ on $\Sigma$ and an essential subsurface $Y$ of $\Sigma$, we define the projection of $c$ to $Y$, which we denote by $\pi_Y(c)$ as follows.
If $c$ does not intersect $Y$ essentially, then we define $\pi_Y(c)$ to be the empty set.
If $c$ intersects $Y$ essentially, then we define $\pi_Y(c)$ to be the set of simple closed curves on $Y$ obtained from the components of $c \cap Y$ by connecting their endpoints by arcs on
the frontier of $ Y$.
\begin{definition}
\label{bounded}
Let $M=H^1 \cup_\Sigma H^2$ be a Heegaard splitting.
Let $\mathcal D_1$ and $\mathcal D_2$ be the set of isotopy classes of meridians in $H^1$ and $H^2$ respectively, regarded as subsets in the curve graph of $\Sigma$.
We call these the meridian complexes for $H^1$ and $H^2$.
Let $Y$ be an essential subsurface of $\Sigma$.
Then the $Y$-Heegaard distance of $H^1 \cup_\Sigma H^2$ is the distance in the curve graph of $Y$ between $\pi_Y(\mathcal D_1)$ and $\pi_Y(\mathcal D_2)$.
\end{definition}
\begin{definition}
We say that a Heegaard splitting $M=H^1 \cup_\Sigma H^2$ has {\em $R$-bounded subsurface distance} when for any essential subsurface $Y$ of $\Sigma$ the $Y$-Heegaard distance of $H^1 \cup_\Sigma H^2$ is bounded by $R$.
\end{definition}
The main theorem of this note is the following.
\begin{thm}
\label{main}
For any $R$, there exists $K$ depending only on $R$ and the genus $g$ as follows.
For any $3$-manifold admitting a genus-$g$ Heegaard splitting $M=H^1 \cup_\Sigma H^2$ whose Heegaard distance is greater than $K$ and which has $R$-bounded subsurface distance, the manifold $M$ is hyperbolic and the representation $\iota_*: \pi_1(H^j) \rightarrow \pi_1(M) \subset \pslc$ is primitive stable for $j=1,2$.
\end{thm}
Our proof of this theorem relies on the result of Namazi \cite{Na} on model manifolds for Heegaard splittings with uniformly bounded $Y$-Heegaard distance and on the characterisation of primitive stable discrete and faithful representations given in \cite{JKOL}. As was mentioned before, as long as $K\geq 3$, the manifold $M$ is hyperbolic by the Geometrisation Theorem, but Namazi gave an alternative proof of the hyperbolicity of $M$ for $K$ large enough as in our statement, which does not use the full Geometrisation Theorem. See \cite{Na}.
\section{Criterion for primitive stability}
Let $F$ be a non-abelian free group.
Fix some symmetric generator system and consider the Cayley graph $C(F)$ with respect to the generator system.
Given a representation $\rho : F \rightarrow\pslc$ and a base point $o\in
\bh^3$, there is a unique $\rho$-equivariant map
$\tau_{\rho,o}:C(F) \ra \bh^3$ sending the origin $e$ of $C(F)$
to $o$ and taking each edge to a geodesic segment \cite{Fl}.
A representation $\rho : F \rightarrow \pslc$ is {\it primitive stable} if
there are constants $K,\delta$ such that
$\tau_{\rho,o}$ takes all bi-infinite geodesics in $C(F)$ determined by primitive elements to $(K,\delta)$-quasi-geodesics in $\bh^3$.
This definition is independent of the choice of the base point $o\in \bh^3$, which we can easily verify by changing $K$ and $\delta$.
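As an elementary illustration of primitivity of elements (not yet of stability of representations), in the free group $F_2=\langle a,b\rangle$ the element $ab$ is primitive, since $\{ab,b\}$ is again a free generating set, whereas the commutator $aba^{-1}b^{-1}$ is not: a primitive element must map to a primitive vector of the abelianisation $\mathbb Z^2$, while the commutator maps to $0$.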
A measured lamination (or a simple closed curve) $\lambda$ on the boundary of a handlebody is said to be {\it disc-busting} if there exists
$\eta>0$ such that $i(\partial D,
\lambda)>\eta$ for any compressing disc $D$.
Otherwise $\lambda$ is called {\em disc-dodging}.
In \cite{JKOL}, a complete criterion for primitive stability for faithful discrete representations of the fundamental group of a handlebody $H$ to $\pslc$ was given.
\begin{thm}[Jeon-Kim-Ohshika-Lecuire]
\label{JKOL}
Let $\rho$ be a discrete, faithful and geometrically infinite
representation possibly with parabolics such that the non-cuspidal part $H_0$ of $H=\bh^3/\rho(F)$ is the union of a relative compact core $C_0$ and finitely many ends $E_i$ facing $S_i\subset \partial H$.
Then the representation $\rho$ is primitive stable if and only if every parabolic curve is disc-busting, and every geometrically infinite end $E_i$ has ending lamination $\lambda_i$ which is disc-busting on $\partial H$.
In particular, if $\rho$ is purely loxodromic, then it is primitive stable.
\end{thm}
\section{Proof of the main theorem}
The proof of the main theorem relies on the work of Namazi \cite{Na} on geometric models associated with Heegaard splittings and on Theorem \ref{JKOL} (more specifically the last sentence of the statement, i.e. \cite[Theorem 1.2]{JKOL}).
We shall prove the contrapositive of our statement.
Fix $R>0$ and consider a sequence $\{M_i\}$ of hyperbolic 3-manifolds with genus $g$ Heegaard splittings $M_i=H^1_i \cup_{\Sigma_i} H^2_i$ having $R$-bounded subsurface distances and Hempel distances $K_i\longrightarrow\infty$. By \cite[Main Theorem 1]{Na}, for sufficiently large $K_0$, if $K_i\geq K_0$, there is a negatively curved model manifold $N_i$ homeomorphic to $M_i$ whose sectional curvature lies in $(-1-\epsilon, -1+\epsilon)$, where $\epsilon$ depends on $K_0$ and $g$, and goes to $0$ as $K_0 \rightarrow \infty$ fixing $g$, and whose injectivity radii are bounded from below by a uniform positive constant depending only on $K_0$ and $g$. By \cite[Corollary 12.2]{Na}, the metric on $N_i$ is close to the metric on $M_i$ up to the third derivative (and gets closer and closer as $i$ goes to $\infty$).
The model manifold $N_i$ is constructed as follows (see \cite{Na}, p. 173-175). Let $\alpha^1_i$ and $\alpha^2_i$ be pants decompositions of $\partial H^1_i =\partial H^2_i=\Sigma_i$ such that each component of $\alpha^j_i$ bounds a compressing disc in $H^j_i$ and a component of $\alpha^1_i$ and a component of $\alpha^2_i$ realise the Hempel distance of the splitting (i.e. their distance in the curve graph of $\Sigma\approx\Sigma_i$ is equal to the Hempel distance of the splitting $M_i=H^1_i \cup_{\Sigma_i} H^2_i$). Pick a point $\tau^j_i$ in the Teichm\"{u}ller space of $\Sigma$ such that the length of $\alpha^j_i$ with respect to $\tau^j_i$ is less than a fixed constant $B_0$ (for example the Bers constant), and let $m^1_i$ be a convex cocompact hyperbolic structure on $H\approx H^1_i$ whose conformal structure at infinity is $\tau^2_i$, and $m^2_i$ a convex cocompact hyperbolic structure on $H\approx H^2_i$ whose conformal structure at infinity is $\tau^1_i$. Then $N_i$ is obtained
by pasting a large piece of the convex core of $C^1_i:=(H^1_i, m^1_i)$ with a large piece of the convex core of $C^2_i:=(H^2_i, m^2_i)$ using an $I$-bundle homeomorphic to $\Sigma_i\times I$ (see \cite{Na}, p. 173-175). The metric on this $I$-bundle comes from a deformation of a piece of a doubly degenerate hyperbolic manifold whose description is not relevant to our proof.
It follows from this construction that there are sequences of base points $x^j_i\subset N_i$ and $y^j_i\subset C^j_i$ such that the sequences $\{(N_i,x^j_i)\}$ and $\{(C^j_i,y^j_i)\}$ have the same limit in the pointed Hausdorff-Gromov topology for $j=1,2$. Since the metric on $N_i$ gets closer and closer to the metric on $M_i$, there are sequences of base points $z^j_i\subset M_i$ such that the sequences $\{(M_i,z^j_i)\}$ and $\{(C^j_i,y^j_i)\}$ have the same limit in the pointed Hausdorff-Gromov topology for $j=1,2$.
Since the Heegaard splittings $M_i=H^1_i \cup H^2_i$ have $R$-bounded subsurface distances, it follows from \cite[Corollary 6.4]{Na} that, passing to a subsequence, $\tau^j_i$ tends in the Thurston compactification to an arational lamination $\lambda^j$ in the Masur domain on the boundary of $H\approx H^1_i\approx H^2_i$. It follows then from \cite{Oh} or \cite{NS2} that a subsequence of $\{C^j_i\}$ (under the identification of $H$ with $H^j_i$ as above) converges algebraically (and geometrically by the Covering Theorem \cite{Ca2}) to a singly degenerate hyperbolic open handlebody with $|\lambda^j|$ its ending lamination.
Therefore, by pulling back a marking on a compact core of the geometric limit of $\{(M_i, z_i^j)\}$, we have a marking $\phi^j_i : F_g \rightarrow \pi_1(H^j_i) \subset \pi_1(M_i) \subset \pslc$ which converges algebraically as $i \rightarrow \infty$ after passing to a subsequence.
Let $\phi^j_\infty$ be the limit of $\{\phi^j_i\}$.
As we explained before, $H_\infty^j=\bh^3/\phi_\infty^j(F_g)$ is isometric to the geometric limit of $\{(C^j_i,y^j_i)\}$ which is a singly degenerate hyperbolic open handlebody. By Theorem \ref{JKOL}, $\phi^j_\infty$ is primitive stable for $j=1,2$. Since primitive stability is an open condition, this implies that $\phi^j_i$ is also primitive stable for sufficiently large $i$. This concludes the proof of our main theorem, for we have proved its contrapositive.
\section{Applications}
In this section, we shall present two applications of our main theorem.
In the first, we shall give concrete examples of primitive stable representations obtained by Theorem \ref{main}.
In the second, we shall show that every point on the boundary of the Schottky space can be approximated by primitive stable representations corresponding to closed hyperbolic 3-manifolds.
Consider a standard genus $g$ Heegaard splitting of $\sharp_g S^1 \times S^2=H^1 \cup H^2$ obtained by doubling.
Let $\eta: \partial H^2 \rightarrow \partial H^1$ denote the identification of the two boundaries in this splitting.
For a mapping class $\phi$ of $S=\partial H^2$, we denote by $H^1 \cup_\phi H^2$ the Heegaard splitting of a 3-manifold obtained by pasting $\partial H^2$ to $\partial H^1$ by $\eta \circ \phi$ instead of $\eta$.
A pseudo-Anosov mapping class on the compressible boundary $S$ of a 3-manifold
$M$ is said to {\em partially extend} if its representative extends to a homeomorphism on
a non-trivial compression body inside $M$, whose exterior boundary is $S$ (see \cite{BJM}).
\begin{thm}\label{pseudo} Set $M_i=H^1 \cup_{\phi^i} H^2$, where $\phi:\partial H^2
\ra \partial H^2$ is a pseudo-Anosov mapping class no power of which
extends partially to $H^2$. Then
there is some $N$ such that, for every $i \geq N$, the manifold $M_i$ is hyperbolic and the representations $\iota_*\colon \pi_1(H^j) \rightarrow \pi_1(M_i) \subset \pslc$, $j=1,2$, are primitive stable.
\end{thm}
By \cite{BJM}, no power of $\phi$ partially extends if and only if the stable lamination of $\phi$ lies in
the Masur domain of $\partial H^2$.
It was proved in \cite{BJM} that this also implies that the unstable lamination of $\phi$ lies in the Masur domain of $\partial H^2$.
\begin{proof}
This could be proved just by replacing the arguments from \cite{Na} in the proof of Theorem \ref{main} with arguments from \cite{NS2}.
Instead, we shall prove that this is a special case of Theorem \ref{main}.
Let $\mathcal D^1_i$ and $\mathcal D^2_i$ be the meridian complexes for the Heegaard splitting $M_i=H^1 \cup_{\phi^i} H^2$.
Let $\gamma_i$ be a tight geodesic in the curve graph of $S$ connecting a point in $\mathcal D^1_i$ and a point in $\mathcal D^2_i$.
Then, as was shown in Namazi-Souto \cite{NS2}, $\gamma_i$ converges, uniformly on any compact set, to a tight geodesic in the curve complex connecting the unstable and stable laminations which are regarded as points at infinity of the curve graph, as $i \rightarrow \infty$.
This implies, by an argument of \cite{MaM}, that there are $R>0$ and $n_0 \in \bn$ such that if $i \geq n_0$, for any essential subsurface $Y$ on $S$, the projections of the endpoints of $\gamma_i$ to the curve complex of $Y$ are within the distance $R$.
Thus, we see that Theorem \ref{pseudo} is just a special case of Theorem \ref{main}.
\end{proof}
We consider the character variety $\chi(F_g)$ of the representations of the free group $F_g$ of rank $g$ to $\pslc$.
Let $\mathcal S_g$ be the subspace of $\chi(F_g)$ consisting of Schottky representations.
\begin{thm}
\label{density}
Every point in the frontier of $\mathcal S_g$ in $\chi(F_g)$ is a limit of a sequence of primitive stable unfaithful discrete representations $\{\rho_n\}$ such that $\bh^3/\rho_n(F_g)$ is a closed hyperbolic 3-manifold for every $n$.
\end{thm}
\begin{proof}
In the following, we regard faithful discrete representations of $F_g$ as hyperbolic structures on the interior of a handlebody $H_g$ of genus $g$.
We let $S$ be the boundary of $H_g$, and regard ending laminations or parabolic curves as lying on $S$.
By Corollary 15.1 of Canary-Culler-Hersonsky-Shalen \cite{CCHS}, in $\Fr\mathcal S_g$, the maximal cusps, \ie geometrically finite representations without non-trivial quasi-conformal deformations, are dense.
Therefore we have only to show that any maximal cusp is a limit of a sequence of primitive stable unfaithful discrete representations corresponding to closed manifolds.
Let $\rho: F_g \rightarrow \pslc$ be a maximal cusp, and $M$ a hyperbolic 3-manifold $\bh^3/\rho(F_g)$.
We note that the union of parabolic curves of any maximal cusp is doubly incompressible, \ie has a positive lower bound for the intersection numbers with the meridians of $H_g$.
Let $c_1, \dots , c_p$ be the parabolic curves of $\rho$ regarded as lying on $S$.
Now, let $\lambda$ be the stable lamination contained in the Masur domain of $S$ of a pseudo-Anosov homeomorphism $\phi$ on $S$.
Let $\psi: F_g \rightarrow \pslc$ be a representation on the frontier of $\mathcal S_g$ with ending lamination $|\lambda|$ (the support of $\lambda$).
Now, we consider the composition of the $n$-times iterated Dehn twists along $c_1, \dots, c_p$, and denote it by $\tau_n$.
Consider the measured lamination $\tau_n(\lambda)$ and its projective class $[\tau_n(\lambda)]$.
Since $c_1 \cup \dots \cup c_p$ is doubly incompressible and $[\tau_n(\lambda)]$ converges to $[c_1\cup \dots \cup c_p]$ as $n \rightarrow \infty$, we see that $\tau_n(\lambda)$ is also doubly incompressible.
On the other hand, since $\tau_n(\lambda)$ is arational, it must be contained in the Masur domain for large $n$ as was shown in Lemma 3.4 in Lecuire \cite{Le}.
Let $\psi_n$ be a representation on the frontier of $\mathcal S_g$ with ending lamination $|\tau_n(\lambda)|$ for large $n$.
Fix $n$ and let $\{P_j\}$ be a sequence of pants decompositions on $S$ converging projectively to $\tau_n(\lambda)$.
Now, we construct a $3$-manifold $M_j$ as follows.
We glue a $2$-handle to $\partial H_g$ along an annular neighbourhood (in $S$) of each component of $P_j$.
We get a $3$-manifold whose boundary is a union of spheres.
Then we glue a $3$-ball along each boundary component and denote the resulting 3-manifold by $M_j$.
It is easy to see that $S\subset M_j$ is a Heegaard surface.
We denote by $H_g^1$ the original $H_g$ and by $H_g^2(j)$ the one lying on the opposite side of $S$.
Then as $j \rightarrow \infty$, the meridian complex for $H_g^2(j)$ converges to the point at infinity represented by $\tau_n(\lambda)$ in the Gromov bordification of the curve graph of $S$.
It is known that any geodesic ray with endpoint equal to the support of a stable lamination has subsurface boundedness (see Minsky \cite{Mi1}).
It follows that $M_j$ satisfies the assumptions of Theorem \ref{main} for sufficiently large $j$.
Thus $M_j$ is hyperbolic and the representation $\phi_j:F_g\rightarrow \pslc$ induced by the inclusion $H_g\subset M_j$ is primitive stable. Furthermore, it follows easily from the arguments in the proof of Theorem \ref{main} and the Ending Lamination Theorem that $\psi_n$ is the limit of $\{\phi_j\}$ as $j \rightarrow \infty$.
Hence $\psi_n$ is a limit of a sequence of primitive stable unfaithful discrete representations.
Since $c_1\cup \dots \cup c_p$ is doubly incompressible, the representations $\psi_n$ converge in $\chi(F_g)$ as $n \rightarrow \infty$ by the main theorem of Kim-Lecuire-Ohshika \cite{KLO} and the limit is the maximal cusp $\rho$ by lower semi-continuity of the length function (see \cite{brock}).
Therefore, using a diagonal extraction, we see that $\rho$ is also a limit of primitive stable representations corresponding to closed hyperbolic 3-manifolds.
This completes the proof.
\end{proof}
|
2,869,038,156,748 | arxiv | \section{Introduction}
The study of the geometry of foliations is often related to the study of their transverse structure. Among the most comprehensible structures are those given by
actions of Lie groups on some homogeneous space. This is the case of the so-called {\it transversely homogeneous foliations} as introduced by Blumenthal (\cite{Blumenthal,Godbillon}). One of the first cases of such a class of foliations is the class of transversely affine foliations. Such foliations have been studied in the smooth real codimension one case by Bobo Seke in \cite{boboseke}. In \cite{scardua} the author considers the case of codimension one holomorphic foliations with singularities. A classification is given for such objects on complex projective spaces.
In this paper we consider the case of arbitrary codimension. We focus on the holomorphic case, already aiming at the case of foliations with singularities. Nevertheless, most of the material in the first sections also holds in the (non-singular) smooth case.
In few words, our aim is to introduce the first ingredients in the study of the case of transversely homogeneous holomorphic foliations with singularities.
\subsection{Transversely affine foliations}
Let us clearly state the notions we use. The following definition is found in \cite{Blumenthal} or in \cite{Godbillon} pp. 245. We adapt it to the holomorphic case:
\begin{Definition}[transversely homogeneous foliation]
{\rm Let $\mathcal F$ be a holomorphic foliation on a
complex manifold $P$. Let $G$ be a simply-connected Lie group and $H
\subset G$ be a connected closed subgroup of $G$. We say that
$\mathcal F$ is {\it transversely homogeneous\/} in $P$ of model
$G/H$ if $P$ admits an open cover $\bigcup
\limits_{i \in I} U_i = P$ with holomorphic submersions
$y_i\colon U_i \to G/H$ satisfying: (i) $\mathcal F\big|_{U_i}$ is
defined by $y_i$, (ii) In each $U_i \cap U_j \ne \emptyset$ we have $y_i
= g_{ij}\circ y_j$ for some locally constant map $g_{ij}\colon U_i \cap
U_j \to G$. }
\end{Definition}
Notice that the group $G$ acts on the quotient $G/H$ by left translations.
In particular, we have:
\begin{Definition}
\label{Definition:transvaffine} {\rm A holomorphic codimension-$q$ foliation ${\mathcal F}$ on
$M^n$ is {\it transversely affine\/} if there is a family
$\{Y_i\colon U_i \to \co^q\}_{i\in I}$ of holomorphic submersions
$Y_i\colon U_i \to \co^q$ defined in open sets $U_i \subset M$,
defining ${\mathcal F}\big|_{U_i}$, covering $M
= \bigcup\limits_{i \in I} \,U_i$ and such that for each $U_i \cap U_j \ne\phi$ we have $Y_i =
A_{ij}Y_j + B_{ij}$ for some locally constant maps $A_{ij}\colon U_i\cap U_j \to \GL_q(\co)$, $B_{ij}\colon U_i\cap U_j \to \co^q$. }
\end{Definition}
\subsection{Integrable systems and foliations}
Recall that a system of holomorphic 1-forms $\Omega:= \{\Omega_1,...,\Omega_q\}$ in an open set $U\subset M$ is {\it integrable} if for every $j \in \{1,...,q\}$ we have $d\Omega_j \wedge \Omega_1\wedge \ldots \wedge \Omega_q=0$ in $U$. If such a system of forms has maximal rank at each point, then it defines a codimension $q$ holomorphic foliation $\fa(\Omega)$ on $U$. The foliation is given by the integrable distribution of $(n-q)$-planes $\Ker(\Omega):= \bigcap \limits_{j=1} ^q \Ker(\Omega_j)$ where given $p \in M$ we define $\Ker(\Omega_j)(p) :=\{ v \in T_p(M) : \Omega_j(p) \cdot v =0\}$. Two such maximal rank integrable systems $\Omega$ and $\Omega^\prime$ define the same foliation in $U$ if, and only if, we have $\Omega_i = \sum\limits_{j=1} ^q a_{ij} \Omega_j$ for some holomorphic functions $a_{ij}$ in $U$, with the property that the $q\times q$ matrix $A=(a_{ij})_{i,j=1}^q$ is nonsingular at each point of $U$.
Given a system $\{\Omega_1,...,\Omega_q\}$ as above, we define a $q\times 1$ matrix valued 1-form $\Omega$ as having rows given by $\Omega_1,...,\Omega_q$.
We denote by $\fa(\Omega)$ the foliation defined by this system.
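For instance (a simple illustrative choice), on $\co^3$ with coordinates $(x,y,z)$ the system $\Omega_1 = dx - y\,dz$, $\Omega_2 = dy$ is integrable and has maximal rank: indeed $d\Omega_1 = -dy\wedge dz$ and $d\Omega_2=0$, so that $d\Omega_j \wedge \Omega_1\wedge \Omega_2 = 0$ for $j=1,2$, while $\Omega_1\wedge\Omega_2$ never vanishes. The corresponding codimension $2$ foliation $\fa(\Omega)$ has the curves $\{y=c,\; x-cz=c'\}$, $c,c'\in\co$, as leaves.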
Let us now introduce some notation.
Given a $k \times \ell$ and a $\ell \times s$ matrix valued 1-form
$A=(a_{ij})$ and $B=(b_{jt})$ respectively, we may define the wedge product $A \wedge B$ in the natural way, as the $k \times s$ matrix valued 1-form $A\wedge B$ whose
entry at the position $(i,t)$ is the 2-form $\sum\limits_{j=1} ^\ell a_{ij} \wedge b_{jt}$. In the same way we may define the exterior derivative $dA$ as the $k \times \ell$ matrix valued $2$-form whose entry at the position $(i,j)$ is the $2$-form $da_{ij}$.
\vglue.1in
\begin{Example}
\label{Example:intersection} {\rm Let ${\mathcal F}_1,\ldots,{\mathcal F}_q$ be transversely affine
codimension-one foliations on $M^n$, which are transverse everywhere.
Then the intersection foliation $\bigcap\limits_{i=1}^{q}\,
{\mathcal F}_j$ is a codimension-$q$ foliation on $M$ which is transversely affine. Indeed,
assume that ${\mathcal F}_j$ is given by some holomorphic integrable 1-form $\Omega_j$ in
$M$. According to \cite{scardua} Chapter I Proposition 1.1 we have $d\Omega_j = \eta_j \wedge \Omega_j$,
$d\eta_j=0$, for some holomorphic 1-form $\eta_j$ in $M$. Define $\Om$ as the $q\times 1$ matrix valued 1-form in $M$ having $\Omega_1,...,\Omega_q$ as rows.
Also define
$\eta$ the $q\times q$ diagonal matrix valued holomorphic 1-form in $M$ having $\eta_1,...,\eta_q$ in its diagonal.
Then, in the above notation we have $d\Om = \eta \wedge\Om$. Since $\eta$ is diagonal, we have $ d \eta =0= \eta \wedge \eta$.}
\end{Example}
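A concrete instance of this construction (a minimal one, chosen only for illustration): on $\co^3$ with coordinates $(x,y,z)$ take $\Omega_1 = e^y dx$ and $\Omega_2 = e^x dy$, so that $d\Omega_1=\eta_1\wedge\Omega_1$ with $\eta_1=dy$ and $d\Omega_2=\eta_2\wedge\Omega_2$ with $\eta_2=dx$. These two codimension one foliations are transverse everywhere and, writing $\Om = \begin{pmatrix} e^y dx\\ e^x dy\end{pmatrix}$ and $\eta = \begin{pmatrix} dy & 0\\ 0 & dx\end{pmatrix}$, one checks directly that $d\Om = \eta\wedge\Om$ and $d\eta = 0 = \eta\wedge\eta$.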
As for the general case we have the following description:
\begin{Theorem}
\label{Theorem:forms}
Let ${\mathcal F}$ be a holomorphic codimension-$q$
foliation on $M$. The
foliation ${\mathcal F}$ is transversely affine in $M$ if, and only
if, there exist an open cover $\bigcup\limits_{i \in I} \,U_i=M$ and
holomorphic $q\times1$, $q\times q$ matrix valued {\rm 1}-forms
$\Omega_i$, $\eta_i$ in $U_i$, $\forall\, i \in I$, satisfying:
\noindent {\rm a)} ${\mathcal F}\big|_{U_i} = {\mathcal
F}(\Omega_i)$
\noindent {\rm b)} $d\Omega_i = \eta_i \wedge \Omega_i$ and $d\eta_i
= \eta_i \wedge \eta_i$
\noindent {\rm c)} if $U_i \cap U_j \ne\phi$ then we have $\Omega_i
= G_{ij}\cdot\Omega_j$ and $\eta_i = \eta_j + dG_{ij}\cdot
G_{ij}^{-1}$ for some holomorphic $G_{ij}\colon U_i\cap U_j \to
\GL_q(\co)$.
\noindent Moreover, two such collections
$\{(\Omega_i,\eta_i,U_i)\}\_{i\in I}$ and
$\{(\Omega_i^\prime,\eta_i^\prime,U_i)\}\_{i\in I}$ define the same
affine transverse structure for ${\mathcal F}$, if and only if, we
have $\Omega_i^\prime = G_i\cdot\Omega_i$ and $\eta_i^\prime =
\eta_i + dG_i\cdot G_i^{-1}$ for some holomorphic $G_i\colon U_i \to
\GL_q(\co)$.
\end{Theorem}
\begin{Remark}
{\rm Theorem~\ref{Theorem:forms} is stated in a much more abstract context by Blumenthal (see Theorem 2 page 144 as well as its Corollary 3.2 page 149). Nevertheless, some
triviality hypothesis on principal fiber bundles with structural group
$G/H$ over the manifold $M$ is required (see also \cite{Godbillon} Prop. 3.6 pp. 249-250).
In our case, we will obtain it from some explicit computations and some classical results on Lie groups (see Theorem~\ref{Theorem:Darboux-Lie}).}
\end{Remark}
In the final section we prove an extension result for the pair $(\Omega, \eta)$ associated to an affine transverse structure defined off some codimension one divisor, under the presence of generic singularities for the foliation on the divisor (cf. Theorem~\ref{Theorem:extensionlemma}).
\section{Auxiliary results}
We state some results of easy proof which will be used in the proof
of Theorem~\ref{Theorem:forms}.
We start by the following well-known lemma from real analysis, adapted to the holomorphic case:
\begin{Lemma}
\label{Lemma:basic}
Let $X\colon U\subset \co^n \to \GL_q(\co)$ be a holomorphic map,
then $d(X^{-1}) = -X^{-1}\cdot dX\cdot X^{-1}$.
\end{Lemma}
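For the reader's convenience we recall the one-line argument: differentiating the identity $X\cdot X^{-1} = \textrm{Id}$ gives $dX\cdot X^{-1} + X\cdot d(X^{-1}) = 0$, and multiplying on the left by $X^{-1}$ yields the stated formula.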
Next step is:
\begin{Lemma}
\label{Lemma:eta} Let $X\colon U \subset \co^n \to \GL_q(\co)$ be holomorphic and let $\eta$ be defined by $\eta
= dX\cdot X^{-1}$; then we have $d\eta= \eta \wedge \eta$. Given a holomorphic $q\times q$ matrix valued 1-form $\eta$ in $U\subset \mathbb C^n$, such that $d\eta= \eta \wedge \eta$, and a holomorphic map $G \colon U \to \GL_q(\mathbb C)$, the 1-form $\tilde \eta := \eta + dG. G^{-1}$ satisfies $d \tilde \eta = \tilde \eta \wedge \tilde \eta$.
\end{Lemma}
\begin{proof} Using Lemma~\ref{Lemma:basic} we have $d(X^{-1}) =-X^{-1}\cdot dX\cdot X^{-1}$. Thus
$$
\aligned
d\eta = d(dX\cdot X^{-1}) &= d(dX) \wedge X^{-1} +(-1) dX \wedge d(X^{-1})\\
&= (-1) dX \wedge (-X^{-1}\cdot dX\cdot X^{-1})\\
&= (dX\cdot X^{-1}) \wedge (dX\cdot X^{-1}) = \eta \wedge \eta.
\endaligned
$$
As for the second part, we have $ d \tilde \eta = d \eta + d(dG.G^{-1})=
\eta \wedge \eta + dG.G^{-1} \wedge dG.G^{-1}$. On the other hand
$\tilde \eta \wedge \tilde \eta = (\eta + dG.G^{-1}) \wedge (\eta + dG.G^{-1}) = \eta \wedge \eta + \eta \wedge dG.G^{-1} + dG.G^{-1} \wedge \eta + dG.G^{-1} \wedge dG. G^{-1}= \eta \wedge \eta + dG.G^{-1} \wedge dG. G^{-1}$.
\end{proof}
Finally, we have:
\begin{Lemma}
\label{Lemma:glueing} Let $G,G^\prime\colon U \subset \co^n \to \GL_q(\co)$ be
holomorphic maps. Then we have $dG.G^{-1} = dG^\prime.G^{\prime -1}$
if and only if $G^\prime = G.A$ for some locally constant $A\colon U
\to \GL_q(\co)$.
\end{Lemma}
\begin{proof} First we assume that $G^\prime = G\cdot A$ with $A$ locally
constant. Thus we have $G^{-1}\cdot G^\prime = A$ and therefore
$d(G^{-1}\cdot G^\prime) = dA = 0$ in $U$. This implies
$d(G^{-1})\cdot G^\prime + G^{-1}\cdot d(G^\prime) = 0$. Using that
$d(G^{-1})=-G^{-1}\cdot dG\cdot G^{-1}$ we have
$$
-G^{-1}\cdot dG\cdot G^{-1}\cdot G^\prime + G^{-1}\cdot dG^\prime =
0.
$$
Multiplying on the left this equality by $G$ we obtain
$$
-dG\cdot G^{-1}\cdot G^\prime + dG^\prime = 0.
$$
Multiplying on the right this last equality by $G^{\prime-1}$ we
obtain
$$
-dG\cdot G^{-1} + dG^\prime\cdot(G^\prime)^{-1} = 0,
$$
which proves the first part. Now we assume that $dG\cdot G^{-1} =
dG^\prime\cdot(G^\prime)^{-1}$ in $U$. Define $A = G^{-1}\cdot
G^\prime$ so that $G^\prime = G\cdot A$. We only have to show that
$dA=0$ in $U$.
\noindent In fact, we have
$$
d(A) = d(G^{-1}\cdot G^\prime) = d(G^{-1})\cdot G^\prime +
G^{-1}\cdot d(G^\prime).
$$
Since $d(G^{-1})=-G^{-1}\cdot dG\cdot G^{-1}$ we get
$$
\aligned
dA &= -G^{-1}\cdot dG\cdot G^{-1}\cdot G^\prime + G^{-1}\cdot dG^\prime\\
&= -G^{-1}\cdot (dG\cdot G^{-1} - dG^\prime\cdot
G^{\prime-1})G^\prime.
\endaligned
$$
Using the hypothesis $dG\cdot G^{-1} = dG^\prime\cdot G^{\prime-1}$
we obtain $dA=0$.
\end{proof}
Let $G$ be a Lie group and $\{\omega_1,...,\omega_\ell\}$ be a basis
of the Lie algebra of $G$. Then we have $d\omega_k =
\sum\limits_{i<j} c_{ij} ^k \omega_i \wedge \omega_j$ for a family of
constants $\{c_{ij}^k\}$ called the {\it structure constants} of the
Lie algebra in the given basis (\cite{Godbillon}). With this we have the classical theorem due to Darboux and Lie below. In a few words, it says that maximal rank systems of 1-forms satisfying the same structure equations are locally pull-backs of the Lie algebra basis by maps to the group. The map is unique up to left translations in the Lie group.
\begin{Theorem}[Darboux-Lie, \cite{Godbillon}]
\label{Theorem:Darboux-Lie} Let $G$ be a (complex) Lie group of dimension $\ell$. Let
$\{\omega_1,...,\omega_\ell\}$ be a basis of the Lie algebra of
$G$ with structure constants $\{c_{ij}^k\}$.
Given a maximal rank system of (holomorphic) 1-forms
$\Omega_1,...,\Omega_\ell$ in a (complex) manifold $V$, such that $d\Omega_k=\sum_{i<j}
c_{ij}^k\, \Omega_i \wedge \Omega_j$, then:
\begin{enumerate}
\item For each point $p\in V$ there is a neighborhood
$p\in U_p \subseteq V$ equipped with a (holomorphic) submersion $f_p\colon U_p \to
G$ which defines the foliation $\fa(\Omega)$ in $U_p$ and such that $f_p^*
(\omega_j)=\Omega_j$ in $U_p$, for all $j\in \{1,...,\ell\}$.
\item If $V$ is simply-connected we can take $U_p = V$.
\item If $U_p \cap U_q \ne \emptyset$ then in the
intersection we have $f_q = L_{g_{pq}}(f_p)$ for some locally
constant left translation $L_{g_{pq}}$ in $G$.
\end{enumerate}
\end{Theorem}
\section{Transversely affine
foliations and differential forms}
The first step in the proof of Theorem~\ref{Theorem:forms} is:
\begin{Proposition}
\label{Proposition:formsglobal} Let ${\mathcal F}$ be a holomorphic codimension-$q$
foliation on $M$. Suppose that ${\mathcal F}$ is defined by some
integrable system $\{\Omega_1,\ldots,\Omega_q\}$ of holomorphic {\rm
1}-forms. If $\fa$ is transversely affine then there is a $q\times q$ matrix valued holomorphic {\rm
1}-form $\eta = (\eta_{ij})$ satisfying:
$$
d\Om = \eta\wedge\Om,\quad d\eta = \eta \wedge \eta \qquad\text{where}\qquad \Om =
\begin{pmatrix} \Omega_1\\ \vdots\\ \Omega_q\end{pmatrix}
$$
\end{Proposition}
\begin{proof}
Let $\{\Omega_1,\ldots,\Omega_q\}$ be an integrable
holomorphic system which defines ${\mathcal F}$ in $M$ and suppose
$\{Y_i\colon U_i \to \co^q\}_{i\in I}$ is a transversal affine
structure for ${\mathcal F}$ in $M$ with
$$
Y_i = A_{ij}Y_j+B_{ij} \quad\text{in}\quad U_i \cap U_j \ne \emptyset
\, \, \, (1)
$$
as in Definition~\ref{Definition:transvaffine}.
\noindent Since the submersions $Y_i$ define ${\mathcal F}$ we can
write
$$
\Om = G_i.dY_i (2)
$$
in each $U_i$, for some holomorphic $G_i\colon U_i \to \GL_q(\co)$.
Here $\Om =
\begin{pmatrix} \Omega_1\\ \vdots\\ \Omega_q\end{pmatrix}$.
\noindent In each $U_i \cap U_j \ne \emptyset$ we have:
$$
G_idY_i = G_j dY_j (3)
$$
and as it follows from (1)
$$
G_j = G_i.A_{ij}\,. (4)
$$
According to Lemma~\ref{Lemma:glueing} this last equality implies:
$$
dG_j.G_j^{-1} = dG_i.G_i^{-1} (5)
$$
in each $U_i \cap U_j \ne \emptyset$.
\noindent This allows us to define $\eta$ in $M$ by
$$
\eta\big|_{U_i} = dG_i.G_i^{-1}. (6)
$$
According to Lemma~\ref{Lemma:eta} we have $d\eta = \eta \wedge \eta$. We also have in each $U_i$
$$
\aligned
d\Om = d(G_idY_i) &= dG_i \wedge dY_i\\
&= dG_i.G_i^{-1} \wedge dY_i\\
&= dG_i.G_i^{-1}\wedge G_i dY_i\\
&= \eta \wedge \Om.
\endaligned
$$
The pair $(\Om, \eta)$
satisfies the conditions of the statement.
\end{proof}
Now we study the converse of the proposition above.
\begin{Proposition}
\label{Proposition:formsgeneral} Let ${\mathcal F}$ be a holomorphic codimension-$q$
foliation on $M$. The
foliation ${\mathcal F}$ is transversely affine in $M$ if, and only
if, there exist an open cover $\bigcup\limits_{i \in I} \,U_i=M$ and
holomorphic $q\times1$, $q\times q$ matrix valued {\rm 1}-forms
$\Omega_i$, $\eta_i$ in $U_i$, $\forall\, i \in I$, satisfying:
\noindent {\rm a)} ${\mathcal F}\big|_{U_i} = {\mathcal
F}(\Omega_i)$
\noindent {\rm b)} $d\Omega_i = \eta_i \wedge \Omega_i$ and $d\eta_i
= \eta_i \wedge \eta_i$
\noindent {\rm c)} if $U_i \cap U_j \ne\phi$ then we have $\Omega_i
= G_{ij}\cdot\Omega_j$ and $\eta_i = \eta_j + dG_{ij}\cdot
G_{ij}^{-1}$ for some holomorphic $G_{ij}\colon U_i\cap U_j \to
\GL_q(\co)$.
\noindent Moreover, two such collections
$\{(\Omega_i,\eta_i,U_i)\}_{i\in I}$ and
$\{(\Omega_i^\prime,\eta_i^\prime,U_i)\}_{i\in I}$ define the same
affine transverse structure for ${\mathcal F}$, if and only if, we
have $\Omega_i^\prime = G_i\cdot\Omega_i$ and $\eta_i^\prime =
\eta_i + dG_i\cdot G_i^{-1}$ for some holomorphic $G_i\colon U_i \to
\GL_q(\co)$.
\end{Proposition}
\noindent In order to prove the proposition above in detail we
explicitly calculate the Lie algebra of $\Aff(\co^q)$. We consider $\GL_q(\mathbb C)$ as an open subset of the vector space $M(q\times q, \mathbb C)$ of complex $q\times q$ matrices. Using this we have:
\begin{Lemma}
\label{Lemma:affineliealgebra} The Lie algebra $\aff(\co^q)$ of $\Aff(\co^q)$ has a basis
given by $\Om = X\cdot dY$, $\eta = dX\cdot X^{-1}$ where $X \in
\GL_q(\co)$ and $Y \in \co^q$ are global coordinates. Furthermore we
have $d\Om = \eta\wedge\Om$, $d\eta = \eta\wedge\eta$.
\end{Lemma}
\begin{proof} We denote by $M(q\times q, \mathbb C)$ the linear space of $q\times q$ complex matrices. Since $\GL_q(\co) \subset M(q\times q, \mathbb C)\cong \co^{q^2}$ as an
open set, we have a natural global coordinate $X$ in $\GL_q(\co)$.
Let us denote by $Y$ the natural global coordinate in $\co^q$. Fixed
any element $(X_o,Y_o) \in \Aff(\co^q)$ it defines a left
translation by
$$
\aligned &L_{(X_o,Y_o)}\colon \GL_q(\co)\times\co^q \longrightarrow
\GL_q(\co)\times\co^q\\
&L_{(X_o,Y_o)}(X,Y) = (X_oX, X_oY + Y_o).
\endaligned
$$
Therefore given any vector $(V,W) \in
T_{(X_o,Y_o)}(\GL_q(\co)\times\co^q)$ we have
$\DL_{(X_o,Y_o)}(X,Y)\cdot(V,W) = (X_oV,X_oW)$. Therefore a basis of
the left-invariant vector fields in $\Aff(\co^q)$ is given by:
$$
\X = (X,X) = X\cdot\frac{\po}{\po X} + X\cdot\frac{\po}{\po Y} \in
T(\Aff(\co^q)) = \GL_q(\co)\times\co^q.
$$
Thus a basis of $\aff(\co^q)$ is given by the dual basis
$\{\Om,\eta\}$ of $\{\X\}$. This shows that
$$
\begin{cases}
\Om &= X\cdot dY\\
\eta &= dX\cdot X^{-1}
\end{cases}
$$
is a basis for $\aff(\co^q)$.
\noindent It is now a straightforward calculation to show that
$d\Om = \eta\wedge\Om$ and $d\eta = \eta\wedge\eta$.
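Indeed, since $\Om = X\cdot dY$ we have
$$
d\Om = dX\wedge dY = (dX\cdot X^{-1})\wedge(X\cdot dY) = \eta\wedge\Om,
$$
and the identity $d\eta = \eta\wedge\eta$ for $\eta = dX\cdot X^{-1}$ follows exactly as in Lemma~\ref{Lemma:eta}.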
\end{proof}
\noindent Using these two lemmas and Darboux-Lie Theorem (Theorem~\ref{Theorem:Darboux-Lie}) or alternatively, the book of Spivak (\cite{spivak} Chapter 10, Theorem 17 page 397, \, Theorem 18 page 398 and Corollary 19 page 400) we obtain:
\begin{Corollary}
\label{Corollary:localformetaomega}
\begin{itemize}
\item[{\rm(a)}]
Let $\eta$ be a holomorphic $q\times q$ matrix valued {\rm
1}-form in $M$ satisfying $d\eta = \eta\wedge\eta$. Then locally in
$M$ we have $\eta = dX\cdot X^{-1}$ for some holomorphic $X\colon
U\subset M \to \GL_q(\co)$. If $M$ is simply-connected we can choose
$U=M$. Moreover given two such trivializations $(X,U)$ and
$(\widetilde X,\widetilde U)$ with $U \cap \widetilde U \ne \emptyset$
connected, we have $\widetilde X = X\cdot A$ for some constant $A \in
\GL_q(\co)$.
\item[{\rm b)}]
Let $\Om$, $\eta$ be holomorphic $q\times1$, $q\times
q$ matrix valued {\rm 1}-forms in $M$ satisfying $d\Om = \eta\wedge\Omega$
and $d\eta = \eta\wedge\eta$. Then given any point $p \in M$ and
given any simply-connected open neighborhood $p \in U_p \subset M$
we have $\Om = X\cdot dY$, $\eta = dX\cdot X^{-1}$ for some
holomorphic $\pi_p = (X,Y)\colon U_p \to \GL_q(\co)\times\co^q$.
Furthermore in each connected component of $U_p \cap \widetilde
U_{\tilde p} \ne \emptyset$ we have $\pi_p = L\circ\pi_{\tilde p}$ for
some left-translation $L\colon \GL_q(\co)\times \mathbb C^q \to \GL_q(\co)\times \mathbb C^q$. In
particular if $M$ is simply-connected we can choose $U_p = M$.
\end{itemize}
\end{Corollary}
The proof of Proposition~\ref{Proposition:formsgeneral} is now an easy consequence of
Corollary~\ref{Corollary:localformetaomega} above and of the arguments used in the proof of Proposition~\ref{Proposition:formsglobal}.
\begin{proof}[Proof of Proposition~\ref{Proposition:formsgeneral}]
Proposition~\ref{Proposition:formsglobal} shows that if $\fa$ is transversely affine in $M$ then we can construct collections $(\Omega_j, \eta_j)$ in open subsets $U_j\subset M$ covering $M$ as stated.
\noindent Conversely, assume that $(\Om, \eta)$ is a pair as in the statement, where $\Om$ defines $\fa$ in $M$. Since $\eta$ is holomorphic and satisfies $d\eta = \eta \wedge \eta$ in $M$, there
exist an open cover $\bigcup\,U_i$ of $M$ and holomorphic
$G_i\colon U_i \to \GL_q(\co)$ such that $\eta\big|_{U_i} =
dG_i.G_i^{-1}$ (Corollary~\ref{Corollary:localformetaomega} (a)).
\noindent Now, from condition $d\Om = \eta \wedge \Om$ we have
$$
\aligned
d(G_i^{-1}.\Om) &= -G_i^{-1}\,dG_i G_i^{-1} \wedge \Om + G_i^{-1}\,d\Om\\
&= -G_i^{-1}\,\eta \wedge \Om + G_i^{-1}\,\eta \wedge \Om = 0
\endaligned
$$
and therefore $G_i^{-1}\,\Om = dY_i$ for some holomorphic submersion $Y_i\colon U_i
\to \co^q$ (we may take the open sets $U_i$ simply-connected).
\noindent Therefore we have $\Om = G_i\,dY_i$ in $U_i$. Moreover
according to Lemma~\ref{Lemma:glueing} we have $G_i^{-1}\,G_j = A_{ij}$ for some
locally constant $A_{ij}\colon U_i\cap U_j \to \GL_q(\co)$, in each
$U_i \cap U_j \ne \emptyset$.
\noindent Therefore $G_i\,dY_i = \Om = G_j\,dY_j =
G_i\,A_{ij}\,dY_j$ so that $dY_i = A_{ij}\,dY_j = d(A_{ij}\,Y_j)$ in
each $U_i\cap U_j \ne \emptyset$ and thus $Y_i = A_{ij}\,Y_j + B_{ij}$
for some locally constant $B_{ij}\colon U_i\cap U_j \to \co^q$. This
shows that ${\mathcal F}$ is transversely affine in $M$.
\end{proof}
Theorem~\ref{Theorem:forms} is now a straightforward consequence of Propositions~\ref{Proposition:formsglobal} and ~\ref{Proposition:formsgeneral}.
\section{A suspension example}
\noindent The following example generalizes Example 1.5 of Chapter
I in \cite{scardua}.
\begin{Example} {\rm We will define a transversely affine codimension-$q$
holomorphic foliation on a compact manifold by the suspension
method:
\noindent Let $M$ be a complex manifold and let $w$ be a $q\times1$
holomorphic matrix valued 1-form on $M$, closed and satisfying $f^*w =
Aw$ for some biholomorphism $f\colon M \to M$ and some hyperbolic
matrix $A \in \GL_q(\co)$.
Define $\Om$ and $\eta$ in the product
$M\times\GL_q(\co)$ by $\Om(x,T) = T.w(x)$ and $\eta(x,T) =
dT.T^{-1}$.
\noindent Then we have
$$
\aligned
d\Om(x,T) &= dT \wedge w(x) + T\,dw(x) =\\
&= dT \wedge w(x) = dT.T^{-1} \wedge Tw(x) =\\
&= \eta(x,T) \wedge \Om(x,T)
\endaligned
$$
and also,
$$
\aligned
d\eta(x,T) &= d(dT.T^{-1}) = dT.T^{-1} \wedge dT.T^{-1} =\\
&= \eta(x,T) \wedge \eta(x,T).
\endaligned
$$
Moreover the biholomorphism $F\colon M\times \GL_q(\co) \to
M\times\GL_q(\co)$ defined by $F(x,T) = (f(x),T.A^{-1})$ satisfies
$$
F^*\Om = TA^{-1}f^*w = TA^{-1}Aw = Tw = \Om
$$
and
$$
F^*\eta = d(TA^{-1})\cdot(TA^{-1})^{-1} = dT.T^{-1} = \eta.
$$
Thus, by Theorem~\ref{Theorem:forms} the pair $\Om$, $\eta$ induces a codimension-$q$
non-singular holomorphic foliation $\widetilde{\mathcal F}$ which is
transversely affine in $M\times\GL_q(\co)$. This foliation
induces a codimension-$q$ non-singular foliation ${\mathcal F}$ on
the quotient manifold $V = (M\times\GL_q(\co))/\mathbb Z$ by the
action $\mathbb Z\times(M\times\GL_q(\co)) \to M\times\GL_q(\co)$,
$(n,(x,T)) \mapsto (f^n(x),T.A^{-n})$. This last foliation
${\mathcal F}$ inherits an affine transverse structure from
$\widetilde{\mathcal F}$.}
\end{Example}
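A concrete choice of data satisfying the hypotheses of the example above (one possibility among many): let $E=\co/\Gamma$ be an elliptic curve, $M=E\times E$, and $w=\begin{pmatrix} dz_1\\ dz_2\end{pmatrix}$, which is closed and holomorphic. Any hyperbolic integral matrix $A$ with $\det A=\pm 1$, for instance $A=\begin{pmatrix} 2&1\\ 1&1\end{pmatrix}$, preserves the lattice $\Gamma\oplus\Gamma$ and hence induces a biholomorphism $f\colon M \to M$ with $f^*w = Aw$, so the above suspension applies and yields a transversely affine codimension two foliation.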
\section{Holomorphic foliations with singularities}
A codimension $q$ holomorphic foliation with singularities $\fa$ on a complex manifold $M$ of dimension $n\geq 2$ is defined as a pair $(\fa_0, \sing(\fa))$, where
$\sing(\fa)\subset M$ is an analytic subset of codimension $\geq q+1$ and $\fa_0$ is a holomorphic foliation, in the classical sense, on the open manifold $M\setminus\sing(\fa)$.
Then, all the notions for $\fa$ are defined in terms of $\fa_0$. For instance, the leaves of $\fa$ are defined as the leaves of $\fa_0$, and their holonomy groups are defined in the same way. We may assume that the {\it singular set} $\sing(\fa)$ is saturated in the sense that there is no other pair $\fa^\prime=(\fa_0 ^\prime, \sing(\fa^\prime))$ with $\sing(\fa^\prime)\subsetneqq \sing(\fa)$ and such that $\fa_0 ^\prime$ coincides with $\fa_0$ on $M\setminus \sing(\fa)$.
\begin{Definition}
\label{Definition:transvaffinesing} {\rm A codimension-$q$ holomorphic foliation with singularities ${\mathcal F}$ on
$M^n$ is said to be {\it transversely affine\/} if there is a family
$\{Y_i\colon U_i \to \co^q\}_{i\in I}$ of holomorphic submersions
$Y_i\colon U_i \to \co^q$ defined in open sets $U_i \subset M$,
defining ${\mathcal F}$, and satisfying $M\backslash \sing(\fa)
= \bigcup\limits_{i \in I} \,U_i$ and with affine relations $Y_i =
A_{ij}Y_j + B_{ij}$ for some $A_{ij}\colon U_i\cap U_j \to
\GL_q(\co)$, $B_{ij}\colon U_i\cap U_j \to \co^q$ locally constant
in each $U_i \cap U_j \ne\phi$. }
\end{Definition}
We usually distinguish two cases in the definition above: the codimension one and the dimension one cases. Since we are interested in the codimension $\geq 2$ case, we shall focus on the second case.
\subsection{Generic singularities}
In this paragraph we introduce what we will consider as the generic types of
singularities for a codimension-$q \ge 2$ foliation.
Given a holomorphic foliation with singularities $\fa$ on a complex manifold $M$, the singular set of $\fa$ is an analytic subset $\sing(\fa) \subset M$ of codimension $\geq 2$, also having dimension $\dim \sing(\fa) \leq \dim(\fa)$. In particular, it can have a component of dimension $\dim(\fa)$, as well as a component of dimension $\dim(\fa) -1$. As for this second case, by intersecting with appropriate transverse small discs we may consider the following model of generic singularity:
\subsubsection{Isolated singularities}
\begin{Definition} {\rm Let $\fa$ be a germ of an {\em isolated} one-dimensional foliation singularity at the origin
$0\in\mathbb C^{q+1}$. The singularity is called {\it Poincar\'e non-resonant} if the convex hull of the set of eigenvalues of the linear part $DX(0)$ does not contain the origin, and there is no resonance $\lambda_j = n_1 \lambda_1 + \cdots + n_{q+1} \lambda_{q+1}$ with $n_1,...,n_{q+1} \in \mathbb N$.
In this case, by the Poincar\'e linearization theorem (\cite{[Brjuno]}, \cite{[Dulac]})
the singularity is {\it linearizable
without resonances} (\cite{mafra-scardua}): it is given in some neighborhood $U$ of $0\in\mathbb C^{q+1}$ by a holomorphic vector field $X$ which is
analytically linearizable as
$X={\displaystyle \sum_{j=1}^{q+1}\lambda_{j}z_{j}\dfrac{\partial}{\partial
z_{j}}},$ with eigenvalues $\lambda_{1},\cdots,\lambda_{q+1}$ satisfying the following non-resonance
hypothesis:
\noindent {\sl If $n_{1},\cdots,n_{q+1}\in\mathbb Z$ are such that
$\sum_{j=1}^{q+1}n_{j}\lambda_{j}=0,$
then $n_{1}=n_{2}=\cdots=n_{q+1}=0.$
}
}
\end{Definition}
In the above situation, we define 1-forms
$\omega^{1},\cdots,\omega^{q}$ on $U\setminus\Lambda$ by setting
$\omega^{\nu}(X)=0$ and
$\omega^{\nu}=\sum_{j=1}^{q+1}\alpha_{j}^{\nu}\frac{dz_{j}}{z_{j}}$,
where $\nu=1,\cdots,q$ and $\alpha_{j}^{\nu}\in\mathbb C$. The condition $\omega^{\nu}(X)=0$ amounts to the system of equations
$\sum_{j=1}^{q+1}\alpha_{j}^{\nu}\lambda_{j}=0,\quad\nu=1,\cdots,q.$
Since the equation
$\sum_{j=1}^{q+1}\lambda_{j}z_{j}=0$
defines a hyperplane in $\mathbb C^{q+1}$, we can choose $q$
linearly independent vectors $\vec{\alpha}_{1},\cdots,\vec{\alpha}_{q}$,
say
$\vec{\alpha}_{\nu}=(\alpha_{1}^{\nu},\cdots,\alpha_{q}^{\nu},\alpha_{q+1}^{\nu})\in\mathbb C^{q+1}$,
satisfying these equations, and therefore the system $\omega^{1},\cdots,\omega^{q}$ has maximal rank
$q$ outside the coordinate hyperplanes.
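For instance, in the simplest case $q=1$ one has $X=\lambda_{1}z_{1}\frac{\partial}{\partial z_{1}}+\lambda_{2}z_{2}\frac{\partial}{\partial z_{2}}$ on a neighborhood of $0\in\mathbb C^{2}$, and one may take $\omega^{1}=\lambda_{2}\frac{dz_{1}}{z_{1}}-\lambda_{1}\frac{dz_{2}}{z_{2}}$: indeed $\omega^{1}(X)=\lambda_{2}\lambda_{1}-\lambda_{1}\lambda_{2}=0$, and since $\lambda_{1},\lambda_{2}\neq 0$ the form $\omega^{1}$ vanishes nowhere outside the coordinate axes.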
\begin{Lemma}[\cite{mafra-scardua}]
\label{Lemma:nonresonantconstant}
Let $f(z)$ be a holomorphic function on the set
$U\setminus\left\{ z_{1}\cdot\ldots\cdot z_{q+1}=0\right\} $, where
$U$ is a connected neighborhood of the origin in $\mathbb C^{q+1}$. Then $f(z)$ is
constant provided that
$df\wedge\omega^{1}\wedge\cdots\wedge\omega^{q}=0$.
\end{Lemma}
\begin{Definition}[type II generic singularities]
\label{Definition:isolatedtype}
{\rm
A singularity $p \in \sing(\fa)$ will be called a {\it type II generic singularity} if $p$ belongs to a smooth part of the set $\sing(\fa)$, where:
\begin{itemize}
\item There is a unique branch $\sing(\fa)_p\subset \sing(\fa)$ through $p$.
\item $\dim \sing(\fa)_p = \dim (\fa) - 1$
\item For some (and therefore for every) transverse disc $\Sigma_p$, with $\Sigma_p \cap \sing(\fa)_p = \Sigma_p \cap \sing(\fa) = \{p\}$, of dimension $q + 1$, the induced foliation $\fa\big|_{\Sigma_p}$ exhibits an isolated non-resonant Poincar\'e type singularity at the origin $p$.
\end{itemize}
}
\end{Definition}
\subsubsection{Non-isolated singularities} Now we focus on the components of
the singular set that cannot be reduced to isolated singularities by transverse sections. Let us first recall some notions for codimension one foliations. Given a codimension-one holomorphic foliation with singularities $\fa$ on a complex manifold $M$, a singular point $p \in \sing(\fa)$ is a {\it Kupka-type singularity} (cf. \cite{omegar, scardua}), if $\fa$ is given in some neighborhood $U$ of $p$ by a holomorphic integrable 1-form $\omega$, such that $\omega(p)=0, \, d \omega( p ) \ne 0$.
In this case, if $U$ is small enough, there exists a system of local coordinates
$(x,y,z_1,\ldots,z_{n-2}) \in U$ of $M$, centered at $p$, such that
${\mathcal F}\big|_U$ is given by
$\alpha(x,y)=0$, for some holomorphic 1-form $\alpha= A(x,y) dx + B(x,y) dy$. The 1-form $\alpha$, the so-called {\it transverse type of $\fa$ at $p$}, has an isolated singularity at the origin $0 \in \mathbb C^2$ and satisfies $d \alpha (0)\ne 0$.
The generic type is then defined as follows:
We shall say that a singularity $p \in\sing(\fa)$ is {\it Poincar\'e type} if it is Kupka type and its corresponding transverse type is of the form $xdy-\lambda ydx + hot=0,\qquad \la \in \mathbb C\backslash(\mathbb R_-\cup \mathbb Q_+)$.
The reasons for this are based on the classification of singularities of germs of foliations in dimension two (see \cite{seidenberg}, \cite{camacho-linsneto-sad}).
In this case, the singularity $\alpha(x,y)=0$ is analytically linearizable, so that
there are coordinates $(x,y,z_1,\ldots,z_{n-2})$ as above, such that $\fa$ is given in these coordinates by $xdy - \lambda y dx=0$. Let us now motivate our second type of generic singularity for codimension $q \geq 2$ foliations, by
discussing an example:
\begin{Example}{\rm
Let ${\mathcal F}_1,\ldots,{\mathcal F}_q$ be holomorphic singular codimension
one foliations on a complex manifold $M$ of dimension $q+1$. Assume that the foliations $\fa_j$ are transverse outside the union of
their singular sets and their set of tangent points. Then we can define
in the natural way the {\it intersection foliation\/} ${\mathcal F}
= \bigcap \limits_{j=1}^{q}\,{\mathcal F}_j$ (as in Example~\ref{Example:intersection}) whose leaves are
obtained as the connected components of the intersection of the
leaves of ${\mathcal F}_1,\ldots,{\mathcal F}_q$ through points of
$M$ and has singular set $\sing(\fa)
=\bigcup\limits_{j=1}^{q}\,\sing({\mathcal F}_j) \cup T_2$ where $T_2$
is the union of the codimension $\ge 2$ components of the set of
tangent points of the foliations.
Suppose that ${\mathcal F}_j$ has only Poincar\'e type singularities, as defined above. Then, given any point $p
\in \sing({\mathcal F}_j)\backslash\bigcup\limits_{ i \ne j} \,\sing({\mathcal F}_i)$, there exists a
local chart $(x,y,z_1,\ldots,z_{n-2}) \in U$ of $M$, centered at $p$, such that
${\mathcal F}_j\big|_U$ is given by
$$
xdy-\la ydx=0,\qquad \la \in \co\backslash(\re_-\cup \mathbb Q_+)
$$
and for each $i\ne j$, ${\mathcal F}_i\big|_U$ is regular given by $dz_{k_i}=0$ for some
$k_i \in \{1,\ldots,n-2\}$.
}
\end{Example}
\begin{Definition}[type I generic singularities]
\label{Definition:intersectiontype}
{\rm Let ${\mathcal F}$ be a codimension-$q$ foliation on $M^n$. A singularity $p \in \sing({\mathcal F})$ is a {\it type I generic singularity}, if $p$ belongs to a smooth part of the set $\sing(\fa)$, where:
\begin{itemize}
\item There is a unique branch $\sing(\fa)_p\subset \sing(\fa)$ through $p$.
\item $\dim \sing(\fa)_p = \dim (\fa)$
\item There is a local chart $(x,y,z_1,\ldots,z_{n-2}) \in U$ of
$M$, centered at $p$, such that ${\mathcal F}\big|_U$ is given by
$$
xdy -\la y dx = 0, \qquad \la \in \co\backslash(\mathbb R_- \cup \mathbb Q_+)
$$
and $dz_j=0$, $j=1,\ldots,q-1$.
\end{itemize}
\noindent Therefore in a neighborhood of $p$, the foliation ${\mathcal F}$ has the
structure of the {\em intersection} (not product) of a singular linear foliation $xdy - \la y dx = 0$ on $(\co^2,0)$ and $q-1$ regular trivial foliations.
\noindent We have $s({\mathcal F}) \cap U = \{(x,y,z_1,\ldots,z_{n-2}) \in U \mid
x=y=0\}$. If we define $\La = \{xy=0\} \cap \{z_1 =\cdots= z_{q-1} = 0\}$ then
$\La$ consists of two codimension-$q$ invariant local submanifolds $\La_1 \cup
\La_2$ which intersect transversely at the point $p = \La_1 \cap \La_2$.}
\end{Definition}
\section{Extending affine transverse structures with poles}
\noindent Now consider the following situation:
\begin{enumerate}
\item ${\mathcal F}$ is a codimension-$q$ singular foliation on $M$,
\item $\La \subset M$ is an analytic irreducible invariant subvariety of
codimension-$q$ (i.e., $\La\backslash \sing(\fa)$ is a leaf of
${\mathcal F}$),
\item There are analytic codimension-one subvarieties
$S_1,\ldots,S_q \subset M$ such that $\La$ is an irreducible
component of $\bigcap\limits_{j=1} ^q \,S_j$ and $S_j$ is foliated
by ${\mathcal F}$, $j = 1,\ldots,q$.
\end{enumerate}
\noindent Under these assumptions we make the following definition:
\begin{Definition}
\label{Definition:adaptedeta}{\rm Let $\{\Omega_1,\ldots,\Omega_q\}$ be an integrable system of
holomorphic 1-forms defining $\fa$. A $q\times q$ matrix valued {\sl meromorphic} 1-form $\eta$ defined in a neighborhood of $\La$ is said
to be {\it a partially-closed logarithmic derivative adapted to $\Om$ along $\La$\/} if:
\begin{itemize}
\item $d\Omega= \eta\wedge \Omega$ and $\eta$ is partially-closed, $d \eta = \eta \wedge \eta$, meromorphic with simple poles,
\item $(\eta)_\infty = \bigcup\limits_{j=1} ^q \,S_j$, a union of irreducible codimension one analytic subsets $S_j\subset V$ in a neighborhood $V$ of
$\La$,
\item given any regular point $p \in \La\backslash \sing(\fa)$ there exists a local chart $(y_1,\ldots,y_q,
z_1,\ldots,z_{n-q}) \in U$ for $M$, centered at $p$, such that:
$$
\aligned
&U \cap S_j = \{y_j=0\}, \quad j = 1,\ldots,q\\
&\Om = G.dY \qquad\text{and}\\
&\eta = dG.G^{-1}+\sum\limits_{j=1} ^q A_j.\frac{dy_j}{y_j}
\qquad\text{where}\\
&Y = \begin{pmatrix} y_1\\ \vdots\\ y_q\end{pmatrix}\,,
\endaligned
$$
$G\colon U \to \GL_q(\co)$ is holomorphic and $A_j$ is a
constant $q\times q$ complex matrix.
\end{itemize}
\noindent The matrix $A_j$ is called the {\it residue
matrix\/} of $\eta$ with respect to $S_j$.
}
\end{Definition}
In what follows we consider the problem of extending a form $\eta$, coming from an affine transverse structure of $\fa$, to a neighborhood of the analytic invariant subvariety $\La$.
The existence of such an extension, as a partially-closed logarithmic derivative adapted to $\Om$ along $\La$, is then assured by the following result:
\begin{Theorem} [Extension Lemma]
\label{Theorem:extensionlemma} Let ${\mathcal F}, \, \Lambda$ be as above. Suppose:
\begin{enumerate}
\item[{\rm (1)}] $\sing(\fa) \cap \La$ is nonempty and consists of type I and type II generic singularities, and singularities where $\dim\sing(\fa) \leq \dim(\fa) - 2$.
\item[{\rm (2)}] There exists a differential {\rm 1}-form $\eta$ defined in
some neighborhood $V$ of $\La$ minus $\La$ and its local separatrices
which defines a transverse affine structure for ${\mathcal F}$ in
this set $V\setminus (\Lambda \cup \sep(\Lambda))$, in the sense of Proposition~\ref{Proposition:formsglobal}.
\end{enumerate}
\noindent Then $\eta$ extends meromorphically to a neighborhood of
$\La$ as a form adapted to $\Om$ along $\La$ (in the sense of Definition~\ref{Definition:adaptedeta}).
\end{Theorem}
We will extend $\eta$ to $\La$ through the singularities of $\fa$ in $\Lambda$. According to the classical Hartogs extension theorem (\cite{GunningII,Gunning-Rossi}), this implies the extension to a neighborhood of $\Lambda$. Choose
$p \in \sing(\fa) \cap \La$ and choose local coordinates
$(x,y,z_1,\ldots,z_{n-2}) \in U$, centered at $p$, as in Definition~\ref{Definition:intersectiontype}.
\begin{Lemma}
\label{Lemma:eta_0} Let $\mathcal F$ be a codimension $q$ holomorphic
foliation with singularities, defined in an open polydisc $U\subset \mathbb C ^{q+n}$, with a type I generic singularity or a type II generic singularity at the origin $0 \in \sing
({\mathcal F})\subset U$. Assume that ${\mathcal F}$ is transversely affine in
$U\setminus \Lambda$, where $\Lambda\subset U$ is a finite union of
irreducible invariant hypersurfaces, each one containing the origin.
Assume that $\mathcal F$ is given in $U$ by a holomorphic $q\times
1$ matrix $1$-form $\Omega$ with a $q\times q$ matrix {\rm
1}-form $\eta$ in $U\setminus \Lambda$ satisfying:
\[
d\Omega = \eta \wedge \Omega, \, \, \, d\eta = \eta \wedge \eta.
\]
\noindent Then $\eta$ extends meromorphically to a neighborhood of
$\La$ as a partially-closed logarithmic derivative adapted to $\Omega$ along $\Lambda$ (in the sense of Definition~\ref{Definition:adaptedeta}).
\end{Lemma}
\begin{proof}
For the sake of simplicity of notation we will assume that
$\mathcal F$ has codimension $q=2$ and the ambient space has dimension
$q+1=3$. Let us also assume that the singularity is isolated, i.e., a non-resonant Poincar\'e singularity of type II. The general case is quite similar. Let then $X=\sum_{j=1}^3
\lambda _j \, x_j \, ({\partial}/{\partial x_j})$ be a holomorphic
vector field defining $\mathcal F$ in suitable coordinates $(x_1,
x_2, x_3) \in U^\prime$, in a connected neighborhood $0 \in
U^\prime\subset U$ of $0\in {\mathbb C}^3$, with
$\{\lambda_1,\lambda_2,\lambda_3\}$ linearly independent over
$\mathbb Q$.
Given complex numbers $a_1,a_2,a_3$ we define a closed 1-form $\omega=\sum\limits
_{k=1}^3 a_k \, dx_k/x_k$. Then $\omega(X)=0$ if and only if $\sum\limits_{k=1}^3
a_k \, \lambda_k=0$.
Thus, we can choose 1-forms $\omega_1,\omega_2$ given by
$\omega_j=\sum_{k=1}^3 a_k^j \, {dx_k}/{x_k}, \, a_k ^j \in
{\mathbb C}$, such that: $\omega_1$ and $\omega_2$ are linearly
independent in the complement of $\cup_{j=1}^3 (x_j=0)$ and
$\omega_j(X)=0, \, j=1,2$.
Once we fix such 1-forms, the foliation $\mathcal F$ is defined by
the integrable system of meromorphic 1-forms $\{\omega_1,
\omega_2\}$ in $U$. Notice that the polar set of the $\omega_j$ in
$U^\prime$ consists of the coordinate hyperplanes $\{x_i=0\}\subset
U^\prime, \, i=1,2,3.$ \, Let $\Omega_0$ be the $2\times 1$ meromorphic matrix valued 1-form given by the system $\{\omega_1, \omega_2\}$.
\begin{Claim}
\label{Claim:eta_0}
Let $\eta_0$ be a $2\times 2$ holomorphic
matrix valued 1-form defined in $U^\prime \setminus \bigcup
\limits_{i=1}^3 \{x_i=0\}$, such that
$d\Omega_0 = \eta_0 \wedge \Omega_0, \, \, \, d\eta_0 = \eta
_0\wedge \eta_0.$
Then:
\begin{enumerate}
\item $\eta_0$ is closed, $d \eta_0 = 0$.
\item The matrix valued 1-form $\eta_0$ extends to a meromorphic matrix valued 1-form
in $U^\prime$, having polar divisor of order one in $U^\prime$.
\item The extension of $\eta_0$ is adapted to $\Omega_0$ along $\Lambda$.
\end{enumerate}
\end{Claim}
Let us see how the claim proves the lemma.
Indeed, as for the original forms $\Omega$ and $\eta$ we have $\Omega =
G\Omega_0$ for some holomorphic matrix $G\colon \widetilde U \to
\GL_q(\co)$. Thus if we define $\eta_0:= \eta - dG\cdot G^{-1}$
then we are in the situation of the above claim. Thus we conclude
that $\eta$ extends to $U ^\prime$ as a closed meromorphic 1-form
with simple poles and polar divisor consisting of the coordinate
planes. Therefore, the same conclusion of the above claim holds for
$\eta$ and we prove the lemma.
\begin{proof}[Proof of the claim]
Since each $\omega_j$ is closed the matrix form $\Omega_0$ is
closed. From $d\Omega_0 = \eta_0 \wedge \Omega_0$ we have $\eta_0
\wedge \Omega_0=0$.
Now we observe that there are holomorphic $2 \times 2$ scalar
matrices $M_1, M_2$ defined in $U^\prime \setminus \{x_1 x_2 x_3
=0\}$, such that $\eta_0= M_1 \omega_1 + M_2 \omega_2$, where the
multiplication of the matrix by the 1-form is the standard scalar
type multiplication. Indeed, it is enough to complete the pair $\omega_1, \omega_2$ into a basis of the space of holomorphic 1-forms and express $\eta_0$ in this basis. Then the condition $\eta_0 \wedge \Omega_0=0$ means that the coefficients of $\eta_0$ in the other elements of the basis are all identically zero.
For any holomorphic $2\times 2$ scalar matrix $M$ and a $2
\times 1$ matrix valued 1-form $\Omega$ we have the easily verified formula for
the exterior derivative:
\[
d (M\Omega) = dM \wedge \Omega + M d \Omega
\]
Therefore we have
\[
d\eta_0= dM_1 \wedge \omega_1 + d M_2 \wedge \omega_2.
\]
Also of easy verification we have
\[
\eta_0 \wedge \eta_0 = [M_1, M_2] \omega_1 \wedge \omega_2
\]
where $[,]$ denotes the matrix Lie bracket. Thus we obtain
\[
dM_1 \wedge \omega_1 + d M_2 \wedge \omega_2= [M_1, M_2] \,\omega_1 \wedge \omega_2.
\]
Taking the exterior product with $\omega_2$ in the above equation
we obtain
\[
dM_1 \wedge \omega_1 \wedge \omega_2=0
\]
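Here we used that $\omega_2\wedge\omega_2=0$, which annihilates both the $dM_2\wedge\omega_2$ term and the right-hand side. The identity $\eta_0 \wedge \eta_0 = [M_1, M_2]\, \omega_1 \wedge \omega_2$ itself follows from the expansion
\[
(M_1\omega_1+M_2\omega_2)\wedge(M_1\omega_1+M_2\omega_2)= M_1M_2\,\omega_1\wedge\omega_2 + M_2M_1\,\omega_2\wedge\omega_1 =(M_1M_2-M_2M_1)\,\omega_1\wedge\omega_2,
\]
the $\omega_j\wedge\omega_j$ terms vanishing because the $\omega_j$ are scalar 1-forms.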
Hence, $M_1$ is a meromorphic first integral for the foliation defined by
the system $\{\omega_1, \omega_2\}$ in $\widetilde U:= U^\prime
\setminus \{x_1 x_2 x_3=0\}$. This foliation is exactly the
restriction of $\mathcal F$ to this open set. Since $\mathcal F$ is
defined by the vector field $X$ in $\widetilde U$ and this vector
field is linear without resonance, it follows from Lemma~\ref{Lemma:nonresonantconstant} that $M_1$ is constant in $\widetilde U$. Similarly we can
conclude that $M_2$ is constant. This implies the extension result
and the other items in Claim~\ref{Claim:eta_0}.
\end{proof}
The proof of Lemma~\ref{Lemma:eta_0} is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Theorem:extensionlemma}]
The proof follows the same argument
as the proof of Lemma 3.2 in Chapter I of \cite{scardua}.
Indeed, Lemma~\ref{Lemma:eta_0} implies that $\eta$ extends
meromorphically to $\La \cup \text{sep}\,(\La)$. By construction this extension
is adapted to $\Om$ along $\La$. \end{proof}
\bibliographystyle{amsalpha}
|
2,869,038,156,749 | arxiv | \section*{Acknowledgements}
\noindent The authors are grateful to the Council of Scientific and Industrial Research, New Delhi, India, for financial support through Grant No.~03(1152)/10/EMR-II.
\end{document}
|
2,869,038,156,750 | arxiv | \section{Introduction}
In atoms the parity nonconserving (PNC) exchange of the Z$_{0}$ neutral gauge boson between the nucleons and electrons manifests itself as a non-vanishing electric dipole transition moment ($\mathrm{E1_{PNC}}$) between states of the same parity \cite{Bouchiat74, Bouchiat75}. Precision measurements of $\mathrm{E1_{PNC}}$, together with high accuracy atomic theory calculations, offer a unique opportunity to study electro-weak physics in a low energy experiment with the possibility to probe for physics beyond the standard model \cite{Marciano90, Fortson90, Fortson92, Haxton08, Blundell92}. To date the most accurate measurement of $\mathrm{E1_{PNC}}$ was performed on a beam of atomic cesium which achieved an experimental uncertainty of 0.39\% \cite{Wood97}. Combined with the most recent atomic theory \cite{Derevianko00,Porsev09,Dzuba12}, the result agrees with the standard model to within 1.5\,$\mathrm{\sigma}$. Also, at this level of uncertainty the authors were the first to observe a nuclear anapole moment.
\indent One promising proposal for future PNC measurements, with comparable or better experimental accuracy,
seeks $\mathrm{E1_{PNC}}$ through precision spectroscopy of heavy single trapped ions \cite{Norval93}. This approach has the potential to reach experimental uncertainties below $0.1\%$ and is currently being pursued in Ba$^{+}$ \cite{Sherman05} , Yb$^{+}$ \cite{Rahaman13}, and Ra$^{+}$ \cite{Versolato11}. Such experiments will measure the modulation of the Rabi frequency $\mathrm{\Omega}$ for a $\big(n\big)S_{1/2} \leftrightarrow \big(n-1\big)D_{3/2}$ transition (see Fig \ref{fig:barium}.) due to the non-zero interference between the electric quadrupole (E2) transition moment and $\mathrm{E1_{PNC}}$,
\begin{figure}[t]
\includegraphics[scale=0.52]{fig1.eps}
\caption{(Color online) A partial energy diagram for $\mathrm{Ba^{+}}$ showing its lowest-lying states and the transitions that are relevant to the M1 measurement. The 2051 nm transitions, on which we will focus our attention, have contributions from both electric quadrupole and magnetic dipole transition moments. Parity nonconservation also induces a very small electric dipole transition moment $\mathrm{E1_{PNC}}$ to the $5D_{3/2}$ states. The other transitions are strongly connected by electric dipole moments; higher order moments along these transitions are inconsequential to the measurement proposed here.}
\label{fig:barium}
\end{figure}
\begin{equation}\label{eq:PNCRabi}
\mathrm{\Omega}^{2} = \big| \mathrm{\Omega_{E2}} + \mathrm{\Omega_{\mathrm{PNC}}} \big|^{2} \approx \mathrm{\Omega^{2}_{E2}} \pm
2 \mathrm{Re}\big( \mathrm{\Omega_{E2}}\mathrm{\Omega^{*}_{PNC}} \big),
\end{equation}
\noindent where $\mathrm{\Omega_{E2}}$ and $\mathrm{\Omega_{PNC}}$ are the E2 and E1$_{PNC}$ contributions to the total Rabi frequency ($\mathrm{\Omega}$). The parity-violating $\mathrm{E1_{PNC}}$ can be extracted from the interference term in Eq.~(\ref{eq:PNCRabi}).
The experiment is complicated by the existence of a non-vanishing magnetic dipole (M1) moment between the same states. In particular, for Ba$^{+}$ the apposite reduced transition moments are,
\begin{subequations}\label{eq:M1E2}
\begin{equation}
\mathrm{M1} = \big\langle 6S_{1/2} \big|\big| \widehat{\mathrm{M1}} \big|\big| 5D_{3/2} \big\rangle
\end{equation}
\begin{equation}
\mathrm{E2} = \big\langle 6S_{1/2} \big|\big| \widehat{\mathrm{E2}} \big|\big| 5D_{3/2} \big\rangle
\end{equation}
\end{subequations}
\noindent Where $\widehat{\mathrm{M1}}$ and $\widehat{\mathrm{E2}}$ are the magnetic dipole and electric quadrupole operators, respectively. Coupling through the M1 moment can mimic the interference term in Eq.(\ref{eq:PNCRabi}), making it a potentially serious systematic problem for any trapped ion PNC experiment \cite{Mandal10}. Calculations of M1 and E2 were recently reported \cite{Sahoo06} and are given in Table \ref{tab:relativesizes} along with an estimation of E1$\mathrm{_{PNC}}$ for scale. This work predicts an M1 dominated by electron-electron correlation effects, but the value has yet to be corroborated. A measurement of M1 would be an important test of many-body theory and is an essential step toward a PNC experiment in Ba$^{+}$. In the present work we describe a method for measuring M1 that exploits the presence of the significantly larger E2 moment. \\
\begin {table}[ht]
\begin{center}
\begin{tabular}{ >{\centering\arraybackslash}m{1.25in} >{\centering\arraybackslash}m{.85in} >{\centering\arraybackslash}m{.75in} >{\centering\arraybackslash}m{.75in} >{\centering\arraybackslash}m{.75in} >{\centering\arraybackslash}m{.75in}}
\toprule[1.5pt]
{\bf $\mathrm{E2}$} & {\bf $\mathrm{M1}$} & {\bf $\mathrm{E1_{PNC}}$} \\
\hline
12.6 ($\frac{a_{0}}{\lambda}$) & 8.0 $\times$ 10$^{-4}$ ($\frac{\alpha}{2}$) & $\sim$ 2$\times$ 10$^{-11}$ \\
$\sim$ 1 & $\sim$ 10$^{-2}$ & $\sim$ 10$^{-7}$ \\
\bottomrule[1.25pt]
\end {tabular}
\caption {Calculated \cite{Sahoo06,Safronova13,Sahoo07,Dzuba06,Roberts13} and relative sizes of the transition moments between $6S_{1/2}$ and $5D_{3/2}$ in Ba$^{+}$. The calculated values are listed in units of ea$_{0}$, where e is the electron charge and a$_{0}$ is the Bohr radius. The fine structure constant $\mathrm{\alpha}$ and the transition wavelength $\mathrm{\lambda}$ enter so that a \emph{bona fide} comparison can be made with these units. Although the calculation for M1 shows enhancement from electron-electron correlation effects it is still small relative to E2.}
\label{tab:relativesizes}
\end{center}
\end {table}
\subsection{Measurement of Rabi Frequencies in Ba$^{+}$}
\indent The M1 measurement proposed in this work assumes the ability to determine several Rabi frequencies for transitions to particular Zeeman sub-levels of the $5D_{3/2}$ manifold from the $6S_{1/2}$ ground states. In this section we describe the shelving technique \cite{Nagourney86} which can be used to measure all of the required Rabi frequencies. Here the technique is presented with the use of lasers at 455 nm and 614 nm to drive the transitions indicated in Fig.\ref{fig:barium}\,. The method exploits the very long lifetimes of the $5D_{3/2}$ and $5D_{5/2}$ states, which are 80 s \cite{Yu97} and 32 s \cite{Nagourney86}, respectively.
\begin{figure}[ht]
\includegraphics[scale=1.03]{fig2.eps}
\caption{(Color online) Rabi oscillations on the $6S_{1/2}\big( \mathrm{m}=-1/2\big)$ to $5D_{3/2}\big( \mathrm{m}=-3/2\big)$ transition reported in \cite{Hoffman12}. A Rabi frequency of 2 kHz was obtained with a decoherence rate of about 300 Hz.}
\label{fig:RabiFlopCrop}
\end{figure}
\indent Ba$^{+}$ is laser cooled along the $6S_{1/2} \leftrightarrow 6P_{1/2}$ transition with 493 nm light. A repump laser at 650 nm is required because of the significant branching ratio for spontaneous decay to the $5D_{3/2}$ states from $6P_{1/2}$ \cite{Kurz08}. Fluorescence at 493 nm is collected so that the ion can be observed while it cycles on the 493 nm and 650 nm transitions. From $6S_{1/2}$ the ion can be pumped to the $5D_{5/2}$ state by a pulse of 455 nm light. Since an ion in this state has been removed from the cooling cycle no photons will be emitted with application of the cooling beams and the ion is said to be ``shelved". Conversely, if the ion is initially driven to the $5D_{3/2}$ state then the 455 nm laser will be unable to shelve the ion and it will fluoresce when addressed by the cooling beams. This generates a binary signal, in the form of a ``bright" or ``dark" ion, that indicates which state the ion occupies. The following pulse sequence uses this signal to determine the Rabi frequency for any of the 2051 nm transitions.\\
\indent After the ion is initially cooled, the 493 nm and 650 nm lasers are turned off and the ion state is initialized to $6S_{1/2}\big( \mathrm{m}=-1/2\big)$ by optical pumping with circularly polarized 493 nm light. One then attempts to drive the ion to a particular $5D_{3/2}$ Zeeman sub-level by delivering a pulse of resonantly tuned 2051 nm light for a time $\tau$, after which the ion will have some probability of being found in that particular $5D_{3/2}$ sub-level. A 455 nm shelving pulse is then delivered and the cooling lasers are used to interrogate whether the attempt to shelve the ion was successful. A 614 nm pulse returns the ion to the ground state at the end of the sequence if necessary. This pulse sequence is repeated until the probability the ion was shelved, $\mathrm{P_{s}(\tau)}$, is determined to the desired uncertainty. Depending on the size of the Rabi frequency $\mathrm{\Omega}$ compared to the experiment's decoherence rate $\mathrm{\gamma}$ the shelving probability can take two forms,
\begin{subequations}\label{eq:ShelveEff}
\begin{equation}\label{eq:ShelveEffA}
\begin{split}
\mathrm{P_{s}(\tau)}=&\mathrm{\frac{\epsilon}{2}\Big[1 + e^{-\gamma\tau}\big(\,\cos(\Omega^{'}\tau) + \frac{\gamma}{\Omega^{'}}\sin(\Omega^{'}\tau)\,\big) \Big] }\\
&\mathrm{when}\quad\mathrm{\Omega>\gamma}\quad\mathrm{(underdamped)}
\end{split}
\end{equation}
\begin{equation}\label{eq:ShelveEffB}
\begin{split}
\mathrm{P_{s}(\tau)} = &\mathrm{\frac{\epsilon}{2}\Big[1 + e^{-\gamma\tau}\big(\,\cosh(\gamma^{'}\tau) + \frac{\gamma}{\gamma^{'}}\sinh(\gamma^{'}\tau)\,\big)\Big]}\\
&\mathrm{when}\quad\mathrm{\gamma>\Omega} \quad\mathrm{(overdamped)}
\end{split}
\end{equation}
\end{subequations}
\noindent where $\mathrm{\Omega^{'} = \Omega\sqrt{1-\big(\gamma/ \Omega\big)^{2}}}$ and $\mathrm{\gamma^{'} = \gamma\sqrt{1-\big(\Omega/\gamma\big)^{2}}}$. For this procedure, the maximum shelving efficiency $\mathrm{\epsilon}$ will be less than unity and is theoretically limited to 0.87 by the branching ratio for spontaneous decay to $5D_{3/2}$ from $6P_{3/2}$ \cite{Kurz08}. Spectroscopy of the $5D_{3/2}$ states was recently reported with a frequency stabilized 2051 nm laser \citep{Hoffman12} using the shelving technique described here. An example of the shelving probability plotted against $\mathrm{\tau}$ is shown in Fig. \ref{fig:RabiFlopCrop} for the transition between $6S_{1/2}\big( \mathrm{m}=-1/2\big)$ and $5D_{3/2}\big( \mathrm{m}=-3/2\big)$. In that work a Rabi frequency of 2 kHz was achieved with a decoherence rate around 300 Hz due largely to ambient magnetic field drift. The technique to measure M1 that follows necessitates driving transitions at relatively low Rabi frequencies compared to what was found in \citep{Hoffman12} which suggests the importance of minimizing sources of decoherence.
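Although not part of the measurement protocol itself, it may help to make the fitting step explicit: the Rabi frequency and decoherence rate follow from a least-squares fit of the measured shelving probabilities to Eq.~(\ref{eq:ShelveEffA}). The short Python sketch below illustrates such a fit; all variable names and numerical values in it (efficiency, Rabi frequency, decoherence rate, noise level) are illustrative assumptions rather than measured quantities.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def p_shelve(tau, eps, omega, gamma):
    # Underdamped form of Eq. (3a): valid for Omega > gamma.
    op = omega * np.sqrt(1.0 - (gamma / omega) ** 2)
    return 0.5 * eps * (1.0 + np.exp(-gamma * tau)
                        * (np.cos(op * tau) + (gamma / op) * np.sin(op * tau)))

# Illustrative synthetic data standing in for measured shelving probabilities
# (eps = 0.8, Omega = 2*pi*2 kHz, gamma = 2*pi*300 Hz are assumed values only).
tau = np.linspace(0.0, 2.0e-3, 60)
p_meas = p_shelve(tau, 0.8, 2*np.pi*2e3, 2*np.pi*300.0) \
         + 0.02 * np.random.randn(tau.size)

popt, pcov = curve_fit(p_shelve, tau, p_meas,
                       p0=[0.7, 2*np.pi*1.5e3, 2*np.pi*100.0])
print("fitted eps, Omega, gamma:", popt)
\end{verbatim}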
\section{M1 Measurement Procedure}
\indent In principle, if the magnetic dipole transition moment's contribution to the total Rabi frequency is known, then that transition moment can be extracted if the driving field's alignment, intensity, and polarization are known at the ion. In the odd isotope, $^{137}\mathrm{Ba}^{+}$, the E2 amplitude vanishes for $\mathrm{F=1} \rightarrow \mathrm{F=0}$ transitions, allowing for a pure magnetic dipole transition. However, a direct measurement is unfavorable because of the modest size of M1 and complications with working in the odd isotope. In $^{138}\mathrm{Ba}^{+}$, coupling via E2 is not suppressed but the Rabi frequency can be modulated with experimental parameters so as to isolate $\mathrm{\Omega_{M1}}$. In the parameter space of interest, to be defined in the forthcoming discussion, the ratio of the relative contributions to the Rabi frequency from each moment, $\mathrm{\Omega_{M1}}/\mathrm{\Omega_{E2}}$, will be of order 0.1. From the Rabi frequency reported in \cite{Hoffman12}, we estimate that $\mathrm{\Omega_{M1}}$ will be tens of hertz.
\begin{figure}[ht]
\includegraphics[width=84 mm]{fig3.eps}
\caption{(Color online) The primed coordinate system defines the laboratory frame with the Z$^{'}$-axis set along the 2051 nm laser. The unprimed coordinates define the atom's frame with the Z-axis set along the externally applied magnetic field $\mathrm{\textbf{B}}$. The Y and Y$^{'}$ axes are parallel and point out of the page.}
\label{fig:coor}
\end{figure}
\indent The controlled modulation of the Rabi frequency can be most simply performed on the $6S_{1/2}\big( \mathrm{m}=-1/2\big)\leftrightarrow 5D_{3/2}\big( \mathrm{m}=-1/2\big)$ transition. A quantization axis to lift the degeneracy between the Zeeman sub-levels is established with a static magnetic field applied by a pair of static current carrying coils in a Helmholtz-like configuration. We define $\theta$ to be the angle between the applied magnetic field and the 2051 nm beam, as depicted in Fig. \ref{fig:coor}. The Rabi frequency for the $\mathrm{\Delta m=0}$ transition can then be written in terms of its E2 and M1 contributions as,
\begin{subequations}\label{eq:Rabiform}
\begin{equation}\label{eq:RabiformA}
\mathrm{\Omega} = \big|\mathrm{\Omega_{E2}} + \mathrm{\Omega_{M1}} \big|
\end{equation}
\begin{equation}\label{eq:RabiformB}
\mathrm{\Omega_{E2}} = \frac{i k}{4 \hbar}\sqrt{\frac{1}{10}} \mathrm{E2}\,\sin(2\theta)\,\mathrm{E}_{x^{'}}
\end{equation}
\begin{equation}\label{eq:RabiformC}
\mathrm{\Omega_{M1}} = -\frac{1}{\hbar}\sqrt{\frac{1}{6}}\mathrm{M1}\,\sin(\theta)\,\mathrm{B}_{x^{'}}
\end{equation}
\end{subequations}
\noindent where $\mathrm{E}_{x^{'}}$ and $\mathrm{B}_{x^{'}}$ refer to the components of the 2051 nm laser beam fields. To have both $\mathrm{\Omega_{E2}}$ and $\mathrm{\Omega_{M1}}$ be non-zero and add in phase, the transition should be driven with circularly polarized light. A plot of the expected value of $\mathrm{\Omega}$ driven by either sense of circular polarization and by linear light is shown in Fig.~\ref{fig:rabitheta}.
\indent It is evident in Eq. (\ref{eq:Rabiform}) that $\mathrm{\Omega_{M1}}$ and $\mathrm{\Omega_{E2}}$ possess even and odd symmetry, respectively, about $\theta = 90^{\circ}$. The $\mathrm{\Omega_{E2}}$ contribution to $\mathrm{\Omega}$ can be canceled by symmetrically shifting $\mathrm{\theta}$ about 90$^{\circ}$ by a small angle $\mathrm{\delta}$ as
\begin{equation}\label{eq:M1inter}
\mathrm{\Delta \Omega} =\big| \mathrm{\Omega}(90^{\circ}+\delta) - \mathrm{\Omega}(90^{\circ}-\delta) \big| = 2\mathrm{\Omega_{M1}}
\end{equation}
\begin{figure}[t]
\includegraphics[scale=0.33]{fig4.eps}
\caption{(Color online) The solid blue and red curves are $\mathrm{\Omega^{+}}({\theta})$ and $\mathrm{\Omega^{-}}({\theta})$ respectively, where the $\pm$ reflects the handedness of the 2051 nm laser polarization. The dashed curve shows $\mathrm{\Omega^{x}}(\theta)$, which is the same transition driven by horizontally polarized light, that could be useful for calibrating $\mathrm{\theta}$. All numerical values are estimated using theoretical values \cite{Sahoo06} for the transition moments and an electric field intensity estimated from previous measurements \citep{Hoffman12}}
\label{fig:rabitheta}
\end{figure}
\noindent with $\mathrm{\delta}$ chosen to be a few degrees. The value of $\mathrm{\Omega\big( \theta \big)}$ can be extracted from a decay curve as illustrated in Fig. \ref{fig:RabiFlopCrop}. An accurate determination of $\mathrm{\Omega_{M1}}$ from $\mathrm{\Delta \Omega}$ requires precise tuning of $\theta$. We estimate that to measure $\mathrm{\Omega_{M1}}$ to five percent accuracy each orientation must be known to within 0.02$^{\circ}$. The offset angle $\mathrm{\delta}$ can be tuned by rotating the magnetic field coils about the trap center or by adjusting a second set of orthogonally placed coils, but imperfections in the magnetic field make it difficult to know precisely how much they need to be adjusted \emph{a priori}. It is therefore crucial that $\mathrm{\theta}$ be calibrated for each measurement.
\indent None of the 5$D_{3/2}$ transitions are useful for such delicate angular calibration if driven with circularly polarized light. However, a suitable configuration could be to drive the $\mathrm{\Delta}$m = 0 transition with horizontally polarized light (electric field parallel to the X$^{'}$-axis) for which the Rabi frequency, $\mathrm{\Omega^{X}(\theta)}$, is sharply peaked and symmetric about 90$^{\circ}$, as shown in Fig. \ref{fig:rabitheta}. For the two orientations of $\mathrm{\theta}$, agreement between $\mathrm{\Omega^{X}(90^{\circ}+ \delta)}$ and $\mathrm{\Omega^{X}(90^{\circ}- \delta)}$ at the 1\% level or better is sufficient to calibrate $\mathrm{\theta}$. This may be challenging if operating in the overdamped regime (Eq.~\ref{eq:ShelveEffB}). Because the $\mathrm{\Delta m=0}$ Rabi frequency is expected to be small, it will be useful to reduce $\mathrm{\gamma}$ as much as is practical. Since the 300 Hz decoherence rate in \cite{Hoffman12} was mostly due to magnetic field noise in an unshielded setup, we expect significant improvement to $\mathrm{\gamma}$ with the use of magnetic shielding.
\indent An alternative method to modulate $\mathrm{\Omega}$, using only one position of $\theta$, is to drive the transition with both senses of circular polarization. In this approach $\mathrm{\Omega_{M1}}$ retains the proper relative phase and changes sign exactly as in Eq. (\ref{eq:M1inter}) but with respect to the handedness of the 2051 nm beam, indicated by a + or - superscript:
\begin{equation}\label{eq:M1pol}
\mathrm{\Delta \Omega} = \big| \mathrm{\Omega^{+}}(90^{\circ}\pm\delta) - \mathrm{\Omega^{-}}(90^{\circ}\pm\delta)\big| = 2\mathrm{\Omega_{M1}}
\end{equation}
\noindent In this approach the error in the relative positioning of $\theta$ between either measurement can be limited to the stability of the current source driving the magnetic field coils. Here then $\theta$ need only be known to $\mathrm{\sim 1^{\circ}}$, primarily to ensure that $\mathrm{\Omega_{M1}}$ is approximately maximized. Care must be taken, particularly with this approach, to have clean circular polarization in the 2051 nm beam. Systematic distortions to the polarization can, in principle, be compensated, however it will be desirable to limit distortions to a few percent.
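As a simple numerical sanity check of Eqs.~(\ref{eq:M1inter}) and (\ref{eq:M1pol}), one can evaluate Eq.~(\ref{eq:Rabiform}) for assumed E2 and M1 contributions and confirm that the symmetric difference isolates $2\mathrm{\Omega_{M1}}$. The sketch below does this; the amplitudes in it are purely illustrative and are not derived from the measured field strengths.
\begin{verbatim}
import numpy as np

# Illustrative amplitudes only (rad/s): E2 peak ~ 2*pi*1 kHz, M1 peak ~ 2*pi*20 Hz.
A_E2, A_M1 = 2*np.pi*1.0e3, 2*np.pi*20.0

def omega(theta_deg):
    # Omega_E2 ~ sin(2*theta) is odd about 90 deg; Omega_M1 ~ sin(theta) is even.
    th = np.radians(theta_deg)
    return abs(A_E2*np.sin(2*th) + A_M1*np.sin(th))

delta = 3.0  # offset angle in degrees
d_omega = abs(omega(90.0 + delta) - omega(90.0 - delta))
print(d_omega, 2*A_M1*np.sin(np.radians(90.0 + delta)))  # the two agree
\end{verbatim}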
\indent To extract the M1 moment from Eq. \ref{eq:RabiformC}, the 2051 nm light field at the ion must be measured. This can be done with the Rabi frequency of the $6S_{1/2}\big( \mathrm{m}=\mp 1/2\big)\leftrightarrow 5D_{3/2}\big( \mathrm{m}=\pm3/2\big)$ transitions denoted $\mathrm{\Omega^{\pm 2}_{E2}}$. The electric quadrupole matrix element is well known from theory and is given in Table \ref{tab:relativesizes}. The transitions are driven by both the $\mathrm{\hat{x}'}$ and $\mathrm{\hat{y}'}$ components of the laser's electric field but coupling through M1 is suppressed by virtue of its angular momentum selection rule,
\begin{equation}
\mathrm{\Omega^{\pm 2}_{E2} = }\frac{k}{4\sqrt{6}\hbar }\frac{1}{\sqrt{5}}\mathrm{E2\,\big|\sin(2\theta)\,E_{x^{'}} \mp 2}\,i\,\mathrm{\sin(\theta)\,E_{y^{'}}\big| }
\end{equation}
\noindent When driven with vertically polarized light (electric field parallel to the Y$^{'}$-axis) and $\theta \sim 90^{\circ}$, both $\mathrm{\Omega^{\pm 2}_{E2}}$ are maximized and flat to leading order in $\theta$. A precise determination of the 2051 nm field amplitude from $\mathrm{\Omega^{\pm 2}_{E2}}$ would thus be relatively straightforward, provided that the whole 2051 nm field amplitude can accurately be aligned in the vertical polarization state. A noteworthy alternative is to measure both $\mathrm{\Omega^{+ 2}_{E2}}$ and $\mathrm{\Omega^{- 2}_{E2}}$ with the 2051 nm beam circularly polarized. With $\theta$ known, these can yield the individual components of the 2051 nm field, $E_{x^{'}}$ and $E_{y^{'}}$. Measuring the components separately could be a useful check that pure circular polarization was achieved at the ion.
\begin{figure}[b]
\includegraphics[scale=0.14]{fig5.eps} \\
\caption{A schematic of the apparatus used to characterize the stress-optical behavior of the test viewport. }
\label{fig:testchamber}
\end{figure}
\section{Effect of stress induced birefringence on laser polarization}
\indent Precise and accurate control of the 2051 nm laser polarization will be of general concern to all aspects of the proposed measurement and is paramount to any trapped ion PNC measurement. While the polarization state of the 2051 nm beam can be controlled outside of the trap, stress induced birefringence in the viewports will tend to unpredictably distort the beam's polarization. In effect, each point on the viewport acts like a wave-plate with relative phase retardance $\mathrm{\Gamma}$ and an unknown optical axis orientation $\mathrm{\alpha}$, which can be either of the fast or slow axes. Generally the relative phase difference seen by a beam of wavelength $\mathrm{\lambda}$, that transmits through a transparent isotropic material under mechanical stress, is described by the stress-optic law \cite{Dally78}:
\begin{equation}\label{eq:SIB}
\mathrm{\Gamma} =\frac{2\pi C t}{\lambda}\big(\sigma_{11}-\sigma_{22}\big)
\end{equation}
\noindent where C is the stress-optic coefficient, t is the thickness of the sample, and $\sigma_{11}$ and $\sigma_{22}$ are the first and second principal stresses, respectively. Distortions of this type need to be limited to a few percent to make possible accurate determination of $\mathrm{\Omega_{M1}}$ and the 2051 nm laser field strength. A direct measurement of the effect is difficult because the trapping apparatus is embedded within a vacuum system where the beam cannot be easily accessed. In order to estimate the size of the effect we have measured the amount of stress induced birefringence in a single test viewport.
\indent A standard 1.33 inch diameter fused silica viewport from MDC Vacuum Products, LLC served as a test viewport. To replicate a viewport \emph{in situ}, the test viewport was baked for five days at a temperature of 150$^{\circ}$ C while pumped down to a pressure of $\mathrm{\sim 2.0 \times 10^{-7}}$ Torr. Although longer bakes and lower pressures are needed for real trapped ion experiments, these were sufficient for our purposes here. A charge-coupled device (CCD) image of the stress induced birefringence present in the viewport was taken prior to baking and is shown in Fig. \ref{fig:ios}. Similar images were taken of previously used viewports and suggest that comparable amounts of stress were present and so we expect the results reported here to be typical in magnitude.
\begin{figure}[t]
\includegraphics[scale=0.26]{fig6.eps}
\caption{A CCD image of the stress pattern in the viewport obtained by shining incoherent red light through the viewport between a pair of crossed polarizers. The image shows the stress-optic behavior of the viewport qualitatively. Bright regions in the image correspond to higher amounts of stress induced birefringence. The viewport edge is indicated in white.}
\label{fig:ios}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.33]{fig7.eps}
\caption{(Color online) Ratio of $\mathrm{P_{Min}}$ to $\mathrm{P_{Max}}$ as a function of P1 orientation at the center and $\pm$2.25 mm off center of the viewport. The data are fit to Eq.~(\ref{eq:ratio}) to get $\mathrm{\Gamma}$ and $\mathrm{\alpha}$ for each spatial incidence.}
\label{fig:ellipticity3}
\end{figure}
\indent To quantify the effect we have determined the orientation of the viewport's optical axes, $\mathrm{\alpha}$, and the relative phase retardation between the axes, $\mathrm{\Gamma}$, at its center and two points on a diameter. The measurements were made with a linearly polarized 650 nm beam with a full width at half max of 2 mm. The choice to take the measurements at this wavelength was made simply for our convenience. A schematic of the experiment is provided in Fig. \ref{fig:testchamber}. The 650 nm light was delivered via single mode optical fiber to a Glan-Thompson polarizer (P1) that controlled the linear polarization angle $\mathrm{\Theta}$ of the light incident on the front of the viewport. We take $\mathrm{\Theta}=0$ to mean horizontal polarization. A second identical Glan-Thomson polarizer (P2) is placed after the test viewport and was used to analyze the beam's polarization state after transmitting through the viewport. The Glan-Thompson polarizers had a manufacturer quoted extinction ratio of about 100 000:1 in power which was verified. The relative optical power after P2 was measured with a silicon photodiode. The optical fiber, polarizers, and photodiode were mounted on a translation stage that was moved transverse to the viewport with a micrometer. \\
\indent Each measurement consisted of setting P1 to a known orientation and then rotating P2 to find the minimum and maximum power in the beam, $\mathrm{P_{Min}}$ and $\mathrm{P_{Max}}$. The ratio $\mathrm{P_{Min}} / \mathrm{P_{Max}}$ is approximately the square of the ellipticity introduced into the beam's polarization by the viewport. The finite extinction ratio of P1 and P2 did cause a non-zero $\mathrm{P_{Min}} / \mathrm{P_{Max}}$ without a viewport placed between the polarizers; however, this contribution was ten times smaller than what was caused by the viewport and so was ignored.
\indent The optical axis orientation and phase retardation of the viewport at a given location are related to $\mathrm{P_{Min}} / \mathrm{P_{Max}}$ by,
\begin{equation}\label{eq:ratio}
\frac{\mathrm{P_{Min}}}{\mathrm{P_{Max}}} =\frac{\mathrm{\sin}^2\big[\frac{\mathrm{\Gamma}}{2}\big] \,\mathrm{\sin}^2\big[2(\mathrm{\Theta}-\mathrm{\alpha})\big]}{1-\mathrm{\sin}^2\big[\frac{\mathrm{\Gamma}}{2}\big]\, \mathrm{\sin}^2\big[2(\mathrm{\Theta}-\mathrm{\alpha})\big]}
\end{equation}
\noindent To find $\mathrm{\Gamma}$ and $\alpha$ we measured the power ratio for 18 orientations of P1 between $0\,^{\circ}$ and $90\,^{\circ}$ at each spatial incidence, as shown in Fig.~\ref{fig:ellipticity3}. The parameters $\mathrm{\Gamma}$ and $\mathrm{\alpha}$ were found by fitting Eq.~(\ref{eq:ratio}) to the data; the results are displayed in Table~\ref{tab:fit}.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
Position&\multicolumn{2}{c|}{Measured at 650 nm}\\
&$\mathrm{\Gamma}$&$\mathrm{\alpha}$ \\
\hline
\;\,-2.25 mm& $6.47\,^{\circ} \pm 0.02\,^{\circ}$ & $28.08\,^{\circ} \pm 0.15\,^{\circ}$ \\ [1ex]
\;\;\,0.00 mm& $5.81\,^{\circ} \pm 0.02\,^{\circ}$ & $39.67\,^{\circ} \pm 0.18\,^{\circ}$ \\ [1ex]
+2.25 mm & $2.69\,^{\circ} \pm 0.05\,^{\circ}$ & $53.79\,^{\circ} \pm 0.57\,^{\circ}$ \\ [1ex]
\hline
\end{tabular}
\caption {The retardance and optical axis orientation of the viewport for the center and opposite points along the edge.}
\label{tab:fit}
\end{table}
\indent These measurements place upper bounds on the anticipated polarization distortion of the 2051 nm beam. The stress-optic law predicts that $\mathrm{\Gamma}$ will be about three times less for a 2051 nm beam. The effect is further suppressed at 2051 nm by dispersion of the stress-optic coefficient \cite{Vasudevan72}, although it is unknown by how much. At the center of the viewport, not accounting for dispersion, a wave-plate retardance of $\sim 2^{\circ}$ can be expected for a 2051 nm beam. If left uncorrected this value would shift M1 by about 5\%. This indicates that distortions in 2051 nm polarization from stress induced birefringence can likely be tolerated for the M1 measurement. However these distortions will need to be further suppressed or compensated for in a trapped ion PNC measurement.
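The $\sim2^{\circ}$ figure quoted above follows directly from the $1/\lambda$ scaling of Eq.~(\ref{eq:SIB}) applied to the measured retardance at the viewport center, $\Gamma_{2051} \approx \Gamma_{650}\,(650/2051) \approx 5.81^{\circ} \times 0.32 \approx 1.8^{\circ}$, before any additional suppression from dispersion of the stress-optic coefficient.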
\section{Conclusion}
A parity nonconservation experiment with single trapped Ba$^{+}$ requires the magnetic dipole transition moment for the 2051 nm $6S_{1/2} \leftrightarrow 5D_{3/2}$ transitions be known. To date, there is one calculation of M1 and it predicts a value dominated by electron-electron correlation effects \cite{Sahoo06}. We have therefore proposed an approach for extracting M1 from a measurement of $\mathrm{\Omega_{M1}}$ using an E2-M1 intensity interference. In our approach the relative phase between $\mathrm{\Omega_{E2}}$ and $\mathrm{\Omega_{M1}}$ is controlled experimentally for the $6S_{1/2}\big( \mathrm{m}=-1/2\big) \leftrightarrow 5D_{3/2}\big( \mathrm{m}=-1/2\big)$ transition so that $\mathrm{\Omega_{E2}}$ can be eliminated to reveal $\mathrm{\Omega_{M1}}$.
\indent We describe two versions of the measurement using the $\mathrm{\Delta m=0}$ transition. In one version the 2051 nm beam alignment is controlled by rotating the ion's quantization axis symmetrically about 90$^{\circ}$. A second approach is to drive the transition with either sense of circular polarization, which has the equivalent effect. These two approaches suffer from different systematics and can therefore be used to check for consistency. To extract M1 from $\mathrm{\Omega_{M1}}$ the 2051 nm beam alignment and intensity need be known. We suggest that these can be had from measurements of the Rabi frequencies for transitions to particular $5D_{3/2}$ states with judiciously selected 2051 nm beam polarizations. The feasibility of the measurement, in either approach, therefore depends critically on the ability to carefully control the 2051 nm beam's polarization.
\indent A general concern to the proposed measurement is the effect that stress induced birefringence will have on the polarization of a 2051 nm beam. To estimate how much polarization distortion can be expected we have measured the effect in a test viewport. Unfortunately it is difficult to measure the effect directly with the 2051 nm beam, so we have used a 650 nm beam instead. Stress induced birefringence falls off faster than 1/$\mathrm{\lambda}$ and accordingly our measurements place an upper bound on what will be found with the 2051 nm beam. The results of these measurements suggest that the polarization distortion to be expected in a 2051 nm beam will be small enough to be insignificant to the M1 measurement. \\
\section{Acknowledgments}
\indent The authors would like to thank the other members of the Blinov group; Richard Graham, Zichao Zhou, John Wright, Tomasz Sakrejda, Thomas Noel, Carolyn Auchter, Chen-Kuan Chou and Alexander Sivitilli for discussion and commentary. The authors would also like to thank Alan Jamison and Benjamin Plotkin-Swing from the University of Washington's Ultra-Cold Atoms group for a helpful discussion at an early stage of this work. This work was supported by the National Science Foundation, Grant No. PHY-09-06494.
|
2,869,038,156,751 | arxiv | \section{Introduction}
The successful description of the spatiotemporal evolution of complex systems typically relies on detailed mathematical models operating at a fine scale (e.g.~molecular dynamics, agent-based, stochastic or lattice-based methods).
Such microscopic, first-principles models, keeping track of the interactions between huge numbers of microscopic degrees of freedom, typically lead to prohibitive computational cost for large-scale spatiotemporal simulations.
To address this issue (and since we are typically interested in macro-scale features -pressure drops, reaction rates- rather than the position and velocity of each individual molecule), reduced, coarse-scale models are developed and used, leading to significant computational savings in large-scale spatiotemporal simulations~\cite{Noid13}.
Macroscopically, the fine scale processes may often be successfully modeled using partial differential equations (PDEs) in terms of the right macroscopic observables (``coarse variables": not molecules and their velocities, say, but rather pressure drops and momentum fields).
Deriving the macroscopic PDE that effectively models the microscopic physics (the so-called ``closure problem'') requires, however, deep understanding/intuition about the complex system of interest and often extensive mathematical operations; the discovery of macroscopic governing equations is typically a difficult and time-consuming process.
To bypass the first principles discovery of a macroscopic PDE directly, several data-driven approaches provide ways to effectively determine good coarse observables and the approximate coarse-scale relations between them from simulation data.
In the early '90s researchers (including our group) employed artificial neural networks for system identification (both lumped and distributed)~\cite{Hudson90,Krischer93,Rico94,Anderson96,Gonzalez98}.
Projective time integration in dynamical systems~\cite{Kevrekidis03} and fluid dynamics~\cite{Sirisup05,Lee17D} also provides good data-driven long-time predictions based not on closed-form equations, but rather on a ``black box'' simulator.
Furthermore, the equation-free framework for designing fine scale computational experiments and systematically processing their results through ``coarse-time steppers'' has proven its usefulness / computational efficiency in analyzing macroscopic bifurcation diagrams~\cite{Siettos12,Theodoropoulos00}.
The easy availability of huge simulation data sets, and recent developments in the efficient implementation of machine learning algorithms, has made the revisiting of the identification of nonlinear dynamical systems from simulation time series an attractive -and successful- endeavor.
Working with observations at the macroscopic level, hidden macroscopic PDEs can be recovered directly by artificial neural networks~\cite{Gonzalez98} (see also Ref.~\cite{Bar19}). Sparse identification of nonlinear dynamics (SINDy)~\cite{Rudy17} as well as Gaussian processes~\cite{Raissi17,Raissi18} have also been successfully used, resulting in {\em explicit} data-driven PDEs. All these approaches rely on macroscopic observations.
In this paper, we discuss the identification of unavailable coarse-scale PDEs {\em from fine scale observations} through a combination of machine learning and manifold learning algorithms.
Specifically, using Gaussian Processes, Artificial Neural Networks, and/or Diffusion Maps, and starting with candidate coarse fields (e.g.~densities), our procedure extracts relevant macroscopic features (e.g.~coarse derivatives) from the data, and then uncovers the relations between these macroscopic
features and their time evolution (the right-hand-side of the explicitly unavailable macroscopic PDE).
To effectively reduce the input data domain, we employ two feature selection methods: (1) a sensitivity analysis via Automatic Relevance Determination (ARD)~\cite{Qi04,Rasmussen06,Wipf08} in Gaussian processes and (2) a manifold learning technique, Diffusion Maps~\cite{Holiday19}.
Having selected the relevant macro features in terms of which the evolution can be modelled, we employ two machine learning algorithms to approximate a ``good'' right-hand-side of the underlying PDEs: (1) Gaussian process regression and (2) artificial neural networks.
Our framework is illustrated through the data-driven discovery of the macroscopic, concentration-level PDE resulting from a fine-scale, Lattice Boltzmann (LB) model of a reaction/transport process (the FitzHugh-Nagumo process in one spatial dimension).
Long-term macroscopic prediction is enabled by numerical simulation of the coarse-scale PDE {\em identified from the Lattice-Boltzmann data}.
Different possible feature combinations (leading to different realizations of the same evolution) will also be discussed.
The remainder of the paper is organized as follows: In section~\ref{sec:framework}, we present an overview of our proposed framework and briefly review theoretical concepts of Gaussian process regression, artificial neural networks, and Diffusion Maps.
Two methods for feature selection are also presented.
In section~\ref{sec:simulation}, we describe two simulators at different scales: (1) the FitzHugh-Nagumo model at the macro-scale and (2) its Lattice Boltzmann realization at the micro-scale.
In section~\ref{sec:result}, we demonstrate the effectiveness of our proposed framework and discuss the advantages and challenges of different feature selection methods and regression models for performing this task.
In section \ref{sec:conclusion}, we summarize our results and discuss open issues for further development of the data-driven discovery of the underlying coarse PDE from microscopic observations.
\section{Framework for recovering a coarse-scale PDE via machine learning}
\label{sec:framework}
\subsection{Overview}
\begin{figure}[!htp]
\centering
\includegraphics[width=0.45\textwidth]{LBM.jpg}
\caption{Schematic illustration of the extraction of coarse-scale observables $u$ from microscopic observations.
Through a Lattice Boltzmann model (here, D2Q9), we obtain particle distribution functions ($f_i$) on a given lattice.
Using the zeroth moment field of the particle distribution function at the grid point $x_n$, we extract the coarse observable $u$ (in this paper, we have two coarse observables, $u$ and $v$, which represent the density of the activator and the inhibitor, respectively).
}
\label{fig:LBM}
\end{figure}
\begin{figure*}[!htp]
\centering
\includegraphics[scale=0.48]{frameLBM2.pdf}
\caption{Workflow for uncovering coarse-scale PDEs.
First, we compute macroscopic variables $u$ and $v$ from the Lattice Boltzmann simulation data (see equation~\eqref{eqn:concentration} and figure~\ref{fig:LBM}) and estimate their spatial derivatives (e.g. by finite difference schemes on the lattice).
After that, we employ machine learning algorithms (here, Gaussian process regression or artificial neural networks) to identify ``proper'' time derivatives $u_t$ and $v_t$ from an original input data domain directly (no feature selection among several spatial derivatives) or from a reduced input data domain (feature selection among several spatial derivatives) using ARD in Gaussian processes or Diffusion Maps.
We then simulate the identified coarse-scale PDE for given coarse initial conditions ($\mathbf{u_0},\mathbf{v_0}$).
}
\label{fig:frame}
\end{figure*}
The workflow of our framework for recovering hidden coarse-scale PDEs from microscopic observations is schematically shown in figures~\ref{fig:LBM} and~\ref{fig:frame}.
Specifically, this framework consists of two subsections: (1) computing coarse-scale observables and (2) identifying coarse-scale PDEs and then numerically simulating them.
To clarify the algorithm, consider a single field (the activator $u$; later in this paper we will use two densities, $u$ and $v$, for the activator and the inhibitor, respectively).
As shown in figure~\ref{fig:LBM}, we compute the coarse-scale observable (here the $u$ concentration field) through the zeroth moment of the microscopic LB simulation (averaging the particle distribution functions ($f_i$) on a given lattice point, see section~\ref{sec:lbm} for more details).
Given the coarse-scale observable we estimate its time-derivative and several of its spatial derivatives (e.g.~$u_t$, $u_x$, and $u_{xx}$), typically using finite difference schemes in time and space as
necessary.
A PDE of the form $u_t=L(u)= F(u, u_x, u_{xx}, \cdots)$ is a relation between the time-derivative and a number of spatial derivatives; this relation holds at every moment in time and every point in space.
For a simple reaction diffusion equation, say $u_t=u_{xx} - u$, the data triplets $(u, u_t, u_{xx})$ will in general lie on a two-dimensional manifold in three-dimensional space, since $u_t$ is a function of $u$ and $u_{xx}$.
Knowing that this manifold is two-dimensional suggests (in the spirit of the Whitney and Takens embedding theorems~\cite{Whitney36, Takens81}) that any five generic observables suffice to create an embedding, and thus to learn $u_t$, a function on the manifold, as a function of these five observables.
One might choose, for example, as observables the values of $u$ at any five spatial points at a given time moment, possibly the five points used in a finite difference stencil for estimating spatial derivatives.
In the study of time series through delay embeddings one uses observations on a temporal stencil; it is interesting that here one might use observations on a spatial stencil - encoding information analogous to spatial derivatives (see Ref.~\cite{Bar19}).
Motivated by this perspective, in order to learn the time derivative $u_t$, we use an original input data domain including several (say, all up to some order) spatial derivatives. We also consider the selection of a reduced input data domain via two feature selection methods: (1) a sensitivity analysis by automatic relevance determination (ARD) in Gaussian processes~\cite{Williams96,Qi04,Wipf08} and (2) a manifold learning approach, Diffusion Maps~\cite{Coifman05,Coifman06}, with a regression loss (see section~\ref{sec:fs} for more details).
Then, we consider two different machine learning methods (Gaussian process regression and artificial neural networks) to learn $u_t$ based on the selected feature input data domain.
After training, simulation of the learned coarse-scale PDE given a coarse initial condition $u_0(x,t), v_0(x,t)$ can proceed with any acceptable discretization scheme in time and space (from simple finite differences to, say, high order spectral or finite element methods).
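For concreteness, this data-preparation step can be sketched in a few lines of Python; the array names, shapes, and the use of simple centered differences (via \texttt{numpy.gradient}) below are illustrative choices only, and in practice both fields $u$ and $v$ (and their derivatives) enter the feature set in the same way.
\begin{verbatim}
import numpy as np

def coarse_field(f):
    # zeroth moment of the particle distribution functions f_i:
    # f has shape (n_directions, n_sites, n_times); summing over the
    # lattice directions yields the coarse density on the grid
    return f.sum(axis=0)

def training_set(u, dx, dt):
    # u has shape (n_sites, n_times); finite differences give the
    # candidate features (u, u_x, u_xx) and the regression target u_t
    u_x  = np.gradient(u,   dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    u_t  = np.gradient(u,   dt, axis=1)
    X = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel()])
    return X, u_t.ravel()
\end{verbatim}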
\subsection{Gaussian process regression}
One of the two approaches we employ to extract dominant features and uncover the RHS of coarse-scale PDEs is Gaussian process regression.
In Gaussian processes, to represent a probability distribution over target functions (here, the time derivative), we assume that our observations are a set of random variables whose finite collections have a multivariate Gaussian distribution with an {\em unknown} mean (usually set to zero) and an {\em unknown} covariance matrix $K$.
This covariance matrix is commonly formulated by a Euclidean distance-based kernel function $\kappa$ in the input space, whose hyperparameters are optimized by training data.
Here, we employ a radial basis kernel function (RBF), which is the \emph{de facto} default kernel function in Gaussian process regression, with ARD~\cite{Rasmussen06}.
\begin{equation}
K_{ij}=\kappa(\mathbf{x_i},\mathbf{x_j};\theta) = \theta_0\exp \left( -\frac{1}{2} \sum_{l=1}^k \frac{(x_{i,l} - x_{j,l})^2}{\theta_l
} \right).
\label{eqn:kernel}
\end{equation}
where $\theta = [\theta_0, \dots, \theta_k]^T$ is a $k+1$ dimensional vector of hyperparameters and $k$ is the number of dimensions of the input data domain.
The optimal hyperparameter set $\theta^*$ can be obtained by minimizing a negative log marginal likelihood with the training data set $\{\mathbf{x},\mathbf{y}\}$:
\begin{eqnarray}
\label{eqn:opt}
\theta^* &=& \argmin_{\theta} \; \{ -\log p(\mathbf{y}|\mathbf{x},\theta)\}, \\
-\log p(\mathbf{y}|\mathbf{x},\theta) &=& \frac{1}{2}\mathbf{y}^T(K+\sigma^2I)^{-1}\mathbf{y} + \frac{1}{2}\log|(K+\sigma^2I)| + \frac{N}{2}\log 2\pi \nonumber
\end{eqnarray}
where $N$ is the number of training data points, $\sigma^2$ and $I$ represent the variance of the (Gaussian) observation noise and an $N \times N$ identity matrix, respectively.
To find the Gaussian distribution of the function values (here the time derivative) at test data points, we represent the multivariate Gaussian distribution with the covariance matrix constructed by equation~\eqref{eqn:kernel} as
\begin{equation}
\begin{bmatrix}
\mathbf{y} \\
\mathbf{y^*}
\end{bmatrix}
\sim
N \left(
\mathbf{0},
\begin{bmatrix}
K + \sigma^2 I & K_*\\
K_*^T & K_{**}
\end{bmatrix}
\right),
\end{equation}
where $\mathbf{y^*}$ is a predictive distribution for test data $\mathbf{x^*}$, $K_*$ represents a covariance matrix between training and test data while $K_{**}$ represents a covariance matrix between test data.
Finally, we represent a Gaussian distribution for time derivatives at the test point in terms of a predictive mean and its variance, through conditioning a multivariate Gaussian distribution as
\begin{equation}
\mathbf{\bar{y}^*} = K_*^T(K+\sigma^2I)^{-1}\mathbf{y},
\end{equation}
\begin{equation}
K(\mathbf{y^*}) = K_{**} - K_*^T(K+\sigma^2I)^{-1}K_*,
\end{equation}
and we assign the predictive mean ($\mathbf{\bar{y}^*}$) as the estimated time derivative for the corresponding data point.
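For concreteness, a minimal Python/NumPy sketch of the predictive-mean computation described above is given below. The hyperparameters $\theta$ and the noise variance are assumed to have already been selected (e.g.\ by minimizing the negative log marginal likelihood of equation~\eqref{eqn:opt} with an off-the-shelf optimizer); the toy data and all names are illustrative only.
\begin{verbatim}
import numpy as np

def ard_rbf_kernel(X1, X2, theta):
    """K_ij = theta[0] * exp(-0.5 * sum_l (X1[i,l]-X2[j,l])^2 / theta[l+1])."""
    diff = X1[:, None, :] - X2[None, :, :]              # (n1, n2, k)
    return theta[0] * np.exp(-0.5 * np.sum(diff**2 / theta[1:], axis=2))

def gp_predict(X_train, y_train, X_test, theta, noise_var):
    """Predictive mean and variance of the GP at the test inputs."""
    K   = ard_rbf_kernel(X_train, X_train, theta)
    Ks  = ard_rbf_kernel(X_train, X_test,  theta)        # training x test
    Kss = ard_rbf_kernel(X_test,  X_test,  theta)
    A   = K + noise_var * np.eye(len(X_train))
    mean = Ks.T @ np.linalg.solve(A, y_train)            # predictive mean
    cov  = Kss - Ks.T @ np.linalg.solve(A, Ks)           # predictive covariance
    return mean, np.diag(cov)

# toy usage: learn y = sin(x0) from noisy samples; x1 is an irrelevant input
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(200)
theta = np.array([1.0, 1.0, 1e6])   # large ARD weight for x1: "insignificant"
mean, var = gp_predict(X, y, X[:5], theta, noise_var=1e-4)
print(mean, y[:5])
\end{verbatim}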
\subsection{Artificial neural networks}
\label{sec:ANN}
Next, we consider (artificial, possibly deep) neural networks (ANN or NN or DNN) for identifying the RHS of coarse-scale PDEs.
Generally, neural networks consist of an input layer, one or more hidden layers, and an output layer, all comprised of several computational neurons, typically fully connected by weights ($\omega$), biases ($b$), and an activation function ($\psi(\cdot)$).
Macroscopic observables and their spatial derivatives are assigned at the input layer, while the corresponding time derivative is obtained at the output layer (here we are considering only first order PDEs in time; higher order equations, like the wave equation, involving second derivatives in time can also be accounted for within the framework).
In (feed-forward) neural networks, a universal approximation theorem~\cite{Cybenko89} guarantees that, for a single hidden layer with a (sufficiently large) finite number of neurons, an approximate realization $\tilde{y}$ of the target function $y$ can be found.
Here, approximation implies that the target and learned functions are sufficiently close in an appropriately chosen norm ($\forall \delta >0: \Vert y - \tilde{y} \Vert < \delta$).
The approximate form of the target function obtained through a feedforward neural net with $N_h$ hidden neurons can be written as
\begin{equation}
\tilde{y}(\mathbf{x}) = \sum_{i=1}^{N_h} a_i\,\psi \left(\mathbf{\omega}_i^{T}\mathbf{x} + b_i \right),
\end{equation}
where the $a_i$ are the output-layer weights.
\noindent
The mean-squared-error cost function
\begin{equation}
E_D = \frac{1}{N} \sum_{j=1}^N(y_j - \tilde{y}(x_j))^2,
\end{equation}
typically measures the goodness of the approximation.
In order to obtain a \textit{generalizable} network, with good performance on the test data set as well as on the training data set (i.e.~preventing overfitting), several regularization approaches have been proposed, mostly relying on modifications of the cost function.
Foresee and Hagan~\cite{Foresee97} showed that modifying the cost function by adding the regularization term $E_\omega=\Sigma_{j=1}^{N_\omega}\omega_j^2$, results in a network that will maximize the posterior probability based on Bayes' rule.
We thus trained our network based on a total cost function of the form:
\begin{equation}
E_{total} = \beta_1 E_D + \beta_2 E_\omega,
\end{equation}
in which $\beta_1$ and $\beta_2$ are network tuning parameters.
Here, we employ Bayesian regularized back-propagation for training, which updates weight and bias values through Levenberg-Marquardt optimization~\cite{Hagan94}; we expect that, for our data, comparable results would be obtained through other modern regularization/optimization algorithms.
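As an illustration of the regularized cost $E_{total}=\beta_1 E_D + \beta_2 E_\omega$, the following sketch trains a single-hidden-layer network by plain gradient descent on that cost (Python/NumPy). The networks used for the results in this paper are trained with Levenberg-Marquardt optimization and Bayesian regularization; the sketch below only illustrates the structure of the cost function and is not the implementation used here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden):
    return {"W1": 0.5 * rng.standard_normal((n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": 0.5 * rng.standard_normal(n_hidden),
            "b2": 0.0}

def forward(net, X):
    H = np.tanh(X @ net["W1"].T + net["b1"])       # hidden activations
    return H @ net["W2"] + net["b2"], H

def train(net, X, y, beta1=1.0, beta2=1e-4, lr=1e-2, iters=5000):
    N = len(y)
    for _ in range(iters):
        yhat, H = forward(net, X)
        err = yhat - y
        # gradients of E_total = beta1 * mean((yhat-y)^2) + beta2 * sum(w^2)
        gW2 = beta1 * 2.0 / N * H.T @ err + 2.0 * beta2 * net["W2"]
        gb2 = beta1 * 2.0 / N * err.sum()
        dH  = np.outer(err, net["W2"]) * (1.0 - H**2)
        gW1 = beta1 * 2.0 / N * dH.T @ X + 2.0 * beta2 * net["W1"]
        gb1 = beta1 * 2.0 / N * dH.sum(axis=0)
        for k, g in zip(("W1", "b1", "W2", "b2"), (gW1, gb1, gW2, gb2)):
            net[k] -= lr * g
    return net

# toy usage: fit y = x0^2 - x1
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0]**2 - X[:, 1]
net = train(init_net(2, 8), X, y)
print("training MSE:", np.mean((forward(net, X)[0] - y)**2))
\end{verbatim}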
\subsection{Diffusion Maps}
\label{sec:dmap}
Diffusion Maps (DMAP) have successfully been employed for dimensionality reduction and nonlinear manifold learning~\cite{Coifman05,Coifman06,Nadler06,Nadler08}.
The Diffusion Maps algorithm guarantees, for data lying on a smooth manifold (and in the limit of infinite data), that the eigenvectors of the large normalized kernel matrices constructed from the data converge to the eigenfunctions of the Laplace-Beltrami operator on the manifold on which the data lie.
These eigenfunctions can also provide nonlinear parametrizations (i.e.~sets of coordinates) for such (Riemannian) manifolds.
To approximate the Laplace-Beltrami operator from scattered data points on the manifold, a normalized diffusion kernel matrix between observation (data) points is commonly used:
\begin{equation}
\mathbf{W_{ij}} = \exp \left( -\frac{\Vert \mathbf{y_i}-\mathbf{y_j}\Vert_{2}^2}{\varepsilon}\right),
\end{equation}
where $\mathbf{y_i}$ are real-valued observations and $\varepsilon$ is the kernel width.
After that, one obtains a normalized matrix $\mathbf{W}^{(\alpha)}$ by
\begin{equation}
\mathbf{W}^{(\alpha)} = \mathbf{D}^{-\alpha}\mathbf{W}\mathbf{D}^{-\alpha},
\end{equation}
where $\mathbf{D}$ is a diagonal matrix whose $\mathrm{i^{th}}$ entry is the sum of the corresponding row of $\mathbf{W}$.
Here, $\alpha\in\{0,1\}$ is a tuning parameter:
$\alpha=0$ corresponds to the classical normalized graph Laplacian~\cite{Belkin02,Belkin03} while $\alpha=1$, which takes into account the local data density, yields the Laplace-Beltrami operator~\cite{Coifman06}; in this paper, we set $\alpha=1$.
Then, $\tilde{\mathbf{W}}$ is calculated simply as:
\begin{equation}
\tilde{\mathbf{W}} = \tilde{\mathbf{D}}^{-1}\mathbf{W}^{(\alpha)},
\end{equation}
where $\tilde{\mathbf{D}}$ is a diagonal matrix whose $\mathrm{i^{th}}$ entry is the sum of the corresponding row of $\mathbf{W}^{(\alpha)}$.
Finally, an embedding of the manifold is constructed by the first few (say $m$) nontrivial eigenvectors of $\mathbf{\tilde{W}}$,
\begin{equation}
\mathbf{y_i} \mapsto \left(\lambda_1^t\phi_{1,i},\dots,\lambda_m^t\phi_{m,i}\right),\;\; i=1,\dots,n,
\end{equation}
where $t$ corresponds to the number of diffusion steps (here $t=0$) with descending ordered eigenvalues $\lambda_i$.
Instead of using the Euclidean distance between the data points in the Diffusion Map algorithm, it has recently been proposed to use a different, kernel-based similarity metric~\cite{Bittracher19}; the associated kernel-based embeddings allow for control of the distortion of the resulting embedding manifold; we will return to this and its possible implications in our Conclusions (Section~\ref{sec:conclusion}).
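A compact sketch of the Diffusion Maps construction described above (with the density normalization $\alpha=1$) follows; the kernel width $\varepsilon$ is set here by a median heuristic, which is an illustrative choice rather than the one used for the results in this paper.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(Y, n_coords=5, alpha=1.0, eps=None):
    """First n_coords nontrivial diffusion coordinates of the rows of Y."""
    D2 = cdist(Y, Y, metric="sqeuclidean")
    if eps is None:
        eps = np.median(D2)                      # illustrative kernel width
    W = np.exp(-D2 / eps)
    d = W.sum(axis=1)
    W_alpha = W / np.outer(d**alpha, d**alpha)   # alpha=1: Laplace-Beltrami
    M = W_alpha / W_alpha.sum(axis=1)[:, None]   # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    # skip the trivial constant eigenvector (eigenvalue 1)
    return evals[1:n_coords + 1], evecs[:, 1:n_coords + 1]

# toy usage: a noisy closed curve embedded in three dimensions
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 400))
Y = np.c_[np.cos(t), np.sin(t), 0.05 * rng.standard_normal(400)]
lam, phi = diffusion_maps(Y, n_coords=3)
print(lam)
\end{verbatim}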
\subsection{Feature selection}
\label{sec:FS}
Describing the coarse-scale spatiotemporal dynamics in the form of a PDE, involves learning the local field time-derivatives
as a function of a few, relevant local field spatial derivatives.
Starting with a ``full" input data domain consisting of all local field values as well as all their coarse-scale spatial derivatives (up to some order),
we must extract the few ``relevant" spatial derivatives as dominant features of this input data domain.
Such feature selection will typically reduce the dimensionality of the input data domain.
Among various feature selection methods, we employ two algorithms based on (1) sensitivity analysis via ARD in Gaussian processes~\cite{Williams96,Qi04,Wipf08} and (2) manifold parametrization through output-informed Diffusion Maps~\cite{Holiday19}.
First, we employ sensitivity analysis via automatic relevance determination (ARD) in Gaussian processes, which effectively reduces the number of input data dimensions.
In Gaussian processes, we obtain the optimal hyperparameter set $\theta^*$ by minimizing a negative log marginal likelihood (see equation~\eqref{eqn:opt}).
ARD assigns a different hyperparameter $\theta_i$ for each input dimension $d_i$.
As can be seen in equation~\eqref{eqn:kernel}, a large value of $\theta_i$ nullifies the difference between target function values along the $d_i$ dimension, allowing us to designate this dimension as ``insignificant".
Practically, we select the input dimensions with relatively small $\theta_j$ to build a reduced input data domain, which can still successfully approximate the right-hand-side of the underlying PDEs.
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.8\textwidth]{dmap_schematic.pdf}
\caption{Input feature selection via output-informed Diffusion Maps.
Diffusion Maps provide intrinsic coordinatization of the output (the time derivatives) from combined input-output
data.
Guided by a regression loss ($L$), we find a low-dimensional intrinsic embedding space in which $u_t$ (and $v_t$) can be represented as a function of just a few intrinsic diffusion
map coordinates.
After that, we search and find minimal subsets of the input data domain that can parametrize the selected intrinsic coordinates (e.g. $\phi_1,\phi_4,\phi_5$)
as quantified by a small total regression loss (see equation~\eqref{eqn:tloss}).
}
\label{fig:fdmap}
\end{figure*}
Alternatively, we employ a manifold learning technique to find the intrinsic representation of the coarse-scale PDE, and then examine the relation between these intrinsic coordinates and given input features (spatial field derivatives).
Specifically, Diffusion Maps will provide an intrinsic parametrization of the combined input-output data domain (here, $\{u_t,u,u_x,u_{xx},v,v_x,v_{xx} \}$ for $u$ and $\{v_t,u,u_x,u_{xx},v,v_x,v_{xx} \}$ for $v$).
Selecting leading intrinsic coordinates, we can then find the lowest-dimensional embedding space for the PDE manifold (the manifold embodying $u_t$ and $v_t$ as functions of the embedding intrinsic coordinates).
We then test several combinations of subsets of the input domain coordinates (spatial derivatives) as to their ability to parametrize the discovered intrinsic embedding coordinates.
Each such set of inputs that successfully parametrizes the intrinsic embedding coordinates provides us with a new possibility of learning a PDE formulation that describes the spatiotemporal dynamics of our observation data set.
In principle, any subset of intrinsic coordinates that successfully parametrizes the manifold can be used to learn functions on the manifold, and in particular $u_t$ and $v_t$.
The success of any particular subset of leading intrinsic coordinates in so describing $u_t$ and $v_t$ is confirmed through regression, via a mean-squared-error loss ($L$).
Next, we investigate which set of features of the input domain (which sets of spatial derivatives) can be best used to parametrize the intrinsic embedding (and thus learn the PDE right-hand-side).
One can find the subset of features from a user-defined dictionary (here spatial derivatives) to parametrize the intrinsic embedding coordinates through a linear Group LASSO~\cite{Meila18}.
In this paper, we examine several combinations of input domain variables, and find subsets that can minimally parametrize the intrinsic embedding; this is quantified through a total regression loss ($L_T$) based on a mean-squared-error as
\begin{equation}
\label{eqn:tloss}
L_T = \left(\sum_{j=1}^{d}L_{\phi_j}^2\right)^{\frac{1}{2}},
\end{equation}
where $L_{\phi_j}$ represents a regression loss for representing the intrinsic coordinate $\phi_j$ using selected input features and $d$ represents the number of intrinsic coordinates we chose.
ARD for Gaussian processes suggests the ``best" input domain subset in terms of which we will try and predict $u_t$ and $v_t$.
In the manifold learning context, we may find several different input subsets capable of parametrizing the manifold on which the observed behavior lies.
Different minimal parametrizing subsets will lead to different (but, in principle, on the data, equivalent) right-hand-sides for the PDE evolution. One expects that some of them will be ``better conditioned" (have better Lipschitz constants) than others.
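The feature-subset search can be organized as a simple loop over candidate subsets, scoring each subset by the total loss of equation~\eqref{eqn:tloss}. The sketch below uses an RBF kernel ridge regressor with a held-out split as a stand-in for the Gaussian process regression used in the paper; the toy data and all names are illustrative.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rbf_ridge_test_loss(X_tr, Z_tr, X_te, Z_te, reg=1e-6):
    """Held-out loss for predicting each intrinsic coordinate (column of Z)
    from the candidate features X (stand-in for GP regression)."""
    D2 = ((X_tr[:, None, :] - X_tr[None, :, :])**2).sum(-1)
    width = np.median(D2) + 1e-12
    coef = np.linalg.solve(np.exp(-D2 / width) + reg * np.eye(len(X_tr)), Z_tr)
    D2_te = ((X_te[:, None, :] - X_tr[None, :, :])**2).sum(-1)
    Z_hat = np.exp(-D2_te / width) @ coef
    return np.mean((Z_te - Z_hat)**2, axis=0)    # one loss per coordinate

def rank_subsets(F, names, Phi, subset_size):
    """Score every feature subset of a given size by the total loss L_T."""
    n_tr = len(F) // 2                           # simple half/half split
    scored = []
    for idx in combinations(range(F.shape[1]), subset_size):
        cols = list(idx)
        losses = rbf_ridge_test_loss(F[:n_tr][:, cols], Phi[:n_tr],
                                     F[n_tr:][:, cols], Phi[n_tr:])
        scored.append((np.sqrt(np.sum(losses**2)), [names[i] for i in cols]))
    return sorted(scored, key=lambda s: s[0])

# toy usage: the "intrinsic coordinates" depend only on the features u and v
rng = np.random.default_rng(0)
F = rng.uniform(-1, 1, size=(400, 4))            # columns: u, u_x, u_xx, v
Phi = np.c_[F[:, 0] + F[:, 3], F[:, 0] * F[:, 3]]
for L_T, subset in rank_subsets(F, ["u", "u_x", "u_xx", "v"], Phi, 2)[:3]:
    print(subset, L_T)
\end{verbatim}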
\section{Different scale simulators for one-dimensional reaction-diffusion systems}
\label{sec:simulation}
\subsection{Macro-scale simulator: FitzHugh-Nagumo model}
\label{sec:fhn}
\begin{figure}[!htp]
\centering
\subfigure[~$u$ by LBM]{
\includegraphics[scale=0.125]{u0.jpg}
}
\subfigure[~$u$ by FHN]{
\includegraphics[scale=0.125]{ufhn.jpg}
}
\subfigure[~$v$ by LBM]{
\includegraphics[scale=0.125]{v0.jpg}
}
\subfigure[~$v$ by FHN]{
\includegraphics[scale=0.125]{vfhn.jpg}
}
\subfigure[~Absolute difference for $u$]{
\includegraphics[scale=0.125]{uerrorfhn.jpg}
}
\subfigure[~Absolute difference for $v$]{
\includegraphics[scale=0.125]{verrorfhn.jpg}
}
\caption{Spatiotemporal behavior of $u$ and $v$ simulated by the Lattice-Boltzmann model and by the FitzHugh-Nagumo PDE.
(a) and (c): $u$ and $v$ from the Lattice Boltzmann model (LBM). (b) and (d): $u$ and $v$ from the FitzHugh-Nagumo PDE.
(e) and (f): Normalized absolute difference between the simulations of the two models.
}
\label{fig:origLBM}
\end{figure}
To describe a one-dimensional reaction-diffusion system that involves an activator $u$ and an inhibitor $v$,
the FitzHugh-Nagumo model consists of two coupled reaction-diffusion partial differential equations:
\begin{equation}
\label{eqn:fhn}
\begin{aligned}
\frac{\partial u}{\partial t} &= D^{u}\frac{\partial^2u}{\partial x^2} + u -u^3 - v,\\
\frac{\partial v}{\partial t} &= D^{v}\frac{\partial^2v}{\partial x^2} + \epsilon(u -a_1v - a_0),
\end{aligned}
\end{equation}
where $a_1$ and $a_0$ are model parameters, $\epsilon$ represents a kinetic bifurcation parameter, and $D^{u}$ and $D^{v}$ represent diffusion coefficients for $u$ and $v$, respectively.
Here, we set these parameters to $a_1=2$, $a_0=-0.03$, $\epsilon=0.01$, $D^{u}=1$, and $D^{v}=4$~\cite{Theodoropoulos00}.
We discretize a spatial domain on $[0, 20]$ with $\Delta x = 0.2$ and a time domain on $[0, 450]$ with $\Delta t=0.001$, respectively.
We impose homogeneous Neumann boundary conditions at both boundaries and solve these equations (for various initial conditions) numerically via the finite element method using the COMSOL Multiphysics\textregistered ~software \cite{COMSOL}.
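For reference, a simple explicit finite-difference integration of equations~\eqref{eqn:fhn} with a first-order treatment of the homogeneous Neumann boundary conditions is sketched below; the results reported here were obtained with the COMSOL finite element solver, so this snippet is only an illustrative stand-in.
\begin{verbatim}
import numpy as np

def fhn_step(u, v, dx, dt, Du=1.0, Dv=4.0, a1=2.0, a0=-0.03, eps=0.01):
    """One explicit Euler step of the FitzHugh-Nagumo PDE (Neumann BCs)."""
    ue, ve = np.pad(u, 1, mode="edge"), np.pad(v, 1, mode="edge")
    uxx = (ue[2:] - 2.0 * u + ue[:-2]) / dx**2
    vxx = (ve[2:] - 2.0 * v + ve[:-2]) / dx**2
    u_new = u + dt * (Du * uxx + u - u**3 - v)
    v_new = v + dt * (Dv * vxx + eps * (u - a1 * v - a0))
    return u_new, v_new

# toy usage: integrate from a smooth initial profile on [0, 20]
dx, dt = 0.2, 1e-3
x = np.linspace(0.0, 20.0, 101)
u, v = np.tanh(x - 10.0), 0.1 * np.ones_like(x)
for _ in range(int(10.0 / dt)):                  # integrate up to t = 10
    u, v = fhn_step(u, v, dx, dt)
print(u.min(), u.max())
\end{verbatim}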
\subsection{Micro-scale simulator: the Lattice Boltzmann model}
\label{sec:lbm}
We also introduce a Lattice Boltzmann model (LBM)~\cite{Chen98,Succi01}, which can be thought of as a mesoscopic numerical scheme for describing spatiotemporal dynamics using finite-difference-type discretizations of Boltzmann-BGK equations~\cite{Bhatnagar54}, retaining certain advantages of microscopic particle models.
In this paper, the Lattice Boltzmann model is our fine scale ``microscopic simulator" and its results are considered to be ``the truth" from which the coarse-scale PDE will be learned.
The time evolution of the particle distribution function on a given lattice can be described by
\begin{equation}
\label{eqn:lbm}
f^{l}_i(x_{j+i},t_{k+1}) = f^l_i(x_j,t_k) + \Omega^l_i(x_j,t_k) + R^l_i(x_j,t_k) \;\;\; l\in\{u, v\},
\end{equation}
where a superscript $l$ indicates the activator $u$ and the inhibitor $v$, and $\Omega^l_i$ represents a collision term defined by Bhatnagar-Gross-Krook (BGK)~\cite{Bhatnagar54}:
\begin{equation}
\Omega^l_i(x_j,t_k) = -\omega^l(f^l_i(x_j,t_k)-f_i^{l,eq}(x_j,t_k)),
\end{equation}
where $\omega^l$ represents a BGK relaxation coefficient defined as~\cite{Qian95}
\begin{equation}
\omega^l = \frac{2}{1+3D^l\frac{\Delta t}{\Delta x^2}}.
\end{equation}
To compute our coarse-scale observables $u$ and $v$, we employ the D1Q3 model, which uses three distribution functions on the one-dimensional lattice as $(f^l_{-1}, f^l_0, f^l_{1})$ for each density (totalling 6 distribution functions).
Through the zeroth moment (in the velocity directions) of the overall distribution function, finally we compute the coarse-scale observable $u$ and $v$ as
\begin{equation}
\label{eqn:concentration}
\begin{aligned}
u(x_j,t_k) &= \sum_{i=-1}^{1}f^u_i(x_j,t_k),\\
v(x_j,t_k) &= \sum_{i=-1}^{1}f^v_i(x_j,t_k).
\end{aligned}
\end{equation}
Based on spatially uniform local diffusion equilibrium, for which $f_i^{eq}$ is homogeneous in all velocity directions, the weights are chosen all equal to 1/3:
\begin{equation}
\begin{aligned}
f_i^{u,eq}(x_j,t_k) &= \frac{1}{3}u(x_j,t_k),\\
f_i^{v,eq}(x_j,t_k) &= \frac{1}{3}v(x_j,t_k).
\end{aligned}
\end{equation}
Thus, the reaction terms $R^l_i$ in equation~\eqref{eqn:lbm} are modeled by
\begin{equation}
\label{eqn:reaction}
\begin{aligned}
R_i^{u}(x_j,t_k) &= \frac{1}{3}\Delta t(u(x_j,t_k)-u(x_j,t_k)^3-v(x_j,t_k)),\\
R_i^{v}(x_j,t_k) &= \frac{1}{3}\Delta t \, \epsilon (u(x_j,t_k)-a_1 v(x_j,t_k)-a_0).
\end{aligned}
\end{equation}
All model parameters ($a_0, a_1, \epsilon, D^u, D^v$) are the same as in the FHN PDE.
The corresponding spatiotemporal behavior of these coarse observables $u$ and $v$ is shown in figures~\ref{fig:origLBM}(a) and (c) while the FHN PDE simulation for the same coarse initial conditions is shown in
figures~\ref{fig:origLBM}(b) and (d).
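A minimal NumPy transcription of the D1Q3 scheme of equations~\eqref{eqn:lbm} and \eqref{eqn:reaction} is sketched below; the boundary treatment (the boundary populations are simply left unchanged during streaming) is a simplification made for this sketch and not a detail of the simulator used for the reported results.
\begin{verbatim}
import numpy as np

def lbm_fhn(u0, v0, dx=0.2, dt=1e-3, steps=2000,
            Du=1.0, Dv=4.0, a1=2.0, a0=-0.03, eps=0.01):
    """D1Q3 Lattice Boltzmann integration of the FitzHugh-Nagumo kinetics."""
    wu = 2.0 / (1.0 + 3.0 * Du * dt / dx**2)     # BGK relaxation coefficients
    wv = 2.0 / (1.0 + 3.0 * Dv * dt / dx**2)
    fu = np.tile(u0 / 3.0, (3, 1))               # rows: directions i = -1, 0, +1
    fv = np.tile(v0 / 3.0, (3, 1))
    for _ in range(steps):
        u, v = fu.sum(axis=0), fv.sum(axis=0)
        Ru = dt / 3.0 * (u - u**3 - v)
        Rv = dt / 3.0 * eps * (u - a1 * v - a0)
        fu += -wu * (fu - u / 3.0) + Ru          # BGK collision + reaction
        fv += -wv * (fv - v / 3.0) + Rv
        for f in (fu, fv):                       # streaming of moving populations
            f[0, :-1] = f[0, 1:].copy()          # i = -1 streams left
            f[2, 1:]  = f[2, :-1].copy()         # i = +1 streams right
    return fu.sum(axis=0), fv.sum(axis=0)

# toy usage on the 99-point lattice used in this paper
x = np.linspace(0.2, 19.8, 99)
u, v = lbm_fhn(np.tanh(x - 10.0), 0.1 * np.ones_like(x))
print(u.shape, float(u.min()), float(u.max()))
\end{verbatim}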
\section{Results}
\label{sec:result}
\subsection{Learning without feature selection}
We begin by considering our proposed framework without feature selection, so as to later contrast with the results including feature selection.
\begin{figure}[!htp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.233\textwidth]{iniu.pdf} &
\includegraphics[width=0.233\textwidth]{iniv.pdf} \\
(a) &
(b)
\end{tabular}
\caption{Five different coarse initial conditions for training and a test coarse initial condition (colored in black). (a) Coarse initial conditions for $u$. (b) Coarse initial conditions for $v$. Five initial conditions are randomly chosen near the stable periodic solution.}
\label{fig:ini}
\end{figure}
The data come from the fine-scale Lattice Boltzmann simulation. For the parameter values selected, the long-term dynamics of the LB simulation lie, for all practical purposes, on a stable time-periodic solution.
To predict the coarse time derivatives $u_t$ and $v_t$, we collect training data from five different initial conditions near this stable periodic solution (see figure~\ref{fig:ini}) with the following LB spatiotemporal discretization -- in space, 99 discretized points on $[0.2, 19.8]$ with $dx = 0.2$; and in time, 451 discretized points on $[0, 450]$ with $dt=1$ for each initial condition.
Since our data come from the fine scale LB code, we need to initialize at the fine, LB scale of particle distribution functions (and not just of the concentrations $u$ and $v$).
To initialize the particle distribution functions in the Lattice Boltzmann model we apply the equal weights rule, $1/3$ for $f_{-1}$, $f_{0}$, and $f_{1}$, motivated by near-equilibrium considerations.
We do expect that such initialization features will soon be ``forgotten" as higher distribution moments become quickly slaved to the lower (here the zeroth) moments
(see for example~\cite{Van05}).
To ensure that our results are not affected by the initialization details, we only start collecting training data after relaxation by short time simulation (here, 2000 time steps with $\Delta t=0.001$ or $t=2$), see appendix~\ref{sec:heal}.
We estimate the local coarse fields and their (several) spatial and temporal derivatives through finite differences, and then apply machine learning algorithms (here Gaussian processes as well as neural networks) to learn the time derivatives of the activator $u_t$ and the inhibitor $v_t$ using as input variables the local $u$, $v$ and all their spatial derivatives up to and including order two ($u, u_x, u_{xx}, v, v_{x}, v_{xx}$).
\begin{equation}
\label{eqn:concentration1}
\begin{aligned}
u_t(x,t) &= f^u(u, u_x, u_{xx},v, v_x, v_{xx}),\\
v_t(x,t) &= f^v(u, u_x, u_{xx},v, v_x, v_{xx}).
\end{aligned}
\end{equation}
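A sketch of how the training pairs can be assembled from the sampled space-time fields by finite differences is given below (here with NumPy's \texttt{np.gradient}; the precise stencils used for the reported results may differ).
\begin{verbatim}
import numpy as np

def build_dataset(U, V, dx, dt):
    """Inputs (u, u_x, u_xx, v, v_x, v_xx) and targets (u_t, v_t) from
    space-time fields U[k, j] = u(x_j, t_k) and V[k, j] = v(x_j, t_k)."""
    u_t, v_t = np.gradient(U, dt, axis=0), np.gradient(V, dt, axis=0)
    u_x, v_x = np.gradient(U, dx, axis=1), np.gradient(V, dx, axis=1)
    u_xx, v_xx = np.gradient(u_x, dx, axis=1), np.gradient(v_x, dx, axis=1)
    X = np.stack([U, u_x, u_xx, V, v_x, v_xx], axis=-1).reshape(-1, 6)
    return X, u_t.ravel(), v_t.ravel()

# toy usage with synthetic traveling-wave fields sampled like the LB data
t = np.linspace(0, 450, 451)[:, None]
x = np.linspace(0.2, 19.8, 99)[None, :]
U = np.sin(0.3 * x - 0.01 * t)
V = 0.5 * np.cos(0.3 * x - 0.01 * t)
X, ut, vt = build_dataset(U, V, dx=0.2, dt=1.0)
print(X.shape, ut.shape, vt.shape)
\end{verbatim}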
\begin{figure}[!htp]
\centering
\subfigure[~$u_t$ by GP]{
\includegraphics[scale=0.17]{GPU_ANF.jpg}
}
\subfigure[~$v_t$ by GP]{
\includegraphics[scale=0.17]{GPV_ANF.jpg}
}
\subfigure[~$u_t$ by NN]{
\includegraphics[scale=0.17]{NNU_ANF.jpg}
}
\subfigure[~$v_t$ by NN]{
\includegraphics[scale=0.17]{NNV_ANF.jpg}
}
\caption{No feature selection: $u_t=f^u(u,u_{x},u_{xx},v,v_x,v_{xx})$ and $v_t=f^v(u,u_{x},u_{xx},v,v_x,v_{xx})$.
Regression results of the two methods for time derivatives: Gaussian processes (GP) and neural networks (NN).
}
\label{fig:derivative_nf}
\end{figure}
Specifically, for the neural networks approach, we build two different networks, one for the prediction of the activator and one for the inhibitor.
For both the activator and the inhibitor, we use two hidden layers consisting of 6 and 6 neurons with a hyperbolic tangent sigmoid activation function; as mentioned above, we use Levenberg-Marquardt optimization with a Bayesian regularization (see section~\ref{sec:ANN}).
Both networks use the mean-squared-error as their loss function.
For Gaussian processes, we employ a radial basis kernel function with ARD (see equation~\eqref{eqn:kernel}).
Regression results obtained by each of the two methods for the time derivatives in the training data set are shown in figure~\ref{fig:derivative_nf}.
Both methods provide good approximations of the target time derivatives $u_t$ and $v_t$.
Given the test coarse initial condition (black curves in figure~\ref{fig:ini}), simulation results {\em with the learned PDE} from $t=0$ to $t=450$ with $\Delta t=0.001$ and their normalized absolute differences from the ``ground truth" LB simulations are shown in figures~\ref{fig:nf}.
The order of magnitude of these absolute differences for both models is the same as those between the LB FHN and the explicitly known FHN PDE (see figures~\ref{fig:origLBM}(e) and (f)).
\begin{figure}[!htp]
\centering
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{ugp_ANF.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{unn_ANF.jpg}
}
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{vgp_ANF.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{vnn_ANF.jpg}
}
\subfigure[~Absolute difference for $u$ (GP)]{
\includegraphics[scale=0.125]{uerrorgp_ANF.jpg}
}
\subfigure[~Absolute difference for $u$ (NN)]{
\includegraphics[scale=0.125]{uerrornn_ANF.jpg}
}
\subfigure[~Absolute difference for $v$ (GP)]{
\includegraphics[scale=0.125]{verrorgp_ANF.jpg}
}
\subfigure[~Absolute difference for $v$ (NN)]{
\includegraphics[scale=0.125]{verrornn_ANF.jpg}
}
\caption{No feature selection: $u_t=f^u(u,u_{x},u_{xx},v,v_x,v_{xx})$ and $v_t=f^v(u,u_{x},u_{xx},v,v_x,v_{xx})$. (a)-(d): Simulation results of the two methods for $u$ and $v$. (e)-(h): The normalized absolute differences from the ``ground truth" LB simulations for $u$ and $v$.}
\label{fig:nf}
\end{figure}
\subsection{Learning with feature selection}
\label{sec:fs}
Now, we consider the possibility of feature selection, in an attempt to learn the RHS of coarse-scale PDEs with a minimal number of input domain variables (spatial derivatives).
First, we apply the sensitivity analysis via ARD in the case of Gaussian process approximation.
The optimal ARD weights ($\theta^*$) for $u_t$ and $v_t$ are tabulated in table~\ref{tab:ard}.
$u_t$ has three relatively small weights, for ($u, u_{xx}, v$), and $v_t$ also has three relatively small weights, for ($u, v, v_{xx}$).
It is interesting to observe that the selected features via ARD are the same as those in the explicitly known FHN PDE (see equation~\eqref{eqn:fhn}).
This shows that ARD can effectively guide in selecting the appropriate dimensionality of the input data domain, resulting here in the same spatial derivative choices as in the explicitly known FHN PDE.
\begin{table}[!htp]
\caption{\label{tab:ard} Optimal ARD weights ($\theta^*$ for $u_t$ and $v_t$ in equation~\eqref{eqn:opt}). As mentioned in section~\ref{sec:FS}, features which have relatively small ARD weights can be regarded as dominant features for the target functions $u_t$ and $v_t$.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
& $u$ & $u_x$ & $u_{xx}$ & $v$ & $v_x$ & $v_{xx}$ \\
\hline
$u_t$ & 5.28E+00 & 4.23E+06 & 9.13E+02 & 2.13E+03 & 5.32E+08 & 4.78E+07 \\
$v_t$ & 1.33E+02 & 6.69E+06 & 1.94E+06 & 5.09E+02 & 4.20E+06 & 1.75E+02\\
\end{tabular}
\end{ruledtabular}
\end{table}
Now, we use the reduced input data domain ($u, u_{xx}, v$) for $u_t$ and ($u, v, v_{xx}$) for $v_t$ to recover the RHS of the coarse-scale PDEs as
\begin{equation}
\label{eqn:f1}
\begin{aligned}
u_t(x,t) &= f_1^u(u, u_{xx},v),\\
v_t(x,t) &= f_1^v(u,v, v_{xx}).
\end{aligned}
\end{equation}
Regression results of our two methods for the time derivatives are shown in figure~\ref{fig:derivative_f1}.
\begin{figure}[!htp]
\centering
\subfigure[~$u_t$ by GP]{
\includegraphics[scale=0.17]{GPU_AF1.jpg}
}
\subfigure[~$v_t$ by GP]{
\includegraphics[scale=0.17]{GPV_AF1.jpg}
}
\subfigure[~$u_t$ by NN]{
\includegraphics[scale=0.17]{NNU_AF1.jpg}
}
\subfigure[~$v_t$ by NN]{
\includegraphics[scale=0.17]{NNV_AF1.jpg}
}
\caption{Feature selection 1: $u_t=f_1^u(u,u_{xx},v)$ and $v_t=f_1^v(u,v,v_{xx})$.
These selected variables are the same as those that appear in the right-hand-side of the explicitly known FHN PDE.
Regression results of the two methods for time derivatives: Gaussian processes (GP) and neural networks (NN).
}
\label{fig:derivative_f1}
\end{figure}
Results of long time simulation of the learned PDEs by each method, from $t=0$ to $t=450$, as well as normalized absolute differences from the simulation of the ``ground truth" LB are shown in figure~\ref{fig:f1}.
\begin{figure}[!htp]
\centering
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{ugp_AF1.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{unn_AF1.jpg}
}
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{vgp_AF1.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{vnn_AF1.jpg}
}
\subfigure[~Absolute difference for $u$ (GP)]{
\includegraphics[scale=0.125]{uerrorgp_AF1.jpg}
}
\subfigure[~Absolute difference for $u$ (NN)]{
\includegraphics[scale=0.125]{uerrornn_AF1.jpg}
}
\subfigure[~Absolute difference for $v$ (GP)]{
\includegraphics[scale=0.125]{verrorgp_AF1.jpg}
}
\subfigure[~Absolute difference for $v$ (NN)]{
\includegraphics[scale=0.125]{verrornn_AF1.jpg}
}
\caption{Feature selection 1: $u_t=f^u(u,u_{xx},v)$ and $v_t=f^v(u,v,v_{xx})$.
(a)-(d): Simulation results of the two methods for $u$ and $v$. (e)-(h): The normalized absolute differences from the ``ground truth" LB simulations for $u$ and $v$.
}
\label{fig:f1}
\end{figure}
The two machine learning methods operating with a reduced input data domain can still provide good approximations of the time derivatives and of the resulting dynamics.
The order of magnitude of these absolute differences is effectively the same as the difference of the FHN LB from the explicitly known FHN PDE.
It is, therefore, clear that our framework effectively recovers the coarse-scale PDE from fine scale observation data; the difference is that the right hand-side of the PDE is now given in terms of the ANN right-hand-side, or in terms of the observed data and the GP kernel/hyperparameters, rather than the simple algebraic formula of equation \eqref{eqn:fhn}.
\begin{table}[!htp]
\caption{The best candidates and the corresponding regression loss (L) for $u_t$ and $v_t$ with respect to the number of Diffusion map coordinates}
\label{tab:dimu}
\begin{ruledtabular}
\begin{tabular}{cllcc}
& \multicolumn{2}{l}{Optimal intrinsic coordinates} &\multicolumn{2}{l}{Regression Loss (L)} \\
& $u_t$ & $v_t$ & $u_t$ & $v_t$ \\ \hline
1d & ($\phi^u_5$) & ($\phi^v_2$) & 4.60E-04& 7.69E-06 \\
2d & ($\phi^u_1,\phi^u_5$) & ($\phi^v_1,\phi^v_2$) &1.40E-06 &1.50E-06\\
3d & ($\phi^u_1,\phi^u_4,\phi^u_5$) &($\phi^v_1,\phi^v_2,\phi^v_3$) &2.18E-08 &4.74E-08\\
4d & ($\phi^u_1,\phi^u_3,\phi^u_4,\phi^u_5$) & ($\phi^v_1,\phi^v_2,\phi^v_3,\phi^v_4$) &1.64E-08 &5.71E-09\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[!htp]
\caption{The best candidates and the corresponding total loss for $u_t=f_{ud}(\phi^u_1,\phi^u_4,\phi^u_5)$ and $v_t=f_{vd}(\phi^v_1,\phi^v_2,\phi^v_3)$ with respect to the number of features.}
\label{tab:f1u}
\begin{ruledtabular}
\begin{tabular}{clclc}
& \multicolumn{2}{c}{$u_t=f_{ud}(\phi^u_1,\phi^u_4,\phi^u_5)$} &\multicolumn{2}{c}{$v_t=f_{vd}(\phi^v_1,\phi^v_2,\phi^v_3)$} \\
& Features & Total Loss ($L_T$) & Features & Total Loss ($L_T$) \\ \hline
1d & ($u$) & 6.51E-05 & ($u$) & 7.93E-05 \\
2d & ($u,v$) & 1.65E-08 &($u,v$) &1.49E-05 \\
3d & ($u, u_{xx}, v$) &6.52E-09 &($u, v, v_{xx}$) &3.32E-07\\
& ($u, u_{x}, v$) &7.39E-09 &($u, u_{x}, v_{xx}$) &6.21E-07\\
4d & ($u, u_{x},u_{xx}, v$) & 2.68E-09 &($u, v, v_{x}, v_{xx}$)& 4.47E-09\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[!htp]
\centering
\subfigure[~$u_t = f_{ud}(\phi^u_1, \phi^u_4, \phi^u_5)$]{
\includegraphics[scale=0.2]{phiu.jpg}
}
\subfigure[~$v_t = f_{vd}(\phi^v_1, \phi^v_2, \phi^v_3)$]{
\includegraphics[scale=0.2]{phiv.jpg}
}
\caption{Three leading Diffusion map coordinates: Colors represent $u_t$ in (a) and $v_t$ in (b).}
\label{fig:dmap}
\end{figure}
Next, we consider an alternative approach for feature selection, via our manifold learning technique, Diffusion Maps.
The best candidate sets among different combinations of intrinsic coordinates (varying the number of leading intrinsic dimensions and recording the corresponding Gaussian process regression loss) are shown in table~\ref{tab:dimu}.
Since this three-dimensional intrinsic embedding space exhibits a (tiny) regression loss of order $10^{-8}$, we choose an input domain for $u_t$ consisting of ($\phi^u_1,\phi^u_4,\phi^u_5$) as shown in figure~\ref{fig:dmap}(a).
For $v_t$, by the same token, we choose the three-dimensional embedding space consisting of ($\phi^v_1,\phi^v_2,\phi^v_3$) as shown in figure~\ref{fig:dmap}(b).
Based on these identified intrinsic embedding spaces, we examined several subsets of input domain features (spatial derivatives) using the total loss of equation~\eqref{eqn:tloss}.
``Good" subsets of input features (those that result in small regression losses with minimal input dimension)
are presented in table~\ref{tab:f1u}.
Clearly, different choices of such input feature subsets can give comparable total losses; this suggests that we may construct different right-hand-sides of the unknown coarse-scale PDE that are comparably successful in representing the observed dynamics.
The good candidates for $u_t$ and $v_t$ identified this way, consisting of three input features, are ($u, u_{xx}, v$) and ($u, v, v_{xx}$); they are the same as those found from GP via ARD, and also the same as the ones in the explicitly known FHN PDE.
Interestingly, another possible alternative candidate set is also identified: ($u, u_{x}, v$) for $u_t$ and ($u, u_{x}, v_{xx}$) for $v_t$.
\begin{figure}[!htp]
\centering
\subfigure[~$u_t$ by GP]{
\includegraphics[scale=0.17]{GPU_AF2.jpg}
}
\subfigure[~$v_t$ by GP]{
\includegraphics[scale=0.17]{GPV_AF2.jpg}
}
\subfigure[~$u_t$ by NN]{
\includegraphics[scale=0.17]{NNU_AF2.jpg}
}
\subfigure[~$v_t$ by NN]{
\includegraphics[scale=0.17]{NNV_AF2.jpg}
}
\caption{Feature selection 2: $u_t=f_2^u(u,u_{x},v)$ and $v_t=f_2^v(u,v,v_{xx})$.
Regression results of the two methods for time derivatives: Gaussian processes (GP) and neural networks (NN).
}
\label{fig:derivative_f2}
\end{figure}
\begin{figure}[!htp]
\centering
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{ugp_AF2.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{unn_AF2.jpg}
}
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{vgp_AF2.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{vnn_AF2.jpg}
}
\subfigure[~Absolute difference for $u$ (GP)]{
\includegraphics[scale=0.125]{uerrorgp_AF2.jpg}
}
\subfigure[~Absolute difference for $u$ (NN)]{
\includegraphics[scale=0.125]{uerrornn_AF2.jpg}
}
\subfigure[~Absolute difference for $v$ (GP)]{
\includegraphics[scale=0.125]{verrorgp_AF2.jpg}
}
\subfigure[~Absolute difference for $v$ (NN)]{
\includegraphics[scale=0.125]{verrornn_AF2.jpg}
}
\caption{Feature selection 2: $u_t=f^u(u,u_{x},v)$ and $v_t=f^v(u,v,v_{xx})$.
(a)-(d): Simulation results of the two methods for $u$ and $v$. (e)-(h): The normalized absolute differences from the ``ground truth" LB simulations for $u$ and $v$.
}
\label{fig:f2}
\end{figure}
Using these alternative candidate feature sets, we model different ``versions" of what, on the data, is effectively the same coarse-scale PDE.
The ``alternative" version of the PDE can be symbolically written as
\begin{equation}
\label{eqn:f2}
\begin{aligned}
u_t(x,t) &= f_2^u(u, u_{x},v),\\
v_t(x,t) &= f_2^v(u,v, v_{xx}),
\end{aligned}
\end{equation}
and the corresponding regression results of the time derivatives are shown in figure~\ref{fig:derivative_f2}.
Specifically, we use the first spatial derivative $u_x$ instead of the second spatial derivative $u_{xx}$ for learning $u_t$.
As shown in figure~\ref{fig:f2}, both models provide good predictions of the ``ground truth" LB simulations;
we observe, however, that the accuracy of the neural network based predictions is enhanced.
These results confirm that, on the data, alternative coarse-scale PDE forms can provide successful macroscopic description.
To further explore this possibility of alternative PDE forms that represent the observed data with {\em qualitatively comparable accuracy}, we also explored the efficacy of a third coarse-scale PDE description, in terms of a yet different input feature set:
$(u, u_{xx},v)$ for $u_t$ and $(u,u_{x}, v_{xx})$ for $v_t$, so that the PDE can symbolically be written as
\begin{equation}
\label{eqn:f3}
\begin{aligned}
u_t(x,t) &= f_3^u(u, u_{xx},v),\\
v_t(x,t) &= f_3^v(u,u_{x}, v_{xx}).
\end{aligned}
\end{equation}
The corresponding prediction results of the time derivatives are shown in figure~\ref{fig:derivative_f3}.
\begin{figure}[!htp]
\centering
\subfigure[~$u_t$ by GP]{
\includegraphics[scale=0.17]{GPU_AF3.jpg}
}
\subfigure[~$v_t$ by GP]{
\includegraphics[scale=0.17]{GPV_AF3.jpg}
}
\subfigure[~$u_t$ by NN]{
\includegraphics[scale=0.17]{NNU_AF3.jpg}
}
\subfigure[~$v_t$ by NN]{
\includegraphics[scale=0.17]{NNV_AF3.jpg}
}
\caption{Feature selection 3: $u_t=f_3^u(u,u_{xx},v)$ and $v_t=f_3^v(u,u_{x},v_{xx})$.
Regression results of the two methods for time derivatives: Gaussian processes (GP) and neural networks (NN).
}
\label{fig:derivative_f3}
\end{figure}
As shown in figure~\ref{fig:derivative_f3}, both regression methods provide an inaccurate approximation of $v_t$ near $v_t=0$; the order of magnitude of this error is $10^{-3}$.
The long term prediction results are not as accurate representations of the ground truth LB simulation as the previous two coarse-scale PDE realizations; yet they may still be qualitatively informative.
Normalized absolute differences of long-time simulation for both machine learning methods are shown in figure~\ref{fig:f3}.
As was the case in the previous alternative PDE realizations, the NN model appears more accurate than the GP one.
\begin{figure}[!htp]
\centering
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{ugp_AF3.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{unn_AF3.jpg}
}
\subfigure[~Gaussian processes]{
\includegraphics[scale=0.125]{vgp_AF3.jpg}
}
\subfigure[~Neural networks]{
\includegraphics[scale=0.125]{vnn_AF3.jpg}
}
\subfigure[~Absolute difference for $u$ (GP)]{
\includegraphics[scale=0.125]{uerrorgp_AF3.jpg}
}
\subfigure[~Absolute difference for $u$ (NN)]{
\includegraphics[scale=0.125]{uerrornn_AF3.jpg}
}
\subfigure[~Absolute difference for $v$ (GP)]{
\includegraphics[scale=0.125]{verrorgp_AF3.jpg}
}
\subfigure[~Absolute difference for $v$ (NN)]{
\includegraphics[scale=0.125]{verrornn_AF3.jpg}
}
\caption{Feature selection 3: $u_t=f^u(u,u_{xx},v)$ and $v_t=f^v(u,u_{x},v_{xx})$.
(a)-(d): Simulation results of the two methods for $u$ and $v$. (e)-(h): The normalized absolute differences from the ``ground truth" LB simulations for $u$ and $v$.
}
\label{fig:f3}
\end{figure}
To compare our identified coarse-scale PDEs with the explicitly known FHN PDE (see equations~\eqref{eqn:fhn}), we quantify the discrepancy between their predictions through the mean normalized absolute difference (MNAD), computed for the test coarse initial condition followed from $t=0$ to $t=450$ as
\begin{equation}
\label{eqn:mnae}
\begin{aligned}
\mathrm{MNAD_u} &= \frac{1}{N_T}\sum_{i=1}^{99}\sum_{j=0}^{450} \frac{|u(i,j)-u_f(i,j)|}{\max(u_f)-\min(u_f)},\\
\mathrm{MNAD_v} &= \frac{1}{N_T}\sum_{i=1}^{99}\sum_{j=0}^{450} \frac{|v(i,j)-v_f(i,j)|}{\max(v_f)-\min(v_f)},
\end{aligned}
\end{equation}
where $N_T$ is the total number of data points, and $u_f$ and $v_f$ represent the simulation results of the FHN PDE for $u$ and $v$, respectively.
The comparison of these representative simulations of our various coarse-scale PDEs is summarized in table~\ref{tab:mae}.
The differences across our various coarse-scale identified PDEs are of order $10^{-2}$ and below, comparable to the difference between each of them and the FHN PDE.
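The MNAD of equation~\eqref{eqn:mnae} amounts to the following short computation (illustrative sketch).
\begin{verbatim}
import numpy as np

def mnad(U, U_ref):
    """Mean normalized absolute difference between two space-time fields."""
    return np.mean(np.abs(U - U_ref)) / (U_ref.max() - U_ref.min())

# toy usage
rng = np.random.default_rng(0)
U_ref = rng.random((451, 99))
print(mnad(U_ref + 0.01 * rng.standard_normal(U_ref.shape), U_ref))
\end{verbatim}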
\begin{table}[!htp]
\caption{Mean normalized absolute difference (MNAD) for different coarse-scale PDEs. `GP' and `NN' represent `Gaussian processes' and `Neural networks', respectively.} \label{tab:mae}
\begin{ruledtabular}
\begin{tabular}{lcc}
& $\mathrm{MNAD}_u$ & $\mathrm{MNAD}_v$\\ \hline
No Feature selection with GP & 1.59E-02 & 1.62E-02 \\
No Feature selection with NN & 1.53E-02 & 1.56E-02 \\
Feature selection 1 with GP & 1.58E-02 & 1.62E-02 \\
Feature selection 1 with NN & 1.54E-02 & 1.57E-02 \\
Feature selection 2 with GP & 2.39E-02 & 2.20E-02 \\
Feature selection 2 with NN & 2.00E-02 & 2.11E-02 \\
Feature selection 3 with GP & 3.20E-02 & 3.31E-02 \\
Feature selection 3 with NN & 2.08E-02 & 2.16E-02
\end{tabular}
\end{ruledtabular}
\end{table}
Specifically, `feature selection 1' (figure~\ref{fig:f1}), whose variables are the same as those of the explicit FHN PDE, provides the best PDE realization via {\em both} the GP and the NN models.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we demonstrated the data-driven discovery of macroscopic, concentration-level PDEs for reaction/transport processes resulting from fine-scale observations (here, from simulations of a Lattice Boltzmann mesoscopic model).
Long-term macroscopic prediction is then obtained by simulation of the identified (via machine-learning methods) coarse-scale PDE.
We explored the effect of input feature selection capability on the effectiveness of our framework to identify the underlying macroscopic PDEs.
Our framework suggests four different PDEs (one without and three with feature selection), all comparable with the explicit FitzHugh-Nagumo PDE {\em on the data}: all of them provide good approximations of sample coarse-scale dynamic trajectories.
The FHN PDE terms have a well-established mechanistic physical meaning (reaction and diffusion); it would be interesting to explore if any physical meaning can be ascribed to our alternative parametrizations of the right-hand-side of the coarse-scale evolution PDE.
Clearly, the identified PDE depends on our choice of observables - for example, our Diffusion Map embedding coordinates.
We plan to explore the use of kernel-based embeddings (as discussed in Ref.~\cite{Bittracher19} mentioned above) as an approach that can control the well-posedness of the embedding and the distortion of the resulting manifold; this will clearly affect the identified PDE, and it will be interesting to study the interplay between differently distorted embedding manifolds and different identified approximate PDEs.
In our framework, we employed finite differences to estimate spatial derivatives in the formulation of the PDE.
Instead of numerical spatial derivatives, we may use the values of coarse observables at neighboring points directly to uncover the coarse evolution law.
The effect of this alternative embedding for the PDE right-hand-side, explored in Ref.~\cite{Bar19}, on the accuracy of the identified model predictions, is the subject of ongoing research.
We believe that the framework we presented is easily generalizable to multiscale and/or multifidelity data.
Here we worked across a single scale gap and a single fine-scale simulator providing the ground truth.
We envision that data fusion tools can be combined with the approach to exploit data at several cascaded scales, taking advantage of simulation data from several heterogeneous simulators~\cite{Lee17D,Lee19}.
\begin{acknowledgments}
S. Lee, M. Kooshkbaghi, and I. G. Kevrekidis gratefully acknowledge partial support by NIH and by DARPA. This material is also based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF1710306.
Discussions of the authors with Dr. Felix Dietrich are gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}\label{sec:introduction}
The $s$-$t$ maximum flow problem and its dual, the $s$-$t$ minimum cut on graphs are amongst the most fundamental problems in combinatorial optimization with a wide range of applications. Furthermore, they serve as a testbed for new algorithmic concepts which have found uses in other areas of theoretical computer science and optimization. This is because the max-flow and min-cut problems demonstrate the prototypical primal-dual relation in linear programs. In the well-known $s$-$t$ maximum flow problem we are given a graph $G=(V,E)$ with $m$ edges and $n$ vertices with edge capacities $u_e \leq U$, and aim to route as much flow as possible from $s$ to $t$ while restricting the magnitude of the flow on each edge to its capacity.\\
Several decades of work in combinatorial algorithms for this problem led to a large set of results culminating in the work of Goldberg-Rao \cite{GR98} which gives a running time bound of $O(m \min\{m^{1/2},n^{2/3}\}\log(\frac{n^2}{m})\log U)$. This bound remained unimproved for many years. In a breakthrough paper, Christiano et al.\ \cite{CKMST11} show how to compute approximate maximum flows in $\widetilde{O}(mn^{1/3}\log(U)\mathsf{poly}(1/\varepsilon))$ time. Their new approach uses electrical flow computations (Laplacian linear system solves, which can be performed in nearly-linear time \cite{ST14}) to take steps which minimize a softmax approximation of the congestion of the edges via a second order approximation. A straightforward analysis leads to an $O(\sqrt{m})$ iteration algorithm. However, they present an insight by trading off against another potential function and show that $O(m^{1/3})$ iterations suffice. This work led to an extensive line of work exploiting Laplacian system solving and continuous optimization techniques for faster max flow algorithms. Lee et al. \cite{LRS13} present another $O(n^{1/3}\poly(1/\varepsilon))$ iteration algorithm for unit-capacity graphs, also using electrical flow primitives. Finally, Kelner et al. \cite{KLOS14} and Sherman \cite{Sherman13,Sherman17a} present algorithms achieving an $O(m^{o(1)}\poly(1/\varepsilon))$ iteration complexity for max-flow and its variants, which are based on congestion approximators and oblivious routing schemes as opposed to electrical flow computations. This has now been improved to near linear time \cite{Peng16,Sherman17b}. Crucially, this line of work can only guarantee weak approximations to max flow due to the $\poly(1/\varepsilon)$ in the iteration complexity.\\
In order to get highly accurate solutions which depend only polylogarithmically on $1/\varepsilon$, work has relied on second-order optimization techniques which use first and second-order information (the Hessian of the optimization function). To solve the max flow problem to high accuracy, several works have used interior point methods (IPMs) for linear programming \cite{NN94,Ren01}. These algorithms handle non-negativity/$\ell_\infty$ constraints by approximating them by a \textit{self-concordant} barrier, an approximation to an indicator function of the set which satisfies local smoothness and strong convexity properties and hence can be optimized using Newton's method. In particular, Daitch and Spielman \cite{DS08} show how to combine standard path-following IPMs and Laplacian linear system solves to obtain an $\widetilde{O}(m\sqrt{m}\log (U/\varepsilon))$ running time, matching Goldberg and Rao up to logarithmic factors. The $O(\sqrt{m})$ iteration count is a crucial bottleneck here due to the $\ell_\infty$ norm being approximated by the $\ell_2$ norm up to a factor of $\sqrt{m}$. Then Lee and Sidford \cite{LS14} devised a faster IPM using weighted logarithmic barriers to achieve an $\widetilde{O}(m\sqrt{n}\log(U/\varepsilon))$ time algorithm. Madry \cite{M13,M16} opened up the weighted barriers based IPM algorithms for max flow to show that instead of the $\ell_2$ norm governing the progress of each iteration, one can actually make progress only maintaining bounds on the $\ell_4$ norm. Combining this with insights from \cite{CKMST11}, by using another potential function, which again depends on the energy of the next flow step, and carefully tuning the weights in the barriers, he achieved an $\widetilde{O}(m^{3/7})$ iteration algorithm which leads to an $\widetilde{O}(m^{11/7}U^{1/7}\log(m/\varepsilon))$ running time. Note that the algorithm depends polynomially on the maximum edge capacity $U$ and hence is mainly an improvement for mildly large edge capacities. This work can also be used to solve min cost flow problems in the same running time \cite{CMSV17}.\\
Another line of work beyond IPMs is to solve $p$-norm regression problems on graphs. Such problems interpolate between electrical flow problems ($p=2$), maximum flow problems ($p=\infty$) and transshipment problems ($p=1$). While these problems can also be solved in $O(\sqrt{m})$ iterations to high accuracy using IPMs \cite{NN94}, it was unclear if this iteration complexity could be improved depending on the value of $p$. Bubeck et al. \cite{BCLL18} showed that for any self-concordant barrier for the $\ell_p$ ball, the iteration complexity has to be at least $\Omega(\sqrt{m})$, thus making progress using IPMs unlikely. They however showed that another \textit{homotopy-based} method, of which IPMs are also a part, can be used to solve the problem in $\widetilde{O}_p(m^{\frac{1}{2}-\frac{1}{p}}\log(1/\varepsilon))$ iterations, where $O_p$ hides dependencies on $p$ in the runtime. This leads to improvements on the runtime for constant values of $p$. Next, Adil et al. \cite{AKPS19}, inspired by the work of \cite{BCLL18}, showed that one can measure the change in the $p$-norm using a second order term based on a different function, which allows them to obtain approximations to the $p$-norm function in different norms with strong condition number. These results can be viewed in the framework of relative convexity \cite{LFN18}. Thus, they can focus on just solving the optimization problem arising from the residual. Using insights from~\cite{CKMST11}, they arrive at an $\widetilde{O}_p(m^{4/3}\log(1/\varepsilon))$-time algorithm. Then follow-up work by Kyng et al. \cite{KPSW19} opened up the tools used by Spielman and Teng \cite{ST14} for $\ell_2$-norm flow problems to show that one can construct strong preconditioners for the residual problems for mixed $\ell_2$-$\ell_p$-norm flow problems, a generalization of $\ell_p$-norm flows, and obtain an $\widetilde{O}_p(m^{1+o(1)}\log(1/\varepsilon))$-time algorithm. These results, however, do not lead to faster max flow algorithms due to their large dependence on $p$.\\
However, Liu and Sidford \cite{LS20}, improving on Madry \cite{M16}, showed that instead of carefully tuning the weights based on the electrical energy, one can consider the separate problem of finding a new set of weights under a certain budget constraint to maximize the energy. They showed that a version of this problem reduces to solving $\ell_2$-$\ell_p$ norm flow problems and hence can be solved in almost-linear time using the work of \cite{KPSW19,AS20}. This leads to an $O(m^{11/8+o(1)}U^{1/4})$-time algorithm for max flow. However, this result still relies on the amount of progress one can make in each iteration being limited to the bounds one can ensure on the $\ell_4$ norm of the congestion vector, as opposed to the ideal $\ell_\infty$ norm. We remark here that there are IPMs for linear programming which only measure centrality in the $\ell_\infty$ norm as opposed to the $\ell_2$ or $\ell_4$ norm. In particular, \cite{CLS19,LSZ19,BLSS20} show how to take a step with respect to a softmax function of the duality gap and trace the central path only maintaining $\ell_\infty$ norm bounds. \cite{Tuncel95,Tuncel94} also designed potential reduction based IPMs which trace the central path only maintaining centrality in $\ell_\infty$.
\subsection{Our Contribution}
In this paper, we devise a faster interior point method for $s$-$t$ maximum flow in directed graphs. Precisely, our algorithm runs in time $\widetilde{O}(m^{4/3+o(1)}U)$. During the process of writing this paper, we were informed by Yang Liu and Aaron Sidford \cite{LS20b} that they have also obtained an algorithm achieving the same runtime. They also end up solving the same subproblems that we will end up solving, although they arrive at them from the perspective of considering the Bregman divergence of the barrier as opposed to considering the potential function that is the inspiration for our work. Our algorithm builds on top of both Madry \cite{M16} and Liu-Sidford \cite{LS20} and is arguably simpler than both in some regards. \\
In particular, our algorithm is based on potential reduction algorithms, which are a class of interior point methods for linear programs. These algorithms are based on a potential function which both measures the duality gap and accounts for closeness to the boundary via a barrier function. The algorithms differ from path-following IPMs in that they need not follow the central path closely but may trace it only loosely, which is also experimentally observed. Usually, the step taken is a scaled gradient step/Newton step on the potential function. Provided that we can guarantee sufficient decrease of the potential function and relate the potential function to closeness to optimality, we can show convergence. We refer to \cite{Ans96,Todd96,NN94} for excellent introductions to potential reduction IPMs. \\
We will however use a different step; instead of a Newton step, we consider taking the step, subject to augmenting a certain amount of flow in each iteration, which maximizes the decrease in the potential function after taking the step. We then show that this optimization problem can be efficiently solved in $\widetilde{O}(m)$ time using electrical flow computations. While we can show that the potential function decreases by a large amount, which guarantees that we can solve the max flow problem in $O(\sqrt{m})$ iterations, we forego writing it in this manner as we are unable to argue such a statement when the weights, and hence the potential function, are also changed. Instead, we stick to keeping track of the centrality of our flow vector while making sufficient progress. Crucially however, the amount of progress made by our algorithm only depends on bounds on the $\ell_\infty$ norm of the congestion vector of the update step rather than the traditional $\ell_2$ or $\ell_4$ norm bounds in \cite{M16,LS20}. In order to improve the iteration complexity by obtaining stronger bounds on the $\ell_\infty$ norm of the congestion vector, we show that, like in Liu-Sidford \cite{LS20}, we can change the weights on the barrier term for each edge. Instead of using energy as a potential function to be maximized, inspired by oracles designed for multiplicative weights algorithms, we use the change in the potential function itself as the quantity to be maximized subject to an $\ell_1$ budget constraint on the change in weights. While we are unaware of how to maximize the $\ell_1$ constrained problem, we relax it to an $\ell_q$ constrained problem, which we solve using a mixed $\ell_2$-$\ell_p$ norm flow problem using the work of \cite{KPSW19,AS20}. Combining this with an application of H\"{o}lder's inequality gives us sufficiently good control on the $\ell_1$-norm of the weight change while ensuring that our step has significantly better $\ell_\infty$ norm bounds on the congestion vector. We believe our potential reduction framework as well as the concept of changing weights based on the update step might be useful in designing faster algorithms for max flow beyond our $m^{4/3}$ running time.
\section{Preliminaries}\label{sec:prelims}
Throughout this paper, we will view graphs as having both forward and backward capacities. Specifically, we will denote by $G=(V,E,\bf{u})$, a directed graph with vertex set $V$ of size $n$, an edge set $E$ of size $m$, and two non-negative capacities $u_e^-$ and $u_e^+$ for each edge $e\in E$. For the purpose of this paper, all edge capacities are bounded by $U=1$. Each edge $e=(u,v)$ has a head vertex $u$ and a tail vertex $v$. For a vector $v \in \mathbb R^m$, we define $\|v\|_p=(\sum\limits_{i=1}^{m}|v_i|^p)^{1/p}$ and $\|v\|_\infty = \max\limits_{i=1}^{m}|v_i|$ and refer to $\text{Diag}(v)\in\mathbb R^{m \times m}$ as the diagonal matrix with the $i^{th}$ diagonal entry equal to $v_i$.
\textbf{Maximum Flow Problem} Given a graph $G$, we call any assignment of real values to the edges of $E$, i.e., $f\in\mathbb R^m$, a flow. For a flow vector $f$, we view $f_e$ as the amount of the flow on edge $e$, and if this value is negative, we interpret it as having a flow of $|f_e|$ flowing in the direction opposite to the edge's orientation. We say that a flow $f$ is a $\sigma$-flow, for some demands $\sigma\in\mathbb R^n$, iff it satisfies \textit{flow conservation constraints} with respect to those demands. That is, we have
\begin{align*}
\sum\limits_{e \in E^+(v)}f_e-\sum\limits_{e \in E^-(v)}f_e = \sigma_v \ \text{for every vertex } v \in V
\end{align*}
where $E^+(v)$ and $E^-(v)$ is the set of edges of $G$ that are entering and leaving vertex $v$ respectively. We will require $\sum\limits_{v \in V} \sigma_v=0$.
Furthermore, we say that a $\sigma$-flow $f$ is feasible in $G$ iff $f$ satisfies the capacity constraints
\begin{align*}
-u_e^-\leq f_e \leq u_e^+ \ \text{for each edge } e \in E
\end{align*}
One type of flow that will be of interest to us is the $s$-$t$ flow, where $s$ (the \textit{source}) and $t$ (the \textit{sink}) are two distinguished vertices of $G$. Formally, an $s$-$t$ flow is a $\sigma$-flow whose demand vector is $\sigma=F\chi_{s,t}$, where $F$ is the value of the flow and $\chi_{s,t}$ is a vector with $-1$ and $+1$ at the coordinates corresponding to $s$ and $t$ respectively and zero elsewhere.
Now, the maximum flow problem corresponds to the problem in which we are given a directed graph $G=(V,E,u)$ with integer capacities as well as a source vertex $s$ and a sink vertex $t$, and want to find a feasible $s$-$t$ flow of maximum value. We will denote this maximum value by $F^*$.
\textbf{Residual Graphs} A fundamental object in many maximum flow algorithms is the notion of a residual graph. Given a graph $G$ and a feasible $\sigma$-flow $f$ in that graph, we define the \textit{residual graph} $G_f=(V,E,\hat{u}(f))$ as a graph over the same vertex and edge set as $G$ such that, for each edge $e=(u,v)$, its forward and backward residual capacities are defined as
\begin{align*}
\hat{u}^+_e(f)=u_e^+ - f_e \text{ and } \hat{u}^-_e(f)=u_e^- + f_e
\end{align*}
We will also denote $\hat{u}_e(f) = \min\{\hat{u}^+_e(f),\hat{u}^-_e(f)\}$. When the value of $f$ is clear from context, we will omit writing it explicitly. Observe that the feasibility of $f$ implies that all residual capacities are always non-negative.
\textbf{Electrical Flows and Laplacian Systems} Let $G$ be a graph and let $r\in\mathbb R^m_{++}$ be a vector of edge resistances, where the resistance of edge $e$ is denoted by $r_e$. For a flow $f \in \mathbb R^E$ on $G$, we define the energy of $f$ to be $\mathcal{E}_r(f) = f^\top R f = \sum\limits_{e \in E} r_e f_e^2$ where $R = \text{Diag}(r)$. For a demand $\chi$, we define the electrical $\chi$-flow $f_r$ to be the $\chi$-flow which minimizes the energy, $f_r = \arg\min\limits_{B^\top f=\chi} \mathcal{E}_r(f)$, where $B\in \mathbb R^{m \times n}$ is the edge-vertex incidence matrix. This flow is unique as the energy is a strictly convex function.
The Laplacian of a graph $G$ with resistances $r$ is defined as $L=B^\top R^{-1} B$. The electrical $\chi$-flow is given by the formula $f_r = R^{-1}BL^{\dagger}\chi$. We also define the electrical potentials as $\phi=L^{\dagger}\chi$. There is a long line of work starting from Spielman and Teng which shows how to solve $L\phi = \chi$ in nearly linear time \cite{ST14,KMP14,KOSZ13,PS14,CKMPPRX14,KS16,KLPSS16}.
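For illustration, the following dense linear-algebra sketch computes an electrical $\chi$-flow directly from the formula $f_r = R^{-1}BL^{\dagger}\chi$, using a pseudo-inverse in place of the nearly-linear-time Laplacian solvers cited above; the orientation convention for $B$ and the toy instance are illustrative choices.
\begin{verbatim}
import numpy as np

def electrical_flow(edges, r, chi):
    """Electrical chi-flow on a graph given as a list of directed edges (a, b)
    with resistances r; flow f_e > 0 means flow from a to b."""
    n = max(max(e) for e in edges) + 1
    B = np.zeros((len(edges), n))             # edge-vertex incidence matrix
    for e, (a, b) in enumerate(edges):
        B[e, a], B[e, b] = -1.0, 1.0
    R_inv = np.diag(1.0 / r)
    L = B.T @ R_inv @ B                       # graph Laplacian
    phi = np.linalg.pinv(L) @ chi             # electrical potentials
    return R_inv @ B @ phi                    # f_r = R^{-1} B L^+ chi

# toy usage: route one unit of flow from vertex 0 to vertex 3 on a 4-cycle
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
chi = np.array([-1.0, 0.0, 0.0, 1.0])        # -1 at the source, +1 at the sink
r = np.ones(4)
f = electrical_flow(edges, r, chi)
print(f, "energy:", f @ (r * f))
\end{verbatim}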
\textbf{p-Norm Flows} As mentioned above, a line of work \cite{BCLL18,AKPS19,KPSW19} shows how to solve more general $p$-norm flow problems. Precisely, given a ``gradient'' vector $g \in \mathbb R^E$, resistances $r \in \mathbb R_{+}^E$ and a demand vector $\chi$, the problem under consideration is
\begin{align*}
OPT=\min\limits_{B^\top f = \chi} \sum\limits_{e \in E} g_e f_e + r_e f_e^2 + |f_e|^p
\end{align*}
\cite{KPSW19} call such a problem as a mixed $\ell_2$-$\ell_p$-norm flow problem and denote the expression inside the min as $val(f)$. The main result of the paper is
\begin{theorem}[Theorem 1.1 in \cite{KPSW19}]\label{thm:kpswthm} For any even $p\in [\omega(1), o(\log^{2/3-o(1)} n)]$ and an initial solution $f^{(0)}$ such that all parameters are bounded by $2^{\poly(\log(n))}$, we can compute a flow $\widetilde{f}$ satisfying the demands $\chi$ such that
\begin{align*}
val(\widetilde{f}) - OPT \leq \frac{1}{2^{O(\poly (\log m))}}(val(f^{(0)}) - OPT) + \frac{1}{2^{O(\poly (\log m))}}
\end{align*}
in $2^{O(p^{3/2})}m^{1+O(1/\sqrt{p})}$ time.
\end{theorem}
We remark that, strictly speaking, the theorem in \cite{KPSW19} states a polynomially small error, but \cite{LS20} observe that their proof actually implies the quasi-polynomially small error stated above.
While the subproblems that we need to solve to change the weights cannot be put exactly into this form, we show that mild modifications to their techniques allow us to use their algorithm as a black box. Hence, we elaborate on their approach below.
One of the main things established in their paper is how the $p$-norm changes when we move from $f$ to $f+\delta$.
\begin{lemma}[Lemma in \cite{KPSW19}]\label{lem:kpsw} We have for any $f \in \mathbb R^E$ and $\delta\in\mathbb R^E$ that
\begin{align*}
f_i^p + p f_i^{p-1}\delta_i+2^{-O(p)} h_p(f_i^{p-2},\delta_i) \leq (f_i + \delta_i)^p \leq f_i^p + p f_i^{p-1}\delta_i+2^{O(p)} h_p(f_i^{p-2},\delta_i)
\end{align*}
where $h_p(x,\delta) = x\delta^2 + \delta^p$
\end{lemma}
Hence, given an initial solution, it suffices to solve the residual problem of the form
\begin{align*}
\min\limits_{B^\top \delta = 0} g(f)^\top \delta + \sum\limits_{e\in E} h_p(f^{p-2}_e,\delta_e)
\end{align*}
where $g(f)_e = pf_e^{p-1}$. Next, they notice that bounding the condition number with respect to the function $h_p(\cdot,\cdot)$ actually suffices to get linear convergence and hence to tolerate quasi-polynomially low errors. The rest of the paper goes into designing good preconditioners which allow them to solve the above subproblem quickly.
We will also need some basics about min-max saddle point problems \cite{BNO03}. Consider a function $f(x,y)$ with $\mathsf{dom}(f,x) = \mathcal{X}$ and $\mathsf{dom}(f,y) = \mathcal{Y}$. The problem we will be interested in is of the form
\begin{align*}
\min\limits_{x \in \mathcal{X}}\max\limits_{y \in \mathcal{Y}}f(x,y)
\end{align*}
Define the functions $f_y(y) = \min\limits_{x \in \mathcal{X}} f(x,y)$ for every fixed $y\in\mathcal{Y}$ and $f_x(x) = \max\limits_{y \in \mathcal{Y}} f(x,y)$ for every fixed $x\in\mathcal{X}$. We have the following theorem from Section 2.6 in \cite{BNO03}.
\begin{theorem}\label{thm:minmaxthm} Let $f(x,y)$ be convex in $x$ and concave in $y$, and let $\mathcal{X},\mathcal{Y}$ be convex and closed. Then $f_x$ is a convex function and $f_y$ is a concave function. \end{theorem}
\section{Warm up : $\sqrt{m}$ Iteration Algorithm}\label{sec:warmup}
In this section, we first set up our IPM framework and show how to recover the $\sqrt{m}$ iterations bound for max flow. In the next section, we will then change the weights to obtain our improved runtime. Our framework is largely inspired by \cite{M16} and \cite{LS20} and indeed a lot of the arguments can be reused with some modifications.
\subsection{IPM Setup}For every edge $e=(u,v)$, we consider assigning two non-negative weights for the forward and backward edges $w_e^+$ and $w_e^-$. Based on the weights and the edge capacities, for any feasible flow, we define a barrier functional
\begin{align*}
\phi_w(f) = -\sum\limits_{e \in E} \left(w_e^+ \log(u_e^+ - f_e) + w_e^- \log(u_e^- + f_e)\right)
\end{align*}
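For later reference, a direct computation gives the gradient and Hessian of this barrier,
\begin{align*}
(\nabla\phi_w(f))_e = \frac{w_e^+}{u_e^+-f_e} - \frac{w_e^-}{u_e^-+f_e},
\qquad
(\nabla^2\phi_w(f))_{ee} = \frac{w_e^+}{(u_e^+-f_e)^2} + \frac{w_e^-}{(u_e^-+f_e)^2},
\end{align*}
with the Hessian being diagonal; both expressions will appear repeatedly below.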
IPMs iterate towards the optimal solution by trading off the amount of progress of the current iterate, i.e., $B^\top f = F\chi$, against the proximity of the point to the constraints measured through the barrier $\phi_w(f)$, known as centrality. Previous IPMs take a Newton step with respect to the barrier, with a step size which ensures that the value of the flow $F$ increases by a certain amount. Since a Newton step is the minimization of a second order approximation, it can be shown that the step can be computed via electrical flow computations. Typically, a Newton step can be decomposed into a progress step and a centering step: one first takes a progress step which increases the flow value, at the cost of losing some centrality, and then takes a centering step which improves centrality without increasing the flow value. The amount of progress we can make in each iteration while still being able to recenter determines the number of iterations our algorithm will take. \cite{M16,LS20} follow this prototype; loosely speaking, the amount by which the flow value can be increased in the progress step depends on the $\ell_\infty$ norm of the congestion vector, which measures how much flow we can add before we saturate an edge. However, the bottleneck ends up being the centering step, which requires that the flow value be increased only by an amount depending on the $\ell_4$ norm of the congestion vector, a stronger requirement than the $\ell_\infty$ norm bound.
\cite{M13,M16} notes that when the $\ell_\infty$ and $\ell_4$ norms of the congestion vector are large, increasing the resistances of the congested edges increases the energy of the \textit{resulting} electrical flow. So he repeatedly increases the weights of the congested edges (called boosting) until the congestion vector has sufficiently small norm. By using the electrical energy of the resulting step as a global potential function and analyzing how it evolves over the progress, centering and boosting steps, he can control the amount of weight change and the number of boosting steps necessary to reduce the norm of the congestion vector. Carefully trading off these quantities yields the runtime of $\widetilde{O}(m^{11/7})$. To improve on this, Liu and Sidford \cite{LS20} consider the problem of finding a set of weight increases which maximize the energy of the resulting flow. As we need to ensure that the weights don't increase by too much, they place a budget constraint on the weight vector. They show that a small amount of weight change suffices to obtain good bounds on the congestion vector. Fortunately, this optimization problem ends up being efficiently solvable in almost linear time by using the mixed $\ell_2$-$\ell_p$ norm flow problem of \cite{KPSW19}. However, this step still essentially requires $\ell_4$-norm bounds to ensure centering is possible.
In this paper, we will consider taking steps with respect to a potential function. The potential function $\Phi_w$ comes from potential reduction IPM schemes and trades off the duality gap with the barrier.
\begin{align*}
\Phi_w(f,s)=m\log\left(1+\frac{f^{\top}s}{m}\right)+\phi_{w}(f)
\end{align*}
For self-concordant barriers such as weighted log barriers, the negative gradient $-\nabla\phi_w(f)$ is feasible for the dual \cite{Ren01}, and so for any $f'$ feasible for the primal we have $f'^\top (-\nabla\phi_w(f))\geq 0$. We will consider dual ``potential'' variables $y\in \mathbb R^V$. Now, as in \cite{M16,LS20}, we consider a centrality condition
\begin{equation}
y_v - y_u = \frac{w_e^+}{u_e^+-f_e} - \frac{w_e^-}{u_e^-+f_e} \text{ for all } e=(u,v)
\end{equation}
If $(f,y,w)$ satisfies the above condition, we call it \textit{well-coupled}.
Also, given a tuple $(f,y,w)$ and a candidate step $\hat{f}$, define the forward and backward congestion vectors $\rho^+,\rho^-\in \mathbb R^E$ as
\begin{align}
\rho^+_e = \frac{|\hat{f}_e|}{u_e^+-f_e} \text{ and } \rho^-_e = \frac{|\hat{f}_e|}{u_e^-+f_e} \text{ for all } e \in E
\end{align}
We can now assume via binary search that we know the optimal flow value $F^*$ \cite{M16}.
\cite{M16,LS20} consider preconditioning the graph, which ensures that sufficient progress can be made from any well-coupled point. The preconditioning strategy is to add $m$ extra (undirected) edges between $s$ and $t$ of capacity $2U$ each, so the max flow value increases by at most $2mU$. The following lemma can be seen from the proof of Lemma 4.5 in \cite{LS20}.
\begin{lemma}\label{lem:precond} Let $(f,y,w)$ be a well-coupled point for flow value $F$ in a preconditioned graph $G$. Then we have for every preconditioned edge $e$ that $\hat{u}_e(f) = \min\{u_e^+-f_e,u_e^-+f_e\}\geq \frac{F^*-F}{7\|w\|_1}$. In particular, if $\|w\|_1\leq 3m$, then we have $\hat{u}_e(f) \geq \frac{F^*-F}{21m}$. If we also have $F^* - F \geq m^{1/2-\eta}$, then $\hat{u}_e(f) \geq m^{-(1/2+\eta)}/21$.
\end{lemma}
Now that our setup is complete, we can focus on the step that we will be taking. In this section, we will keep the weights all fixed to 1, i.e., $w_e^+ = w_e^- = 1$ for all $e \in E$. Hence $\|w\|_1 = 2m$. Consider the change in the potential function when we move from $f$ to $f+\hat{f}$ while keeping the dual variable $-\nabla\phi_w(f) = By$ fixed. This change is
\begin{align*}
m\log\left(1-\frac{(f+\hat{f})^{\top}\nabla\phi_{w}(f)}{m}\right)-m\log\left(1-\frac{f^{\top}\nabla\phi_{w}(f)}{m}\right)+\phi_{w}(f+\hat{f})-\phi_{w}(f)
\end{align*}
We are interested in minimizing this quantity, which corresponds to maximizing the decrease in the potential function value while guaranteeing that the step $\hat{f}$ sends, say, $\delta$ more units of flow. Hence the problem is
\begin{align*}
\arg\min\limits_{B^\top \hat{f}=\delta \chi} m\log\left(1-\frac{(f+\hat{f})^{\top}\nabla\phi_{w}(f)}{m}\right) + \phi_w(f+\hat{f})
\end{align*}
Unfortunately, this problem is not convex, as the duality gap term is concave in $\hat{f}$. However, we can instead minimize a convex upper bound on this quantity:
\begin{align*}
&\arg\min\limits_{B^\top \hat{f}=\delta \chi}\phi_w(f+\hat{f}) - (f+\hat{f})^\top\nabla\phi_w(f)
\\
&= \arg\min\limits_{B^\top \hat{f}=\delta \chi}-\sum\limits_{e \in E} \left[w_e^+\log\left(1-\frac{\hat{f}_e}{u_e^+-f_e}\right) + w_e^-\log\left(1+\frac{\hat{f}_e}{u_e^-+f_e}\right) + \hat{f}_e\left(\frac{w_e^+}{u_e^+-f_e} - \frac{w_e^-}{u_e^-+f_e}\right)\right]
\end{align*}
as $\log(1+x) \leq x$, and the quantity $-(f+\hat{f})^{\top}\nabla\phi_{w}(f)/m$ is non-negative by the duality argument mentioned above. We will refer to the value of the problem in the last line as the \textit{potential decrement} and will henceforth denote the function inside the minimization as $\Delta\Phi_w(f,\hat{f})$. It is instructive to first see how the coupling condition changes if we were to take the optimal step of the above problem while remaining feasible. To calculate this, the optimality conditions of the above program imply that there exists a $\hat{y}$ such that for all $e=(u,v)$
\begin{align*}
\hat{y}_v-\hat{y}_u &= \left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e}\right) - \left(\frac{w_e^+}{u_e^+-f_e} - \frac{w_e^-}{u_e^-+f_e}\right)\\
&=\left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e}\right) - (y_v-y_u)
\end{align*}
Hence, if we update $y$ to $y+\hat{y}$ and $f$ to $f+\hat{f}$, we get a flow of value $F+\delta$ such that the coupling condition with respect to the new $y$ and $f$ still holds.
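It is also worth recording that each summand of $\Delta\Phi_w(f,\hat{f})$ is non-negative: writing $x_e = \hat{f}_e/(u_e^+-f_e)$ and $y_e = \hat{f}_e/(u_e^-+f_e)$, the summand equals
\begin{align*}
-w_e^+\left(\log(1-x_e)+x_e\right) - w_e^-\left(\log(1+y_e)-y_e\right) \geq 0,
\end{align*}
since $\log(1+t)\leq t$ for $t>-1$. In particular $\Delta\Phi_w(f,\hat{f})\geq 0$, with equality at $\hat{f}=0$; we will use this non-negativity later.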
Hence, we can now focus on actually computing the step and showing what $\delta$ we can take to ensure that we still satisfy feasibility, i.e., bounds on the $\ell_\infty$ norm of the congestion vector. The function we are trying to minimize consists of a self-concordant barrier term and a linear term. Unfortunately, we cannot control the condition number of such a function well enough to optimize it efficiently over the entire space, as this is arguably as hard as the original problem itself. However, due to self-concordance, the function behaves smoothly enough (i.e., has good condition number) in a box around the origin, but that by itself does not let us solve the problem over the entire space. Fortunately, a fix for this was already found in \cite{BCLL18}. In particular, they smoothly extend the function quadratically outside a box, ensuring that the smoothness and strong convexity properties inside the box carry over to the outside as well, while arguing that the minimizer is unchanged provided the minimizer of the original problem lies inside the box. Specifically, the following lemma can be inferred from Section 2.2 of \cite{BCLL18}.
\begin{lemma}
Let $f(x)$ be a function which is $L$-smooth and $\mu$-strongly convex inside an interval $[-\ell,\ell]$. We define the quadratic extension $f_\ell$ of $f$ as \[
f_\ell(x) = \left.
\begin{cases}
f(x), & \text{for } -\ell \leq x \leq \ell \\
f(-\ell)+f'(-\ell)(x+\ell)+\frac{1}{2}f''(-\ell)(x+\ell)^2, & \text{for } x < -\ell \\
f(\ell)+f'(\ell)(x-\ell)+\frac{1}{2}f''(\ell)(x-\ell)^2, & \text{for } x> \ell
\end{cases}
\right\}
\]
The function $f_\ell$ is $C^2$, $L$-smooth and $\mu$-strongly convex. Furthermore, for any convex function $\psi(x)$, provided $x^* = \arg\min\limits_{x \in \mathcal{X}}\psi(x) + \sum\limits_{i=1}^{n}f(x_i)$ lies inside $\prod\limits_{i=1}^{n}[-\ell_i,\ell_i]$, we have $\arg\min\limits_{x \in \mathcal{X}} \psi(x) + \sum\limits_{i=1}^{n} f_{\ell_i}(x_i) = x^*$.
\end{lemma}
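To make the construction concrete in our setting, consider for instance the scalar barrier term $f(x) = -\log(1-x/u)-x/u$ with $\ell = u/10$; since $f'(x) = \frac{1}{u-x}-\frac{1}{u}$ and $f''(x) = \frac{1}{(u-x)^2}$, the extension for $x>\ell$ is the quadratic
\begin{align*}
f_\ell(x) = f(\ell) + \left(\frac{1}{u-\ell}-\frac{1}{u}\right)(x-\ell) + \frac{1}{2(u-\ell)^2}(x-\ell)^2,
\end{align*}
and symmetrically for $x<-\ell$.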
Hence, it suffices to consider a $\delta$ small enough such that the minimizer is the same as for the original problem and we can focus on minimizing this quadratic extension of the function. For minimization, we can use Accelerated Gradient Descent or Newton's method.
\begin{theorem}[\cite{Nes04}]\label{thm:agd} Given a convex function $f$ which satisfies $D \preceq \nabla^2 f(x) \preceq \kappa D$ for all $x \in \mathbb{R}^n$ for some fixed diagonal matrix $D$ and some fixed $\kappa$, an initial point $x_0$ and
an error parameter $0 < \varepsilon < 1/2$, accelerated gradient descent (AGD) outputs $x$ such that
\begin{align*}
f(x) - \min\limits_x f(x) \leq \varepsilon(f(x_0) - \min\limits_x f(x))
\end{align*}
in $O(\sqrt{\kappa} \log(\kappa/\varepsilon))$ iterations. Each iteration involves computing $\nabla f$ at some point $x$, projecting onto the subspace defined by the constraints, and some linear-time calculations.
\end{theorem}
Notice that the Hessian of the function in the potential decrement problem is a diagonal matrix with the $e^{th}$ entry being $$\frac{w_e^+}{(u_e^+-f_e-\hat{f}_e)^2}+\frac{w_e^-}{(u_e^-+f_e+\hat{f}_e)^2}$$ So, provided $\rho^+_e,\rho^-_e$ are less than some small constant, the Hessian has constant condition number $\kappa$ with respect to the diagonal matrix $\nabla^2\phi_w(f)$, and hence we can use Theorem \ref{thm:agd} to solve the problem in $\widetilde{O}(1)$ iterations to quasi-polynomially good error. Furthermore, notice that each iteration just computes a gradient and then performs a projection, which can be done using a Laplacian linear system solve, and hence runs in nearly linear time. Quasi-polynomially small error suffices for our purposes \cite{M13,M16,LS20}.
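Concretely, if $|\hat{f}_e|\leq \hat{u}_e(f)/10$ for every edge, then $(u_e^+-f_e-\hat{f}_e)^2$ and $(u_e^-+f_e+\hat{f}_e)^2$ lie between $0.81$ and $1.21$ times $(u_e^+-f_e)^2$ and $(u_e^-+f_e)^2$ respectively, so
\begin{align*}
\frac{1}{1.21}\,\nabla^2\phi_w(f) \preceq \nabla^2_{\hat{f}}\,\Delta\Phi_w(f,\hat{f}) \preceq \frac{1}{0.81}\,\nabla^2\phi_w(f),
\end{align*}
and the condition number in Theorem \ref{thm:agd} is at most $1.21/0.81 < 1.5$.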
Now, we just need to ensure that we can control the $\ell_\infty$-norm of the congestion vector, as that controls how much flow we can still send without violating constraints. Note further that we need to set $\ell$ while solving the quadratic extension of the potential decrement problem so that it is greater than the $\ell_\infty$ norm that we can guarantee. We will want both of these to be constants.
As mentioned above, the point of preconditioning the graph is to ensure that the preconditioned edges themselves can facilitate sufficient progress. To bound the congestion, we show an analog of Lemma 3.9 in \cite{M16}.
\begin{lemma}\label{lem:constcongrootm}
Let $(f,y,w)$ be a well-coupled solution with value $F$ and let $\delta=\frac{F^*-F}{1000\sqrt{m}}$. Let $\hat{f}$ be the solution to the potential decrement problem. Then we have, $\rho_e^+,\rho_e^-\leq 0.1$ for all edges $e$.
\end{lemma}
\begin{proof}
Consider a flow $f'$ which sends $\frac{2\delta}{m}$ units of flow on each of the $m/2$ preconditioned edges. Certainly the potential decrement of the optimal solution $\hat{f}$ is at most that of $f'$, which is
\begin{align*}
\Delta\Phi_w(f,f')&= -\sum\limits_{e \in E} \left[w_e^+\log\left(1-\frac{f'_e}{u^+_e-f_e}\right) + w_e^- \log\left(1+\frac{f'_e}{u^-_e+f_e}\right) + f'_e\left(\frac{w_e^+}{u_e^+-f_e} - \frac{w_e^-}{u_e^-+f_e}\right)\right]\\
&\leq \sum\limits_{e \in E} w_e^+\left(\frac{f'_e}{\hat{u}_e^+(f)}\right)^2 + w_e^-\left(\frac{f'_e}{\hat{u}_e^-(f)}\right)^2\\
&\leq \|w\|_1 \left(\frac{42\delta}{F^*-F}\right)^2\\
&< \frac{0.002\|w\|_1}{m}\leq 0.004
\end{align*}
where the first inequality follows from $-\log(1-x) \leq x+x^2$ and $-\log(1+x)\leq -x+x^2$ for non-negative $x$, and the second follows from plugging in the value of the flow on the preconditioned edges and using Lemma \ref{lem:precond}. Finally, we use $\|w\|_1=2m$.
Now it suffices to prove a lower bound on the potential decrement in terms of the congestion vector. For this, we start by considering the inner product of $\hat{f}$ with the gradient of $\Delta\Phi_w(f,\hat{f})$ with respect to $\hat{f}$:
\begin{align*}
\sum\limits_{e \in E} \hat{f}_e \left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e} - \frac{w_e^+}{u_e^+-f_e} + \frac{w_e^-}{u_e^-+f_e}\right)&= \sum\limits_{e \in E} \left(\frac{w_e^+\hat{f}_e^2}{(\hat{u}_e^+-\hat{f}_e)\hat{u}^+_e} + \frac{w_e^-\hat{f}_e^2}{(\hat{u}_e^-+\hat{f}_e)\hat{u}_e^-}\right)\\
& \leq 1.1\sum\limits_{e \in E} \left(\frac{w_e^+\hat{f}_e^2}{(\hat{u}^+_e)^2} + \frac{w_e^-\hat{f}_e^2}{(\hat{u}_e^-)^2}\right)\\
&\leq 2.2\sum\limits_{e \in E} \left(-w_e^+\log\left(1-\frac{\hat{f}_e}{\hat{u}^+_e}\right) - w_e^- \log\left(1+\frac{\hat{f}_e}{\hat{u}_e^-}\right) - \hat{f}_e\left(\frac{w_e^+}{\hat{u}_e^+} - \frac{w_e^-}{\hat{u}_e^-}\right)\right)\\
&= 2.2\Delta\Phi_w(f,\hat{f})\\
&\leq 0.0088
\end{align*}
where the second-to-last inequality follows from $x+x^2/2\leq -\log(1-x)$ and $-x+x^2/2\leq -\log(1+x)$. Strictly speaking, the first inequality only holds for $\hat{f}_e \leq \hat{u}_e(f)/10$. However, instead of considering the inner product of $\hat{f}$ with the gradient of $\Delta\Phi_w(f,\hat{f})$, we will instead consider its quadratic extension with $\ell_e=\hat{u}_e(f)/10$ for each edge $e$. It is easy to see that if $\hat{f}$ is outside the box, then the desired inequality still holds (by computing the value the quadratic extension takes outside the box).
To finish the proof,
\begin{align*}
\sum\limits_{e \in E} \hat{f}_e \left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e} - \frac{w_e^+}{u_e^+-f_e} + \frac{w_e^-}{u_e^-+f_e}\right)&= \sum\limits_{e \in E} \left(\frac{w_e^+\hat{f}_e^2}{(\hat{u}_e^+-\hat{f}_e)\hat{u}^+_e} + \frac{w_e^-\hat{f}_e^2}{(\hat{u}_e^-+\hat{f}_e)\hat{u}_e^-}\right)\\
&\geq 9/10 \sum\limits_{e \in E} \left(\frac{w_e^+\hat{f}_e^2}{(\hat{u}^+_e)^2} + \frac{w_e^-\hat{f}_e^2}{(\hat{u}_e^-)^2}\right)\\
&\geq 0.9\|\rho\|_\infty^2
\end{align*}
Hence, combining the above, we get that $\|\rho\|_\infty\leq 0.1$
\end{proof}
Notice that since $\|\rho\|_\infty<0.1$, the minimizer of the quadratically smoothed function is the same as that of the function without smoothing, and hence the new step is well-coupled as per the argument above. Hence, in every iteration we decrease the amount of remaining flow multiplicatively by a factor of $1-\Omega(1/\sqrt{m})$, and so after $\widetilde{O}(\sqrt{m})$ iterations we are left with a sufficiently small amount of remaining flow that we can round using one iteration of augmenting paths. This completes our $\sqrt{m}$ iteration algorithm.
\section{Improved $m^{4/3+o(1)}U^{1/3}$ Time Algorithm}\label{sec:new}
In this section, we show how to change weights to improve the number of iterations in our algorithm. We will follow the framework of Liu and Sidford \cite{LS20} of finding a set of weights to add, under a norm constraint, such that the step one would take with respect to the new set of weights maximizes a potential function. In their case, since the step they are taking is an electrical flow, the potential function considered is the energy of such a flow. As our step is different, we will instead take the potential decrement as the potential function with respect to the new set of weights. Perhaps surprisingly, however, we can make almost all their arguments go through with minor modifications. Let the initial weights be $w$ and say we would like to add a set of weights $w'$. Then we are interested in maximizing the potential decrement with respect to the new set of weights. This can be seen as similar to designing oracles for multiplicative weight algorithms for two-player games, where a player plays a move to penalize the other player the most given their current move. Our algorithm first finds a new set of weights and then takes the potential decrement step with respect to the new weights. Finally, for better control of the congestion vector, we show that one can decrease some of the weight increase as in \cite{LS20}. We first focus on the problem of finding the new set of weights. We are going to introduce a set $r'\in \mathbb R^E_{++}$ of ``resistances'', optimize these resistances, and then obtain the weights from them. Let $w$ be the current set of weights and $w'$ be the set of desired changes. Without loss of generality, assume that $\hat{u}_e(f) = \hat{u}_e^+(f)$. Now, given a resistance vector $r'$, we define the weight changes as
\begin{align*}
(w^+_e)'=r'_e(\hat{u}_e^+(f))^2 \text{ and } (w^-_e)'=\frac{(w_e^+)'\hat{u}_e^-(f)}{\hat{u}_e^+(f)}
\end{align*}
This is the same set of weight changes done in \cite{LS20} in the context of energy maximization. This set of weights ensures that our point $(f,y,w)$ is well-coupled with respect to $w+w'$ as well, i.e., \begin{align*}\frac{(w_e^+)'}{\hat{u}^+_e(f)} = \frac{(w_e^-)'}{\hat{u}^-_e(f)}\end{align*}
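Indeed, with this choice the contribution of $w'$ to the right hand side of the centrality condition vanishes,
\begin{align*}
\frac{(w_e^+)'}{\hat{u}_e^+(f)} - \frac{(w_e^-)'}{\hat{u}_e^-(f)} = r'_e\hat{u}_e^+(f) - r'_e\hat{u}_e^+(f) = 0,
\end{align*}
so the same dual vector $y$ certifies that $(f,y,w+w')$ is well-coupled.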
The problem we would now like to solve is
\begin{align*}
g(W) = \max\limits_{r'>0, \|r'\|_1\leq W}\min\limits_{B^\top \hat{f}=\delta\chi} \Delta\Phi_{w+w'}(f,\hat{f})
\end{align*}
Here $w'$ is based on $r'$ in the form written above. While this is the optimization problem we would like to solve, we are unable to do so due to the $\ell_1$ norm constraint on the resistances. We will however be able to solve a relaxed $q$-norm version of the problem. \small
\begin{align*}
&g_q(W) = \max\limits_{r'>0, \|r'\|_q\leq W}\min\limits_{B^\top \hat{f}=\delta\chi} \Delta\Phi_{w+w'}(f,\hat{f})\\
&= \max\limits_{r'>0, \|r'\|_q\leq W}\min\limits_{B^\top \hat{f}=\delta\chi} \Delta\Phi_w(f,\hat{f}) -\sum\limits_{e \in E} \left[(w_e^+)'\log\left(1-\frac{\hat{f}_e}{u_e^+-f_e}\right) + (w_e^-)' \log\left(1+\frac{\hat{f}_e}{u_e^-+f_e}\right) + \hat{f}_e\left(\frac{(w_e^+)'}{u_e^+-f_e} - \frac{(w_e^-)'}{u_e^-+f_e}\right)\right]
\end{align*}\normalsize
Notice that the objective is a linear (and hence concave) function of $w'$, and hence of $r'$, is closed and convex in $\hat{f}$, and the constraint sets are convex as they are only linear and norm ball constraints. Hence, using Theorem \ref{thm:minmaxthm}, we can say that $$\min\limits_{B^\top \hat{f}=\delta\chi} \Delta\Phi_w(f,\hat{f}) -\sum\limits_{e \in E} \left[(w_e^+)'\log\left(1-\frac{\hat{f}_e}{u_e^+-f_e}\right) + (w_e^-)' \log\left(1+\frac{\hat{f}_e}{u_e^-+f_e}\right) + \hat{f}_e\left(\frac{(w_e^+)'}{u_e^+-f_e} - \frac{(w_e^-)'}{u_e^-+f_e}\right)\right]$$ is concave in $r'$ and $$\max\limits_{r'>0, \|r'\|_q\leq W} \Delta\Phi_w(f,\hat{f}) -\sum\limits_{e \in E} \left[(w_e^+)'\log\left(1-\frac{\hat{f}_e}{u_e^+-f_e}\right) + (w_e^-)' \log\left(1+\frac{\hat{f}_e}{u_e^-+f_e}\right) + \hat{f}_e\left(\frac{(w_e^+)'}{u_e^+-f_e} - \frac{(w_e^-)'}{u_e^-+f_e}\right)\right]$$ is convex in $\hat{f}$. Now, as in \cite{LS20}, we use Sion's minimax lemma to get
\small\begin{align*}
&\min\limits_{B^\top \hat{f}=\delta\chi}\Delta\Phi_w(f,\hat{f})+\max\limits_{r'>0, \|r'\|_q\leq W} -\sum\limits_{e \in E} \left[(w_e^+)'\log\left(1-\frac{\hat{f}_e}{u_e^+-f_e}\right) + (w_e^-)' \log\left(1+\frac{\hat{f}_e}{u_e^-+f_e}\right) + \hat{f}_e\left(\frac{(w_e^+)'}{u_e^+-f_e} - \frac{(w_e^-)'}{u_e^-+f_e}\right)\right]\end{align*}\normalsize
which equals
\begin{equation}\label{eqn:minmax} \min\limits_{B^\top \hat{f}=\delta\chi}\Delta\Phi_w(f,\hat{f})+W \left[\sum\limits_{e\in E}g_e(\hat{f})^p\right]^{1/p}
\end{equation}
where $g_e(\hat{f}) = (\hat{u}_e^+(f))^2\log\left(1-\frac{\hat{f}_e}{\hat{u}_e^+(f)}\right) + \hat{u}_e^+(f)\hat{u}_e^-(f) \log\left(1+\frac{\hat{f}_e}{\hat{u}_e^-(f)}\right)$, and we plugged in the value of $w'$ in terms of $r'$ and used that $\max\limits_{\|x\|_q\leq W} y^\top x = W\|y\|_p $ with $1/p + 1/q =1$. As mentioned above, the function inside the minimization problem is convex. Furthermore, from the proof of Theorem \ref{thm:minmaxthm}, it can be inferred that any smoothness and strong convexity properties of the function inside the min-max carry over to the function after the maximization. Hence, as in Section \ref{sec:warmup}, we will consider the quadratic extension (as a function of $\hat{f}$, with $\ell_e = \hat{u}_e(f)/10$) of the function inside the min-max; this amounts to taking the quadratic extension of $\Delta\Phi_w(f,\hat{f})$ and of $g_e(\hat{f})$. Now, the strategy will be to keep adding flow using this step while the remaining flow to be routed satisfies $F^*-F\geq m^{1/2-\eta}$, after which running $m^{1/2-\eta}$ iterations of augmenting paths gets us to the optimal solution. We will need to ensure that throughout the course of the algorithm the $\ell_1$ norm of the weights does not get too large. For that, we will first compute the weight changes and then perform a weight reduction procedure \cite{LS20} in order to always ensure that $\|w\|_1\leq 3m$.
We will take $\eta = 1/6-o(1)-\frac{1}{3}\log_m(U)$ and $W=m^{6\eta}$. Provided we can ensure that the $\|w\|_1\leq 3m$ throughout the course of the algorithm, that the $\ell_\infty$ of the congestion vector is always bounded by a constant and that we can solve the resulting step in almost-linear time, we will obtain an algorithm which runs in time $m^{4/3+o(1)}U^{1/3}$ time.
\begin{theorem} There exists an algorithm for solving $s$-$t$ maximum flow in directed graphs with integer capacities at most $U$ in $m^{4/3 + o(1)}U^{1/3}$ time.
\end{theorem}
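To see how the claimed runtime arises, note that the main phase consists of $m^{1/2-\eta}$ iterations, each dominated by an almost-linear-time solve of Equation \ref{eqn:minmax}, followed by $m^{1/2-\eta}$ augmenting path computations costing $O(m)$ each, for a total of $m^{3/2-\eta+o(1)}$; with the above choice of $\eta$,
\begin{align*}
\frac{3}{2}-\eta = \frac{3}{2}-\frac{1}{6}+o(1)+\frac{1}{3}\log_m U = \frac{4}{3}+o(1)+\frac{1}{3}\log_m U,
\end{align*}
so the total running time is $m^{4/3+o(1)}U^{1/3}$.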
To summarize, our algorithm starts off with $(f,y)=(0,0)$ and $w_e^+=w_e^-=1$ for all edges $e$. Then, in each iteration, starting with a well-coupled $(f,y,w)$ with flow value $F$, and with $\delta=(F^*-F)/m^{1/2-\eta}$ and $W=m^{6\eta}$, we solve Equation \ref{eqn:minmax} (which is the potential decrement problem with the new weights) to obtain $\hat{f}$, which will be the step we take (yielding flow value $F+\delta$). All that remains is to find the weight update $w'$, which has a closed form expression in terms of $\hat{f}$, and to perform a weight reduction step to obtain the final $w'$ which ensures that we still remain well-coupled for $\hat{f}$; we repeat while $F^*-F\geq m^{1/2-\eta}$. Finally, we round the remaining flow using $m^{1/2-\eta}$ iterations of augmenting paths. We first state the following lemma, whose proof is similar to that of Lemma \ref{lem:constcongrootm}.
\begin{lemma}\label{lem:cong}Let $(f,y,w)$ be a well-coupled solution with value $F$ and let $\delta=\frac{F^*-F}{5000m^{1/2-\eta}}$. Let $\hat{f}$ be the solution to the potential decrement problem considered in Equation \ref{eqn:minmax}. Then, we have for all edges $e$ that $\rho^+_e,\rho^-_e\leq 0.1$ and $|\hat{f}_e|\leq 9m^{-2\eta}$.
\end{lemma}
We will prove this lemma in Appendix \ref{app:missingProofs}. Next, notice that $(f,y)$ is still a well-coupled solution with respect to the new weights $w+w'$, as the weights were chosen to ensure that the coupling condition is unchanged.
\begin{lemma}\label{lem:wtcontrol}
Our new weights, after weight reduction, satisfy $\|w'\|_1\leq m^{4\eta+o(1)}U\leq m/2$ and $(f+\hat{f},y+\hat{y})$ is well-coupled with respect to $w+w'$
\end{lemma}
\begin{proof}
Using optimality conditions of the program in Equation \ref{eqn:minmax}, we see that there exists a $\hat{y}$ such that
\begin{align*}
\hat{y}_v-\hat{y}_u= \hat{f}_e \left(\frac{w_e^+}{(\hat{u}^+_e-\hat{f}_e)\hat{u}_e^+} - \frac{w_e^-}{(\hat{u}_e^-+\hat{f}_e)\hat{u}_e^-}\right)+W\hat{f}_e\frac{g_e^{p-1}}{\|g\|_p^{p-1}}\left(\frac{\hat{u}_e^+}{\hat{u}^+_e-\hat{f}_e}-\frac{\hat{u}^-_e}{\hat{u}^-_e+\hat{f}_e} \right)
\end{align*}
where $g\in\mathbb R^E$ is the vector formed by taking $g_e(\hat{f})$ for the $e^{th}$ coordinate. We will take \begin{align*}
(r_e)'=W\frac{g_e^{p-1}}{\|g\|_p^{p-1}} \text{ and } (w_e^+)' = W\frac{g_e^{p-1}}{\|g\|_p^{p-1}}(\hat{u}_e^+)^2 \text{ and } (w_e^-)' = W\frac{g_e^{p-1}}{\|g\|_p^{p-1}}(\hat{u}_e^+\hat{u}_e^-)
\end{align*}
which satisfies the well-coupling condition we want to ensure. Also notice that $\|r'\|_q=W$, so we satisfy the norm ball condition as well. Now, we need to upper bound the $\ell_1$ norm of $w'$. We will take $p=\sqrt{\log m}$.
\begin{align*}
\|w'\|_1&\leq m^{1/p}\|w'\|_q\\
&\leq m^{o(1)}\left(\|(w^+)'\|_q+\|(w^-)'\|_q\right)\\
&\leq 2m^{o(1)} WU^2=O(m^{6\eta+o(1)}U^2)
\end{align*}
as $\hat{u}_e^+,\hat{u}_e^-\leq U$ and $\|g^{p-1}\|_q = \|g\|_p^{p-1}$. Plugging in the value of $\eta$, we get that this is less than $m/2$. Now, we will perform a weight reduction to obtain a new set of weights $w''$ which still ensures that the coupling condition does not change, while giving us better control on the weights. The weight reduction procedure is the same as that in \cite{LS20}, where we find the smallest non-negative $w''$ such that for all edges
\begin{align*}
\frac{(w_e^+)'}{\hat{u}_e^+-\hat{f}_e}-\frac{(w_e^-)'}{\hat{u}_e^-+\hat{f}_e}=\frac{(w_e^+)''}{\hat{u}_e^+-\hat{f}_e}-\frac{(w_e^-)''}{\hat{u}_e^-+\hat{f}_e}
\end{align*}
Notice that we also have that $
\frac{(w_e^+)'}{\hat{u}_e^+}=\frac{(w_e^-)'}{\hat{u}_e^-}
$ and
\begin{align*}
\frac{\hat{u}_e^+-\hat{f}_e}{\hat{u}_e^-+\hat{f}_e}=\left(1\pm O(\max\{\rho_e^+,\rho_e^-\})\right)\frac{\hat{u}_e^+}{\hat{u}_e^-}
\end{align*}
Hence, it follows that
\begin{align*}
(w_e^+)'' + (w_e^-)'' \leq O(\max\{\rho_e^+,\rho_e^-\}) ((w_e^+)' + (w_e^-)')
\end{align*}
As $|\hat{f}_e|\leq 9m^{-2\eta}$ from Lemma \ref{lem:cong}, we get
\begin{align*}
\|w''\|_1&\leq m^{-2\eta}\sum\limits_{e \in E}\frac{W g_e^{p-1}}{\|g\|_p^{p-1}}(\hat{u}_e^++\hat{u}_e^-)\\
&\leq O(m^{4\eta+o(1)}U)\leq m/2
\end{align*}
As before, this argument is carried out for the non-quadratically-extended function while we are actually optimizing the quadratically extended one; since $\rho^+_e,\rho^-_e\leq 0.1$, the minimizers are the same and hence the above argument still applies.
\end{proof}
Now, provided that we can show how to solve Equation \ref{eqn:minmax} in almost-linear time, we are done. This is because we run the algorithm for $m^{1/2-\eta}$ iterations and the $\ell_1$ norm of the weights increases by at most $m^{4\eta+o(1)}U$ in each iteration. Hence the final weights satisfy $\|w\|_1 \leq 2m + m^{1/2+3\eta+o(1)}U\leq 5m/2$, so we can use Lemma \ref{lem:precond} throughout the course of our algorithm. Also, as mentioned above, notice that the flow $\hat{f}$ that we augment in every iteration is just the solution to the potential decrement problem with the new weights. Hence, from the argument in Section \ref{sec:warmup}, we always maintain the well-coupled condition.
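Spelling out the exponent arithmetic behind this weight bound: the total weight increase over all iterations is at most
\begin{align*}
m^{1/2-\eta}\cdot m^{4\eta+o(1)}U = m^{1/2+3\eta+o(1)}U,
\qquad
\frac{1}{2}+3\eta = 1-o(1)-\log_m U,
\end{align*}
where the $o(1)$ term in the definition of $\eta$ is chosen to absorb the $m^{o(1)}$ factor, so that $m^{1/2+3\eta+o(1)}U\leq m^{1-o(1)}\leq m/2$ and the hypothesis $\|w\|_1\leq 3m$ of Lemma \ref{lem:precond} indeed holds throughout.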
To show that we can solve the problem in Equation \ref{eqn:minmax}, we will appeal to the work of \cite{KPSW19}. As mentioned above, their work establishes Lemma \ref{lem:kpsw} and then shows that for any function which can be sandwiched in that form plus a quadratic term which is the same on both sides, one can just minimize the resulting upper bound to get a solution to the optimization problem with quasi-polynomially low error. Hence, we will focus on showing that the objective function in our problem can also be sandwiched into terms of this form, after which, appealing to their algorithm, we will get a high accuracy solution to our problem in almost linear time. The first issue that arises is that, strictly speaking, their algorithm only works for minimizing objectives of the form
\begin{align*}
OPT=\min\limits_{B^\top f=\chi} \sum\limits_{e \in E} g_e f_e + r_e f_e^2 + |f_e|^p
\end{align*}
whereas in our objective the $p$-norm part is not raised to the power $p$ but is just the $p$-norm itself. The solution to this, however, was already given by Liu and Sidford \cite{LS20}, who show (Lemma B.3 in their paper) that for sufficiently nice functions, minimizers of problems of the form $\min f(x)+h(g(x))$ can be obtained to high accuracy if we can obtain minimizers of functions of the form $f(x)+g(x)$. The conditions they require on the functions are also satisfied by our functions; verifying this is a straightforward calculation following the proof in their paper \cite{LS20}. Hence, we can focus on just showing how to solve the following problem
\begin{align*}
OPT=\min\limits_{B^\top \hat{f} = \chi} \sum\limits_{e\in E} -\left(w_e^+\log_{0.1}\left(1-\frac{\hat{f}_e}{\hat{u}_e^+}\right) + w_e^-\log_{0.1}\left(1+\frac{\hat{f}_e}{\hat{u}_e^-}\right) + \hat{f}_e\left(\frac{w_e^+}{\hat{u}_e^+} - \frac{w_e^-}{\hat{u}_e^-} \right)\right)+ (g_e)_{0.1}(\hat{f})^p
\end{align*}
where the subscripts of $0.1$ denote that we are solving the quadratically smoothed function with box size $\hat{u}_e(f)/10$ for each $e$, and $g_e(\hat{f}) = (\hat{u}_e^+(f))^2\log\left(1-\frac{\hat{f}_e}{\hat{u}_e^+(f)}\right) + \hat{u}_e^+(f)\hat{u}_e^-(f) \log\left(1+\frac{\hat{f}_e}{\hat{u}_e^-(f)}\right)$. Call the term in the sum for a given edge $e$ $val_e(\hat{f})$, so that the overall objective function is $val(\hat{f})$. In particular, we consider a single edge and prove the following lemma.
\begin{lemma}\label{lem:sandwhich}
We have the following for any feasible $f$ and $\delta \geq 0$
\begin{align*}
val_e(f) + \delta\partial_f val_e(f)+ (9/10)^2\delta^2\left(\frac{w_e^+}{(\hat{u}_e^+-f)^2}+\frac{w_e^-}{(\hat{u}_e^-+f)^2}\right) + 2^{-O(p)}(f_e^{2p-4}\delta^2 + \delta^p) \leq val_e(f+\delta)
\end{align*}
and
\begin{align*}
val_e(f+\delta) \leq val_e(f) + \delta\partial_f val_e(f)+(11/10)^2 \delta^2\left(\frac{w_e^+}{(\hat{u}_e^+-f)^2}+\frac{w_e^-}{(\hat{u}_e^-+f)^2}\right) + 2^{O(p)}(f_e^{2p-4}\delta^2 + \delta^p)
\end{align*}
where $\partial_x$ denotes the derivative of a function with respect to $x$.
\end{lemma}
We prove this lemma in Appendix \ref{app:missingProofs}. Let $r_e = \left(\frac{w_e^+}{(\hat{u}_e^+-f)^2}+\frac{w_e^-}{(\hat{u}_e^-+f)^2}\right)$
\begin{lemma}\label{lem:relconv} Given an initial point $f_0$ such that $B^\top f_0 = \chi$ and an almost linear time solver for the following problem
\begin{align*}
\min\limits_{B^\top \delta = 0} \sum\limits_{e \in E}
\delta_e \alpha_e + (11/10)^22^{O(p)}((r_e+f_e^{2p-4})\delta^2 + \delta^p)
\end{align*}
where the $\alpha_e$ vector is the gradient of $val$ at a given point $f$, we can obtain an $\hat{f}$ in $\widetilde{O}_p(1)$ calls to the solver such that $val(\hat{f})\leq OPT+ 1/2^{\poly\log m}$
\end{lemma}
The proof is similar to the proof of the iteration complexity of gradient descent for smooth and strongly convex functions, and it follows from \cite{LFN18,KPSW19}.
Note that since \cite{KPSW19} give an almost linear time solver for exactly the subproblem in the above lemma provided the resistances are quasipolynomially bounded, we are done. This is because Section D.1 in \cite{LS20} already proves that the resistances are quasipolynomially bounded.
\section{Conclusion}\label{sec:conclusion}
In this paper, we showed how to use steps inspired by potential reduction IPMs to solve max flow in directed graphs in $O(m^{4/3+o(1)}U^{1/3})$ time. We believe our framework of taking the step corresponding to the maximum decrease of the potential function may be useful for other problems, including $\ell_p$ norm minimization. In particular, can one set up a homotopy path for which steps are taken according to a potential function? Presumably, if this can be done, it might also offer hints for how to use ideas corresponding to different homotopy paths induced by other potential functions (rather than the central path we consider) to solve max flow faster. Finally, there is no reason to believe that choosing weight changes by maximizing the potential decrement is the best way to change weights. One may obtain a faster algorithm if one can find another strategy which establishes tighter control on the weight changes. A question along the way to such a strategy might be to understand how the potential decrement optimum changes as we change the weights/resistances. An analogue of this for the change in energy of an electrical flow as resistances change is used in \cite{CKMST11,M16,LS20}. Another open problem that remains is obtaining faster algorithms for max flow on weighted graphs with logarithmic dependence on $U$, as opposed to the polynomial dependence in this paper.
\appendix
\section{Missing Proofs}\label{app:missingProofs}
\begin{proof}[Proof of Lemma \ref{lem:cong}] We follow the strategy used in the proof of Lemma \ref{lem:constcongrootm}. Recall that the problem we are trying to understand is
\begin{align*}
\min\limits_{B^\top \hat{f}=\delta\chi}\Delta\Phi_w(f,\hat{f}) + W\left[\sum\limits_{e\in E}g_e(\hat{f})^p\right]^{1/p}
\end{align*}
where $g_e(\hat{f}) = (\hat{u}_e^+(f))^2\log\left(1-\frac{\hat{f}_e}{\hat{u}^+_e(f)}\right) + \hat{u}_e^+(f)\hat{u}_e^-(f) \log\left(1+\frac{\hat{f}_e}{\hat{u}_e^-(f)}\right)$. As in Lemma \ref{lem:constcongrootm}, we will consider a flow $f'$ which sends $\frac{2\delta}{m}$ units of flow on each of the $m/2$ preconditioned edges. Certainly, the objective value of the above problem at $\hat{f}$ is at most its value at $f'$. For the first term, running the same argument as in Lemma \ref{lem:constcongrootm}, we get that \begin{align*}\Delta\Phi_w(f,f') &\leq \|w\|_1 \left(\frac{42\delta}{F^*-F}\right)^2\\
&\leq 0.000071m^{2\eta}
\end{align*}
For the second term, we use $\log(1-x)\leq -x + x^2$ and $\log(1+x) \leq x+x^2$ to get that $g_e(f')\leq (f'_e)^2\left(1+\frac{\hat{u}_e^+(f)}{\hat{u}_e^-(f)}\right)\leq 2(f'_e)^2$, where we have used that $\hat{u}^+_e(f)\leq \hat{u}_e^-(f)$. Now, since $f'$ has non-zero flow only on the preconditioned edges, we get that
\begin{align*}
W\left[\sum\limits_{e\in E}g_e(f')^p\right]^{1/p}&\leq 2W (\delta/m)^2 m^{o(1)}\\
&\leq 2m^{6\eta+o(1)}\left(\frac{F^*-F}{5000m(m^{1/2-\eta})}\right)^2\\
&\leq 0.0000004m^{8\eta-1+o(1)}U^2
\end{align*}
using $p=\sqrt{\log m}$, the fact that $F^*-F\leq mU$ and the value of $\delta=\frac{F^*-F}{5000m^{1/2-\eta}}$. Also, using the value of $\eta$, we can see that this term is less than $0.0000001m^{2\eta}$.
Hence, combining the two, we get that the objective value at $\hat{f}$ is less than $0.000072m^{2\eta}$. As the objective function is made up of two non-negative quantities, we can obtain two inequalities using this upper bound by dropping one term from the objective value each time. For the second part, we ignore the first term of the objective function and lower bound the second term using the facts that $-\log(1-x) \geq x+x^2/2$ and $-\log(1+x)\geq -x+x^2/2$
\begin{align*}
0.000072m^{2\eta}&\geq W\left[\sum\limits_{e\in E}g_e(\hat{f})^p\right]^{1/p}\geq W|g_e(\hat{f})|\\
&\geq W \hat{f}_e^2 (1+\hat{u}_e^+(f)/\hat{u}_e(f))\\
&\geq W\hat{f}_e^2
\end{align*}
This gives us that $|\hat{f}_e|\leq 0.009 m^{-2\eta}$ by plugging in the value of $W=m^{6\eta}$.
For the first part, assume for the sake of contradiction that $\rho_e> 0.1$; otherwise we are done. Now, dropping the second term, we want to establish that $\frac{1}{\hat{u}_e(f)} \leq 9 m^{2\eta}$, which we will do by a proof similar to that of Lemma 4.3 in \cite{M16}. Using the same argument as in Lemma \ref{lem:constcongrootm}, we get for an edge $e=(u,v)$,
\begin{align*}
0.000072m^{2\eta}&\geq \Delta\Phi_w(f,\hat{f})\\
&\geq\frac{1}{2.2}\sum\limits_{e \in E} \hat{f}_e \left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e} - \frac{w_e^+}{u_e^+-f_e} + \frac{w_e^-}{u_e^-+f_e}\right)\\
&=\frac{1}{2.2}\hat{f}^\top B\hat{y}\\
&=\frac{1}{2.2}\delta \chi^\top\hat{y}\\
&=\frac{F^*-F}{11000m^{1/2-\eta}} \chi^\top \hat{y}\\
&\geq \chi^\top \hat{y}/11000\\
&=(\hat{y}_s-\hat{y}_t)/11000\\
&\geq (\hat{y}_u-\hat{y}_v)/11000\\
&= \frac{1}{11000}\left(\frac{w_e^+}{u_e^+-f_e-\hat{f}_e} - \frac{w_e^-}{u_e^-+f_e+\hat{f}_e} - \frac{w_e^+}{u_e^+-f_e} + \frac{w_e^-}{u_e^-+f_e}\right)\\
&\geq \frac{9\hat{f}_e}{110000}\left(\frac{w_e^+}{(u_e^+-f_e)^2} + \frac{w_e^-}{(u_e^-+f_e)^2} \right)\\
&\geq \frac{9\rho_e}{110000\hat{u}_e(f)}\\
&\geq \frac{0.9}{110000\hat{u}_e(f)}
\end{align*}
where the first and second equalities follow from the optimality and feasibility conditions of the potential decrement problem respectively, and the third inequality follows from the condition that we run the program while the flow left to augment is at least $m^{1/2-\eta}$. This implies that $1/\hat{u}_e(f)\leq 9m^{2\eta}$. Multiplying this with $|\hat{f}_e|\leq 0.009m^{-2\eta}$, we get that $\rho_e \leq 0.1$, which finishes the proof. We also need to argue the inequality $\hat{y}_s-\hat{y}_t\geq \hat{y}_u-\hat{y}_v$. The optimality conditions give $$\hat{y}_u-\hat{y}_v=\hat{f}_e \left(\frac{w_e^+}{(u_e^+-f_e-\hat{f}_e)(u_e^+-f_e)} + \frac{w_e^-}{(u_e^-+f_e)(u_e^-+f_e+\hat{f}_e)}\right)$$
Noticing that the quantity in brackets on the right hand side above is non-negative tells us that the potential drops along the direction of the flow. This, along with the fact that the sum of the potential differences around a directed cycle is zero, tells us that the graph induced by just the flow $\hat{f}$ is a DAG. Since it is a DAG, it can be decomposed into disjoint $s$-$t$ paths along which flow is sent, and every edge belongs to one of these paths. Hence, the potential difference across an edge is at most the potential difference across the whole path, which is the potential difference between $s$ and $t$, and hence we are done.
As before, all these arguments go through for the quadratically smoothed function rather than the original one and yield the same bounds, and since $\rho_e \leq 0.1$ the minimizers of the two are the same, which completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:sandwhich}]
Note that while we are solving the quadratically smoothed version of the problem, we can argue about the non-smoothed version inside the box corresponding to a congestion of at most $0.1$, as the extension is $C^2$ and ensures that any inequalities we need henceforth (up to the second order terms) carry over as well.
There are two terms: one corresponding to the potential decrement, and the other a similar expression raised to the $p^{th}$ power. We tackle the first term first; this is easily done using Taylor's theorem. Consider the function $g(x+y) = -\log(1-(x+y)/u) - (x+y)/u$. Computing the first two derivatives with respect to $y$, we get that
$g'(x+y) = \frac{1}{u-x-y} - 1/u$ and $g''(x+y) = \frac{1}{(u-x-y)^2}$. Now, using Taylor's theorem, we get that
\begin{align*}
g(x+y) &= g(x) + g'(x)y+\frac{1}{2}g''(x+\zeta)y^2\\
&= g(x) + y\left(\frac{1}{u-x-y}-\frac{1}{u}\right) + y^2 \left(\frac{1}{(u-x-\zeta)^2}\right)
\end{align*}
for some $ \zeta$ such that $-u/10\leq x+\zeta \leq u/10$ which easily gives us the bound
\begin{align*}
g(x) + y\left(\frac{1}{u-x-y}-\frac{1}{u}\right) + (9/10)^2 y^2 \left(\frac{1}{(u-x)^2}\right) \leq g(x+y) \leq g(x) + y\left(\frac{1}{u-x-y}-\frac{1}{u}\right) + (11/10)^2 y^2 \left(\frac{1}{(u-x)^2}\right)
\end{align*}
Similarly for $-\log(1+x/u)+x/u$.
Now, for the second term, i.e., the $p^{th}$ order term, we will largely follow the strategy of \cite{KPSW19}. We have the function
$g(x)=u_1^2 \log(1-x/u_1) + u_1u_2 \log(1+x/u_2)$. We first use Lemma \ref{lem:kpsw} with $f_i = g(x)$ and $\delta_i = g(x+y)-g(x)$ to get
\begin{align*}
g(x+y)^p &\leq g(x)^p + p g(x)^{p-1}(g(x+y)-g(x)) + 2^{O(p)}(g(x)^{p-2}(g(x+y)-g(x))^2 + (g(x+y)-g(x))^p)
\end{align*}
Now, adding and subtracting $pg(x)^{p-1}yg'(x)$ from both sides and noticing that $g(x+y)-g(x)-yg'(x) \leq 0$ from concavity of $g$ , we get
\begin{align*}
g(x+y)^p &\leq g(x)^p + pyg(x)^{p-1}g'(x) + 2^{O(p)}(g(x)^{p-2}(g(x+y)-g(x))^2 + (g(x+y)-g(x))^p)
\end{align*}
Now, using standard inequalities for $\log(1-x/u)$ and $\log(1+x/u)$, we get $x^2 \leq g(x) \leq 2 x^2$, and using Taylor's theorem we get that $g(x+y)-g(x) \leq 10(|xy| + |y|^2)$
\begin{align*}
g(x+y)^p &\leq g(x)^p + p yg(x)^{p-1}g'(x) + 2^{O(p)}(x^{2p-4}(x^2 y^2 + y^4) + 2^{p-1}(x^py^p + y^{2p}))\\
&\leq g(x)^p + p yg(x)^{p-1}g'(x) + 2^{O(p)}(x^{2p-2}y^2 + y^{2p})
\end{align*}
where we have used $(x+y)^p \leq 2^{p-1}(x^p + y^p)$ and that $y \leq x$, because that is the neighborhood we are considering. Beyond that neighborhood, we can just do the calculation with the quadratic extension parts (as we only used up to second order information, which is still preserved).
The proof of the lower bound is similar.
\end{proof}
\section*{Acknowledgements}
We would like to thank Jelena Diakonikolas, Yin Tat Lee, Yang Liu, Aaron Sidford and Daniel Spielman for helpful discussions. We also thank Jelani Nelson for several helpful suggestions regarding the presentation of the paper.
\addcontentsline{toc}{section}{References}
\bibliographystyle{amsalpha}
\label{intro}
Translational symmetries, which are ubiquitous in the physical world, play an important role in collective properties of large-scale networks of interacting systems. For example, thermodynamic and mechanical characteristics of crystalline solids (including the heat capacity and speed of sound) are substantially affected by spatial periodicity in the arrangements of atoms in such states of matter and translation invariance of their mutual interaction, which is taken into account by the phonon theory\cite{S_1990}.
Translation invariant interconnections are also used in quantum metamaterials\cite{QSMGH_2011,RZSN_2008,Za_2011,Zh_2011}, where coupled identical quantum systems form one, two or three-dimensional periodic arrays\cite{Za_2012}. The resulting quantum composite system is effectively homogeneous (in the sense of translational symmetries) on the scale of relevant wavelengths. These artificial materials aim to unveil and exploit qualitatively new properties of light-matter interaction, such as in artificial crystals of atoms trapped at nodes of an optical lattice which can be controlled by external fields and used for entanglement generation\cite{CBFFFRCIP_2013} or as a quantum memory\cite{HCHJ13,NDMRLLWJ_2010,YJ14}. Similar architectures (in the form of one-dimensional chains) are present in cascaded quantum systems for generating pure Gaussian states\cite{KY12:pra,MWPY_2014,Y_2012}.
The present paper is concerned with networks of identical linear quantum stochastic systems\cite{NY_2017,P_2017}, or open quantum harmonic oscillators (OQHOs), which interact with each other and external bosonic fields in a translation invariant fashion. The systems are associated with sites of a multidimensional lattice and are governed by coupled linear quantum stochastic differential equations (QSDEs) driven by quantum Wiener processes in the sense of the Hudson-Parthasarathy calculus\cite{HP_1984,P_1992,P_2015}. In accordance with the translation invariance of the quantum network (with respect to the additive group structure of the lattice), the coefficients of these QSDEs are organised as block Toeplitz matrices and are specified by the energy and coupling parameters which quantify the Hamiltonian and coupling operators for the component systems. This parameterization secures the fulfillment of physical realizability (PR) conditions, which extend those for OQHOs with a finite number of degrees of freedom\cite{JNP_2008,SP_2012} and are similar to the network counterpart from Ref. \refcite{VP_2014} using the spatial Fourier transforms (SFTs).
We employ the homomorphism between the algebra of block Toeplitz matrices, the corresponding convolution algebra of matrix-valued maps on the lattice and the algebra of SFTs with the pointwise multiplication over an appropriately dimensioned torus. This machinery represents system theoretic operations (such as concatenation and feedback interconnection\cite{GJ_2009,JG_2010}) over translation invariant quantum networks on a common carrier lattice in terms of algebraic operations over their spatio-temporal transfer functions and energy parameters. Network interconnections arise in quantum control settings, where performance specifications include stability and minimization of cost functionals\cite{ZJ_2012}.
Under a stability condition in the spatial frequency domain, the network has an invariant multipoint Gaussian quantum state\cite{P_2010,VPJ_2018a} in the case of statistically independent vacuum input fields. We consider a quadratic function of the network variables of interest with a block Toeplitz weighting matrix for a finite fragment of the lattice and over a bounded time interval. The tail probabilities for this self-adjoint quantum variable admit upper bounds involving its exponential moments, which, similarly to Ref. \refcite{VPJ_2018a}, lead to a quadratic exponential functional (QEF) as a risk-sensitive performance criterion for finite fragments of the network over finite time horizons.
The QEF is a quantum mechanical counterpart of the cost functionals used in classical risk-sensitive control\cite{BV_1985,J_1973,W_1981} which has links with minimax linear-quadratic-Gaussian control\cite{DJP_2000,P_2006,PJD_2000} addressing the issue of system robustness against statistical uncertainties with a relative entropy description. The latter has its analogue in terms of quantum relative entropy\cite{OW_2010} leading to similar robustness properties\cite{VPJ_2018b} in the context of risk-sensitive quantum feedback control and filtering problems\cite{B_1996,J_2004,J_2005,YB_2009}, some of which employ a different yet related\cite{VPJ_2019a} class of time-ordered exponentials.
Assuming the invariant state of the network, we study the spatio-temporal asymptotic rate of the QEF per unit time and per lattice site in the thermodynamic limit\cite{R_1978} of unboundedly growing time horizons and fragments of the network. The resulting spatio-temporal frequency domain formula for the QEF rate is organised as an integral of the log-determinant of a matrix-valued function over the product of the multidimensional torus with the frequency axis. The integrand involves two spectral functions, which are associated with the real and imaginary parts of the invariant quantum covariance kernel of the network variables and form their quantum spectral density. One of these matrix-valued spectral functions, originating from the two-point commutator kernel, enters the frequency-domain representation of the QEF rate in composition with trigonometric functions\cite{H_2008}. Combined with the multivariate nature of the integral, this makes the evaluation of the QEF rate inaccessible to the standard application of the residue theorem. We obtain a differential equation and an asymptotic expansion for the QEF rate as a function of the risk sensitivity parameter, which can be used for its numerical computation, similar to the homotopy methods for solving parameter dependent algebraic equations\cite{MB_1985}.
Continuing the development of methods for computing the QEFs, this paper employs a number of results from a series of recent publications on Lie-algebraic techniques\cite{VPJ_2019a}, parametric randomization\cite{VPJ_2018c} and quantum Karhunen-Loeve expansions\cite{VPJ_2019b,VJP_2019} developed for this purpose. These results have led to an integral operator representation of the QEF\cite{VPJ_2019c} and a frequency-domain formula\cite{VPJ_2020b} for their infinite time horizon rates for OQHOs with finitely many degrees of freedom in Gaussian quantum states, which has been extended to more general Gaussian quantum processes in Ref. \refcite{VPJ_2021}. In addition to their relevance to quantum risk-sensitive control, these approaches have deep connections with operator exponential structures studied in mathematical physics and quantum probability (for example, in the context of operator algebras\cite{AB_2018}, moment-generating functions for quadratic Hamiltonians\cite{PS_2015} and the quantum L\'{e}vy area\cite{CH_2013,H_2018}).
The paper is organised as follows.
Section~\ref{sec:net} specifies the class of translation invariant quantum networks being considered.
Section~\ref{sec:PR} represents PR conditions for the network in the spatial frequency domain.
Section~\ref{sec:par} provides a parameterization of the network QSDEs in terms of the energy and coupling matrices and outlines their computation for interconnections of networks.
Section~\ref{sec:inv} considers the invariant Gaussian state of the network, satisfying a stability condition and driven by vacuum fields.
Section~\ref{sec:QEF} specifies QEFs for finite fragments of the network over bounded time intervals and clarifies their role for large deviations estimates for network trajectories.
Sections~\ref{sec:temp} and \ref{sec:spat} establish the temporal and spatio-temporal QEF growth rates.
Section~\ref{sec:homo} discusses the computation of the QEF rate using homotopy and asymptotic expansion techniques.
Section~\ref{sec:conc} makes concluding remarks.
Appendices~\ref{sec:Toeplitz}--\ref{sec:aver} provide subsidiary material (on block Toeplitz matrices, and averaging for trace-analytic functionals of such matrices and integral operators) and some of the particularly long proofs.
\section{Translation Invariant Quantum Network}
\label{sec:net}
We consider a network of identical linear quantum stochastic systems at sites of a $\nu$-dimensional integer lattice ${\mathbb Z}^\nu$. For any $j\in {\mathbb Z}^\nu$, the $j$th component system is a multi-mode open quantum harmonic oscillator (OQHO) with an even number $n$ of internal dynamic variables which are time-varying self-adjoint operators on (a dense domain of) a Hilbert space $\mathfrak{H}$. These system variables are assembled into a vector\footnote{vectors are organised as columns unless specified otherwise} $X_j(t)$ (the time argument $t\> 0$ will often be omitted for brevity) and act initially (at $t=0$) on a copy $\mathfrak{H}_j$ of a common complex separable Hilbert space. It is assumed that they satisfy the canonical commutation relations (CCRs)
\begin{equation}
\label{Xcomm}
[X_j(t), X_k(t)^\rT] = 2i\delta_{jk}\Theta,
\qquad
j,k\in {\mathbb Z}^\nu,
\quad
t\>0,
\end{equation}
where the transpose $(\cdot)^\rT$
applies to matrices and vectors of operators as if the latter were scalars,
$i:=\sqrt{-1}$ is the imaginary unit, $\delta_{jk}$ is the Kronecker delta,
and $\Theta$ is a nonsingular real antisymmetric matrix of order $n$. Here,
$[\alpha, \beta^\rT] := ([\alpha_a, \beta_b])_{1\< a\< r, 1\< b\< s} = \alpha\beta^\rT - (\beta\alpha^\rT)^\rT $ is the matrix of commutators $[\alpha_a, \beta_b] = \alpha_a \beta_b-\beta_b\alpha_a$ for vectors $\alpha:= (\alpha_a)_{1\< a\< r}$, $\beta:= (\beta_b)_{1\< b\< s}$ formed from linear operators.
In particular, if the internal variables of the component system are the quantum mechanical positions and momenta\cite{S_1994} $q_1, \ldots, q_{n/2}$ and $p_1:= -i\partial_{q_1}, \ldots, p_{n/2}:= -i\partial_{q_{n/2}}$ on the Schwartz space\cite{V_2002}, then the CCR matrix takes the form $\Theta = \frac{1}{2}\mathbf{J}\otimes I_{n/2}$, where $\otimes$ is the Kronecker product, the matrix
\begin{equation}
\label{bJ}
\mathbf{J} :=
{\begin{bmatrix}
0 & 1\\
-1 & 0
\end{bmatrix} }
\end{equation}
spans the subspace of antisymmetric matrices of order $2$, and $I_r$ is the identity matrix of order $r$. However, this special structure of $\Theta$ is not assumed in the general case considered in what follows.
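For instance, for a single mode ($n=2$) with $X_j = (q_1,p_1)^\rT$ and $\Theta = \frac{1}{2}\mathbf{J}$, the relation (\ref{Xcomm}) at a given site reads
\begin{equation*}
[X_j(t), X_j(t)^\rT] = i\mathbf{J},
\end{equation*}
that is, $[q_1,p_1]=i$ (in units where $\hbar = 1$), while the variables at different sites commute.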
In addition to the internal variables, the $j$th OQHO has multichannel input and output bosonic fields $W_j$, $Y_j$ which consist of $m$ and $r$ time-varying self-adjoint quantum variables, respectively (the dimensions $m$, $r$ are even and satisfy $r\< m$). The input field $W_j$ is a quantum Wiener process on a symmetric Fock space $\mathfrak{F}_j$. The network-field space has the tensor-product structure $\mathfrak{H} := \otimes_{j\in {\mathbb Z}^\nu}(\mathfrak{H}_j \otimes \mathfrak{F}_j)$, with the composite Fock space $\mathfrak{F}:= \otimes_{j\in {\mathbb Z}^\nu} \mathfrak{F}_j$ accommodating the input fields. These fields satisfy the two-point CCRs
\begin{equation}
\label{Wcomm}
[W_j(s), W_k(t)^\rT]
=
2i\delta_{jk}
\min(s,t)
J_m,
\qquad
j,k\in {\mathbb Z}^\nu,
\
s,t\>0,
\end{equation}
where
\begin{equation}
\label{Jm}
J_m := \mathbf{J} \otimes I_{m/2}
=
\begin{bmatrix}
0 & I_{m/2}\\
- I_{m/2} & 0
\end{bmatrix}
\end{equation}
is an orthogonal real antisymmetric matrix of order $m$ defined in terms of (\ref{bJ}), so that $J_m^2 = -I_m$. The right-hand side of (\ref{Wcomm}) vanishes at $s=0$ or $t=0$ since the initial input field operators act as the identity operator on $\mathfrak{F}$, which commutes with any operator. Due to the continuous tensor-product structure\cite{PS_1972} of the Fock space filtration, the relation (\ref{Wcomm}) is equivalent to its fulfillment for all $s=t\>0$, whose incremental form is given by
\begin{align}
\nonumber
\rd [W_j, W_k^\rT]
& =
[\rd W_j, W_k^\rT]
+
[W_j, \rd W_k^\rT]
+
[\rd W_j, \rd W_k^\rT]\\
\label{dWcomm}
& =
[\rd W_j, \rd W_k^\rT]
=
2i\delta_{jk}
J_m
\rd t.
\end{align}
Here, use is also made of the quantum Ito lemma\cite{HP_1984} and the fact that the future-pointing Ito increments of the input quantum Wiener processes commute with adapted processes (in the sense of the filtration of the network-field space $\mathfrak{H}$).
In particular,
\begin{equation}
\label{WXYdWcomm}
[W_j(s), \rd W_k(t)^{\rT}] = 0,
\qquad
[X_j(s), \rd W_k(t)^{\rT}] = 0,
\qquad
[Y_j(s), \rd W_k(t)^{\rT}] = 0
\end{equation}
for all $j,k\in {\mathbb Z}^\nu$, $t\> s\> 0$.
We model the Heisenberg evolution of the network by a denumerable
set of linear quantum stochastic differential equations (QSDEs)
\begin{align}
\label{dXj}
\rd X_j
& =
\sum_{k \in {\mathbb Z}^\nu} (A_{j-k}X_k \rd t + B_{j-k} \rd W_k), \\
\label{dYj}
\rd Y_j
& =
\sum_{k \in {\mathbb Z}^\nu} (C_{j-k}X_k \rd t + D_{j-k} \rd W_k),
\qquad
j \in {\mathbb Z}^\nu,
\end{align}
which are coupled to each other and driven by the input fields in a translation invariant fashion. Their coefficients
are specified by the matrices
\begin{equation}
\label{ABCD}
A_\ell \in \mR^{n\times n},
\qquad
B_\ell\in \mR^{n\times m},
\qquad
C_\ell\in \mR^{r\times n},
\qquad
D_\ell \in \mR^{r\times m},
\end{equation}
which depend on the relative location $\ell \in {\mathbb Z}^\nu$ of the lattice sites. For what follows, these matrices are assumed to be absolutely summable over $\ell\in{\mathbb Z}^\nu$, which is equivalent to
\begin{equation}
\label{ABCDsum}
\sum_{\ell \in {\mathbb Z}^\nu}
\left\|
{\begin{bmatrix}
A_\ell & B_\ell\\
C_\ell & D_\ell
\end{bmatrix}}
\right\|
< +\infty,
\end{equation}
where $\|\cdot \|$ is the operator norm (the largest singular value) of a matrix. The particular choice of a matrix norm does not affect the validity of (\ref{ABCDsum}).
The set of QSDEs (\ref{dXj}), (\ref{dYj}) can be represented formally in terms of the augmented vectors $X:= (X_k)_{k \in {\mathbb Z}^\nu}$, $W:= (W_k)_{k \in {\mathbb Z}^\nu}$, $Y:= (Y_k)_{k \in {\mathbb Z}^\nu}$ of the internal variables and external fields of the network as
\begin{align}
\label{dX}
\rd X
& =
AX\rd t + B \rd W, \\
\label{dY}
\rd Y
& =
CX \rd t + D\rd W,
\end{align}
where $A:= (A_{j-k})_{j,k\in {\mathbb Z}^\nu} \in \mathfrak{T}_{n,n}$, $B:= (B_{j-k})_{j,k\in {\mathbb Z}^\nu} \in \mathfrak{T}_{n,m}$, $C:= (C_{j-k})_{j,k\in {\mathbb Z}^\nu}\in \mathfrak{T}_{r,n}$, $D:= (D_{j-k})_{j,k\in {\mathbb Z}^\nu}\in \mathfrak{T}_{r,m}$ are appropriately dimensioned real block Toeplitz matrices with finite norms $\|A\|_1$, $\|B\|_1$, $\|C\|_1$, $\|D\|_1$ in view of (\ref{ABCDsum}), (\ref{f1}); see \ref{sec:Toeplitz}.
The absolute summability condition secures well-posedness of
the spatial Fourier transforms (SFTs)
\begin{equation}
\label{cABCD}
{\begin{bmatrix}
\mathcal{ A}(\sigma) & \mathcal{ B}(\sigma)\\
\mathcal{ C}(\sigma) & \mathcal{D}(\sigma)
\end{bmatrix}}
:=
\sum_{\ell\in {\mathbb Z}^\nu}
\re^{-i\ell^\rT \sigma}
{\begin{bmatrix}
A_\ell & B_\ell\\
C_\ell & D_\ell
\end{bmatrix}},
\qquad
\sigma \in {\mathbb T}^\nu,
\end{equation}
so that
$\mathcal{ A}$, $\mathcal{ B}$, $\mathcal{ C}$, $\mathcal{D}$ are appropriately dimensioned complex matrix-valued functions, continuous and $2\pi$-periodic over their $\nu$ variables.
The matrices in (\ref{ABCD}) are recovered from (\ref{cABCD}) through the inverse SFT as
\begin{equation*}
\label{inv}
{\begin{bmatrix}
A_\ell & B_\ell\\
C_\ell & D_\ell
\end{bmatrix}}
=
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\re^{i\ell^\rT \sigma}
{\begin{bmatrix}
\mathcal{ A}(\sigma) & \mathcal{ B}(\sigma)\\
\mathcal{ C}(\sigma) & \mathcal{D}(\sigma)
\end{bmatrix}}
\rd \sigma.
\end{equation*}
Since the matrices (\ref{ABCD}) are real,
their SFTs $\mathcal{ A}$, $\mathcal{ B}$, $\mathcal{ C}$, $\mathcal{D}$ are Hermitian in the sense that $\overline{\mathcal{ A}(\sigma)} = \mathcal{ A}(-\sigma)$ for all $\sigma \in {\mathbb T}^\nu$ (and similarly for $\mathcal{ B}$, $\mathcal{ C}$, $\mathcal{D}$), and hence,
\begin{align}
\label{cAB*}
\mathcal{ A}(\sigma)^*
& = \mathcal{ A}(-\sigma)^\rT,
\qquad\!\!\!
\mathcal{ B}(\sigma)^* = \mathcal{ B}(-\sigma)^\rT, \\
\label{cCD*}
\mathcal{ C}(\sigma)^*
& = \mathcal{ C}(-\sigma)^\rT,
\qquad
\mathcal{D}(\sigma)^* = \mathcal{D}(-\sigma)^\rT
\end{align}
for all $\sigma\in{\mathbb T}^\nu$. The right-hand sides of (\ref{cAB*}), (\ref{cCD*}) are the SFTs of the matrices $A_{-\ell}^\rT$, $B_{-\ell}^\rT$, $C_{-\ell}^\rT$, $D_{-\ell}^\rT$ which constitute $A^\rT$, $B^\rT$, $C^\rT$, $D^\rT$, respectively. Dynamic properties of the translation invariant network can be represented in the spatial frequency domain using the SFTs $\mathcal{ A}$, $\mathcal{ B}$, $\mathcal{ C}$, $\mathcal{D}$. Such properties include the preservation of commutation relations.
\section{Physical Realizability Conditions in the Spatial Frequency Domain}
\label{sec:PR}
Similarly to OQHOs with a finite number of external field channels and internal dynamic variables, the matrices (\ref{ABCD}) of the network QSDEs (\ref{dXj}), (\ref{dYj}) satisfy physical realizability (PR) conditions which reflect the preservation of the CCRs (\ref{Xcomm}) together with
\begin{equation}
\label{XYcomm}
[X_j(t), Y_k(s)^{\rT}] = 0,
\qquad
j,k\in {\mathbb Z}^\nu,\
t\> s\> 0.
\end{equation}
The fulfillment of (\ref{XYcomm}) at $s=t=0$ is secured by the commutativity of operators, acting on different initial and Fock spaces $\mathfrak{H}_j$, $\mathfrak{F}_k$ and appropriately extended to $\mathfrak{H}_j\otimes \mathfrak{F}_k$. An additional PR condition\footnote{which is important in the context of concatenating quantum networks as input-output maps, considered in Section~\ref{sec:par}} comes from the requirement that the commutation structure of the output fields of the network is similar to that of the input fields in (\ref{Wcomm}), (\ref{dWcomm}):
\begin{equation}
\label{Ycomm}
[Y_j(s), Y_k(t)^\rT]
=
2i\delta_{jk}
\min(s,t)
J_r,
\qquad
j,k\in {\mathbb Z}^\nu,
\
s,t\>0,
\end{equation}
where $J_r$ is defined according to (\ref{Jm}). The following theorem represents the PR conditions in the spatial frequency domain as a network counterpart of the previous results for OQHOs with a finite number of variables\cite{JNP_2008,SP_2012} and extends Ref. \refcite{VP_2014}.
\begin{theorem}
\label{th:PR}
The network QSDEs (\ref{dXj}), (\ref{dYj}) preserve the CCRs (\ref{Xcomm}), (\ref{XYcomm}), (\ref{Ycomm}) if and only if the SFTs (\ref{cABCD}) satisfy
\begin{align}
\label{PR1}
\mathcal{ A}(\sigma) \Theta + \Theta \mathcal{ A}(\sigma)^* + \mathcal{ B}(\sigma) J_m \mathcal{ B}(\sigma)^*
& = 0,\\
\label{PR2}
\Theta \mathcal{ C}(\sigma)^* + \mathcal{ B}(\sigma) J_m \mathcal{D}(\sigma)^*
& = 0,\\
\label{PR3}
\mathcal{D}(\sigma) J_m \mathcal{D}(\sigma)^*
& = J_r
\end{align}
for all $\sigma \in {\mathbb T}^\nu$. \hfill$\square$
\end{theorem}
As can be seen from the proof of this theorem in \ref{sec:PRproof}, the PR conditions (\ref{PR1})--(\ref{PR3}) are obtained by applying the homomorphism between the algebra of block Toeplitz matrices, the corresponding convolution algebra of matrix-valued maps on the lattice ${\mathbb Z}^\nu$ and the algebra of SFTs with the pointwise multiplication over the torus ${\mathbb T}^\nu$ to the PR conditions
\begin{align}
\label{bPR1}
A \bit{\Theta} + \bit{\Theta} A^\rT + B \bit{J}_m B^\rT
& = 0,\\
\label{bPR2}
\bit{\Theta} C^\rT + B\bit{J}_m D^\rT
& = 0,\\
\label{bPR3}
D\bit{J}_m D^\rT
& = \bit{J}_r
\end{align}
for the QSDEs (\ref{dX}), (\ref{dY}). Here, the block diagonal matrices
\begin{equation}
\label{TJJ}
\bit{\Theta}
:=
(\delta_{jk}\Theta)_{j,k\in {\mathbb Z}^\nu},
\qquad
\bit{J}_m:= (\delta_{jk}J_m)_{j,k\in {\mathbb Z}^\nu},
\qquad
\bit{J}_r:= (\delta_{jk}J_r)_{j,k\in {\mathbb Z}^\nu}
\end{equation}
specify the CCRs for the internal network variables and the external fields, respectively:
\begin{equation*}
\label{XXcomm}
[X,X^\rT] = 2i\bit{\Theta},
\qquad
[\rd W,\rd W^\rT] = 2i\bit{J}_m\rd t,
\qquad
[\rd Y,\rd Y^\rT] = 2i\bit{J}_r\rd t.
\end{equation*}
Indeed, the matrices $\bit{\Theta}$, $\bit{J}_m$, $\bit{J}_r$ in (\ref{TJJ}) have constant SFTs $\Theta$, $J_m$, $J_r$, respectively, which together with (\ref{cAB*}), (\ref{cCD*}), makes (\ref{PR1})--(\ref{PR3}) equivalent to the corresponding conditions in (\ref{bPR1})--(\ref{bPR3}).
Also note that the PR conditions (\ref{PR1})--(\ref{PR3}) in the spatial frequency domain can be represented in the form
\begin{equation}
\label{PRmat}
\begin{bmatrix}
\mathcal{ A}(\sigma) & \mathcal{ B}(\sigma) & I_n & 0\\
\mathcal{ C}(\sigma) & \mathcal{D}(\sigma) & 0 & I_r
\end{bmatrix}
\begin{bmatrix}
0 & 0& \Theta & 0\\
0 & J_m & 0 & 0 \\
\Theta & 0 & 0 & 0\\
0& 0 & 0 & -J_r
\end{bmatrix}
\begin{bmatrix}
\mathcal{ A}(\sigma)^* & \mathcal{ C}(\sigma)^*\\
\mathcal{ B}(\sigma)^* & \mathcal{D}(\sigma)^*\\
I_n & 0\\
0 & I_r
\end{bmatrix}
=
0,
\qquad
\sigma \in {\mathbb T}^\nu.
\end{equation}
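For a network with finitely many nonzero coefficient blocks (\ref{ABCD}), the conditions (\ref{PR1})--(\ref{PR3}) can be checked numerically by evaluating the SFTs (\ref{cABCD}) on a grid of spatial frequencies. The following sketch is a numerical illustration rather than part of the development: it assumes NumPy, the univariate case $\nu = 1$, and hypothetical dictionaries \texttt{A}, \texttt{B}, \texttt{C}, \texttt{D} mapping an integer lattice shift $\ell$ to the corresponding block.
\begin{verbatim}
import numpy as np

def sft(blocks, sigma):
    # Spatial Fourier transform (cABCD): sum over shifts of exp(-i*l*sigma)*block
    return sum(np.exp(-1j * l * sigma) * M for l, M in blocks.items())

def J(m):
    # The canonical antisymmetric matrix J_m of order m from (Jm)
    return np.kron(np.array([[0., 1.], [-1., 0.]]), np.eye(m // 2))

def pr_residuals(A, B, C, D, Theta, m, r, sigma):
    # Residual norms of the PR conditions (PR1)-(PR3) at a spatial frequency sigma
    cA, cB, cC, cD = (sft(X, sigma) for X in (A, B, C, D))
    Jm, Jr = J(m), J(r)
    r1 = cA @ Theta + Theta @ cA.conj().T + cB @ Jm @ cB.conj().T
    r2 = Theta @ cC.conj().T + cB @ Jm @ cD.conj().T
    r3 = cD @ Jm @ cD.conj().T - Jr
    return [np.linalg.norm(x) for x in (r1, r2, r3)]
\end{verbatim}
Evaluating these residuals over a sufficiently fine grid of $\sigma \in [0,2\pi)$ provides a practical test of physical realizability for candidate coefficient blocks.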
Similarly to OQHOs with finitely many dynamic variables\cite{SP_2012}, the PR conditions (\ref{PR1})--(\ref{PR3}) (or (\ref{PRmat})) imply a $(J,J)$-unitarity property\cite{K_1997} for the spatio-temporal transfer function of the network from $\rd W$ in (\ref{dX}) to $\rd Y$ in (\ref{dY}) defined as
\begin{equation}
\label{trans}
F(\sigma,s)
= \mathcal{ C}(\sigma)(sI_n- \mathcal{ A}(\sigma))^{-1} \mathcal{ B}(\sigma) + \mathcal{D}(\sigma),
\qquad
\sigma \in {\mathbb T}^\nu,\
s \in \mC\setminus \mathfrak{S}(\sigma),
\end{equation}
by analogy with the finite-dimensional case, where $\mathfrak{S}(\sigma)$ denotes the spectrum of the matrix $\mathcal{ A}(\sigma)$. The corresponding conjugate of the transfer function is given by
\begin{align}
\nonumber
F^\diam(\sigma,s)
& :=
F(\sigma,-\overline{s})^*\\
\nonumber
& =
-\mathcal{ B}(\sigma)^*(sI_n+ \mathcal{ A}(\sigma)^*)^{-1} \mathcal{ C}(\sigma)^* + \mathcal{D}(\sigma)^*\\
\label{Fdiam}
& =
F(-\sigma,-s)^\rT
\end{align}
for any $s \in \mC\setminus (-\mathfrak{S}(-\sigma)) $ in view of the relations (\ref{cAB*}), (\ref{cCD*}) and the invariance of the spectrum of a square matrix under the transpose.
\begin{theorem}
\label{th:JJ}
Under the PR conditions (\ref{PR1})--(\ref{PR3}) on the network QSDEs (\ref{dXj}), (\ref{dYj}), the transfer function (\ref{trans}) satisfies
\begin{equation}
\label{JJ}
F(\sigma,s)J_m F^\diam(\sigma,s) = J_r,
\qquad
\sigma \in {\mathbb T}^\nu,\
s \in \mC\setminus (\mathfrak{S}(\sigma) \bigcup (-\mathfrak{S}(-\sigma))).
\end{equation}
\hfill$\square$
\end{theorem}
\begin{proof}
We will use an auxiliary spatio-temporal transfer function $\mathcal{T}$ (from $B \rd W$ in (\ref{dX}) to the drift $CX$ of $\rd Y$ in (\ref{dY})) and its conjugate $\mathcal{T}^\diam$ given by
\begin{equation}
\label{cZ}
\mathcal{T}(\sigma,s)
:=
\mathcal{ C}(\sigma)(sI_n - \mathcal{ A}(\sigma))^{-1},
\qquad
\mathcal{T}^\diam(\sigma,s)
:=
-(sI_n + \mathcal{ A}(\sigma)^*)^{-1} \mathcal{ C}(\sigma)^*.
\end{equation}
A combination of (\ref{trans}), (\ref{cZ}) leads to the identity
\begin{align}
\label{sZ_ZF}
\begin{bmatrix}
\mathcal{T}(\sigma,s) & I_r
\end{bmatrix}
\begin{bmatrix}
\mathcal{ A}(\sigma) & \mathcal{ B}(\sigma)\\
\mathcal{ C}(\sigma) & \mathcal{D}(\sigma)
\end{bmatrix}
& =
\begin{bmatrix}
s\mathcal{T}(\sigma,s) & F(\sigma,s)
\end{bmatrix}
\end{align}
and its conjugate counterpart
\begin{align}
\label{sZ1_ZF1}
\begin{bmatrix}
\mathcal{ A}(\sigma)^* & \mathcal{ C}(\sigma)^*\\
\mathcal{ B}(\sigma)^* & \mathcal{D}(\sigma)^*
\end{bmatrix}
\begin{bmatrix}
\mathcal{T}^\diam(\sigma,s) \\
I_r
\end{bmatrix}
& =
\begin{bmatrix}
-s\mathcal{T}^\diam(\sigma,s)\\
F^\diam(\sigma,s)
\end{bmatrix}.
\end{align}
Since the fulfillment of (\ref{PR1})--(\ref{PR3}) is equivalent to (\ref{PRmat}), then by left and right multiplying both sides of (\ref{PRmat})
by
$
{\begin{bmatrix}
\mathcal{T}(\sigma,s) & I_r
\end{bmatrix}}
$,
$
{\begin{bmatrix}
\mathcal{T}^\diam(\sigma,s)\\
I_r
\end{bmatrix}}
$
and using (\ref{sZ_ZF}), (\ref{sZ1_ZF1}), it follows that
\begin{align}
\nonumber
0 & =
\begin{bmatrix}
s\mathcal{T}(\sigma,s) & F(\sigma,s) & \mathcal{T}(\sigma,s) & I_r
\end{bmatrix}
\begin{bmatrix}
0 & 0& \Theta & 0\\
0 & J_m & 0 & 0 \\
\Theta & 0 & 0 & 0\\
0& 0 & 0 & -J_r
\end{bmatrix}
\begin{bmatrix}
-s\mathcal{T}^\diam(\sigma,s) \\
F^\diam(\sigma,s) \\
\mathcal{T}^\diam(\sigma,s) \\
I_r
\end{bmatrix}\\
\nonumber
& =
F(\sigma,s)J_m F^\diam(\sigma,s) - J_r
\end{align}
for $(\sigma,s)$ belonging to the intersection of domains of the functions $F$, $F^\diam$ in (\ref{trans}), (\ref{Fdiam}),
which establishes (\ref{JJ}).
\end{proof}
The validity of (\ref{JJ}), as a corollary of the PR conditions, does not employ a particular form of the CCR matrix $\Theta$ of the internal variables and is a property of the network as an input-output operator. Also note that (\ref{PRmat}), (\ref{JJ}) are organised as indefinite quadratic constraints on the quadruple $(\mathcal{ A}, \mathcal{ B}, \mathcal{ C}, \mathcal{D})$ of the SFTs and the transfer function $F$.
\section{Energy and Coupling Matrices, and Network Interconnections}
\label{sec:par}
The fulfillment of the PR conditions (\ref{PR1}), (\ref{PR2}) is secured by the parameterisation of the coefficients (\ref{ABCD}) of the QSDEs (\ref{dXj}), (\ref{dYj}) in terms of energy and coupling matrices $R:=(R_{j-k})_{j,k\in {\mathbb Z}^\nu} = R^\rT \in \mathfrak{T}_{n,n}$ and $M:=(M_{j-k})_{j,k\in {\mathbb Z}^\nu} \in \mathfrak{T}_{m,n}$ specifying the network Hamiltonian and the operators of coupling of the component systems to the input fields. More precisely, in accordance with (\ref{PR1}), (\ref{PR2}),
\begin{align}
\label{cA}
\mathcal{ A}(\sigma)
& = 2\Theta (\mathcal{ R}(\sigma) + \mathcal{M}(\sigma)^* J_m \mathcal{M}(\sigma)),\\
\label{cB}
\mathcal{ B}(\sigma)
& = 2\Theta \mathcal{M}(\sigma)^* ,\\
\label{cC}
\mathcal{ C}(\sigma)
& = 2\mathcal{D}(\sigma)J_m \mathcal{M}(\sigma) ,
\qquad
\sigma \in {\mathbb T}^\nu,
\end{align}
where $\mathcal{ R}$, $\mathcal{M}$ are the SFTs associated with $R$, $M$, respectively. The blocks $R_\ell = R_{-\ell}^\rT \in \mR^{n\times n}$ of the energy matrix $R$
parameterise the Hamiltonian
\begin{align}
\nonumber
H_G
& :=
\frac{1}{2}
X_G^\rT
R_G
X_G\\
\nonumber
& = \frac{1}{2}
\sum_{j,k\in G}
X_j^\rT R_{j-k} X_k\\
\label{HG}
& =
\frac{1}{2}
\sum_{j\in G}
X_j^\rT R_0 X_j
+
\frac{1}{2}
\sum_{j,k \in G, j\ne k}
X_j^\rT R_{j-k} X_k
\end{align}
for the fragment of the network on a nonempty finite subset $G \subset {\mathbb Z}^\nu$ of the lattice consisting of $\#G <+\infty$ sites, where the relevant network variables are assembled into the vector
\begin{equation}
\label{XG}
X_G : = (X_k)_{k \in G},
\end{equation}
and use is made of the matrix $R_G:= (R_{j-k})_{j,k\in G} = R_G^\rT \in \mR^{G\times G}$.
In the Hamiltonian (\ref{HG}), the matrix $R_0$ specifies the self-energy of the component systems, while $R_{j-k}$ parameterises the direct (energy) coupling of the $j$th and $k$th systems, with $j\ne k$. For any $j,k \in {\mathbb Z}^\nu$, the matrix $M_{j-k}$ specifies the vector $M_{j-k}X_k$ of operators of coupling of the $k$th component system to the input field $W_j$. Therefore, (\ref{cA})--(\ref{cC}) are equivalent to
\begin{align}
\label{Aell}
A_\ell
& =
2\Theta
\Big(
R_\ell+
\sum_{c\in {\mathbb Z}^\nu}
M_c^\rT
J_m
M_{\ell+c}
\Big),\\
\label{Bell}
B_\ell
& = 2\Theta M_{-\ell}^\rT,\\
\label{Cell}
C_\ell
& =
2
\sum_{c\in {\mathbb Z}^\nu}
D_{\ell-c}J_m M_c,
\qquad
\ell \in {\mathbb Z}^\nu.
\end{align}
In the case of \emph{finite range interaction} (between the component systems in the network and with the external fields), the matrices $R_\ell$, $M_\ell$, $D_\ell$ vanish for all sufficiently large $\ell \in {\mathbb Z}^\nu$, and hence, so also do the matrices $A_\ell$, $B_\ell$, $C_\ell$ in (\ref{Aell})--(\ref{Cell}). In particular, a network with nearest neighbour coupling between the subsystems, with each of them being affected by the local input field, is illustrated in Fig.~\ref{fig:net}.
\begin{figure}[h!]
\centering
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(100.00,31.00)
\multiput(15,2)(10,10){3}{
\multiput(0,0)(20,0){3}
{
\put(0, 0){\line(1,0){10}}
\put(10, 0){\line(1,1){5}}
\put(5, 5){\line(1,0){10}}
\put(0, 0){\line(1,1){5}}
\put(7.5, 10){\vector(0,-1){7.5}}
\put(7.5, 0){\vector(0,-1){5}}
\put(-8.5, 1.5){\vector(1,0){10}}
\put(3.5, 3.5){\vector(-1,0){10}}
\put(11.5, 5){\vector(1,1){5}}
\put(13.5, 10){\vector(-1,-1){5}}
}
\put(51.5, 1.5){\vector(1,0){10}}
\put(63.5, 3.5){\vector(-1,0){10}}
}
\multiput(15,2)(20,0){3}{
\put(2, -5){\vector(1,1){5}}
\put(4, 0){\vector(-1,-1){5}}}
\end{picture}\vskip3mm
\caption{An illustration of a $(3\times 3)$-fragment of a two-dimensional ($\nu = 2$) quantum network, where each component system is coupled to its nearest neighbours and a local input field (the external quantum fields are represented by vertical arrows).}
\label{fig:net}
\end{figure}
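As a complement to the parameterisation (\ref{Aell})--(\ref{Cell}), the following sketch assembles the QSDE coefficients from the energy, coupling and feedthrough blocks by direct convolution over the lattice shifts. It is a minimal illustration for the finite range univariate case $\nu=1$, assuming NumPy and hypothetical dictionaries \texttt{R}, \texttt{M}, \texttt{D} which map an integer shift $\ell$ to the blocks $R_\ell$, $M_\ell$, $D_\ell$, together with the CCR matrix $\Theta$.
\begin{verbatim}
import numpy as np

def qsde_coefficients(R, M, D, Theta, m):
    # Assemble A_l, B_l, C_l from (Aell)-(Cell) for a finite-range network, nu = 1
    Jm = np.kron(np.array([[0., 1.], [-1., 0.]]), np.eye(m // 2))
    A, B, C = {}, {}, {}
    # A_l = 2*Theta*(R_l + sum_c M_c^T J_m M_{l+c})
    for l in set(R) | {a - c for a in M for c in M}:
        S = R.get(l, 0)
        for c in M:
            if l + c in M:
                S = S + M[c].T @ Jm @ M[l + c]
        A[l] = 2 * Theta @ S
    # B_l = 2*Theta*M_{-l}^T
    for c in M:
        B[-c] = 2 * Theta @ M[c].T
    # C_l = 2*sum_c D_{l-c} J_m M_c
    for l in {d + c for d in D for c in M}:
        C[l] = 2 * sum(D[l - c] @ Jm @ M[c] for c in M if l - c in D)
    return A, B, C
\end{verbatim}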
We will now outline the computation of energy and coupling matrices for network interconnections. Consider two translation invariant quantum networks on the common lattice ${\mathbb Z}^\nu$ with quadruples $(A^{[k]}, B^{[k]}, C^{[k]}, D^{[k]})$ of block Toeplitz matrices and input, internal and output dimensions $m_k$, $n_k$, $r_k$, respectively, $k=1,2$. The corresponding augmented vectors of input, internal and output fields are denoted by $W^{[k]}$, $X^{[k]}$, $Y^{[k]}$, and the CCR matrices of the internal variables of the networks in the sense of (\ref{Xcomm}) are denoted by $\Theta_k$. The spatio-temporal transfer functions of the networks are
\begin{equation}
\label{Fk}
F_k(\sigma,s):= \mathcal{ C}_k(\sigma)(sI_{n_k}- \mathcal{ A}_k(\sigma))^{-1} \mathcal{ B}_k(\sigma) + \mathcal{D}_k(\sigma)
\end{equation}
with values in $\mC^{r_k\times m_k}$ for $s\in \mC$ with $\Re s$ sufficiently large, and $\sigma \in {\mathbb T}^\nu$. If $r_1=m_2$, and the output fields of the first network are fed as the input fields to the second network (see Fig.~\ref{fig:FF}),
\begin{figure}[htbp]
\unitlength=1mm
\linethickness{0.4pt}
\centering
\begin{picture}(110,11.00)
\put(35,0){\framebox(10,10)[cc]{{$F_2$}}}
\put(65,0){\framebox(10,10)[cc]{{$F_1$}}}
\put(35,5){\vector(-1,0){20}}
\put(65,5){\vector(-1,0){20}}
\put(95,5){\vector(-1,0){20}}
\put(10,5){\makebox(0,0)[cc]{$Y^{[2]}$}}
\put(100,5){\makebox(0,0)[cc]{$W^{[1]}$}}
\put(55,10){\makebox(0,0)[cc]{$W^{[2]}=Y^{[1]}$}}
\end{picture}
\caption{The concatenation of translation invariant quantum networks on a common lattice with spatio-temporal transfer functions $F_1$, $F_2$.}
\label{fig:FF}
\end{figure}
the resulting composition is a translation invariant quantum network with input, internal and output dimensions $m_1$, $n:= n_1+n_2$, $r_2$, respectively, and the spatio-temporal transfer function
$$
F(\sigma,s)
=
F_2(\sigma,s)F_1(\sigma,s)
=
\mathcal{ C}(\sigma)(sI_n- \mathcal{ A}(\sigma))^{-1} \mathcal{ B}(\sigma) + \mathcal{D}(\sigma),
$$
which is the pointwise product of the transfer functions (\ref{Fk}). Here, as in the case of cascaded classical linear time invariant systems,
\begin{equation}
\label{ser}
{\begin{bmatrix}
\mathcal{ A}(\sigma) & \mathcal{ B}(\sigma)\\
\mathcal{ C}(\sigma) & \mathcal{D}(\sigma)
\end{bmatrix}}
=
\left[
{\begin{array}{cc|c}
\mathcal{ A}_1(\sigma) & 0& \mathcal{ B}_1(\sigma)\\
\mathcal{ B}_2(\sigma)\mathcal{ C}_1(\sigma) & \mathcal{ A}_2(\sigma) & \mathcal{ B}_2(\sigma)\mathcal{D}_1(\sigma) \\
\hline
\mathcal{D}_2(\sigma)\mathcal{ C}_1(\sigma) & \mathcal{ C}_2(\sigma) & \mathcal{D}_2(\sigma)\mathcal{D}_1(\sigma)
\end{array}}
\right],
\qquad
\sigma \in{\mathbb T}^\nu.
\end{equation}
The concatenated network has the CCR matrix
$$
\Theta
=
\begin{bmatrix}
\Theta_1 & 0\\
0 & \Theta_2
\end{bmatrix}
$$
for its internal variables in the sense of (\ref{Xcomm})
and the energy and coupling matrices which, in view of (\ref{cA})--(\ref{cC}) and (\ref{ser}), can be recovered from the SFTs
\begin{align*}
\mathcal{ R}(\sigma)
& =
\begin{bmatrix}
\mathcal{ R}_1(\sigma) & -\mathcal{M}_1(\sigma)^* J_{m_1} \mathcal{D}_1(\sigma)^* \mathcal{M}_2(\sigma)\\
\mathcal{M}_2(\sigma)^*\mathcal{D}_1(\sigma) J_{m_1} \mathcal{M}_1(\sigma) & \mathcal{ R}_2(\sigma)
\end{bmatrix},\\
\mathcal{M}(\sigma)
& =
\begin{bmatrix}
\mathcal{M}_1(\sigma) & \mathcal{D}_1(\sigma)^* \mathcal{M}_2(\sigma)
\end{bmatrix},
\qquad
\sigma \in {\mathbb T}^\nu,
\end{align*}
which are expressed in terms of the SFTs $\mathcal{ R}_k$, $\mathcal{M}_k$ of the energy and coupling matrices of the networks, $k=1,2$.
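Being pointwise in $\sigma$, the concatenation formulas above are straightforward to evaluate numerically. A minimal sketch (assuming NumPy, hypothetical callables \texttt{net1}, \texttt{net2} returning the SFT quadruples of the two networks at a given $\sigma$, and, for the second function, the matrices $\mathcal{ R}_k(\sigma)$, $\mathcal{M}_k(\sigma)$, $\mathcal{D}_1(\sigma)$ and $J_{m_1}$ evaluated at the same $\sigma$) is given below.
\begin{verbatim}
import numpy as np

def series_sfts(net1, net2, sigma):
    # Pointwise state-space concatenation (ser) at a spatial frequency sigma;
    # net_k(sigma) returns the quadruple (A_k, B_k, C_k, D_k) of SFT matrices.
    A1, B1, C1, D1 = net1(sigma)
    A2, B2, C2, D2 = net2(sigma)
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    return A, B, C, D2 @ D1

def series_energy_coupling(R1, M1, R2, M2, D1, Jm1):
    # SFTs of the energy and coupling matrices of the concatenation at a fixed sigma
    R12 = -M1.conj().T @ Jm1 @ D1.conj().T @ M2
    R = np.block([[R1, R12], [R12.conj().T, R2]])
    M = np.hstack([M1, D1.conj().T @ M2])
    return R, M
\end{verbatim}
The second function uses the pointwise Hermitian structure of $\mathcal{ R}(\sigma)$, with the $(2,1)$ block obtained as the conjugate transpose of the $(1,2)$ block.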
Other algebraic operations for translation invariant networks on ${\mathbb Z}^\nu$ are carried out in a similar pointwise fashion over the torus ${\mathbb T}^\nu$. For example, feedback interconnections of such networks involve linear fractional transformations of spatio-temporal transfer functions. In particular, Fig.~\ref{fig:loop}
\begin{figure}[htbp]
\unitlength=1mm
\linethickness{0.4pt}
\centering
\begin{picture}(110,11.00)
\put(35,0){\framebox(10,10)[cc]{{$F_2$}}}
\put(65,0){\framebox(10,10)[cc]{{$F_1$}}}
\put(15,5){\vector(1,0){20}}
\put(65,8){\vector(-1,0){20}}
\put(45,2){\vector(1,0){20}}
\put(95,5){\vector(-1,0){20}}
\put(10,5){\makebox(0,0)[cc]{$W^{[2]}$}}
\put(100,5){\makebox(0,0)[cc]{$W^{[1]}$}}
\put(55,10){\makebox(0,0)[cb]{$Y^{[1]}$}}
\put(55,0){\makebox(0,0)[ct]{$Y^{[2]}$}}
\end{picture}\vskip2mm
\caption{A field-mediated feedback interconnection of translation invariant quantum networks on a common lattice with external quantum fields $W^{[1]}$, $W^{[2]}$.}
\label{fig:loop}
\end{figure}
illustrates a quantum feedback network, resulting from a field-mediated connection of a translation invariant network $F_1$, interpreted as a plant, with another such network $F_2$ (on the same carrier lattice ${\mathbb Z}^\nu$), playing the role of a controller. This gives rise to coherent (measurement-free) quantum control settings\cite{NJP_2009,SVP_2015}, where the energy parameters of the controller and its coupling with the plant can be varied so as to satisfy performance specifications for the closed-loop network such as stability and minimization of cost functionals in the steady-state regime.
\section{Invariant Gaussian State in the Case of Vacuum Input Fields}
\label{sec:inv}
We will be concerned with the case of statistically independent input fields in the vacuum state, defined in terms of the quasi-characteristic functional (QCF)\cite{CH_1971,HP_1984,P_1992} of the incremented quantum Wiener processes as
\begin{align}
\nonumber
\bE
\re^{i\int_0^T u(t)^\rT \rd W(t)}
& =
\bE
\prod_{k \in {\mathbb Z}^\nu}
\re^{i\int_0^T u_k(t)^\rT \rd W_k(t)} \\
\label{vac}
& =
\prod_{k \in {\mathbb Z}^\nu}
\bE
\re^{i\int_0^T u_k(t)^\rT \rd W_k(t)}
=
\re^{-\frac{1}{2} \int_0^T |u(t)|^2\rd t}
\end{align}
for any time horizon $T>0$ and any square integrable map $u:=(u_k)_{k \in {\mathbb Z}^\nu}: [0,T]\to \ell^2({\mathbb Z}^\nu, \mR^m)$, where the standard Euclidean norm $|\cdot|$ is extended to $\ell^2({\mathbb Z}^\nu, \mR^m)$ as $|u|:= \sqrt{\sum_{k \in {\mathbb Z}^\nu}|u_k|^2}$ along with the inner product $u^\rT w$. Here, $\bE \zeta := \Tr(\rho \zeta)$ is the quantum expectation over the density operator
\begin{equation}
\label{rho}
\rho:= \rho_0\otimes \upsilon,
\end{equation}
where $\rho_0$ is the initial network state on $\otimes_{k\in {\mathbb Z}^\nu}\mathfrak{H}_k$, and $\upsilon:= \otimes_{k\in {\mathbb Z}^\nu}\upsilon_k$ is the vacuum state on the Fock space $\mathfrak{F}$, with $\upsilon_k$ the vacuum states on the corresponding Fock spaces $\mathfrak{F}_k$. The averaging in (\ref{vac}) reduces to that over $\upsilon$, and the factorizations come from the tensor-product structure of $\mathfrak{F}$, $\upsilon$ and the commutativity between the quantum Wiener processes $W_k$ on the spaces $\mathfrak{F}_k$ with different $k\in {\mathbb Z}^\nu$. The state $\rho_0$ in (\ref{rho}) is said to be \emph{proper} if the initial network variables have finite second moments,
and the matrix
\begin{equation}
\label{K}
K:= (K_{jk})_{j,k\in {\mathbb Z}^\nu}:= \Re \bE(X(0)X(0)^\rT),
\qquad
K_{jk}:= \Re \bE (X_j(0)X_k(0)^\rT),
\end{equation}
acting on $u:=(u_k)_{k \in {\mathbb Z}^\nu} \in \ell^2({\mathbb Z}^\nu, \mR^n)$ as
$ K u
:=
\big(
\sum_{k \in {\mathbb Z}^\nu}
K_{jk}u_k
\big)_{j \in {\mathbb Z}^\nu}
$,
specifies a bounded operator in the sense of the $\ell^2$-induced norm:
\begin{equation}
\label{Knorm}
\|K\| < +\infty.
\end{equation}
\begin{theorem}
\label{th:inv}
Suppose the translation invariant network, described together with related quantities by (\ref{dXj})--(\ref{cABCD}), satisfies the stability condition
\begin{equation}
\label{stab}
\max_{\sigma \in {\mathbb T}^\nu}
\mathbf{r}(\re^{\mathcal{ A}(\sigma)})
< 1
\end{equation}
(with $\mathbf{r}(\cdot)$ the spectral radius of a matrix),
has a proper initial state in the sense of (\ref{K}), (\ref{Knorm})
and is driven by the vacuum input fields as specified by (\ref{vac}).
Then there is weak convergence to a unique invariant Gaussian quantum state for the internal network variables with zero mean and block Toeplitz quantum covariances
\begin{equation}
\label{EXX}
\bE(X_j(t)X_k(t)^\rT) = P_{j-k} + i\delta_{jk} \Theta,
\qquad
j,k\in {\mathbb Z}^\nu.
\end{equation}
The SFT
\begin{equation}
\label{cP}
\mathcal{P}(\sigma)
:=
\sum_{\ell\in {\mathbb Z}^\nu}
\re^{-i\ell^\rT \sigma}
P_\ell,
\qquad
\sigma \in {\mathbb T}^\nu,
\end{equation}
for the real parts $P_\ell = P_{-\ell}^\rT \in \mR^{n\times n}$ of (\ref{EXX})
is found uniquely from the algebraic Lyapunov equation (ALE)
\begin{equation}
\label{cPALE}
\mathcal{ A}(\sigma) \mathcal{P}(\sigma)+ \mathcal{P}(\sigma)\mathcal{ A}(\sigma)^* + \mathcal{ B}(\sigma) \mathcal{ B}(\sigma)^*
= 0.
\end{equation}
\hfill$\square$
\end{theorem}
The matrix $P$, obtained in (\ref{Elim}) of the proof of the above theorem in \ref{sec:invproof}, can be shown to belong to $\mathfrak{T}_{n,n}$ if the SFTs $\mathcal{ A}$, $\mathcal{ B}$ have an appropriate degree of smoothness.
\begin{lemma}
\label{lem:smooth}
In addition to the assumptions of Theorem~\ref{th:inv}, suppose the SFTs $\mathcal{ A}$, $\mathcal{ B}$ in (\ref{cABCD}) are $r$ times continuously differentiable, with
\begin{equation}
\label{rnu}
r> \nu.
\end{equation}
Then the block Toeplitz matrix $P$ in (\ref{Elim}) of the invariant real covariances in (\ref{EXX}) belongs to the Banach algebra $\mathfrak{T}_{n,n}$.
\hfill$\square$
\end{lemma}
\begin{proof}
Due to (\ref{vec}), the SFT $\mathcal{P}$ inherits the $r$ times continuous differentiability from $\mathcal{ A}$, $\mathcal{ B}$. This implies that the partial derivatives $\partial_{\sigma_k}^r\mathcal{P}(\sigma)$ with respect to the coordinates of $\sigma:= (\sigma_k)_{1\< k \< \nu}\in {\mathbb T}^\nu$ are continuous and hence, square integrable over the torus ${\mathbb T}^\nu$. Therefore, application of the Plancherel identity yields
\begin{align}
\nonumber
+\infty
& >
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\sum_{k = 1}^\nu
\|\partial_{\sigma_k}^r\mathcal{P}(\sigma)\|_\rF^2
\rd \sigma\\
\label{Hold}
& =
\sum_{\ell\in {\mathbb Z}^\nu}
\|\ell\|_{2r}^{2r}
\|P_\ell\|_\rF^2
\>
\nu^{1-r}
\sum_{\ell\in {\mathbb Z}^\nu}
|\ell|^{2r}
\|P_\ell\|_\rF^2,
\end{align}
where $\|\cdot\|_{\rF}$ is the Frobenius norm of matrices\cite{HJ_2007}, and use is made of the inequality $\|\ell\|_{2r} := \sqrt[2r]{\sum_{k=1}^\nu \ell_k^{2r}} \> \nu^{\frac{1-r}{2r}}|\ell|$ for a vector $\ell:= (\ell_k)_{1\< k\< \nu}$. It follows from the convergence of the rightmost series in (\ref{Hold}) that
$
\|P_\ell\|_\rF = o(|\ell|^{-r})
$, as $|\ell| \to +\infty$, which, in combination with (\ref{rnu}), leads to $\|P\|_1 \< \sum_{\ell\in {\mathbb Z}^\nu}
\|P_\ell\|_\rF < +\infty$, whereby $P \in \mathfrak{T}_{n,n}$.
\end{proof}
In the finite range interaction case, mentioned in Section~\ref{sec:par}, the SFTs $\mathcal{ A}$, $\mathcal{ B}$ in (\ref{cA}), (\ref{cB}) are trigonometric polynomials and hence infinitely differentiable. Therefore, (\ref{rnu}) is satisfied in this case, and it follows from Lemma~\ref{lem:smooth} and its proof that the invariant covariances of the network variables have an infinitely differentiable SFT $\mathcal{P}(\sigma)$ whose entries are organised as ratios of trigonometric polynomials of $\sigma \in {\mathbb T}^\nu$. In the univariate case of $\nu=1$, this gives $\mathcal{P}$ the structure of a spectral density associated with a linear discrete-time invariant system, admitting appropriate inner-outer factorizations\cite{W_1972}.
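In this setting, the SFTs can also be evaluated on a uniform grid of spatial frequencies, the stability condition (\ref{stab}) verified, and the ALE (\ref{cPALE}) solved pointwise, with the covariance blocks $P_\ell$ recovered via the inverse SFT. The following minimal sketch is for the univariate case $\nu=1$ and assumes NumPy, SciPy and hypothetical callables \texttt{cA}, \texttt{cB} returning the matrices $\mathcal{ A}(\sigma)$, $\mathcal{ B}(\sigma)$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

def invariant_covariance_blocks(cA, cB, n_grid=256):
    # Solve the ALE (cPALE) pointwise on a sigma-grid and recover the blocks P_l
    # of the invariant real covariances by a discretised inverse SFT (nu = 1).
    sigmas = 2 * np.pi * np.arange(n_grid) / n_grid
    P_sft = []
    for s in sigmas:
        A, B = cA(s), cB(s)
        # stability condition (stab): spectrum of A(sigma) in the open left half-plane
        assert np.max(np.linalg.eigvals(A).real) < 0, "stability condition violated"
        # A P + P A^* = -B B^*, solved as a Sylvester equation
        P_sft.append(solve_sylvester(A, A.conj().T, -B @ B.conj().T))
    P_sft = np.array(P_sft)
    def P_block(ell):
        # P_l ~ (1/n_grid) * sum_j exp(i*ell*sigma_j) * P(sigma_j)
        phases = np.exp(1j * ell * sigmas)
        return np.tensordot(phases, P_sft, axes=(0, 0)).real / n_grid
    return P_block
\end{verbatim}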
Similarly to Ref. \refcite{VPJ_2018a}, under the conditions of Theorem~\ref{th:inv}, the internal network variables have an invariant multipoint zero-mean Gaussian quantum state which is specified completely by the two-point quantum covariances:
\begin{equation}
\label{EXX2}
\bE(X(t)X(\tau)^\rT)
=
\left\{
\begin{matrix}
\re^{(t-\tau)A}(P + i\bit{\Theta}) & {\rm if} & t \> \tau \> 0 \\
(P + i\bit{\Theta}) \re^{(\tau-t)A^\rT} & {\rm if} & \tau \> t \> 0
\end{matrix}
\right.,
\end{equation}
where $\bit{\Theta}$ is given by (\ref{TJJ}). In accordance with the translation invariant structure of the network, (\ref{EXX2}) is also a block Toeplitz matrix, which, under the conditions of Lemma~\ref{lem:smooth}, is an element of $\mathfrak{T}_{n,n}$.
\section{Finite-Horizon Quadratic-Exponential Functional}
\label{sec:QEF}
Associated with every lattice site $j \in {\mathbb Z}^\nu$ is a vector $Z_j$ of $q\< n$ time-varying self-adjoint quantum variables, which represent physical quantities (in regard to the $j$th component system and its neighbourhood) whose moderate values are preferable for network performance.
These ``critical'' quantum variables are assumed to be linearly related to the internal network variables by a given real block Toeplitz weighting matrix $S:= (S_{j-k})_{j,k\in {\mathbb Z}^\nu} \in \mathfrak{T}_{q,n}$ and form an auxiliary quantum process
\begin{equation}
\label{ZX}
Z
:=
(Z_j)_{j \in {\mathbb Z}^\nu}
:=
\Big(
\sum_{k\in {\mathbb Z}^\nu} S_{j-k} X_k
\Big)_{j \in {\mathbb Z}^\nu}
=
SX.
\end{equation}
The matrix $S$ quantifies the relative importance of the network variables in (\ref{ZX}) depending on a particular control application and is not constrained by PR conditions.
Consider a fragment of the network at a nonempty finite subset $G \subset {\mathbb Z}^\nu$. Similarly to (\ref{XG}), the corresponding restriction
\begin{equation}
\label{ZG}
Z_G
:=
(Z_j)_{j\in G}
=
S_G X
\end{equation}
of the process (\ref{ZX}) is related to the network variables by the matrix
\begin{equation}
\label{SG}
S_G:= (S_{j-k})_{j \in G, k \in {\mathbb Z}^\nu}
\end{equation}
with $\#G$ block rows of height $q$ each. In
the risk-sensitive framework, the performance of the network fragment in terms of the process $Z_G$ over a bounded time interval $[0,T]$ can be described by using a quadratic-exponential functional (QEF)\cite{VPJ_2018a}
\begin{equation}
\label{XiGT}
\Xi_{\theta,G,T}
:=
\bE \re^{\theta Q_{G,T} }.
\end{equation}
This cost imposes an exponential penalty (whose severity is controlled by a scalar parameter $\theta >0$)
on the positive semi-definite self-adjoint quantum variable
\begin{align}
\nonumber
Q_{G,T}
& :=
\frac{1}{2}
\int_0^T
\sum_{j\in G}
Z_j(t)^\rT Z_j(t)
\rd t\\
\label{QGT}
& =
\frac{1}{2}
\int_0^T
Z_G(t)^\rT Z_G(t)
\rd t
=
\frac{1}{2}
\int_0^T
X(t)^\rT
S_G^\rT S_GX(t)
\rd t,
\end{align}
where the integrand is organised similarly to the Hamiltonian (\ref{HG}). The restricted weighting matrix $S_G$ in (\ref{SG}) specifies the quadratic dependence of $Q_{G,T}$ on the past history of the network variables. The quantum average of (\ref{QGT}) is related to the asymptotic behaviour of the QEF (\ref{XiGT}) for small values of the risk sensitivity parameter $\theta$ as
\begin{align}
\nonumber
\bE Q_{G,T}
& =
\partial_\theta \ln \Xi_{\theta,G,T}\big|_{\theta = 0}\\
\label{EQGT}
& =
\frac{1}{2}
\int_0^T
\Tr (S_G \Re \bE(X(t)X(t)^\rT) S_G^\rT)
\rd t.
\end{align}
In what follows, it is assumed that the network satisfies the conditions of Theorem~\ref{th:inv} and is in the invariant multipoint Gaussian quantum state. In this case, the mean square cost functional (\ref{EQGT}) has the following rate per unit time and lattice site:
\begin{align}
\nonumber
\frac{1}{T\#G}
\bE Q_{G,T}
& =
\frac{1}{2}
\bE(Z_0(0)^\rT Z_0(0)) \\
\nonumber
& =
\frac{1}{2}
\sum_{j,k\in {\mathbb Z}^\nu}
\Tr (S_{-j} P_{j-k}S_{-k}^\rT)\\
& =
\frac{1}{2(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\Tr ({\mathcal S}(\sigma) \mathcal{P}(\sigma){\mathcal S}(\sigma)^*)
\rd \sigma,
\label{EQGT1}
\end{align}
where use is made of the Plancherel identity along with the SFT $\mathcal{P}$ from (\ref{cP}), (\ref{cPALE}) and the SFT for the weighting matrix $S$ in (\ref{ZX}):
\begin{equation}
\label{cS}
{\mathcal S}(\sigma)
:=
\sum_{\ell\in {\mathbb Z}^\nu}
\re^{-i\ell^\rT \sigma}
S_\ell,
\qquad
\sigma \in {\mathbb T}^\nu.
\end{equation}
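Once the SFTs $\mathcal{S}$ and $\mathcal{P}$ are available (for instance, from the pointwise ALE solution sketched in Section~\ref{sec:inv}), the rate (\ref{EQGT1}) reduces to an average over a $\sigma$-grid. A minimal sketch for $\nu=1$, assuming NumPy and hypothetical callables \texttt{cS}, \texttt{cP} returning ${\mathcal S}(\sigma)$, $\mathcal{P}(\sigma)$:
\begin{verbatim}
import numpy as np

def mean_square_rate(cS, cP, n_grid=256):
    # Approximate (EQGT1): half the sigma-average of Tr(S(sigma) P(sigma) S(sigma)^*)
    sigmas = 2 * np.pi * np.arange(n_grid) / n_grid
    vals = [np.trace(cS(s) @ cP(s) @ cS(s).conj().T).real for s in sigmas]
    return 0.5 * np.mean(vals)
\end{verbatim}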
The relations (\ref{EQGT}), (\ref{EQGT1}) suggest that similar limits exist for the infinite spatio-temporal horizon asymptotic behaviour of the QEF (\ref{XiGT}):
\begin{align}
\label{UpsGdef}
\Upsilon_{\theta,G}
& :=
\lim_{T\to +\infty}
\Big(
\frac{1}{T}
\ln \Xi_{\theta, G, T}
\Big),\\
\label{Ups0}
\Upsilon(\theta)
& :=
\lim_{G\to \infty}
\Big(
\frac{1}{\# G}
\Upsilon_{\theta,G}
\Big),
\end{align}
where ``$G\to \infty$'' will be specified in Section~\ref{sec:spat} and includes, as a particular case, sequences of unboundedly growing cubes in ${\mathbb Z}^\nu$.
The QEF growth rate (\ref{Ups0}), as a function of $\theta> 0$,
can be used for large deviations estimates for quantum trajectories of the network in the form of upper bounds on tail probabilities, similar to those in Refs. \refcite{VPJ_2018a,VPJ_2021}. More precisely, application of an exponential inequality\cite{S_1996} to
the probability distribution\cite{H_2001} $\bP_{G,T}$ of the self-adjoint quantum variable $Q_{G,T}$ in (\ref{QGT}) leads to
\begin{equation}
\label{Cramer}
\bP_{G,T}([\epsilon, +\infty))
\<
\inf_{\theta >0}
(
\Xi_{\theta, G,T}
\re^{-\epsilon \theta }),
\qquad
\epsilon\> 0,
\end{equation}
for any $T>0$ and nonempty finite set $G\subset {\mathbb Z}^\nu$. By using (\ref{Cramer}) with $\epsilon = \alpha T\#G$ in combination with (\ref{UpsGdef}), (\ref{Ups0}), it follows that
\begin{equation}
\label{tail}
\limsup_{G\to \infty}
\Big(
\frac{1}{\# G}
\limsup_{T\to +\infty}
\Big(
\frac{1}{T}
\ln
\bP_{G,T}([\alpha T \#G, +\infty))
\Big)
\Big)
\<
\inf_{\theta>0}
(
\Upsilon(\theta)
-
\alpha\theta
)
\end{equation}
for any $\alpha>0$. The relation (\ref{tail}) provides asymptotic upper bounds for the tail probability distribution of $Q_{G,T}$ in terms of the spatio-temporal QEF growth rate (\ref{Ups0}). These bounds can be enhanced by minimizing $\Upsilon(\theta)$ (at a suitably chosen $\theta>0$) over an admissible range of parameters of the quantum network. This provides a risk-sensitive performance criterion for quantum feedback network control by interconnection, exemplified in Fig.~\ref{fig:loop}. The computation of the bounds (\ref{tail}) and the QEF minimization require systematic techniques for evaluating the functional (\ref{Ups0}).
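Given a procedure for evaluating $\Upsilon(\theta)$ (such as the frequency-domain formulas developed in Sections~\ref{sec:temp}, \ref{sec:spat}), the right-hand side of (\ref{tail}) is a one-dimensional minimisation over the risk sensitivity parameter. A minimal sketch (assuming SciPy, a hypothetical callable \texttt{Upsilon} and a conservative upper bound \texttt{theta\_max} on the admissible range of $\theta$):
\begin{verbatim}
from scipy.optimize import minimize_scalar

def tail_exponent(Upsilon, alpha, theta_max):
    # Right-hand side of (tail): infimum of Upsilon(theta) - alpha*theta
    # over 0 < theta < theta_max
    res = minimize_scalar(lambda th: Upsilon(th) - alpha * th,
                          bounds=(1e-8, theta_max), method="bounded")
    return res.fun
\end{verbatim}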
In order to establish the existence of and compute the limits (\ref{UpsGdef}), (\ref{Ups0}) in Sections~\ref{sec:temp}--\ref{sec:homo}, we will now discuss the quantum probabilistic structure of the process $Z$ in (\ref{ZX}). The multipoint zero-mean Gaussian structure of the invariant quantum state of the internal network variables is inherited by the process $Z$ which has the two-point quantum covariances
\begin{align}
\nonumber
\bE(Z(t)Z(\tau)^\rT)
& =
S\bE(X(t)X(\tau)^\rT) S^\rT\\
\nonumber
& =
\left\{
\begin{matrix}
S\re^{(t-\tau)A}(P + i\bit{\Theta})S^\rT & {\rm if} & t \> \tau \> 0 \\
S(P + i\bit{\Theta}) \re^{(\tau-t)A^\rT}S^\rT & {\rm if} & \tau \> t \> 0
\end{matrix}
\right.\\
\label{EZZ2}
& =
V(t-\tau) + i\Lambda(t-\tau),
\qquad
t,\tau \> 0.
\end{align}
This time-invariant\footnote{that is, depending on the time difference} $\mathfrak{T}_{q,q}$-valued quantum covariance kernel is
obtained by an appropriate transformation of (\ref{EXX2}). Its real part is
given by
\begin{equation}
\label{V}
V(\tau)
=
\left\{
{\begin{matrix}
S\re^{\tau A}P S^\rT& {\rm if}\ \tau\> 0\\
S P\re^{-\tau A^\rT}S^\rT & {\rm if}\ \tau < 0
\end{matrix}}
\right.
=
V(-\tau)^\rT,
\qquad
\tau \in \mR,
\end{equation}
where $P$ is the matrix (\ref{Elim}) of real parts of the invariant one-point quantum covariances of the internal network variables. The imaginary part of (\ref{EZZ2}) is given by
\begin{equation}
\label{Lambda}
\Lambda(\tau)
=
\left\{
{\begin{matrix}
S \re^{\tau A}\bit{\Theta} S^\rT & {\rm if}\ \tau\> 0\\
S \bit{\Theta}\re^{-\tau A^{\rT}}S^\rT & {\rm if}\ \tau< 0\\
\end{matrix}}
\right.
=
-\Lambda(-\tau)^\rT,
\qquad
\tau \in \mR,
\end{equation}
and describes the two-point CCRs\cite{VPJ_2018a}
\begin{equation}
\label{ZZcomm}
[Z(t), Z(\tau)^\rT]
=
2i\Lambda(t-\tau),
\qquad
t,\tau\>0,
\end{equation}
from which the one-point CCR matrix of $Z$ is recovered as $\Lambda(0) = S\bit{\Theta} S^\rT$. Accordingly, the process $Z_G$ in (\ref{ZG}) is in a multipoint zero-mean Gaussian state with the time-invariant $\mC^{G\times G}$-valued quantum covariance kernel
\begin{align}
\nonumber
\bE(Z_G(t)Z_G(\tau)^\rT)
& =
S_G\bE(X(t)X(\tau)^\rT) S_G^\rT\\
\nonumber
& =
\left\{
\begin{matrix}
S_G\re^{(t-\tau)A}(P + i\bit{\Theta})S_G^\rT & {\rm if} & t \> \tau \> 0 \\
S_G(P + i\bit{\Theta}) \re^{(\tau-t)A^\rT}S_G^\rT & {\rm if} & \tau \> t \> 0
\end{matrix}
\right.\\
\label{EZZG}
& =
V_G(t-\tau) + i\Lambda_G(t-\tau),
\qquad
t,\tau \> 0,
\end{align}
which is
obtained as an appropriate restriction of (\ref{EZZ2}) to the set $G\subset {\mathbb Z}^\nu$ in view of (\ref{SG}) and is split into the real and imaginary parts $V_G$, $\Lambda_G$. The latter is given by
\begin{equation}
\label{LambdaG}
\Lambda_G(\tau)
=
\left\{
{\begin{matrix}
S_G \re^{\tau A}\bit{\Theta} S_G^\rT & {\rm if}\ \tau\> 0\\
S_G \bit{\Theta}\re^{-\tau A^{\rT}}S_G^\rT & {\rm if}\ \tau< 0\\
\end{matrix}}
\right.
=
-\Lambda_G(-\tau)^\rT,
\qquad
\tau \in \mR,
\end{equation}
and, in accordance with (\ref{Lambda}), (\ref{ZZcomm}), describes the two-point CCRs
\begin{equation}
\label{ZZcommG}
[Z_G(t), Z_G(\tau)^\rT]
=
2i\Lambda_G(t-\tau),
\qquad
t,\tau\>0,
\end{equation}
where $\Lambda_G(0) = S_G\bit{\Theta} S_G^\rT$ is the one-point CCR matrix of $Z_G$.
The two-point CCR kernel (\ref{LambdaG}) gives rise to a skew self-adjoint integral operator $\mathsf{L}_{G,T} : f\mapsto g$ which acts on the Hilbert space $L^2([0,T],\mC^G)$ of square integrable $\mC^G$-valued functions on the time interval $[0,T]$ as
\begin{equation}
\label{sLGT}
g(t)
:=
\int_0^T
\Lambda_G(t-\tau) f(\tau)
\rd \tau,
\qquad
0\< t \< T.
\end{equation}
The commutation structure (\ref{LambdaG}), (\ref{ZZcommG}) of the process $Z_G$, and the related operator $\mathsf{L}_{G,T}$ in (\ref{sLGT}), do not depend on a particular network-field state (\ref{rho}). The real part of the quantum covariance kernel (\ref{EZZG}) is given by
\begin{equation}
\label{VG}
V_G(\tau)
=
\left\{
{\begin{matrix}
S_G\re^{\tau A}P S_G^\rT& {\rm if}\ \tau\> 0\\
S_G P\re^{-\tau A^\rT}S_G^\rT & {\rm if}\ \tau < 0
\end{matrix}}
\right.
=
V_G(-\tau)^\rT,
\qquad
\tau \in \mR,
\end{equation}
in accordance with (\ref{V}). The kernel $V_G$ specifies a positive semi-definite self-adjoint integral operator $\mathsf{V}_{G,T} : f\mapsto g$ acting on $L^2([0,T], \mC^G)$ as
\begin{equation}
\label{sVGT}
g(t)
:=
\int_0^T
V_G(t-\tau)f(\tau)\rd \tau,
\qquad
0\< t \< T.
\end{equation}
The fact that $\mathsf{V}_{G,T} \succcurlyeq 0$ also follows from the stronger property of positive semi-definiteness of the self-adjoint operator $\mathsf{V}_{G,T} +i\mathsf{L}_{G,T} $ on $L^2([0,T], \mC^G)$. With (\ref{EZZG}) being a continuous kernel, both $\mathsf{V}_{G,T}$ and $\mathsf{L}_{G,T}$ are compact operators\cite{RS_1980}. Application of appropriately modified results of Refs.~\refcite{VPJ_2019c,VPJ_2021} to the quantum process $Z_G$ in the multipoint Gaussian quantum state allows the QEF (\ref{XiGT}) to be represented as
\begin{equation}
\label{lnXi}
\ln \Xi_{\theta,G,T}
=
- \frac{1}{2}
\Tr (\ln\cos (\theta\mathsf{L}_{G,T} ) + \ln (\mathcal{I} - \theta \mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} )).
\end{equation}
Here, $\mathcal{I}$ is the identity operator on $L^2([0,T],\mC^G)$, and use is made of a positive definite self-adjoint operator
\begin{equation}
\label{sK}
\mathsf{K}_{\theta,G,T}
:=
\mathrm{tanhc}(i\theta \mathsf{L}_{G,T} )
=
\mathrm{tanc} (\theta\mathsf{L}_{G,T} ),
\end{equation}
where $\mathrm{tanhc} z := \mathrm{tanc} (-iz)$ is a hyperbolic version of $\mathrm{tanc} z := \frac{\tan z}{z}$ extended by continuity as $\mathrm{tanc} 0:=1$. The operator $\mathsf{K}_{\theta,G,T} $ is nonexpanding in the sense that $\mathsf{K}_{\theta,G,T} \preccurlyeq \mathcal{I}$. With $\mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T}$ being a compact operator (which is isospectral to the positive semi-definite self-adjoint operator $\sqrt{\mathsf{K}_{\theta,G,T} } \mathsf{V}_{G,T} \sqrt{\mathsf{K}_{\theta,G,T} }$), the representation (\ref{lnXi}) is valid under the condition
\begin{equation}
\label{spec}
\theta \lambda_{\max}(\mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} ) < 1.
\end{equation}
The representation (\ref{lnXi}) is obtained by applying the results of Refs. \refcite{VPJ_2019c,VPJ_2021} to the Gaussian quantum process $Z_G$ in (\ref{ZG}), (\ref{QGT}) using its quantum Karhunen-Loeve expansion over an orthonormal eigenbasis of the operator $\mathsf{L}_{G,T} $ in (\ref{sLGT}), provided the latter has no zero eigenvalues.
A sufficient condition for this property to hold for all sufficiently large subsets $G\subset {\mathbb Z}^\nu$ and time horizons $T>0$ can be developed in terms of the parameters of the quantum network and the weighting matrix $S$ in (\ref{ZX}) and its SFT (\ref{cS}). However, in the network setting, this development is more complicated than in the case of a single OQHO (see Theorem 10.1 of Ref. \refcite{VPJ_2021}) and requires a separate investigation, which is beyond the scope of the present study and will be discussed elsewhere. In what follows, the absence of zero eigenvalues will be used as an assumption.
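For moderate values of $T$ and $\# G$, the representation (\ref{lnXi}) can be evaluated numerically by discretising the integral operators (\ref{sLGT}), (\ref{sVGT}) into matrices on a time grid. The sketch below is a rough Nystr\"om-type approximation rather than part of the development; it assumes NumPy and hypothetical callables \texttt{V\_kernel}, \texttt{Lambda\_kernel} returning the matrices $V_G(\tau)$, $\Lambda_G(\tau)$ at a given time difference.
\begin{verbatim}
import numpy as np

def log_qef(V_kernel, Lambda_kernel, theta, T, N=200):
    # Approximate ln Xi_{theta,G,T} via (lnXi) with the operators (sVGT), (sLGT)
    # replaced by their Nystrom discretisations on a uniform time grid.
    dt = T / N
    t = (np.arange(N) + 0.5) * dt
    q = V_kernel(0.0).shape[0]
    Vm = np.zeros((N * q, N * q))
    Lm = np.zeros((N * q, N * q))
    for i in range(N):
        for j in range(N):
            Vm[i*q:(i+1)*q, j*q:(j+1)*q] = V_kernel(t[i] - t[j]) * dt
            Lm[i*q:(i+1)*q, j*q:(j+1)*q] = Lambda_kernel(t[i] - t[j]) * dt
    # theta*Lm is real antisymmetric; diagonalise the Hermitian matrix i*theta*Lm
    w, U = np.linalg.eigh(1j * theta * Lm)
    # eigenvalues of theta*Lm are -i*w, so ln det cos(theta*L) = sum of ln cosh(w)
    term_cos = np.sum(np.log(np.cosh(w)))
    # K = tanc(theta*L) has eigenvalues tanh(w)/w (equal to 1 at w = 0)
    tanc_w = np.ones_like(w)
    nz = np.abs(w) > 1e-12
    tanc_w[nz] = np.tanh(w[nz]) / w[nz]
    Khalf = (U * np.sqrt(tanc_w)) @ U.conj().T
    S = theta * Khalf @ Vm @ Khalf
    mu = np.linalg.eigvalsh((S + S.conj().T) / 2)
    assert mu.max() < 1.0, "condition (spec) violated"
    return -0.5 * (term_cos + np.sum(np.log1p(-mu)))
\end{verbatim}
Here, the second trace in (\ref{lnXi}) is computed from the eigenvalues of $\sqrt{\mathsf{K}_{\theta,G,T}}\, \mathsf{V}_{G,T} \sqrt{\mathsf{K}_{\theta,G,T}}$, which is isospectral to $\mathsf{V}_{G,T}\mathsf{K}_{\theta,G,T}$ as noted above.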
\section{Temporal QEF Growth Rate}\label{sec:temp}
We will first compute the infinite time horizon asymptotic growth rate (\ref{UpsGdef}) of the QEF (\ref{XiGT}) for a fixed but otherwise arbitrary nonempty finite set $G \subset {\mathbb Z}^\nu$. The dependence on $G$ will be indicated for the subsequent computation of the limit (\ref{Ups0}) in Section~\ref{sec:spat}. As a preliminary for the theorem below, note that the representation (\ref{lnXi}) is organised in terms of ``trace-analytic''\cite{VP_2010} functionals of operators in the sense that
\begin{equation}
\label{lnXi1}
\ln \Xi_{\theta,G,T}
=
- \frac{1}{2}
\Tr (\varphi(\theta \mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} ) + \psi(\theta\mathsf{L}_{G,T} )),
\end{equation}
where
\begin{equation}
\label{phipsi}
\varphi(z):= \ln(1-z),
\qquad
\psi(z):= \ln \cos z,
\qquad
z \in \mC,
\end{equation}
are analytic functions whose domains contain the spectra of the operators $\theta \mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} $ (under the condition (\ref{spec})) and $\theta\mathsf{L}_{G,T} $, at which these functions are evaluated. The structure of the operators $\mathsf{V}_{G,T}$ in (\ref{sVGT}) and $\mathsf{L}_{G,T} $ in (\ref{sLGT}) (with the latter giving rise to $\mathsf{K}_{\theta,G,T}$ in (\ref{sK})) plays a part together with the averaging relations of \ref{sec:averint} in the following theorem on the asymptotic behaviour of the quantity (\ref{lnXi1}), as
$T\to +\infty$, which is an adaptation of Theorem 8.1 of Ref. \refcite{VPJ_2021}. Its formulation employs the $\mC^{G\times G}$-valued Fourier transforms
\begin{align}
\label{PhiG}
\Phi_G(\lambda)
& :=
\int_\mR \re^{-i\lambda t }
V_G(t)
\rd t
=
F_G(i\lambda) F_G(i\lambda)^*,\\
\label{PsiG}
\Psi_G(\lambda)
& :=
\int_\mR \re^{-i\lambda t }
\Lambda_G(t)
\rd t
=
F_G(i\lambda) \bit{J}_m F_G(i\lambda)^*,
\qquad
\lambda \in \mR,
\end{align}
of the covariance and commutator kernels (\ref{VG}), (\ref{LambdaG}); see also Eq.~(5.8) in Ref. \refcite{VPJ_2019a}. Here,
\begin{equation}
\label{FG}
F_G(v)
:=
S_G
(v\bit{I}_n - A)^{-1}B,
\qquad
v \in \mC,
\end{equation}
is the $\mC^{G\times {\mathbb Z}^\nu}$-valued transfer function from the incremented input quantum Wiener process $W$ of the network in (\ref{dX}) to the stationary Gaussian quantum process $Z_G$ in (\ref{ZG}), with $\bit{I}_n:= (\delta_{jk}I_n)_{j,k\in {\mathbb Z}^\nu}$. Note that $\Phi_G(\lambda)$ is a complex positive semi-definite Hermitian matrix, while $\Psi_G(\lambda)$ is skew Hermitian for any $\lambda \in \mR$, with $\Phi_G+i\Psi_G$ being the Fourier transform of the quantum covariance kernel $V_G+i\Lambda_G$ from (\ref{EZZG}).
\begin{theorem}
\label{th:limXi}
Suppose the translation invariant network in (\ref{dXj})--(\ref{cABCD}) satisfies the conditions of Theorem~\ref{th:inv}, and the integral operator $\mathsf{L}_{G,T}$ in (\ref{sLGT}) has no zero eigenvalues for all sufficiently large $T>0$. Also, let the risk sensitivity parameter $\theta>0$ in (\ref{XiGT}) satisfy
\begin{equation}
\label{spec1}
\theta
\sup_{\lambda \in \mR}
\lambda_{\max}
(
\Phi_G(\lambda)
\mathrm{tanc}
(\theta \Psi_G(\lambda))
)
< 1,
\end{equation}
where the functions $\Phi_G$, $\Psi_G$ are associated with the finite subset $G \subset {\mathbb Z}^\nu$ by (\ref{PhiG})--(\ref{FG}). Then the QEF $\Xi_{\theta,G,T}$, defined by (\ref{XiGT}), (\ref{QGT}), has the following infinite time horizon growth rate (\ref{UpsGdef}):
\begin{equation}
\label{UpsG}
\Upsilon_{\theta,G}
=
-
\frac{1}{4\pi}
\int_{\mR}
\ln\det
D_{\theta,G}(\lambda)
\rd \lambda,
\end{equation}
where
\begin{equation}
\label{DG}
D_{\theta,G}(\lambda)
:=
\cos(
\theta \Psi_G(\lambda)
) -
\theta
\Phi_G(\lambda)
\mathrm{sinc}
(\theta \Psi_G(\lambda))
\end{equation}
is a $\mC^{G\times G}$-valued function,
and $\mathrm{sinc} z := \frac{\sin z}{z}$ (which is extended by continuity as $\mathrm{sinc} 0 := 1$).\hfill$\square$
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem 8.1 of Ref. \refcite{VPJ_2021} and is outlined for completeness. Since the case of one integral operator is free from noncommutativity, (\ref{lim1}) applies directly to the term $\Tr \psi(\theta\mathsf{L}_{G,T})$ in (\ref{lnXi1}), with the function $\psi$ given by (\ref{phipsi}):
\begin{align}
\nonumber
\lim_{T\to +\infty}
\Big(
\frac{1}{T}
\Tr \psi(\theta\mathsf{L}_{G,T} )
\Big)
& =
\frac{1}{2\pi}
\int_{\mR}
\Tr \ln\cos(
\theta \Psi_G(\lambda)
)
\rd \lambda\\
\label{psilim}
& =
\frac{1}{2\pi}
\int_{\mR}
\ln\det \cos(
\theta \Psi_G(\lambda)
)
\rd \lambda,
\end{align}
where the identity $\Tr \ln N = \ln\det N$ for square matrices $N$ is used along with the Fourier transform (\ref{PsiG})
of the commutator kernel (\ref{LambdaG}).
Application of (\ref{lim1}) to $\Tr \varphi(\theta \mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} )$ in (\ref{lnXi1}), with the function $\varphi$ from (\ref{phipsi}), involves two noncommuting integral operators $\mathsf{V}_{G,T} $, $\mathsf{L}_{G,T} $ in (\ref{sVGT}), (\ref{sLGT}) and the related operator $\mathsf{K}_{\theta,G,T} $ from (\ref{sK}) as
\begin{align}
\nonumber
\varphi(\theta \mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} )
& =
-
\sum_{N=1}^{+\infty}
\frac{1}{N}
\theta^N
(\mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T})^N\\
\label{phiPK}
& =
-
\sum_{N=1}^{+\infty}
\frac{1}{N}
\theta^N
\sum_{k_1, \ldots, k_N = 0}^{+\infty}
\mathop{\overrightarrow{\prod}}_{j=1}^N
\big(
c_{k_j}
\theta^{2k_j}
\mathsf{V}_{G,T}
\mathsf{L}_{G,T} ^{2k_j}
\big)
\end{align}
under the condition (\ref{spec}).
Here, the Maclaurin series expansion $\mathrm{tanc} z = \sum_{k=0}^{+\infty} c_k z^{2k}$ (with coefficients $c_k \in \mR$) takes into account the evenness of the {\tt tanc} function. By applying (\ref{lim1}) to (\ref{phiPK}) in combination with a dominated convergence argument, it follows that
\begin{align}
\nonumber
\lim_{T\to +\infty}&
\Big(
\frac{1}{T}
\Tr \varphi(\theta\mathsf{V}_{G,T} \mathsf{K}_{\theta,G,T} )
\Big)\\
\nonumber
& =
-
\frac{1}{2\pi}
\sum_{N=1}^{+\infty}
\frac{1}{N}
\theta^N
\sum_{k_1, \ldots, k_N = 0}^{+\infty}
\int_\mR
\Tr
\mathop{\overrightarrow{\prod}}_{j=1}^N
\big(
c_{k_j}
\theta^{2k_j}
\Phi_G(\lambda)
\Psi_G(\lambda)^{2k_j}
\big)
\rd \lambda\\
\nonumber
& =
\frac{1}{2\pi}
\int_{\mR}
\Tr
\ln(
I_{\#G} -
\theta
\Phi_G(\lambda)
\mathrm{tanc}
(\theta \Psi_G(\lambda))
)
\rd \lambda\\
\label{philim}
& =
\frac{1}{2\pi}
\int_{\mR}
\ln\det(
I_{\#G} -
\theta
\Phi_G(\lambda)
\mathrm{tanc}
(\theta \Psi_G(\lambda))
)
\rd \lambda,
\end{align}
where the Fourier transforms (\ref{PhiG}), (\ref{PsiG}) are used. The limit relation (\ref{philim}) holds under the condition (\ref{spec1}) which is a frequency-domain counterpart of (\ref{spec}).
A combination of (\ref{psilim}), (\ref{philim}) leads to the following asymptotic growth rate (\ref{UpsGdef}) for the quantity (\ref{lnXi1}):
\begin{align}
\nonumber
\Upsilon_{\theta, G}
=&
-
\frac{1}{4\pi}
\int_{\mR}
\ln\det(
I_{\#G} -
\theta
\Phi_G(\lambda)
\mathrm{tanc}
(\theta \Psi_G(\lambda))
)
\rd \lambda\\
\nonumber
& -
\frac{1}{4\pi}
\int_{\mR}
\ln\det \cos(
\theta \Psi_G(\lambda)
)
\rd \lambda\\
\label{Xilim}
= &
-
\frac{1}{4\pi}
\int_{\mR}
\ln\det(
\cos(
\theta \Psi_G(\lambda)
) -
\theta
\Phi_G(\lambda)
\mathrm{sinc}
(\theta \Psi_G(\lambda))
)
\rd \lambda,
\end{align}
where the identity
$\mathrm{tanc} z \cos z = \mathrm{sinc} z$ is applied to the matrix $\theta \Psi_G(\lambda)$. In view of (\ref{DG}), the relation (\ref{Xilim}) is identical to (\ref{UpsG}).
\end{proof}
Under the condition (\ref{spec1}), the quantity $-\ln\det D_{\theta,G}(\lambda)$ is a nonnegative-valued symmetric function of the frequency $\lambda\in \mR$. This symmetry allows the integration in (\ref{UpsG}) to be reduced as
$ \Upsilon_{\theta,G}
=
-
\frac{1}{2\pi}
\int_0^{+\infty}
\ln\det
D_{\theta,G}(\lambda)
\rd \lambda
$.
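The reduced integral lends itself to direct numerical evaluation: $\Phi_G(\lambda)$, $\Psi_G(\lambda)$ are computed from the transfer function (\ref{FG}) on a frequency grid, and the matrix functions of $\theta\Psi_G(\lambda)$ are obtained from an eigendecomposition of the Hermitian matrix $i\theta\Psi_G(\lambda)$. A minimal sketch (assuming NumPy, a hypothetical callable \texttt{F\_G} returning $F_G(i\lambda)$ truncated to finitely many input channels, and \texttt{Jm} the correspondingly truncated block diagonal matrix $\bit{J}_m$):
\begin{verbatim}
import numpy as np

def temporal_qef_rate(F_G, Jm, theta, lam_max=50.0, n_lam=2000):
    # Approximate Upsilon_{theta,G} = -(1/(2*pi)) * integral over [0, +inf) of
    # ln det D_{theta,G}(lambda), truncated to [0, lam_max]; see (UpsG), (DG).
    lams = np.linspace(0.0, lam_max, n_lam)
    vals = []
    for lam in lams:
        F = F_G(lam)                    # the matrix F_G(i*lambda) from (FG)
        Phi = F @ F.conj().T            # (PhiG)
        Psi = F @ Jm @ F.conj().T       # (PsiG), skew Hermitian
        w, U = np.linalg.eigh(1j * theta * Psi)
        # eigenvalues of theta*Psi are -i*w: cos -> cosh(w), sinc -> sinh(w)/w
        cos_m = (U * np.cosh(w)) @ U.conj().T
        sinc_w = np.ones_like(w)
        nz = np.abs(w) > 1e-12
        sinc_w[nz] = np.sinh(w[nz]) / w[nz]
        D = cos_m - theta * Phi @ ((U * sinc_w) @ U.conj().T)   # (DG)
        vals.append(np.linalg.slogdet(D)[1])   # det D is positive under (spec1)
    return -np.sum(vals) * (lams[1] - lams[0]) / (2 * np.pi)
\end{verbatim}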
\section{Spatio-Temporal Growth Rate of the QEF}
\label{sec:spat}
We will now proceed to the spatio-temporal growth rate (\ref{Ups0}) of the QEF (\ref{XiGT}). In view of (\ref{DG}), the representation (\ref{UpsG}) of the temporal QEF growth rate also has a trace-analytic structure
\begin{equation}
\label{UpsG1}
\Upsilon_{\theta,G}
=
-
\frac{1}{4\pi}
\int_{\mR}
\Tr
\big(
\varphi(
\theta
\Phi_G(\lambda)
\mathrm{tanc}
(\theta \Psi_G(\lambda))
)
+
\psi(\theta \Psi_G(\lambda))
\big)
\rd \lambda,
\end{equation}
involving the analytic functions (\ref{phipsi}) along with the $\mC^{G\times G}$-valued functions $\Phi_G$, $\Psi_G$ from (\ref{PhiG}), (\ref{PsiG}). At any given frequency $\lambda\in \mR$, each of the matrices $\Phi_G(\lambda)$, $\Psi_G(\lambda)$ is organised as the restriction $f_G:= (f_{j-k})_{j,k\in G}$ of a complex block Toeplitz matrix $f:= (f_{j-k})_{j,k\in {\mathbb Z}^\nu} \in \mathfrak{T}_{q,q}$ to $G \subset {\mathbb Z}^\nu$. This will be combined with the averaging relations of \ref{sec:aver} in the theorem below on the asymptotic behaviour of (\ref{UpsG}) for ``large'' fragments of the network. More precisely, a nonempty finite set $G \subset {\mathbb Z}^\nu$ is said to tend to infinity ($G\to \infty$) if
\begin{equation}
\label{Ginf}
\Delta_G(\ell)
:=
\frac{\#(G \setminus (G+\ell))}{\#G}
\to 0,
\qquad
\ell \in {\mathbb Z}^\nu.
\end{equation}
The function $\Delta_G: {\mathbb Z}^\nu\to [0,1]$ is symmetric (that is, $\Delta_G(\ell) = \Delta_G(-\ell)$ for all $\ell\in {\mathbb Z}^\nu$) and quantifies the relative discrepancy between the set $G$ and its translations $G + \ell = \{z+\ell: z\in G\}$, so that
$$
\frac{\#(G\Delta (G+\ell))}{\# G}
=
2\Delta_G(\ell),
\qquad
\frac{\#(G\bigcap(G+\ell))}{\# G} = 1- \Delta_G(\ell),
$$
where $\alpha \Delta \beta $ denotes the symmetric difference of sets $\alpha$, $\beta$. Accordingly, $\Delta_G(\ell)<1$ holds if and only if $\ell \in G-G:= \{x-y: x,y\in G\}$. Also note that $\sum_{\ell \in {\mathbb Z}^\nu} (1- \Delta_G(\ell)) = \#G$, whereby (\ref{Ginf}) implies that $\#G \to +\infty$. The latter property is not only necessary but is also sufficient for $G \to \infty$ in certain classes of sets $G$. In particular, for a cube $G:= \{0,\ldots, L-1\}^\nu$, which consists of $\#G = L^\nu$ lattice sites, where $L$ is a positive integer, the left-hand side of (\ref{Ginf}) takes the form $\Delta_G(\ell) = 1- \prod_{k=1}^\nu \max(0,1-|\ell_k|/L)$ for any $\ell:=(\ell_k)_{1\< k\< \nu} \in {\mathbb Z}^\nu$. In this case, the condition $G\to \infty$ in the sense of (\ref{Ginf}) reduces to the side length of the cube unboundedly growing: $L\to +\infty$. Returning to (\ref{Ginf}) in the general case (when $G$ is not necessarily a cube), we note that the convergence $G\to \infty$ is metrizable in the sense of its equivalence to
\begin{equation}
\label{Ginfmet}
\sum_{\ell \in {\mathbb Z}^\nu}
2^{-|\ell_1|-\ldots-|\ell_\nu|}
\Delta_G(\ell)\to 0.
\end{equation}
The following theorem, which is concerned with the asymptotic behaviour of the quantity (\ref{UpsG}), as $G\to \infty$, employs the $\mC^{q\times q}$-valued spatio-temporal Fourier transforms
\begin{align}
\label{Phi}
\Phi(\sigma,\lambda)
& :=
\sum_{\ell \in {\mathbb Z}^\nu}
\int_{\mR}
\re^{-i(\ell^\rT \sigma + \lambda t )}
V_\ell(t)
\rd t
=
F(\sigma,i\lambda)
F(\sigma,i\lambda)^*,\\
\label{Psi}
\Psi(\sigma,\lambda)
& :=
\sum_{\ell \in {\mathbb Z}^\nu}
\int_{\mR}
\re^{-i(\ell^\rT \sigma + \lambda t )}
\Lambda_\ell(t)
\rd t
=
F(\sigma,i\lambda)
J_m
F(\sigma,i\lambda)^*,
\qquad
\sigma\in {\mathbb T}^\nu,\
\lambda \in \mR,
\end{align}
of the invariant two-point covariance and commutator kernels of the process $Z$ in (\ref{ZX}). Here,
\begin{equation}
\label{F}
F(\sigma,s)
:=
{\mathcal S}(\sigma)(sI_n - \mathcal{ A}(\sigma))^{-1}\mathcal{ B}(\sigma),
\qquad
\sigma \in {\mathbb T}^\nu,\
s \in \mC,
\end{equation}
is the spatio-temporal transfer function from the incremented input fields of the network to $Z$. Similarly to (\ref{PhiG}), (\ref{PsiG}), $\Phi(\sigma,\lambda)$ is a complex positive semi-definite Hermitian matrix, while $\Psi(\sigma,\lambda)$ is skew Hermitian for any $\sigma \in {\mathbb T}^\nu$, $\lambda \in \mR$, and $\Phi+i\Psi$ is the Fourier transform of the quantum covariance kernel $V+i\Lambda$ from (\ref{EZZ2}). The function $\Phi+i\Psi: {\mathbb T}^\nu \times \mR \to \mC^{q\times q}$ can be interpreted as a ``quantum spectral density'' of the process $Z$.
\begin{theorem}
\label{th:limXiG}
Suppose the translation invariant network in (\ref{dXj})--(\ref{cABCD}) satisfies the conditions of Theorem~\ref{th:inv}, and the integral operator $\mathsf{L}_{G,T}$ in (\ref{sLGT}) has no zero eigenvalues for all sufficiently large $T>0$ and finite sets $G\subset {\mathbb Z}^\nu$ in the sense of (\ref{Ginf}) (or (\ref{Ginfmet})). Also, let the risk sensitivity parameter $\theta>0$ in (\ref{XiGT}) satisfy
\begin{equation}
\label{rad}
\theta
\sup_{\sigma \in {\mathbb T}^\nu,\, \lambda \in \mR}
\lambda_{\max}
(
\Phi(\sigma,\lambda)
\mathrm{tanc}
(\theta \Psi(\sigma,\lambda))
)
< 1,
\end{equation}
where the functions $\Phi$, $\Psi$ are given by (\ref{Phi}), (\ref{Psi}). Then the QEF $\Xi_{\theta,G,T}$, defined by (\ref{XiGT}), (\ref{QGT}), has the following spatio-temporal growth rate (\ref{Ups0}):
\begin{equation}
\label{Ups}
\Upsilon(\theta)
=
-
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR}
\ln\det
D_{\theta}(\sigma,\lambda)
\rd \sigma\rd \lambda,
\end{equation}
where the function $D_\theta: {\mathbb T}^\nu\times \mR \to \mC^{q\times q}$ is given by
\begin{equation}
\label{D}
D_{\theta}(\sigma,\lambda)
:=
\cos(
\theta \Psi(\sigma,\lambda)
) -
\theta
\Phi(\sigma,\lambda)
\mathrm{sinc}
(\theta \Psi(\sigma,\lambda)).
\end{equation}
\hfill$\square$
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{th:limXi} except that the averaging relations of \ref{sec:aver} are used here instead of \ref{sec:averint} and are applied to the integrands in (\ref{UpsG}) pointwise at every frequency $\lambda$, which is followed by a dominated convergence argument. Application of (\ref{limhG}) to the second integrand in (\ref{UpsG}) yields
\begin{align}
\nonumber
\lim_{G\to \infty}
\Big(
\frac{1}{\#G}
\Tr \psi(\theta \Psi_G(\lambda))
\Big)
& =
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\Tr \ln\cos(
\theta \Psi(\sigma,\lambda)
)
\rd \sigma\\
\label{psilimG}
& =
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\ln\det \cos(
\theta \Psi(\sigma,\lambda)
)
\rd \sigma,
\qquad
\lambda \in \mR,
\end{align}
where use is made of the function $\psi$ from (\ref{phipsi}) and the Fourier transform (\ref{Psi}) of the commutator kernel (\ref{Lambda}). Application of (\ref{limhG}) to the first integrand in (\ref{UpsG1}) leads to
\begin{align}
\nonumber
\lim_{G\to \infty}&
\Big(
\frac{1}{\#G}
\Tr \varphi(\theta\Phi_G(\lambda)\mathrm{tanc}(\theta\Psi_G(\lambda)))
\Big)\\
\nonumber
& =
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\Tr
\ln(
I_q -
\theta
\Phi(\sigma,\lambda)
\mathrm{tanc}
(\theta \Psi(\sigma,\lambda))
)
\rd \sigma\\
\label{philimG}
& =
\frac{1}{(2\pi)^\nu}
\int_{{\mathbb T}^\nu}
\ln\det(
I_q -
\theta
\Phi(\sigma,\lambda)
\mathrm{tanc}
(\theta \Psi(\sigma,\lambda))
)
\rd \sigma,
\qquad
\lambda \in \mR,
\end{align}
where
the Fourier transform (\ref{Phi}) of the real covariance kernel (\ref{V}) is used together with (\ref{Psi}). The limit (\ref{philimG}) holds under the condition (\ref{rad}) which is a spatio-temporal frequency-domain counterpart of (\ref{spec1}).
By combining (\ref{psilimG}), (\ref{philimG}), it follows that the quantity (\ref{UpsG1}) has the following asymptotic growth rate (\ref{Ups0}):
\begin{align}
\nonumber
\Upsilon(\theta)
=&
-
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR}
\ln\det(
I_q -
\theta
\Phi(\sigma,\lambda)
\mathrm{tanc}
(\theta \Psi(\sigma,\lambda))
)
\rd \sigma\rd \lambda\\
\nonumber
& -
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR}
\ln\det \cos(
\theta \Psi(\sigma,\lambda)
)
\rd \sigma\rd \lambda\\
\label{XilimG}
= &
-
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR}
\ln\det(
\cos(
\theta \Psi(\sigma,\lambda)
) -
\theta
\Phi(\sigma,\lambda)
\mathrm{sinc}
(\theta \Psi(\sigma,\lambda))
)
\rd \sigma\rd \lambda.
\end{align}
In view of (\ref{D}), the relation (\ref{XilimG}) establishes (\ref{Ups}).
\end{proof}
Consider Theorem~\ref{th:limXiG} in the limiting classical case obtained formally by letting $\Theta = 0$ in (\ref{Xcomm}) and $J_m = 0$ in (\ref{Wcomm}). In this case, (\ref{dX}) is an SDE driven by independent standard Wiener processes $W_k$ with values in $\mR^m$ at lattice sites $k\in {\mathbb Z}^\nu$. The classical invariant measure of the network makes $Z$ in (\ref{ZX}) a stationary $(\mR^q)^{{\mathbb Z}^\nu}$-valued Gaussian random process\cite{GS_2004} with zero mean and the spectral density $\Phi$ in (\ref{Phi}). Accordingly, the function $\Psi$ in (\ref{Psi}) vanishes, and the condition (\ref{rad}) takes the form
\begin{equation}
\label{class}
\theta
<
\theta_*
:=
\frac{1}{\sup_{\sigma \in {\mathbb T}^\nu,\, \lambda \in \mR}
\lambda_{\max}
(
\Phi(\sigma, \lambda)
)}
=
\frac{1}{\|F\|_\infty^2},
\end{equation}
involving the spatio-temporal counterpart
$$
\|F\|_\infty
:=
\sup_{\sigma \in {\mathbb T}^\nu,\, \lambda \in \mR}\|F(\sigma, i\lambda)\|
$$
of the Hardy space $\cH_\infty$-norm for the transfer function $F$ in (\ref{F}) which factorizes the spectral density $\Phi$ in (\ref{Phi}). In this case, the right-hand side of (\ref{Ups}) reduces to
\begin{equation}
\label{Ups*}
\Upsilon_*(\theta)
:=
-
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu \times \mR}
\ln\det(
I_q
-
\theta
\Phi(\sigma,\lambda)
)
\rd \sigma
\rd \lambda
\end{equation}
in view of (\ref{D}) and corresponds to the $\cH_\infty$-entropy integral of Ref.~\refcite{MG_1990} (see also Ref.~\refcite{AK_1981}).
In contrast to its classical counterpart (\ref{Ups*}), the QEF growth rate (\ref{Ups}) in the quantum case depends on both functions $\Phi$, $\Psi$ which constitute the quantum spectral density $\Phi + i\Psi$ of the process $Z$ in (\ref{ZX}). Furthermore, the condition (\ref{rad}) is transcendental in $\theta$ and, unlike (\ref{class}), does not admit a closed-form representation. However, since {\tt tanc} on the imaginary axis (that is, {\tt tanhc} on the real axis) takes values in the interval $(0,1]$, so that $0\prec \mathrm{tanc} (\theta \Psi)\preccurlyeq I_q$, we have
$$
\lambda_{\max}
(
\Phi
\mathrm{tanc}
(\theta \Psi)
)
=
\lambda_{\max}
\big(
\sqrt{\mathrm{tanc}
(\theta \Psi)}
\Phi
\sqrt{\mathrm{tanc}
(\theta \Psi)}
\big)
\leq \lambda_{\max}(\Phi)
$$
everywhere in ${\mathbb T}^\nu\times \mR$, so that the fulfillment of the classical constraint (\ref{class}) secures (\ref{rad}).
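In practice, the condition (\ref{rad}) can be verified numerically by maximising $\lambda_{\max}(\Phi\, \mathrm{tanc}(\theta\Psi))$ over a discretised (and, in $\lambda$, truncated) frequency grid, using the symmetrised form $\sqrt{\mathrm{tanc}(\theta\Psi)}\,\Phi\,\sqrt{\mathrm{tanc}(\theta\Psi)}$ from above so that a Hermitian eigensolver applies. A minimal Python sketch, with placeholder spectral densities standing in for (\ref{Phi}), (\ref{Psi}), is as follows.
\begin{verbatim}
# A minimal sketch for checking the condition (rad):
# theta * sup lambda_max( Phi tanc(theta Psi) ) < 1.
import numpy as np

def tanc_of_skew(theta, Psi):
    """Matrix tanc(theta*Psi) for a skew Hermitian Psi, computed through the
    Hermitian matrix H = -1i*theta*Psi and the scalar function tanh(x)/x."""
    H = -1j * theta * Psi
    w, V = np.linalg.eigh(H)
    tanhc = np.where(np.abs(w) > 1e-12, np.tanh(w) / np.where(w == 0, 1.0, w), 1.0)
    return (V * tanhc) @ V.conj().T           # Hermitian, eigenvalues in (0, 1]

def rad_margin(theta, phi_psi, sigmas, lams):
    """theta * max over the grid of lambda_max(Phi tanc(theta Psi))."""
    worst = 0.0
    for s in sigmas:
        for lam in lams:
            Phi, Psi = phi_psi(s, lam)
            wT, VT = np.linalg.eigh(tanc_of_skew(theta, Psi))
            sqT = (VT * np.sqrt(np.clip(wT, 0.0, None))) @ VT.conj().T
            worst = max(worst, np.linalg.eigvalsh(sqT @ Phi @ sqT).max())
    return theta * worst

if __name__ == "__main__":
    def phi_psi(sigma, lam):                  # placeholder spectral densities
        F = np.array([[1.0, 0.0], [0.3 * np.cos(sigma), 1.0]])
        F = F / (1.0 + lam ** 2) ** 0.5
        J = np.array([[0.0, 1.0], [-1.0, 0.0]])
        return F @ F.T, F @ J @ F.T

    sigmas = np.linspace(-np.pi, np.pi, 31, endpoint=False)
    lams = np.linspace(-20.0, 20.0, 201)      # truncation of the real line
    print("condition (rad) holds:", rad_margin(0.5, phi_psi, sigmas, lams) < 1.0)
\end{verbatim}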
\section{A Homotopy Technique for Computing the QEF Growth Rate}
\label{sec:homo}
Consider the computation of the QEF growth rate (\ref{Ups}) by a technique, which
resembles the homotopy methods for numerical solution of parameter dependent algebraic equations\cite{MB_1985} and exploits the specific dependence of $\Upsilon(\theta)$ on the risk sensitivity parameter $\theta$. With the function $D_\theta$ in (\ref{D}), we associate a function $U_\theta: {\mathbb T}^\nu\times \mR \to \mC^{q\times q}$ by
\begin{equation}
\label{U}
U_\theta(\sigma,\lambda):= -D_\theta(\sigma,\lambda)^{-1}\partial_\theta D_\theta(\sigma, \lambda)
\end{equation}
for all $\theta >0$ satisfying (\ref{rad}) (which ensures that $\det D_\theta (\sigma,\lambda)\ne 0$ for all $\sigma \in {\mathbb T}^\nu$, $\lambda \in \mR$). The following theorem provides a network counterpart of Theorem 9.1 from Ref. \refcite{VPJ_2021} (the latter corresponds formally to the single OQHO case with $\nu=0$).
\begin{theorem}
\label{th:diff}
Under the conditions of Theorem~\ref{th:limXiG}, the QEF growth rate $\Upsilon(\theta)$ in (\ref{Ups}) satisfies the differential equation
\begin{equation}
\label{Ups'}
\Upsilon'(\theta)
=
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR}
\Tr U_\theta(\sigma,\lambda)
\rd \sigma\rd \lambda,
\end{equation}
with the initial condition $\Upsilon(0)=0$. Here, the function (\ref{U}) is computed as
\begin{equation}
\label{U1}
U_\theta
=
\Psi
( \Psi\cos(
\theta \Psi
) -
\Phi
\sin
(\theta \Psi)
)^{-1}
(\Phi \cos(\theta \Psi)
+\Psi\sin(\theta \Psi)
)
\end{equation}
(the arguments $\sigma$, $\lambda$ are omitted for brevity),
takes values in the subspace of Hermitian matrices of order $q$ and satisfies a Riccati equation
\begin{equation}
\label{U'}
\partial_\theta U_\theta(\sigma,\lambda)
=
\Psi(\sigma,\lambda)^2
+
U_\theta(\sigma,\lambda)^2,
\qquad
\sigma \in {\mathbb T}^\nu,\
\lambda \in \mR,
\end{equation}
with the initial condition $U_0 = \Phi$ given by (\ref{Phi}).\hfill$\square$
\end{theorem}
\begin{proof}
The relation (\ref{Ups'}) is obtained by combining (\ref{Ups}) with $(\ln\det D_\theta)' = -\Tr U_\theta$, which follows from (\ref{U}) and the identity $(\ln\det N)' = \Tr (N^{-1}N')$, where $(\cdot)':= \partial_\theta(\cdot)$.
Since the function $D_\theta$ in (\ref{D}) admits the representation
\begin{equation}
\label{D1}
D_\theta
=
\cos(
\theta \Psi
) -
\Phi
\Psi^{-1}
\sin
(\theta \Psi)
\end{equation}
for any $\sigma \in {\mathbb T}^\nu$, $\lambda\in \mR$ (with $\frac{\sin(\theta z)}{z}$ extended by continuity to $\theta$ at $z=0$), its derivative with respect to $\theta$ takes the form
\begin{equation}
\label{D'}
D_\theta'
=
-\Psi
\sin(
\theta \Psi
)
-
\Phi
\cos
(\theta \Psi).
\end{equation}
The equality (\ref{U1}) results from
substitution of (\ref{D1}), (\ref{D'}) into (\ref{U}). By differentiating (\ref{D'}) in $\theta$, it follows that (\ref{D1})
satisfies the linear second-order ODE
\begin{equation}
\label{D''}
D_\theta''
=
-\Psi^2
\cos(
\theta \Psi
)
+
\Phi
\Psi
\sin
(\theta \Psi)
=
- D_\theta \Psi^2,
\end{equation}
with the initial conditions $D_0 = I_q$, $D_0' = -\Phi$. In view of the relation $(N^{-1})' = -N^{-1}N'N^{-1}$, the differentiation of (\ref{U}) leads to
\begin{equation}
\label{U'1}
U_\theta'
=
-D_\theta^{-1}D_\theta''
+D_\theta^{-1}D_\theta' D_\theta^{-1}D_\theta'
=
\Psi^2 + U_\theta^2,
\end{equation}
which uses (\ref{D''}) and establishes (\ref{U'}). The solution $U_\theta$ of this differential equation inherits the Hermitian property from its initial condition $U_0 = \Phi$, since $\Psi(\sigma,\lambda) = -\Psi(\sigma,\lambda)^*$ in (\ref{Psi}) for any $\sigma \in {\mathbb T}^\nu$, $\lambda\in \mR$, and $(N^2)^* = N^2$ for Hermitian or skew Hermitian matrices $N$.
\end{proof}
The transformation $D_\theta\mapsto U_\theta$ in (\ref{U}), which involves a matrix-valued counterpart of the logarithmic derivative and relates the quadratically nonlinear Riccati ODE (\ref{U'}) to the linear ODE (\ref{D''}), resembles the Hopf-Cole transformation\cite{C_1951,H_1950} linking the viscous Burgers equation with the heat equation. The role of (\ref{U}) in (\ref{U'1}) is also similar to that of the logarithmic transformation in dynamic programming
equations for stochastic control\cite{F_1982} (see also Ref. \refcite{VP_2010}).
The right-hand side of (\ref{Ups'}) can be evaluated by numerical integration over the spatio-temporal frequencies
and used for computing (\ref{Ups}) as
$$
\Upsilon(\theta)
=
\int_0^\theta \Upsilon'(\vartheta)\rd \vartheta
=
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times \mR \times [0,\theta]}
\Tr U_\vartheta(\sigma,\lambda)
\rd \sigma \rd \lambda \rd \vartheta.
$$
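For illustration, the following Python sketch carries out this homotopy computation on a truncated frequency grid: at every grid node the Riccati equation (\ref{U'}) is stepped in $\vartheta$ by a Runge--Kutta scheme, and $\Tr U_\vartheta$ is accumulated according to the triple integral above. The spectral densities and discretisation parameters are placeholders.
\begin{verbatim}
# A minimal sketch of the homotopy computation of the QEF growth rate: the
# Riccati ODE dU/dtheta = Psi^2 + U^2, U_0 = Phi, is integrated pointwise in the
# spatio-temporal frequencies, and Tr U is accumulated over theta, sigma, lambda.
import numpy as np

def qef_rate(theta, phi_psi, sigmas, lams, steps=150, nu=1):
    d_sigma, d_lam = sigmas[1] - sigmas[0], lams[1] - lams[0]
    d_theta = theta / steps
    rate = 0.0
    for s in sigmas:
        for lam in lams:
            Phi, Psi = phi_psi(s, lam)
            Psi2 = Psi @ Psi
            U = Phi.astype(complex)
            f = lambda M: Psi2 + M @ M
            for _ in range(steps):            # RK4 step for dU/dtheta = Psi^2 + U^2
                rate += np.trace(U).real * d_theta * d_sigma * d_lam
                k1 = f(U)
                k2 = f(U + 0.5 * d_theta * k1)
                k3 = f(U + 0.5 * d_theta * k2)
                k4 = f(U + d_theta * k3)
                U = U + (d_theta / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rate / (2 * (2 * np.pi) ** (nu + 1))

if __name__ == "__main__":
    def phi_psi(sigma, lam):                  # placeholder spectral densities
        F = np.array([[1.0, 0.0], [0.3 * np.cos(sigma), 1.0]])
        F = F / (1.0 + lam ** 2) ** 0.5
        J = np.array([[0.0, 1.0], [-1.0, 0.0]])
        return F @ F.T, F @ J @ F.T

    sigmas = np.linspace(-np.pi, np.pi, 21, endpoint=False)
    lams = np.linspace(-20.0, 20.0, 81)
    print("Upsilon(0.3) ~", qef_rate(0.3, phi_psi, sigmas, lams))
\end{verbatim}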
In particular, (\ref{Ups'}) yields
\begin{align}
\nonumber
\Upsilon'(0)
& =
\frac{1}{2(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times\mR}
\Tr \Phi(\sigma,\lambda)
\rd \sigma
\rd \lambda\\
\label{Ups'0}
& =
\frac{1}{2}
\|F\|_2^2 =
\frac{1}{2}
\bE(Z_0(0)^\rT Z_0(0)),
\end{align}
which, in accordance with (\ref{EQGT}), (\ref{EQGT1}), reproduces the mean square cost rate for the process $Z$ in (\ref{ZX}) in the invariant Gaussian state of the network. In (\ref{Ups'0}), use is also made of a spatio-temporal version
$$
\|F\|_2
:=
\sqrt{
\frac{1}{(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu\times\mR}
\|F(\sigma,i\lambda)\|_\rF^2
\rd \sigma
\rd \lambda}
$$
of the Hardy space $\cH_2$-norm for the transfer function $F$ in (\ref{F}) which factorizes $\Phi$ in (\ref{Phi}).
The function $\det (I_q - \theta \Phi(\sigma,\lambda))$ in the classical QEF rate (\ref{Ups*}) is rational with respect to $\lambda$, simplifying the evaluation of the integral. This observation can be combined with the Maclaurin series expansions of the trigonometric functions, which allows (\ref{D}) to be approximated as
\begin{align}
\nonumber
D_\theta
& =
I_q - \frac{1}{2}\theta^2 \Psi^2 - \theta \Phi \Big(I_q - \frac{1}{6}\theta^2 \Psi^2\Big)
+
o(\theta^3)\\
\label{Dprox}
& =
I_q - \theta \Phi
-
\frac{1}{2}\theta^2
\Big(I_q - \frac{\theta}{3}\Phi\Big)\Psi^2
+
o(\theta^3),
\qquad
{\rm as}\
\theta \to 0.
\end{align}
Substitution of (\ref{Dprox}) into (\ref{Ups}) allows the quantum QEF growth rate to be computed approximately through a perturbation of its classical counterpart (\ref{Ups*}):
\begin{align}
\nonumber
\Upsilon(\theta)
= &
\Upsilon_*(\theta)\\
\nonumber
& +
\frac{\theta^2}{4(2\pi)^{\nu+1}}
\int_{{\mathbb T}^\nu \times\mR}
\Tr
\Big((I_q - \theta\Phi(\sigma,\lambda))^{-1}\Big(I_q - \frac{\theta}{3}\Phi(\sigma,\lambda)\Big)\Psi(\sigma,\lambda)^2
\Big)
\rd \sigma
\rd \lambda\\
\label{VUps}
& + o(\theta^3),
\qquad
{\rm as}\
\theta \to 0.
\end{align}
Since $\Psi(\sigma,\lambda)^2\prec 0$ for all $\sigma \in{\mathbb T}^\nu$, $\lambda \in \mR$, the relation (\ref{VUps}) implies that $\Upsilon(\theta)< \Upsilon_*(\theta)$ for all sufficiently small $\theta >0$.
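As a computational shortcut for small $\theta$, the approximation (\ref{VUps}) only requires the matrices $\Phi$ and $\Psi^2$ at each frequency. A minimal Python sketch, to be used with placeholder spectral densities and truncated frequency grids such as those above, reads:
\begin{verbatim}
# A minimal sketch of the small-theta approximation (VUps): the classical rate
# plus the second-order quantum correction, on a truncated frequency grid.
import numpy as np

def qef_rate_small_theta(theta, phi_psi, sigmas, lams, nu=1):
    d = (sigmas[1] - sigmas[0]) * (lams[1] - lams[0])
    classical, correction = 0.0, 0.0
    for s in sigmas:
        for lam in lams:
            Phi, Psi = phi_psi(s, lam)
            q = Phi.shape[0]
            M = np.eye(q) - theta * Phi
            classical += -0.5 * np.log(np.linalg.det(M)).real * d
            correction += 0.25 * theta ** 2 * np.trace(
                np.linalg.solve(M, (np.eye(q) - theta / 3.0 * Phi) @ (Psi @ Psi))
            ).real * d
    return (classical + correction) / (2.0 * np.pi) ** (nu + 1)
\end{verbatim}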
\section{Conclusion}
\label{sec:conc}
We have considered a class of translation invariant networks of multimode OQHOs on a multidimensional lattice, governed by linear QSDEs driven by external quantum fields. The block Toeplitz structure of their coefficients has been exploited in order to represent the PR conditions in the spatio-temporal frequency domain, relate them with the energy and coupling matrices, and compute the energy parameters for interconnections of networks. Such interconnections arise in quantum control settings with network performance specifications including stability and minimization of cost functionals. We have discussed the invariant Gaussian quantum state for stable networks, driven by vacuum fields, and a quadratic-exponential cost functional as a risk-sensitive performance criterion for finite fragments of the network over bounded time intervals. This cost gives rise to exponential upper bounds for tail distributions of a quadratic function of network variables weighted by a block Toeplitz matrix. A spatio-temporal frequency-domain formula has been obtained for the asymptotic QEF rate per unit time and per lattice site in the thermodynamic limit of infinite time horizons and unboundedly growing network fragments. This representation involves the quantum spectral density, associated through the Fourier transform with the invariant quantum covariance kernel of the network variables and factorised by the spatio-temporal transfer function of the network. We have obtained a differential equation for the QEF rate as a function of the risk sensitivity parameter and outlined its computation using a homotopy technique and asymptotic expansions. These results provide a solution of the risk-sensitive performance analysis problem in the spatio-temporal frequency domain for translation invariant linear quantum stochastic networks, which can be applied to coherent and measurement-based control and filtering settings for such systems with QEF criteria.
\noindent
One of the most basic classes of algorithmic problems in combinatorial optimization is the computation of shortest paths for all pairs of nodes in a directed graph.
The reader may consult the monograph of Schrijver \cite{Schrijver03:CO_A} for a comprehensive survey.
Here we study parameterized versions where some of the arc weights are unspecified.
It turns out that standard tools such as the Floyd--Warshall algorithm \cite[\S8.4]{Schrijver03:CO_A} or Dijkstra's algorithm \cite[\S7.2]{Schrijver03:CO_A} admit interesting generalizations.
While it is known that the shortest path problem is connected to max-plus linear algebra and tropical geometry (see, e.g., \cite[Chap.~4]{Butkovic10}, \cite[\S5.2]{Tropical+Book}
and their references), this paper investigates how the geometric underpinnings can be exploited algorithmically.
Dijkstra's algorithm and its siblings are among the core tools used, e.g., in devices which help a car driver to navigate a road network.
These efficient methods allow for solving the corresponding shortest path problems almost instantly, even on cheap hardware, and even for fairly large networks.
Methods from robust optimization have been used to take some uncertainty about the link travel times into account, see, e.g., \cite{YuYang:1998}, \cite{MR3907225} and the references there.
Yet the situation for the network provider is quite different from the perspective of the network user.
One reason is that the provider's goal does not necessarily agree with the one of the user:
While the individual driver might be interested in short travel times, the traffic authorities of a metropolitan city might want to, e.g., minimize the total amount of pollution.
More importantly, the traffic authorities seek to achieve a system optimum, whereas the driver cares for an individual objective; cf.\ \cite{Moehring:2005}.
Typically, in relevant cases it is next to impossible to even describe a system optimum.
Here we propose methods from tropical geometry to gain new insight.
For instance, this may be applied to assess the impact of local changes to a network a priori, provided that the number of simultaneous modifications is not too high.
Tropical geometry is a thriving field, which combines algebraic and polyhedral geometry with optimization; cf.~\cite{Tropical+Book}.
The benefit is mutual: geometry gains algorithmic methods through optimization, and topics in theoretical computer science and optimization may be analyzed more deeply through geometric insight.
Examples for the latter include interpreting the decision problem MEAN-PAYOFF in terms of tropical linear programs \cite{AkianGaubertGutermann12}, ordinary linear programs which exhibit a non-strongly polynomial behavior for interior point methods \cite{ABGJ:2018} or applications to multicriteria optimization~\cite{JoswigLoho:1707.09305}.
Examples for the former abound, too; to name just one: the perfect matching problem, classically solved by the Hungarian method \cite[\S17.2]{Schrijver03:CO_A}, computes tropical determinants \cite[Prop.~1.2.5]{Tropical+Book}.
It is worth noting that our parameterized version of computing shortest paths is of particular interest geometrically.
Namely, this is related to enumerating a class of convex polytopes known as \emph{polytropes} and has been studied, e.g., by Tran~\cite{Tran:2017}.
Our setup is the following.
Let $\Gamma$ be a directed graph with $n$ nodes and $m$ arcs.
Throughout we will assume that $\Gamma$ has no parallel arcs.
Additionally, each arc will be equipped with a weight.
Then the graph together with the weight function, can be encoded as an $n{\times}n$-matrix where the coefficient at position $(u,v)$ is the weight on the arc from $u$ to $v$.
Necessarily we have $m\leq n^2$, with equality if and only if $\Gamma$ is a complete directed graph with $n$ loops.
Since we will be interested in shortest path problems we consider smaller weights as better, and this suggests to use $\infty$ to signal the absence of an arc.
The resulting matrix is a \emph{weighted adjacency matrix} of $\Gamma$.
This leaves the question: what are the weights?
In the context of the shortest path problem a very general answer is the following.
Let $(G,+)$ be a totally ordered abelian group such that $\infty$ is not an element of~$G$.
Then $G\cup\{\infty\}$, equipped with \enquote{$\min$} as the addition and \enquote{$+$} as the multiplication, is a semiring; this is the \emph{$(\min,+)$-semiring} associated with $G$.
Here $\infty$ is neutral with respect to the addition and absorbing with respect to the multiplication.
Via the usual rules for the addition and the multiplication of matrices this entails a semiring structure on the set $(G\cup\{\infty\})^{n\times n}$ of $n{\times}n$-matrices with coefficients in $G\cup\{\infty\}$.
The classical shortest path problem occurs when $G$ is the additive group of the real numbers.
We denote the extension $\RR\cup\{\infty\}$ of this group by $\TT$.
However, it is interesting and useful to go one step beyond by only requiring that $G$ is a commutative semigroup equipped with a partial ordering, which is not necessarily total.
Then, in general, for a pair of nodes there are competing shortest paths whose total weights are incomparable.
We will see that basic algorithmic ideas for solving shortest path problems still remain valid, with minor adjustments.
The case when $G$ is the additive semigroup of tropical polynomials with real coefficients in a fixed number of indeterminates is of particular interest to us.
To look at parameterized versions of shortest path problems is not a new idea.
A first paper which explores connections to polyhedral geometry is Fredman \cite{Fredman:1976}.
Another important precursor of our approach is a paper by Gallo, Grigoriadis and Tarjan \cite{Gallo+Grigoriadis+Tarjan:1989} on a parametric version of the celebrated push--relabel method for computing maximum flows by Goldberg and Tarjan \cite{Goldberg+Tarjan:1988}.
Moreover, shortest path computations have been considered in the context of robust optimization; cf.\ \cite{MR2546839} for a general reference.
For instance, Yu and Yang observed that in a digraph equipped with interval weights, for given nodes $s$ and $t$, it is NP-complete to decide whether there is a shortest $s$--$t$ path whose total weight stays below a certain threshold \cite[Theorem~1]{YuYang:1998}.
Other modern concepts in this area include online techniques (e.g., see \cite{AlonEtAl:2006}) as well as robustness combined with randomization (e.g., see \cite{MatuschkeSkutellaSoto:2015}) and dynamic algorithms (e.g., see \cite{Bernstein:2016}).
The $s$--$t$ shortest path problems addressed in the above models are relevant, e.g., for a single driver who wants to navigate her car through a road network with uncertain link travel times.
Here instead we are considering the all-pairs shortest path problem, which amounts to taking the perspective of the provider of the network; cf.\ Section~\ref{sec:computations} below for computational experiments on real-world data.
These show that our method is also practically useful, provided the output size is not too large.
Our paper is organized as follows.
Section~\ref{sec:floyd-warshall} starts out with a brief sketch on how to generalize the classical algorithm of Floyd and Warshall to the scenario with parameterized arc weights.
Standard results on tropical hypersurfaces are invoked to reveal basic structural insight into the shortest-path problem.
The algorithmic core of this paper, explained in Section~\ref{sec:dijkstra}, is a procedure for enumerating all parameterized shortest-path trees to a fixed target node.
This can be seen as a parameterized analog to Dijkstra's algorithm.
We demonstrate that this algorithm is feasible in practice for few variable arc weights.
Our computational results are summarized in Sections~\ref{sec:computations} and~\ref{sec:concluding}.
\section{Parameterizing the Floyd--Warshall algorithm}
\label{sec:floyd-warshall}
\noindent
A standard method for computing all shortest paths between any pair of nodes in a directed graph is the Floyd--Warshall algorithm.
This is well-known to have a straightforward interpretation in tropical arithmetic as follows.
We will briefly sketch the method and refer to \cite[\S8.4]{Schrijver03:CO_A} or \cite[Section 5.9]{Ahoetal:1975} for details.
Let $\Gamma$ be a directed graph on $n$ nodes with weighted adjacency matrix $D=(d_{uv})_{u,v}\in \TT^{n\times n}$.
A naive algorithm for obtaining all-pairs shortest paths is to compute the $(n{-}1)$st tropical power $(I\oplus D)^{\odot(n-1)}$ of the matrix $I\oplus D$ where $I=D^{\odot 0}$ is the tropical identity matrix, with coefficients $0$ on the diagonal and $\infty$ otherwise.
Here we used \enquote{$\oplus$} for the tropical matrix addition, which is defined as the coefficientwise minimum, and \enquote{$\odot$} for the tropical matrix multiplication, i.e., the analog of classical matrix multiplication where \enquote{$\min$} and \enquote{$+$} replace the addition and the multiplication, respectively. In particular, $I\oplus D$ is the weighted adjacency matrix of $\Gamma$ with zeros along the diagonal.
For computing shortest paths the $(n{-}1)$st power is enough since any shortest path, if it exists, takes at most $n-1$ arcs.
Each of the $n-2$ multiplications takes $O(n^3)$ time, resulting in a total cost of $O(n^4)$.
We will not discuss clever strategies for multiplying these matrices as we will beat this naive matrix multiplication approach via \eqref{eq:floyd-warshall} below.
Unless there are negative cycles the coefficient of $D^{\odot(n-1)}$ at position $(u,v)$ is the length of a shortest path from node $u$ to~$v$ using exactly $n-1$ arcs, and the coefficient of $(I\oplus D)^{\odot(n-1)}$ at position $(u,v)$ is the length of a shortest path from node $u$ to~$v$.
Moreover, a negative cycle exists if and only if a coefficient on the diagonal of $(I\oplus D)^{\odot n}$ is negative.
Formally, the solution to the all-pairs shortest path problem can be written as
\begin{equation}\label{eq:kleene}
D^* \ = \ I \oplus D \oplus D^{\odot 2} \oplus \cdots \oplus D^{\odot(n-1)} \oplus \cdots \enspace ,
\end{equation}
which converges to $(I\oplus D)^{\odot (n-1)}= I\oplus \dots \oplus D^{\odot(n-1)}$ if and only if there is no negative cycle.
The matrix $D^*$ is called the \emph{Kleene star} of $D$; cf.\ Butkovi\v{c} \cite[\S1.6.2.1]{Butkovic10}.
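For illustration, the min-plus matrix arithmetic and the naive computation of the Kleene star via the tropical power $(I\oplus D)^{\odot(n-1)}$ take only a few lines of Python; in the following sketch $\infty$ is represented by \texttt{math.inf} and encodes a missing arc.
\begin{verbatim}
# A minimal sketch of min-plus matrix arithmetic and of the Kleene star via the
# naive power (I + D)^(n-1); math.inf encodes a missing arc.
from math import inf

def trop_mul(A, B):
    """Tropical (min,+) product of two square matrices."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kleene_star(D):
    """(I + D)^(n-1); this equals D^* if there is no negative cycle."""
    n = len(D)
    E = [[min(D[i][j], 0 if i == j else inf) for j in range(n)] for i in range(n)]
    P = [[0 if i == j else inf for j in range(n)] for i in range(n)]  # tropical identity
    for _ in range(n - 1):
        P = trop_mul(P, E)
    return P

if __name__ == "__main__":
    D = [[0, 3, inf], [inf, 0, 1], [2, inf, 0]]
    print(kleene_star(D))      # all-pairs shortest path lengths of a small digraph
\end{verbatim}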
Floyd and Warshall's algorithm reduces the complexity of computing $D^*$ to $O(n^3)$ via dynamic programming.
The key ingredient is the weight of a shortest path from $u$ to $v$ with all intermediate nodes restricted to the set $\{1,2,\dots,r\}$, which is
\begin{equation}\label{eq:floyd-warshall}
d_{uv}^{(r)} \ = \
\begin{cases}
0 & \text{if } r=0 \text{ and } u=v\\
d_{uv} & \text{if } r=0 \text{ and } u\neq v\\
\min\left( d_{uv}^{(r-1)},\, d_{ur}^{(r-1)}+d_{rv}^{(r-1)}\right) & \text{if } r\geq 1 \enspace .
\end{cases}
\end{equation}
That is, in the nontrivial step of the computation we check if going through the new node~$r$ gives an advantage.
We set $D^{(r)}=\big(d_{uv}^{(r)}\big)_{u,v}$.
By applying the formula \eqref{eq:floyd-warshall} recursively, the Floyd--Warshall algorithm computes $D^{(n)}$ in $O(n^3)$ time.
The trick is that, with $D^{(r-1)}$ known explicitly, the computation of a single coefficient $d_{uv}^{(r)}$ requires only constant time.
Note that this method is also suitable for detecting negative cycles by checking the diagonal of the result.
A negative cycle exists if and only if some diagonal coefficient of $D^{(n)}$ is negative.
Otherwise $D^{(n)}=(I \oplus D)^{\odot(n-1)}=D^*$.
In general, $D^{(r)}$ is distinct from any tropical power~$(I\oplus D)^{\odot k}$ whenever $r\geq 1$.
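A minimal Python sketch of the recursion \eqref{eq:floyd-warshall} with constant weights, including the negative-cycle test via the diagonal of $D^{(n)}$, reads as follows.
\begin{verbatim}
# A minimal sketch of the Floyd--Warshall recursion with constant weights;
# the function returns None if a negative cycle is detected on the diagonal.
from math import inf

def floyd_warshall(D):
    n = len(D)
    d = [[0 if u == v else D[u][v] for v in range(n)] for u in range(n)]  # r = 0
    for r in range(n):
        for u in range(n):
            for v in range(n):
                if d[u][r] + d[r][v] < d[u][v]:
                    d[u][v] = d[u][r] + d[r][v]
    if any(d[u][u] < 0 for u in range(n)):
        return None                                       # negative cycle
    return d

if __name__ == "__main__":
    D = [[0, 3, inf], [inf, 0, 1], [2, inf, 0]]
    print(floyd_warshall(D))
\end{verbatim}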
\begin{remark}
For computing all-pairs shortest path in a dense graph with arbitrary weights there is no algorithm known that beats the $O(n^3)$ complexity bound; see \cite[\S8.6]{Schrijver03:CO_A}.
Yet Floyd and Warshall's algorithm was improved by Fredman in \cite{Fredman:1976} when the edge weights are restricted to be nonnegative.
Fredman's bound is based on the reduction of computing the Kleene star $D^*$ to tropical matrix multiplication; see \cite[Theorem~5.7 and Corollary~2]{Ahoetal:1975}.
Moreover, he subdivides the matrix multiplication into the multiplication of smaller block matrices of sizes $n\times\sqrt{n}$ and $\sqrt{n}\times n$, respectively.
In combination with a clever search method this leads to a bound of $O(n^{5/2})$ comparisons.
Further, this approach leads to an algorithm of complexity $O(n^3 \log\log(n)/\log(n)^2)$ by Han and Takaoka \cite{Han+Takaoka:2016}; see \cite[\S7.5]{Schrijver03:CO_A} for a general overview of all-shortest paths with nonnegative weights and \cite{Han+Takaoka:2016} for more recent developments.
\end{remark}
\begin{figure}[t]
\centering
\begin{minipage}{.47\textwidth}\centering
\input{line.tex}
\end{minipage}
\hfill
\begin{minipage}{.47\textwidth}\centering
\input{hypersurfaces.tex}
\end{minipage}
\caption{Two generic tropical plane curves.
Each region is marked with the term at which the minimum is attained.
Left: tropical line defined by $\min (2, 1+x, y)$. Right: tropical quadric defined by $\min (4, 3+x, 4+2x, 2+x+y, 6+2y, \frac{9}{2}+y)$.}
\label{fig:tropical}
\end{figure}
Our first observation is that the same ideas can be applied in the presence of variable arc weights.
To this end we consider a weighted adjacency matrix where each coefficient is a multivariate polynomial whose coefficients lie in the $(\min,+)$-semiring $\TT=\RR\cup\{\infty\}$.
These polynomials again form a semiring, and thus, via the usual addition and multiplication, the set of $n{\times}n$-matrices with coefficients in $\TT[x_1,\dots,x_k]$ is a semiring, too.
Formally, a $k$-variate tropical polynomial, $f\in\TT[x_1,\dots,x_k]$, with finite \emph{support set} $S\subset\ZZ^k$, is a map which assigns each exponent $a\in S$ a coefficient $\gamma_a\in\RR$.
This gives rise to the \emph{evaluation function} $f(t_1,\dots,t_k) = \min\smallSetOf{\gamma_a + a_1 t_1 + \dots + a_k t_k}{a\in S}$ which sends $t\in\RR^k$ to a real number.
That function is piecewise linear, continuous and concave; cf.\ \cite[\S1.1]{Tropical+Book}.
For each $a\in S$ the set $\smallSetOf{t\in\RR^k}{f(t)=\gamma_a + a_1 t_1 + \dots + a_k t_k}$ is a convex polyhedron.
That set is a \emph{region} of $f$ if that polyhedron is of maximal dimension $k$.
The regions are the domains of linearity of $f$, and they form a polyhedral subdivision of $\RR^k$; cf.\ Figure~\ref{fig:tropical}.
The \emph{tropical hypersurface} $\cT(f)$ is the locus where at least two terms of $f$ are minimal; i.e., $\cT(f)$ \enquote{lies between} pairs of regions.
Now, if $D$ is a matrix with coefficients in $\TT[x_1,\dots,x_k]$, then each coefficient of $D$ defines a tropical hypersurface and a polyhedral subdivision of~$\RR^k$.
The maximal cells of the common refinement of these polyhedral subdivisions, taken over all coefficients of $D$, are the \emph{regions} of $D$.
Each region of $D$ is the intersection of the regions of the tropical hypersurfaces corresponding to some set of coefficients of $D$.
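Evaluating a tropical polynomial at a point and recording which terms attain the minimum identifies the cell of the induced subdivision containing that point; the point lies in a region precisely when a single term is minimal. A small Python sketch, applied to the tropical quadric of Figure~\ref{fig:tropical}, is as follows.
\begin{verbatim}
# A minimal sketch: a tropical polynomial is a dict mapping exponent vectors to
# coefficients; evaluate it and report the terms attaining the minimum.

def evaluate(f, point):
    """Return (min value, list of exponents attaining the minimum)."""
    vals = {a: c + sum(ai * ti for ai, ti in zip(a, point)) for a, c in f.items()}
    best = min(vals.values())
    return best, [a for a, v in vals.items() if v == best]

if __name__ == "__main__":
    # the tropical quadric from the figure above:
    # min(4, 3+x, 4+2x, 2+x+y, 6+2y, 9/2+y)
    quadric = {(0, 0): 4, (1, 0): 3, (2, 0): 4, (1, 1): 2, (0, 2): 6, (0, 1): 4.5}
    print(evaluate(quadric, (0.0, 0.0)))   # one minimal term: interior of a region
    print(evaluate(quadric, (1.0, 1.0)))   # several minimal terms: on the hypersurface
\end{verbatim}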
\begin{observation}\label{obs:initial}
The solution to the all-pairs shortest paths problem of a directed graph with $n$ nodes and weighted adjacency matrix $D\in\TT[x_1,\dots,x_k]^{n\times n}$ is a polyhedral decomposition of $\RR^k$ induced by up to $n^2$ tropical hypersurfaces corresponding to the nonconstant coefficients of $D^{\odot (n-1)}$.
On each polyhedral cell the lengths of all shortest paths are linear functions in the $k$ parameters.
\end{observation}
By multiplying the nonconstant tropical polynomials of a matrix $D\in\TT[x_1,\dots,x_k]^{n\times n}$ we obtain one tropical polynomial which yields the tropical hypersurface \emph{induced} by $D$, and this is denoted $\cT(D)$.
The regions of $\cT(D)$ are precisely the regions of $D$.
\begin{example}\label{exmp:initial}
The weighted adjacency matrix
\begin{equation}\label{eq:initial}
D \ = \ \begin{pmatrix}
0 & \infty & \infty & 1 \\
1 & 0 & \infty & \infty \\
y & 1 & 0 & \infty \\
\infty & x & 1 & 0
\end{pmatrix} \enspace ,
\end{equation}
whose coefficients lie in the semiring $\TT[x,y]$ of bivariate tropical polynomials, defines a directed graph, $\Gamma$, on four nodes.
Since the diagonal of $D$ is zero, we have $D=I\oplus D$, and the third tropical power of $D$ reads
\[
D^{\odot 3} \ = \ \small\begin{pmatrix}
\min( 2+x, 2+y, 0 ) & \min( 1+x, 3 ) & 2 & 1 \\
1 & \min( 2+x, 0 ) & 3 & 2 \\
\min( y, 2 ) & \min( 1+x+y, 1 ) & \min( 2+y, 0 ) & \min( 1+y, 3 ) \\
\min( 1+x, 1+y, 3 ) & \min( x, 2 ) & 1 & \min( 2+x, 2+y, 0 )
\end{pmatrix} \enspace .
\]
Ten among the 16 coefficients $d_{uv}^{\odot 3}$ are nonconstant.
Three of the corresponding tropical polynomials are linear and generic, i.e., they have three terms: $d_{1,1}^{\odot 3} = d_{4,4}^{\odot 3} = \min( 2+x, 2+y, 0 )$ and $d_{4,1}^{\odot 3} = \min( 1+x, 1+y, 3 )$.
Six coefficients are linear but degenerate.
Among these we have the equalities $d_{1,2}^{\odot 3} = \min( 1+x, 3 ) = 1 + d_{4,2}^{\odot 3}$ and $d_{3,4}^{\odot 3} = \min( 1+y, 3) = 1 + d_{3,1}^{\odot 3}$.
Only the coefficient $d_{3,2}^{\odot 3} = \min(1+x+y,1)$ is nonlinear; it is a (degenerate) tropical polynomial of degree two.
In Figure~\ref{fig:initial} the resulting two nondegenerate and four degenerate tropical lines are marked by circles at their apices.
Those lie at infinity in the degenerate cases.
The tropical quadric degenerates to an ordinary line, which is marked by two squares.
We obtain an arrangement of $2+4+1=7$ tropical hypersurfaces.
Their union is the tropical hypersurface $\cT(D^{\odot 3})$ induced by $D^{\odot 3}$, up to multiplicities, and it has $15$ regions.
Now we want to extract information about shortest paths from this geometric data.
The diagonal of $D^{\odot 3}$ reveals that there are no negative cycles unless $x<-2$ or $y<-2$.
All coefficients are finite, and thus $\Gamma$ is strongly connected.
The \emph{feasible domain} is the set
\[
\SetOf{(x,y)\in\RR^2}{x\geq -2,\, y\geq -2} \enspace ,
\]
where shortest paths between any two nodes exist.
It is subdivided into seven regions, four bounded and three unbounded ones.
Eight of the $15$ regions of $D^{\odot 3}$ are infeasible.
\end{example}
\begin{figure}[th]\centering
\input{example1_regions.tex}
\caption{%
Decomposition of $\RR^2$ into the $15$ regions of the Kleene star $D^{\odot 3}$ from Example~\ref{exmp:initial}.
These are induced by an arrangement of six tropical lines and one tropical quadric.
On each of the seven feasible regions the lengths of the shortest paths are linear functions in $x$ and $y$.
The eight infeasible regions are shaded.
}
\label{fig:initial}
\end{figure}
Comparing tropical polynomials $f,g\in\TT[x_1,\dots,x_k]$ as real functions we set
\begin{equation}\label{eq:partial-ordering}
f \leq g \ :\!\iff \ f(z) \leq g(z) \quad \text{for all } z\in\TT^k \enspace .
\end{equation}
This defines a partial ordering.
It is easy to see that $f$ and $g$ are comparable if and only if $f=\infty$, $g=\infty$ or $f(z)=c+g(z)$ for some constant $c\in\RR$.
Consider a matrix $D\in\TT[x_1,\dots,x_k]^{n\times n}$.
We say that $D$ has \emph{separated variables} if each indeterminate occurs, with multiplicity one, in the weight of at most one arc.
In this way there is no dependence among the weights for distinct arcs.
Then each coefficient of $D$ involves a constant plus a sum of indeterminates. We may reduce the number of variables by substituting a single variable for the sum of indeterminates.
Thus we assume that the entries of $D$ are linear tropical polynomials.
It then follows that $k \leq m \leq n^2$.
This property is satisfied by the matrix \eqref{eq:initial} in Example~\ref{exmp:initial}.
\begin{theorem}\label{thm:floyd-warshall}
Let $D\in\TT[x_1,\dots,x_k]^{n\times n}$ be the weighted adjacency matrix of a directed graph on $n$ nodes with separated variables.
Then, between any pair of nodes, there are at most $2^k$ pairwise incomparable shortest paths.
Moreover, the Kleene star $D^*$, which encodes all parameterized shortest paths, can be computed in $O(k \cdot 2^k \cdot n^{3})$ time, if it exists.
\end{theorem}
\begin{proof}
We consider the case without negative cycles, i.e., cycles whose total weight is comparable to and strictly less than zero.
Then there is at least one shortest path between any two nodes; for convenience here we take paths of weight $\infty$ into account.
In each shortest path each arc occurs at most once unless the total weight of the path is $\infty$.
By our assumption this means that the total weight is equal to $\lambda+x_{i_1}+\dots+x_{i_\ell}$ for some $\lambda\in\TT$, where $x_{i_1}+\dots+x_{i_\ell}$ is a multilinear tropical monomial, i.e., each indeterminate occurs with multiplicity zero or one.
There are $2^k$ distinct multilinear monomials, and hence this bounds the number of incomparable shortest paths between any two nodes.
To obtain our complexity result we use the Floyd--Warshall algorithm with the computation of the coefficients $d_{uv}^{(r)}$ via \eqref{eq:floyd-warshall} as the key step.
In our parameterized scenario each coefficient of $D^{(r-1)}$ is a multilinear tropical polynomial.
The tropical multiplication, i.e., ordinary sum, of two multilinear monomials takes linear time in the number of indeterminates, which is at most $k$.
Each coefficient of $D^{(r-1)}$ has at most $2^k$ terms by our bound on the number of incomparable shortest paths.
We infer that computing $d_{uv}^{(r)}$ takes not more than $O(k\cdot 2^k)$ time, and this yields our claim.
\end{proof}
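The proof translates into code once tropical polynomials are stored via their support, e.g., as dictionaries mapping exponent vectors to minimal real coefficients. The following Python sketch runs the parameterized recursion \eqref{eq:floyd-warshall} in this representation; terms stemming from walks which revisit nodes are not pruned here (they are redundant for all parameter values without negative cycles), whereas the argument above additionally confines the support to multilinear monomials.
\begin{verbatim}
# A minimal sketch of the parameterized Floyd--Warshall recursion: matrix entries
# are tropical polynomials stored as dicts {exponent tuple: coefficient};
# identical monomials are merged by taking the minimum of their constants.
from math import inf

def t_add(f, g):
    """Tropical sum (pointwise minimum) of two tropical polynomials."""
    h = dict(f)
    for a, c in g.items():
        h[a] = min(h.get(a, inf), c)
    return h

def t_mul(f, g):
    """Tropical product (classical sum) of two tropical polynomials."""
    h = {}
    for a, c in f.items():
        for b, e in g.items():
            s = tuple(x + y for x, y in zip(a, b))
            h[s] = min(h.get(s, inf), c + e)
    return h

def parametric_floyd_warshall(D, k):
    """D: n x n matrix of tropical polynomials in k variables ({} means infinity)."""
    zero = {(0,) * k: 0}
    n = len(D)
    d = [[dict(zero) if u == v else dict(D[u][v]) for v in range(n)] for u in range(n)]
    for r in range(n):
        for u in range(n):
            for v in range(n):
                d[u][v] = t_add(d[u][v], t_mul(d[u][r], d[r][v]))
    return d

if __name__ == "__main__":
    x, y = (1, 0), (0, 1)                 # exponent vectors of the two parameters
    D = [[{}, {}, {}, {(0, 0): 1}],       # the matrix (eq:initial), diagonal omitted
         [{(0, 0): 1}, {}, {}, {}],
         [{y: 0}, {(0, 0): 1}, {}, {}],
         [{}, {x: 0}, {(0, 0): 1}, {}]]
    K = parametric_floyd_warshall(D, 2)
    # the entry for (3,2) contains the path terms 1 and 1+x+y of the example above,
    # possibly together with redundant terms coming from walks that revisit nodes
    print(K[2][1])
\end{verbatim}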
If the coefficients of $D\in\TT[x_1,\dots,x_k]^{n\times n}$ are linear tropical polynomials, then each coefficient in $D^{\odot(n-1)}$ is a tropical polynomial of degree at most $n-1$, and thus the degree of the tropical hypersurface $\cT(D^{\odot (n-1)})$ does not exceed $n^2(n-1)$, which is of order $O(n^3)$.
This occurs, e.g., when $D$ has separated variables.
\begin{corollary}\label{cor:floyd-warshall}
With the same conditions as in Theorem~\ref{thm:floyd-warshall}, and if $k$ is considered a fixed constant, all parameterized shortest paths can be computed in $O(n^{3})$ time, if they exist.
\end{corollary}
\begin{remark}
The special case when \emph{all} the arc weights are variable is of particular interest to tropical geometry.
This is equivalent to enumerating a class of convex polytopes known as \emph{polytropes}; cf.\ Tran~\cite{Tran:2017}.
The precise connection to our results is beyond the scope of this article.
\end{remark}
\section{A parameterized analog to Dijkstra's algorithm}
\label{sec:dijkstra}
\noindent
The Floyd--Warshall algorithm considered in the previous section is very useful to get a conceptual overview of the shortest-path problem.
The Kleene star $D^*=(I\oplus D)^{\odot (n-1)}=D^{(n)}$ itself does not directly provide us with the information about all shortest paths for all choices of parameters simultaneously.
Instead this is only determined by the polyhedral decomposition of the parameter space $\RR^k$ into the regions of $D^*$, induced by the tropical hypersurface $\cT(D^*)$.
In this section we propose a method, inspired by Dijkstra's algorithm, to find the regions of $D^*$, given $D$.
Dijkstra’s algorithm is the main method for computing shortest paths used in applications; cf.~\cite[\S7.2]{Schrijver03:CO_A}.
It computes a shortest-path tree directed toward a fixed node.
In this setting it is common to assume that all weights are nonnegative, and this is what we will do here.
Let $\Gamma$ be a directed graph with $n$ nodes, without parallel arcs, and weighted adjacency matrix $D=(d_{uv})_{u,v}\in \TT[x_1,\dots,x_k]^{n\times n}$.
Working with nonnegative weights means that we consider the feasible region of the matrix $D$ within the positive orthant.
The nonnegativity assumption entails that a shortest path from any node to any other node is well defined or, equivalently, the Kleene star $D^*$ exists.
Since we do not assume that $\Gamma$ is strongly connected, we allow for \enquote{shortest paths} of infinite length.
Motivated by an application to traffic networks (cf.\ Section~\ref{sec:computations}) we choose the following setup.
Each arc $(u,v)$ in $\Gamma$ is equipped with a \emph{weight interval} $[\lambda_{uv},\mu_{uv}]$ subject to
\[
0 \ \leq \ \lambda_{uv} \ \leq \ \mu_{uv} \ \leq \ \infty \enspace .
\]
If $\mu_{uv}=\infty$ then we abuse the notation $[\lambda_{uv},\mu_{uv}]$ for the ray $\SetOf{x\in\RR}{x\geq\lambda_{uv}}$.
Similarly, if $\lambda_{uv}=\mu_{uv}=\infty$, then $[\infty,\infty]$ is the empty set which signals the absence of an arc.
In Section~\ref{sec:computations} below, the weight interval will describe a range of possible travel times along a link in a traffic network.
We explicitly allow for the case $\lambda_{uv}=\mu_{uv}$, i.e., the arc $(u,v)$ may be equipped with a constant weight.
Assuming that there are precisely $k$ arcs with nonconstant weights, we can identify those arcs with the variables, for which we also use the notation $x_{uv}$.
Conversely, we also write \enquote{$\lambda(x_i)$} for the given lower bound on $x_i$ and \enquote{$\mu(x_i)$} for the given upper bound.
Setting the coefficients of $D$ to
\[
d_{uv} \ = \
\begin{cases}
x_{uv} & \text{if } \lambda_{uv}<\mu_{uv} \\
\lambda_{uv} & \text{otherwise} \enspace ,
\end{cases}
\]
we arrive at the case of separated variables.
So Theorem~\ref{thm:floyd-warshall} applies, but here we restrict the feasible domain to the polyhedron $[\lambda(x_1),\mu(x_1)]\times\dots\times[\lambda(x_k),\mu(x_k)]$ in $\RR^k$.
That polyhedron is compact, if and only if all upper bounds are finite.
From now on we will compare two tropical polynomials $f,g\in\TT[x_1,\ldots,x_k]$ with respect to this polyhedron, i.e., we let $f\leq g$ if $f(z)\leq g(z)$ for all $z\in[\lambda(x_1),\mu(x_1)]\times\dots\times[\lambda(x_k),\mu(x_k)]$.
Whenever the tropical polynomials $f$ and $g$ are sums of the arc weights on paths with separated variables, then checking $f\leq g$ can be done with a single evaluation of each of the polynomials: that is, in this case $f\leq g$ if and only if $f(\mu)\leq g(\lambda)$ where $\lambda$ and $\mu$ are the minimum and the maximum, respectively, taken over all variable bounds involved.
If not further specified then $\lambda(x_i) = 0$ and $\mu(x_i)=\infty$.
A compact way to represent the set of shortest paths to a single target is a shortest-path tree.
A shortest-path tree is the result of Dijkstra's algorithm when all weights are constant.
Motivated by \cite[Theorem~7.3]{Tarjan:1983}, we extend the notion of shortest-path trees to the partial ordering $\leq$.
We call a spanning tree $T$ with all edges directed toward the target node $t$ a \emph{shortest-path tree} if, for every arc $(v,w)$,
\begin{equation}\label{eq:shortest-path-tree}
d_{vw}+p_w \ \not< \ p_v \enspace ,
\end{equation}
where $p_v$ is the length of the path from $v$ to $t$ in the tree $T$.
Often we will denote such a directed spanning tree as a pair $(T,p)$ in order to stress that all subsequent complexity bounds require the function $p$ to be given explicitly.
A direct consequence of \eqref{eq:shortest-path-tree} is:
\begin{observation}\label{obs:number_ineq}
A given directed spanning tree $(T,p)$ has $n-1$ arcs out of the $m$ arcs of~$\Gamma$. Thus there are at most $m-n+1$ arcs $(v,w)$ such that $d_{vw}+p_w$ and $p_v$ are incomparable.
In particular, it can be tested in $O(k\cdot m)$ time whether a directed spanning tree is a shortest-path tree.
\end{observation}
The solution to computing shortest-path trees toward the node $t$ in a directed graph with $n$ nodes and weighted adjacency matrix $D\in\TT[x_1,\dots,x_k]^{n\times n}$ is a polyhedral decomposition $\cS$ of $\RR^k$ induced by up to $n-1$ tropical hypersurfaces corresponding to the nonconstant coefficients in the column labeled $t$ in the Kleene star $D^*$.
Note that all diagonal entries are zero as there are no negative cycles.
On each polyhedral cell the lengths of all shortest paths are linear functions in the $k$ parameters.
Each such cell is a union of cells of the subdivision induced by the tropical hypersurface~$\cT(D^*)$.
\begin{lemma}
Every shortest-path tree $(T,p)$ gives rise to a polyhedral cell in the decomposition~$\cS$.
That cell is described by the inequalities
\begin{equation}\label{eq:shortest-path-cell}
p_v \ \leq \ d_{vw}+p_w \quad \text{for all arcs } (v,w) \enspace.
\end{equation}
Every region of $D$ arises in this way.
\end{lemma}
\begin{proof}
The inequalities \eqref{eq:shortest-path-cell} are linear, and thus they define a polyhedron $P=P(T)$.
For every nonnegative point $x\in\TT^k$ Dijkstra's algorithm produces a shortest-path tree, $T_x$, and we have $x\in P(T_x)$.
As a consequence these polyhedra cover the feasible domain.
The terms $p_v$ and $d_{vw}+p_w$ appear in the entry $d^{(n)}_{v,t}$ of the Kleene star $D^*$.
Thus each cell of $\cS$ is either contained in $P$, or they are disjoint.
On the other hand, if $x\in P(T)$ then every path in $T$ has to be a shortest path after substitution of the variables.
In other words, if $q$ is a term of $d^{(n)}_{v,t}$ which is minimal for $x$ then $p_v$ and $q$ evaluate to the same value at $x$.
This implies that $P$ is contained in a cell of $\cS$.
We conclude that $P\in\cS$, and every region is of that form.
\end{proof}
Clearly, it is enough to take only those arcs into account for which $p_v$ is incomparable to $d_{vw}+p_w$.
The following example shows that a shortest-path tree $T$ may yield a lower dimensional cell or even the empty set.
\begin{example} \label{ex:infeasible}
Consider the directed graph on four nodes shown in Figure~\ref{fig:infeasible} whose weights lie in the semiring $\TT[x]$ of univariate tropical polynomials.
The Kleene star of its weighted adjacency matrix is
\begin{equation}\label{eq:infeasible}
D^* \ = \ \begin{pmatrix}
0 & \infty & \infty & \infty \\
x & 0 & \infty & \infty \\
\min( 2+x, 5 ) & 2 & 0 & \infty \\
\min( 3+x, 4 ) & 3 & \infty & 0
\end{pmatrix} \enspace .
\end{equation}
The first column of $D^*$ yields four shortest-path trees with node $a$ as the target.
The four corresponding systems of inequalities read
\[
x \leq 1 \ , \quad 1 \leq x \leq 3 \ , \quad 3 \leq x \ , \quad x \leq 1 \text{ and } 3\leq x \enspace ;
\]
where the final system is infeasible.
That is, there are only three regions.
\end{example}
\begin{figure}[t]
\centering
\input{example_infeasible1.tex}
\caption{The shortest-path tree in the directed graph of Example~\ref{ex:infeasible} that does not correspond to a feasible region.}
\label{fig:infeasible}
\end{figure}
\begin{remark}
Finding the dimension of a polyhedral cell given in terms of linear inequalities can be reduced to solving linear programs; cf.\ \cite[Theorem 6.5.5]{GLS}.
\end{remark}
Our aim is it to enumerate all shortest-path trees, and hence, via solving linear programs, all maximal dimensional polyhedral regions.
We will discuss our choices and other options at the end of this section.
We consider the graph $\graphOfShortestPathTrees=\graphOfShortestPathTrees(D)$ whose nodes are all shortest-path trees, and which has an edge between two nodes if the corresponding trees share $n-2$ common edges, i.e., there is exactly one node $u$ with two outgoing edges, and the two paths from $u$ to the target $t$ are incomparable.
\begin{remark}\label{rem:dualgraph}
The graph $\graphOfShortestPathTrees(D)$ contains the dual graph of the polyhedral subdivision $\cS$ as a connected subgraph.
\end{remark}
A graph traversal enumerates all nodes in the connected component of some first node.
This is the core of our approach, which employs the following two procedures.
\begin{Algorithm}[Find an initial shortest-path tree]\label{alg:startnode}
Set each unknown $x_i$ to its minimal value $\lambda_i$.
Run Dijkstra's algorithm to obtain a shortest-path tree, with fixed arc weights, for the target node $t$.
Let $T$ be this shortest-path tree, equipped with the original weights.
For each node $u$ this yields a parameterized distance $p^T_u\in\TT[x_1,\ldots,x_k]$ from $u$ to $t$ in~$T$.
\end{Algorithm}
That initial tree $T$ is a first node of the graph $\graphOfShortestPathTrees(D)$.
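A minimal Python sketch of Algorithm~\ref{alg:startnode} is shown below. The arcs are given by their weight intervals, Dijkstra's algorithm is run on the reversed arcs so that the resulting parent pointers form a tree directed toward $t$, and a parameterized tree distance is stored as its constant part together with the set of variable arcs on the tree path. The small example is the graph of Example~\ref{ex:infeasible} with the default bounds $\lambda=0$ and $\mu=\infty$ for the variable arc.
\begin{verbatim}
# A minimal sketch of the initialization: run Dijkstra toward the target t with
# every variable weight frozen at its lower bound; the parent pointers give the
# initial shortest-path tree, and each tree path yields a parameterized distance
# (constant part, set of variable arcs).
import heapq
from math import inf

def initial_tree(arcs, n, t):
    """arcs: dict (u, v) -> (lower, upper); an arc is variable if lower < upper."""
    pred = {v: [] for v in range(n)}          # incoming arcs, i.e. the reversed graph
    for (u, v), (lo, hi) in arcs.items():
        pred[v].append((u, lo))
    dist, parent, heap = {t: 0.0}, {}, [(0.0, t)]
    while heap:                               # Dijkstra from t on the reversed arcs
        d, v = heapq.heappop(heap)
        if d > dist.get(v, inf):
            continue
        for u, w in pred[v]:
            if d + w < dist.get(u, inf):
                dist[u], parent[u] = d + w, v
                heapq.heappush(heap, (d + w, u))
    return parent                             # parent[u] = next node on the tree path

def parametric_distance(parent, arcs, u, t):
    """(constant part, frozenset of variable arcs) of the tree path from u to t."""
    const, variables = 0.0, set()
    while u != t:
        a = (u, parent[u])
        lo, hi = arcs[a]
        if lo < hi:
            variables.add(a)
        else:
            const += lo
        u = parent[u]
    return const, frozenset(variables)

if __name__ == "__main__":
    # the example above: nodes 0..3, target node 0, variable arc (1, 0) with weight x
    arcs = {(1, 0): (0.0, inf), (2, 0): (5.0, 5.0), (2, 1): (2.0, 2.0),
            (3, 0): (4.0, 4.0), (3, 1): (3.0, 3.0)}
    parent = initial_tree(arcs, 4, 0)
    print(parent, parametric_distance(parent, arcs, 2, 0))   # path weight 2 + x
\end{verbatim}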
\begin{Algorithm}[Traversing $\graphOfShortestPathTrees(D)$]\label{alg:traverse}
We will maintain a queue, $Q$, of pairs of trees and parameterized distances.
That queue is initialized with a single shortest-path tree obtained from Algorithm~\ref{alg:startnode}.
While $Q$ is nonempty, pick and remove from $Q$ the next tree $T$, together with the parameterized distances $p_v^T$ from $v$ to $t$.
For every arc $(v,w)$, compare $p_v^T$ with $d_{vw}+p_w^T$.
If they are incomparable, add $p_v^T \leq d_{vw}+p_w^T$ to a system of inequalities associated with $T$, and replace the outgoing arc of $v$ by $(v,w)$ to obtain a new tree $T'$.
Compute the new parameterized distances $p^{T'}$, and check whether $T'$ is a shortest-path tree.
If this is the case and $T'$ has not been considered before, add $T'$ to $Q$.
When there is no arc left to compare, output the triplet of the tree $T$, the distance function $p^T$ and the system of inequalities describing the region of $T$.
\end{Algorithm}
\begin{remark}
Algorithm~\ref{alg:traverse} is a breadth first search on $\graphOfShortestPathTrees(D)$; cf.\ \cite[Chapter~1]{Tarjan:1983}.
The order in which the traversal is organized is not particularly relevant.
Similarly, the initial shortest-path tree constructed in Algorithm~\ref{alg:startnode} could be replaced by any other shortest-path tree with a non-empty feasible region of parameters.
\end{remark}
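A compact Python sketch of the traversal is given below. A tree is stored as the map sending each node to the head of its outgoing arc, the partial ordering of two tree-path weights is decided by a sup/inf test over the box of weight intervals (a slight refinement of the evaluation test described above, cancelling variable arcs occurring on both sides), and an initial shortest-path tree, for instance the output of the previous sketch, is taken as input. For simplicity the parameterized distances are recomputed from scratch for every tree; the nonnegativity of the lower bounds guarantees that no arc exchange creates a cycle. On the graph of Example~\ref{ex:infeasible} this sketch outputs the four shortest-path trees discussed there.
\begin{verbatim}
# A minimal sketch of the traversal: breadth-first search on the graph of
# shortest-path trees.  A tree maps each node to the head of its outgoing arc;
# a distance is a pair (constant, frozenset of variable arcs).
from collections import deque
from math import inf

def leq(f, g, arcs):
    """Decide f <= g over the whole box of weight intervals (exact for path weights)."""
    (cf, Sf), (cg, Sg) = f, g
    worst = (cf - cg + sum(arcs[a][1] for a in Sf - Sg)
             - sum(arcs[a][0] for a in Sg - Sf))
    return worst <= 0

def plus(arc, arcs, p):
    """Parameterized weight d_vw + p_w of the path starting with the arc (v, w)."""
    lo, hi = arcs[arc]
    return (p[0] + lo, p[1]) if lo == hi else (p[0], p[1] | {arc})

def distances(tree, arcs, t):
    """Parameterized tree distances p_v toward t (the tree must be cycle-free)."""
    p = {t: (0.0, frozenset())}
    def walk(v):
        if v not in p:
            p[v] = plus((v, tree[v]), arcs, walk(tree[v]))
        return p[v]
    for v in tree:
        walk(v)
    return p

def is_sp_tree(p, arcs):
    """Shortest-path tree condition: no arc gives a strictly shorter path."""
    return all(not (leq(plus(a, arcs, p[a[1]]), p[a[0]], arcs)
                    and not leq(p[a[0]], plus(a, arcs, p[a[1]]), arcs)) for a in arcs)

def traverse(arcs, t, start_tree):
    seen = {tuple(sorted(start_tree.items()))}
    queue, out = deque([start_tree]), []
    while queue:
        tree = queue.popleft()
        p = distances(tree, arcs, t)
        region = []                              # inequalities p_v <= d_vw + p_w
        for (v, w) in arcs:
            if v == t:
                continue
            rhs = plus((v, w), arcs, p[w])
            if not leq(p[v], rhs, arcs) and not leq(rhs, p[v], arcs):
                region.append((p[v], rhs))
                new = dict(tree); new[v] = w
                key = tuple(sorted(new.items()))
                if key not in seen and is_sp_tree(distances(new, arcs, t), arcs):
                    seen.add(key); queue.append(new)
        out.append((tree, p, region))
    return out

if __name__ == "__main__":
    # the same example as in the previous sketch; target node 0
    arcs = {(1, 0): (0.0, inf), (2, 0): (5.0, 5.0), (2, 1): (2.0, 2.0),
            (3, 0): (4.0, 4.0), (3, 1): (3.0, 3.0)}
    for tree, p, region in traverse(arcs, 0, {1: 0, 2: 1, 3: 1}):
        print(tree, len(region), "incomparable arcs")
\end{verbatim}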
Let us now determine bounds on the number $\combinatorialTypes=\combinatorialTypes(D,t)$ of shortest-path trees with target $t$ and weighted adjacency matrix $D$.
That is also the number of nodes in $\graphOfShortestPathTrees(D)$ and a crucial parameter for the complexity of the Algorithm~\ref{alg:traverse}.
We will call a variable \emph{active} in a shortest-path tree when it occurs in the weight of one of its arcs.
In both special cases treated in the next two lemmas all variables are active for all shortest-path trees.
\begin{lemma}\label{lem:freenodes}
Let $D\in\TT[x_1,\dots,x_k]^{n\times n}$ be the weighted adjacency matrix of a directed graph with $n\geq k+1$ nodes satisfying the following:
there is a node $t\in[n]$ such that $d_{u,t} = x_u$ for all $u\in[k]$ and $d_{uv}=\infty$ for all $v\in[n]\setminus\{t\}$, and all other arcs have a constant weight.
Then $\combinatorialTypes(D,t)\leq (k+1)^{n-k-1}$.
\end{lemma}
\begin{proof}
There is a unique shortest path from each node $u\in[k]$ to $t$.
In particular, there is exactly one shortest-path tree with target node $t$ when $n=k+1$.
Now, let $v\in[n]\setminus[k]$ be some other node.
A path from $v$ to $t$ is either of constant length or it goes via exactly one of the first $k$ nodes.
In particular, any two paths via the same node $u\in[k]$ are comparable.
Also, paths of constant length are pairwise comparable.
Thus, there are at most $k+1$ incomparable paths from $v$ to $t$ for each node $v\not\in[k]\cup\{t\}$, and hence at most $(k+1)^{n-k-1}$ shortest-path trees.
\end{proof}
\begin{remark}\label{rem:infeasible}
The graph from Example~\ref{ex:infeasible} satisfies the conditions of Lemma~\ref{lem:freenodes}.
In general, it can be shown that the number of regions of such a graph does not exceed $\tbinom{n-1}{k}$.
There are at most $n-k-1$ nodes with at most $k+1$ incomparable shortest paths to $t$. They correspond to $n-k-1$ tropical linear polynomials in $k$ variables,
and it is known that the common refinement of the corresponding tropical hypersurfaces has at most $\tbinom{n-1}{k}$ maximal cells.
It is clear that in this situation typically there are shortest-path trees that do not correspond to feasible regions.
\end{remark}
We need another definition, before we consider a second special case of a directed graph.
We call an adjacency matrix \emph{generic} if no two collections of arcs have the same sum of weights.
We take the values $\lambda$, and $\mu<\infty$ into account when an arc has variable weight.
\begin{lemma}\label{lem:varnodes}
Let $D\in\TT[x_1,\dots,x_{k+\ell}]^{n\times n}$ be the weighted adjacency matrix of a directed graph with $n=2k+\ell+1$ nodes satisfying the following:
we have $d_{u,u+k+\ell}=x_u$ for all $u\in[k]$ and $d_{u,v}=\infty$ for $v\neq u+k+\ell$,
we have $d_{u,w}=x_u$ for all $u\in[k+\ell]\setminus[k]$ and some $w\in[k+u-1]\cup\{n\}$ and $d_{u,v}=\infty$ for $v\neq w$,
and $d_{u+k+\ell,v}\in\RR$ is a positive constant for all $u\in[k]$ and $v\in[k+\ell]\cup\{n\}$.
If $D$ is generic, then $\combinatorialTypes(D,n)\leq \tfrac{(k+\ell)!}{\ell!}$.
\end{lemma}
\begin{proof}
Without loss of generality assume that $d_{u+k+\ell,n} < d_{v+k+\ell,n}$ when $u<v\leq k$.
That is, the path from $u+k+\ell$ to $n = 2k+\ell+1$ via $v$ never lies in a shortest-path tree as $d_{u+k+\ell,n}\leq d_{u+k+\ell,v} + x_v +d_{v+k+\ell,n}$ whenever $x_v\geq 0$.
Thus there are at most $1+\ell$ incomparable shortest paths from $1+k+\ell$ to $n$, either the constant path or a path via a node in $[k+\ell]\setminus[k]$.
Inductively, we obtain at most $u+\ell$ incomparable shortest paths from $u+k+\ell$ to $n$, as the node $u+k+\ell$ has precisely $u$ outgoing arcs to nodes in $[u-1]\cup\{t\}$ and $\ell$ to nodes in $[k+\ell]\setminus[k]$. That is, in total there are at most $\tfrac{(k+\ell)!}{\ell!}$ shortest-path trees.
\end{proof}
\begin{remark}
The bound given in Lemma~\ref{lem:varnodes} is tight whenever there are $\ell$ arcs with variable weight that end at the target node $n=2k+\ell+1$.
\end{remark}
We consider the function $\Phi:\NN\times\NN\to\NN$ defined as
\[
\Phi(n,k) \ := \ \sum_{i = 0}^k \binom{k}{i} (i+1)^{n-i-1} \enspace .
\]
For instance, we have $\Phi(n,0)=1$ and $\Phi(n,1)=1+2^{n-2}$.
Clearly,
\[
\Phi(n,k) \ \leq \ \sum_{i = 0}^k \binom{k}{i} (k+1)^{n-i-1} \ = \ (k+1)^{n-1} \cdot \Bigl(1+\frac{1}{k+1}\Bigr)^k \ < \ \text{e}\cdot (k+1)^{n-1}\enspace ,
\]
where e is Euler's number.
Combining Lemma~\ref{lem:freenodes} and Lemma~\ref{lem:varnodes}, yields the following.
\begin{theorem}\label{thm:shortestpathtrees}
Let $D\in\TT[x_1,\dots,x_k]^{n\times n}$ be the weighted adjacency matrix of a directed graph with $n$ nodes and $m$ arcs.
Suppose that $D$ is generic and has separated variables (with lower and upper bounds), and let $t\in[n]$ be some node.
Further, let $\combinatorialTypes=\combinatorialTypes(D,t)$ be the number of shortest-path trees with target node $t$.
Then
\[
\combinatorialTypes \ \leq \ \min\left( \Phi(n,k),\, n^{n-2},\,\binom{m}{n-1} \right) \enspace,
\]
and the graph traversal Algorithm~\ref{alg:traverse} computes the shortest-path trees together with an inequality description for each region of $D^*$ in $O(k\cdot m^2\cdot \combinatorialTypes + n^2)$ time.
The space complexity is bounded by $O(k\cdot (n+m) \cdot \combinatorialTypes)$.
\end{theorem}
\begin{proof}
Since each spanning tree in a graph with $n$ nodes has only $n-1$ edges, there are at most $\tbinom{m}{n-1}$ (shortest-path) trees in the graph defined by $D$.
Next let us discuss the extremal case $k=n^2-n$.
Then we have as many variables as possible, say, with weight intervals $[0,\infty]$, and the graph defined by $D$ is $\widetilde{K}_n$, the complete directed graph on $n$ nodes.
In this case any two arcs and paths are incomparable, and thus all labeled spanning trees of the undirected graph are produced as output.
By Cayley's formula the complete undirected graph $K_n$ has precisely $n^{n-2}$ labeled spanning trees.
Note that fixing the target node $t$ in an undirected spanning tree amounts to picking $t$ as the root and directing all edges toward it.
Since increasing the number of variables cannot decrease the number of shortest-path trees we obtain the second inequality $\combinatorialTypes\leq n^{n-2}$.
Now we will look into the general case.
We want to count the number of shortest-path trees toward $t$ with exactly $i$ active variables.
Fix a set of $i$ variables which amounts to fixing $i$ arcs, as we have separated variables.
Furthermore, let $\ell$ denote the number of arcs within the set whose end point has outgoing arc in the set or is the target node $t$.
Now pick some directed spanning tree with root $t$ that includes the $i$ chosen arcs.
Note that this tree might not exist, whence we may overestimate the number of shortest-path trees.
In the next step we contract the arcs with constant weight whose start point has no incoming arc of variable weight.
That is, we do not contract $i-\ell$ of the arcs with constant weight. Thus we arrive at a tree with exactly $2i-\ell+1$ nodes and $2i-\ell=(i-\ell)+i$ arcs.
This tree is a subgraph of the graph in Lemma~\ref{lem:varnodes}. Hence there are at most $\tfrac{i!}{\ell!}$ of those trees.
Undoing the contraction of the constant arcs we obtain shortest path trees with at most $n-2i+\ell-1$ additional nodes, and this situation is similar to the situation in Lemma~\ref{lem:freenodes}.
From any node a shortest path either leads via an arc with variable weight or it goes directly to~$t$.
Hence there are at most $(i+1)^{n-2i+\ell-1}$ choices and in total at most $(i+1)^{n-2i+\ell-1}\cdot \tfrac{i!}{\ell!}$ shortest-path trees for a fixed set of $i$ active variables.
Clearly, $\ell < i+1$, and thus $(i+1)^{n-2i+\ell-1}\cdot \tfrac{i!}{\ell!}\leq (i+1)^{n-i-1}$.
Since there are $\tbinom{k}{i}$ such sets we conclude that the total number of shortest-path trees satisfies also the final inequality $\combinatorialTypes\leq \Phi(n,k)$.
Now let us estimate the complexity of Algorithm~\ref{alg:traverse} in terms of the number of variables~$k$, the number of nodes~$n$, the number of arcs~$m$ and the number of shortest-path trees~$\combinatorialTypes$.
The initial step is to compute a shortest-path tree, $T$, with Algorithm~\ref{alg:startnode}.
This means, first, to create an $n\times n$ adjacency matrix with constant weights, second, to apply Dijkstra's algorithm, and, third, to find the inequality description of the feasible region for $T$.
That takes $O(n^2)$ for the first two steps and $O(k\cdot n)$ for the third, adding up to $O(k\cdot n+n^2)$.
The queue $Q$ of the Algorithm~\ref{alg:traverse} treats every shortest-path tree at most once.
It follows from Observation~\ref{obs:number_ineq} that such a tree $T$ has at most $m$ arcs that lead to an inequality of the region of $T$, and hence at most $m$ potential neighbors in $\graphOfShortestPathTrees(D)$.
It takes $O(k\cdot n)$ to update the distances $p^{T'}$ and $O(k\cdot m)$ to check whether $T'$ is a shortest-path tree.
Thus in total the time complexity of the Algorithm~\ref{alg:traverse} is at most $O(k\cdot n+n^2+k\cdot m\cdot \combinatorialTypes\cdot (n+m))=O(n^2+k\cdot m^2\cdot\combinatorialTypes)$, as $n-1\leq m$.
In terms of space complexity the Algorithm~\ref{alg:traverse} is a typical breadth-first search: the dominating contribution comes from the queue $Q$, whose length is bounded by $\combinatorialTypes$.
Each entry of the queue contains a tree, on $n$ nodes, the distances to the target, and the list of constraints of the corresponding region.
The distance function has $n$ entries, and the at most $m$ edges give the linear constraint in the $k$ indeterminates.
We infer that the total space complexity is of order at most $O(k\cdot (n+m) \cdot \combinatorialTypes)$, in the unit cost model.
\end{proof}
\begin{corollary}
Let $D$ be a generic weighted adjacency matrix with separated variables.
Algorithm~\ref{alg:traverse} enumerates all shortest-path trees of $D$ to a fixed node that are path-connected in $\graphOfShortestPathTrees(D)$ to a shortest-path tree of a feasible region, together with their distance functions and an inequality description of their polyhedral cells. In particular, it enumerates a shortest-path tree for every feasible region.
\end{corollary}
\begin{proof}
A node of $\graphOfShortestPathTrees(D)$ is a shortest-path tree, and so is each of its neighbors.
Thus, by the definition of an edge in $\graphOfShortestPathTrees(D)$, every neighbor that has not been seen before will be added to the queue of Algorithm~\ref{alg:traverse} and hence be visited and enumerated.
The shortest-path tree computed in Algorithm~\ref{alg:startnode} corresponds to a region, and by Remark~\ref{rem:dualgraph} every shortest-path tree of a region lies in the same connected component.
\end{proof}
\begin{remark}
The assumption concerning the genericity of the arc weights in Theorem~\ref{thm:shortestpathtrees} is not essential.
In fact, via picking a fixed total ordering of the arcs one can break ties between two candidates of shortest paths by comparing them lexicographically.
This technique is known as \enquote{symbolic perturbation} and implemented in our \polymake code; see also Section~\ref{sec:computations}.
\end{remark}
\begin{remark}
Two variables cannot simultaneously be active if they share the same initial vertex.
This situation occurs, e.g., when $k\geq n$; hence $\Phi(k,n)$ over-estimates the number of shortest-path trees in that range.
The function $\Phi(k,n)$ is the sum over the maximal number of shortest-path trees with $i\leq k$ active variables.
These maxima cannot be attained simultaneously if $k\geq 2$; thus $\Phi(k,n)$ over-estimates the number of shortest-path trees also for $k\geq 2$.
Clearly, our bound is tight for (directed) trees, the complete directed graph $\widetilde K_n$ on $n$ nodes with $k=n^2-n$ variables, and on graphs with $k\leq 1$ variables.
\end{remark}
In general, many trees enumerated by Algorithm~\ref{alg:traverse} and counted in Theorem~\ref{thm:shortestpathtrees} will correspond to regions which are infeasible.
So it is desirable to independently bound the number of feasible regions.
\begin{proposition}\label{prop:feasible}
Let $D$ be the weighted adjacency matrix of a directed graph on $n$ nodes.
Then the number of feasible regions, which are induced by the shortest paths to a fixed target node, is at most
\begin{equation}\label{eq:feasible}
\sum_{i=0}^k \binom{k}{i} \frac{(n-1)!}{(n-i-1)!} \ \leq \ \sum_{i=0}^k \binom{k}{i} (n-1)^i \ = \ n^k \enspace .
\end{equation}
\end{proposition}
\begin{proof}
There is a shortest-path tree for every feasible region.
Thus we may count the number of feasible regions again for a fixed set of $i$ active variables in such a tree.
We only take paths into account that use an arc with an active variable whenever the path passes through the corresponding node, and that do not contain an arc whose weight is a non-active variable.
Now there are two types of tropical hypersurfaces: those corresponding to nodes whose outgoing arc weight is one of the active variables, of which there are $i$, and those corresponding to nodes whose outgoing arcs have constant weight, of which there are $n-i-1$.
The common refinement of the former has at most $i!$ regions, as this is the bound on the number of shortest-path trees in Lemma~\ref{lem:varnodes}.
The latter hypersurfaces result from nodes which have at most $i+1$ incomparable shortest paths to the target.
Moreover, the total weights of those paths are linear, up to substitution of variables.
Hence, the common refinement of these hypersurfaces has at most $\tbinom{n-1}{i}$ regions; see also Remark~\ref{rem:infeasible}.
There are $\tbinom{k}{i}$ ways to choose $i$ active variables, hence the total number of feasible regions is at most
\[
\sum_{i=0}^k \binom{k}{i} \cdot i! \cdot \binom{n-1}{i} \ = \ \sum_{i=0}^k \binom{k}{i} \frac{(n-1)!}{(n-i-1)!} \enspace .
\]
\end{proof}
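For concreteness, the two bounds can be evaluated side by side. The following Python sketch (ours, for illustration only) computes the expression assembled in the proof of Theorem~\ref{thm:shortestpathtrees}, the left-hand side of \eqref{eq:feasible}, and the trivial estimate $n^k$:
\begin{verbatim}
from math import comb, factorial

def tree_bound(n, k):
    # bound on the number of shortest-path trees as assembled in the proof above
    return sum(comb(k, i) * (i + 1) ** (n - i - 1) for i in range(k + 1))

def feasible_bound(n, k):
    # left-hand side of (eq:feasible); terms with i > n-1 vanish
    return sum(comb(k, i) * factorial(n - 1) // factorial(n - i - 1)
               for i in range(min(k, n - 1) + 1))

for n, k in [(10, 3), (50, 5), (362, 10)]:
    print(n, k, tree_bound(n, k), feasible_bound(n, k), n ** k)
\end{verbatim}
Already for moderate parameters the second quantity is smaller than the first by many orders of magnitude.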
The number of feasible regions in Proposition~\ref{prop:feasible} is much smaller than the bound of the number of trees in Theorem~\ref{thm:shortestpathtrees}.
This raises the question why we let Algorithm~\ref{alg:traverse} take such a \enquote{detour}.
For instance, we could employ the parametric Floyd--Warshall algorithm, which runs in $O(k\cdot 2^k n^3)$ time and thus is faster than Algorithm~\ref{alg:traverse}.
However, from Floyd--Warshall we only obtain the Kleene star, an $n\times n$ matrix with entries in $\TT[x_1,\ldots,x_k]$, from which it is rather expensive to extract all shortest-path trees or regions.
Conceptually we could compute the decomposition into regions by a dual convex hull computation in $\RR^{k+1}$, but this requires a number of linear constraints at least as large as \eqref{eq:feasible}.
In Section~\ref{sec:computations} we will see that Algorithm~\ref{alg:traverse} can easily compute instances for $k=10$ and $n$ in the range of hundreds or even thousands.
Convex hull computations of that size are entirely out of the question; cf.\ \cite{polymake:2017} for a recent survey on the subject.
Thus a much superior strategy is to first enumerate the trees and then to filter for the feasible regions by linear programming.
As an additional advantage the latter step can be parallelized trivially.
Yet, from a practical point of view, it may also be useful to combine the parameterized Floyd--Warshall algorithm with Algorithm~\ref{alg:traverse}.
The two methods are quite different and thus exhibit different advantages.
Imagine a car driver with a very long drive to her destination city.
The traffic situation in the city depends on the time when she reaches the city.
Therefore, the driver's navigation system might evaluate the parameterized Kleene-star with up-to-date data whenever there is a branching point.
This would lead to a complete choice of reasonable routes at any given time.
Other scenarios are conceivable, where the Kleene star is used to compute an inequality description of a feasible region in $O(k\cdot n^2)$ time from a generic feasible point.
Such information could be valuable for network operators or providers.
While investigations of this kind look promising, in the next section we restrict our attention to several instances of one scenario.
\section{Computations}
\label{sec:computations}
\noindent
We report on extensive computational experiments with \polymake \cite{DMV:polymake}.
\subsection{Implementation}
In the following we collect some details on implementing Algorithm~\ref{alg:traverse}.
The genericity of the matrix $D$ can be achieved by a symbolic perturbation of the arc weights.
Choosing an ordering on all arcs induces a (lexicographic) ordering on arbitrary sets of edges.
In particular, this gives a total ordering on the set of all (shortest-path) trees.
We may pick the ordering on the arcs in such a way that the shortest-path tree produced by Algorithm~\ref{alg:startnode} is minimal.
The lexicographic ordering on the shortest-path trees allows us to traverse $\graphOfShortestPathTrees(D)$ without a lookup table or cache.
This can be interpreted in terms of Dijkstra variants based on \enquote{labeling} and \enquote{scanning}; cf.\ \cite[\S7.1]{Tarjan:1983}.
See also, e.g., \cite{Gawrilow:2008} for a dynamic routing algorithm employing that idea.
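As a minimal illustration of this tie-breaking rule (a sketch of the idea only, not the actual \polymake code), one can compare trees, given as sets of arcs, via the lexicographic order induced by a fixed arc ordering:
\begin{verbatim}
def tree_key(tree, arc_index):
    # sort the arcs of a tree according to the fixed global arc ordering
    return tuple(sorted(arc_index[a] for a in tree))

# toy example with target node 3; arcs are labelled by (tail, head) pairs
arc_index = {(1, 3): 0, (2, 3): 1, (1, 2): 2, (2, 1): 3}
tree_a = {(1, 3), (2, 3)}   # both nodes go directly to the target
tree_b = {(1, 2), (2, 3)}   # node 1 routes via node 2
smaller = min(tree_a, tree_b, key=lambda t: tree_key(t, arc_index))
print(sorted(smaller))
\end{verbatim}
Since the induced order is total, any two candidate shortest paths of equal weight can be compared, which is exactly what the symbolic perturbation requires.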
For maximal speed it is relevant to organize the trees and especially the queue of trees to be processed by means of suitable data structures.
Most importantly, there is an improved version of Dijkstra's algorithm by Fredman and Tarjan \cite{Fredman+Tarjan:1987} based on \emph{Fibonacci-Heaps}.
The latter leads to a complexity of $O(m+n \log(n))$ in the unparameterized setting; see also \cite[\S7.4]{Schrijver03:CO_A}.
An optimized variant of Algorithm~\ref{alg:traverse} has been implemented by Ewgenij Gawrilow in the \polymake software system \cite{DMV:polymake}.
This implementation uses dynamic programming and backtracking to traverse the graph implicitly.
The code is available as an extension to \polymake, version 4.1; see \url{https://polymake.org/extensions/polytropes}.
\subsection{Real-world traffic data}
\label{subsec:TNTP}
To show that our methods are feasible in practice we tried the parameterized Dijkstra Algorithm \ref{alg:traverse} on real-world data sets from the Transportation Networks repository \cite{TransportationNetworks}.
Before we explain our experimental setup we wish to spend a few words on those traffic data.
We focus on the files with the extension \texttt{tntp}.
Each file encodes a directed graph which comes from a road network and additional information about the travel time along the arcs.
For every arc $(u,v)$ the \emph{link travel time}, depending on the flow $x$, is the quantity
\begin{equation}\label{eq:BPR}
\realtravel_{(u,v)}(x) \ = \ \freetravel_{(u,v)} \cdot \bigl(1+\bias(\tfrac{x}{\capacity_{(u,v)}})^{\power}\bigr) \enspace ,
\end{equation}
where $\freetravel_{(u,v)}$ is the \emph{free flow travel time}, $\bias$ is the \emph{bias}, $\capacity_{(u,v)}$ is the capacity and $\power$ is the \emph{power}.
This formula was devised by the \emph{Bureau of Public Roads (BPR)}, a predecessor organization of the \emph{Federal Highway Administration (FHWA)} of the United States.
The \texttt{tntp}-files contain all these parameters for every arc.
In our data we found $\bias=1$ and $\power=4$ throughout; these parameters are used to model certain nonlinearities extracted from empirical data.
Usually there are also some \emph{zones}, i.e., nodes which no traffic can go through.
For a more comprehensive discussion of the data and the parameters we recommend the web site \cite{TransportationNetworks} and the references given there.
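For reference, a direct transcription of \eqref{eq:BPR} into Python reads as follows (a sketch; the variable names are ours):
\begin{verbatim}
def link_travel_time(free_flow_time, capacity, flow, bias=1.0, power=4.0):
    # BPR formula: t(x) = t0 * (1 + bias * (x / capacity) ** power)
    return free_flow_time * (1.0 + bias * (flow / capacity) ** power)

# example: a link at half its capacity is only 6.25% slower than at free flow
print(link_travel_time(free_flow_time=10.0, capacity=200.0, flow=100.0))
\end{verbatim}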
\begin{figure}[t]\centering
\input{plots.tex}
\caption{Data set \enquote{Berlin-Mitte-Center}. \polymake running times versus number of solutions, both log-scaled.
Left: $p=0.05$ yielding 25 variable weights. Right: $p=0.08$ yielding 42 variable weights (computation for one node aborted after a week).
}
\label{fig:tntp-timings}
\end{figure}
All timings were taken on Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz, turbo 3.73 GHz (BogoMIPS: 6399.75) with openSUSE 42.3 (Linux 4.4.132-53).
The memory consumption did not exceed 200~MB.
\subsection{One graph, all nodes}\label{subsubsec:berlin-mitte-center}
In the first scenario we considered the \enquote{Berlin-Mitte-Center} data set from \cite{TransportationNetworks}, which was originally provided by Jahn et al.\ \cite{Moehring:2005}.
This network describes a directed graph with 398 nodes and 871 arcs.
The first 36 nodes are zones.
Since no traffic can go through a zone, we removed them, along with the incident arcs.
The remaining network has 362 nodes and 583 arcs.
From this data file we created random instances in the following way.
As an additional parameter we fix some probability $p\geq 0$.
Each arc independently receives a variable weight with probability $p$.
For the constant weights we take the free flow travel times, which are always positive.
Each variable weight is constrained to an interval from the free flow travel time to the link travel time \eqref{eq:BPR} for a flow value set to a random proportion of the link capacity.
That is, e.g., for $p=0$ we get a usual weighted digraph where the arc weights are the free flow travel times.
For positive $p$ we get some variable weights which are intervals $[\freetravel_{(u,v)}, \realtravel_{(u,v)}(r\cdot\capacity_{(u,v)})]$ with $0\leq r\leq 1$, and $0<r<1$ almost surely.
This is the scenario discussed in Section~\ref{sec:dijkstra}.
The complexity of Algorithm \ref{alg:traverse} is primarily controlled by the number of arcs with variable arc weights.
Moreover, for a fixed graph that complexity is proportional to the size of the output, i.e., the number of combinatorial types of shortest path trees.
So, in order to obtain a computationally feasible setup, the probability $p$ cannot be too high.
That is, on most of our arcs the flow is set to zero (and the arc weight is $\freetravel_{(u,v)}$), while on a small percentage of the arcs the flow is between zero and some fraction of the capacity (and the arc weight is a variable with lower bound $\freetravel_{(u,v)}$).
In this way our experiment models the situation early in the morning, when most roads are still empty and the first few vehicles start to enter the traffic.
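The following Python fragment sketches this instance generation (our notation; the actual experiments were run with the \polymake extension): each arc keeps its free flow travel time as a constant weight or, with probability $p$, becomes a variable weight constrained to the interval described above.
\begin{verbatim}
import random

def randomize_arcs(arcs, p):
    # arcs: dict mapping (u, v) -> (free_flow_time, capacity, bias, power)
    instance = {}
    for arc, (t0, cap, b, pw) in arcs.items():
        if random.random() < p:
            r = random.random()                      # random fraction of capacity
            upper = t0 * (1.0 + b * r ** pw)         # link travel time at flow r*cap
            instance[arc] = ('variable', t0, upper)  # interval [t0, upper]
        else:
            instance[arc] = ('constant', t0)
    return instance

example = {('a', 'b'): (10.0, 200.0, 1.0, 4.0),
           ('b', 'c'): (5.0, 150.0, 1.0, 4.0)}
print(randomize_arcs(example, p=0.05))
\end{verbatim}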
For the first experiment, by setting the probability to $p=0.05$, we obtained 25 arcs with variable weights, and this is about 4.3\% of the total number of arcs.
The second experiment is similar, with $p=0.08$ and 42 variable weights (about 7\% of the arcs).
For both instances we applied the parameterized Dijkstra algorithm to all the 362 nodes.
Figure~\ref{fig:tntp-timings} has an overview of the timings.
For $p=0.05$ most computations could be completed by \polymake within less than a second.
The largest one took nearly 100 seconds with more than one million combinatorial types of shortest path trees.
This network is displayed in Figure~\ref{fig:bmc+p005+v25+nz}.
\begin{figure}[t]\centering
\includegraphics[height=.67\textheight]{bmc+p005+v25+nz.pdf}
\caption{Data set \enquote{Berlin-Mitte-Center} with $p=0.05$ resulting in 25 variable weights.
Arcs with variable weights are red; their line width is proportional to the difference between maximum and minimum travel times.
Node sizes proportional to $\log\log(\#\text{solutions})$.
Layout obtained via \neato from the \graphviz package \cite{graphviz}.}
\label{fig:bmc+p005+v25+nz}
\end{figure}
The case where $p=0.08$ is quite different.
For some nodes the computations took several hours, and one computation was aborted after more than one week.
By and large this shows the limits of our approach.
Note that not only the total number of variable arc weights matter but also how clustered they are near the target node; this can also be seen in Figure~\ref{fig:bmc+p005+v25+nz} in the smaller case $p=0.05$.
The largest complete computations produced several billions of shortest path trees.
The diagrams in Figure~\ref{fig:tntp-timings}, which are log-scaled in both directions, reflect the output-sensitivity of Algorithm~\ref{alg:traverse} as predicted by Theorem~\ref{thm:shortestpathtrees}.
\subsection{Many graphs, some nodes}
\begin{table}[p]
\centering
\caption{The maximum and average performance on the data from \cite{TransportationNetworks}. Ten runs each with $k=10$ variables. The bar--whisker plots show logarithmic performance.}
\label{tab:tntp-all}
\renewcommand{\arraystretch}{0.9}
\begin{tabular*}{0.95\linewidth}{l@{\hspace{0.3cm}}rr@{\hspace{1cm}}rr@{\hspace{0.5cm}}rr}
\toprule
& \multirow{2}{*}{$n$} & \multirow{2}{*}{$m$} & \multicolumn{2}{c}{Maximum} & \multicolumn{2}{c}{Average}\\
&&&\# Sol.& Sol./sec &\# Sol.&Sol./sec\\
\midrule
Anaheim & 378 & 796 & 32 & 35199.5 & 5.4 & 6189.77\\
Barcelona & 910 & 1957 & 5184 & 12548.4 & 524.7 & 1997.03\\
Berlin-Center & 12116 & 19724 & 36480 & 417772.0 & 4622.4 & 68506.98\\
Berlin-Mitte-Center & 362 & 583 & 144 & 55907.3 & 19.9 & 11500.81\\
Berlin-MPFC & 877 & 1410 & 4590 & 59761.6 & 524.4 & 23118.37\\
Berlin-Pberg-Center & 314 & 451 & 4 & 113636.0 & 1.6 & 40322.51\\
Berlin-Tiergarten & 335 & 560 & 8 & 14670.6 & 4.5 & 7865.83\\
ChicagoRegional & 11192 & 35436 & 192 & 5094.8 & 36.8 & 981.14\\
ChicagoSketch & 546 & 2176 & 512 & 202605.0 & 103.7 & 41586.64\\
Berlin-Fhain-Center & 201 & 339 & 90 & 120967.0 & 12.7 & 42332.21\\
Hessen-Asym & 4415 & 6184 & 1 & 172.6 & 1.0 & 167.29\\
Philadelphia & 11864 & 30779 & 24 & 713.0 & 11.2 & 328.49\\
Sydney & 29849 & 67381 & 12 & 150.6 & 9.2 & 114.05\\
Terrassa-Asym & 1554 & 2953 & 32 & 11790.8 & 4.1 & 1522.54\\
Winnipeg-Asym & 903 & 1923 & 4 & 2423.2 & 1.6 & 978.36\\
Winnipeg & 905 & 2284 & 160 & 3530.2 & 18.8 & 1436.35\\
\bottomrule
\end{tabular*}
\input{logboxdiagram.tex}
\end{table}
In the second scenario we looked at all the data sets (\texttt{\_{}net.tntp} files) from \cite{TransportationNetworks}.
These were preprocessed as in the first scenario, i.e., by removing the zones.
This led to excluding the three smallest data sets Austin, Braess, and SiouxFalls because too few arcs remained.
As a result we processed 16 directed graphs.
The largest one is Berlin-Center with $n=12116$ nodes and $m=19724$ arcs.
This time we fixed the number of variable arcs a priori to $k=10$, and this set of arcs was chosen uniformly at random.
On each variable arc $(u,v)$ we took the interval $[\freetravel_{(u,v)},2\cdot\freetravel_{(u,v)}]$.
We picked a node uniformly at random as the root to which the shortest-path trees are directed; this experiment was repeated ten times per instance.
Qualitatively the parameterized Dijkstra Algorithm \ref{alg:traverse} behaves exactly as in the first scenario in Section~\ref{subsubsec:berlin-mitte-center}.
The running times vary considerably, but the predominant factor is the total number of solutions.
This is consistent with our theoretical analysis of the running time from Theorem~\ref{thm:shortestpathtrees}.
And this also agrees with what we observed experimentally in Figure~\ref{fig:tntp-timings} for the first scenario.
Instead of the timings itself, Table~\ref{tab:tntp-all} gives basic statistical information about the \emph{performance}, which we define as the number of solutions per second.
Since we also list the maximum and the average of the number of solutions, the actual running times can be deduced if necessary.
Here we have fewer variables (but several much larger graphs), and thus the fluctuations are larger.
Again this is no surprise; compare the left and the right diagram in Figure~\ref{fig:tntp-timings}.
A more detailed idea about the entire statistics can be derived from the bar--whisker plots below Table~\ref{tab:tntp-all}.
For the decadic logarithm of the performance it shows the minimum, the 25\% percentile, the median, the 75\% percentile and the maximum per data set.
We think that even the fairly small number of ten random samples per graph suffices to show that the overall behavior of Algorithm \ref{alg:traverse} and its \polymake implementation is well captured by the comprehensive analysis in Section~\ref{subsubsec:berlin-mitte-center}.
\section{Concluding remarks}
\label{sec:concluding}
\noindent
In Section~\ref{sec:computations} we provided experimental evidence to show that our approach is viable in practice, provided that the output size is moderate.
Indeed, we are not aware of any other limiting factor.
For instance, other models of randomly picking variable link travel times significantly change the running times only as far as the total number of shortest-path trees is affected.
This is supported by Table~\ref{tab:tntp-all} which exhibits that the running time per solution found, on a logarithmic scale, stays in a narrow range over a wide selection of rather different networks.
That is to say, the diagram to the right of Figure~\ref{fig:tntp-timings} captures our algorithm's asymptotic behavior on sufficiently large networks rather well.
Our assumption on separated variables models parametric shortest-path problems with the maximal degree of independence among the parameters.
It is conceivable that our approach can be extended to more elaborate settings, but at the price of a greater technical overhead in the analysis.
\section*{Acknowledgments.}
We are very grateful for the support by several colleagues.
Ewgenij Gawrilow's work on the \polymake project is crucial; here he implemented the parameterized Dijkstra algorithm.
Max Klimm was helpful in discussing the application to traffic.
Georg Loho directed our attention to the article \cite{Gallo+Grigoriadis+Tarjan:1989}.
Two anonymous referees provided helpful suggestions to improve the exposition.
Thanks to everybody.
\bibliographystyle{alpha}
\section{Introduction}
The technique of the Imaging Atmospheric Cherenkov Telescopes (IACTs) has successfully been
demonstrated as powerful tools for ground-based sub-TeV and TeV gamma-ray astronomy.
Approximately 50 sources have already been discovered by IACTs.
However, the energy range from 10 GeV to 100 GeV has not yet been well investigated.
It is very important to lower the energy threshold of IACTs and observe more in this energy range because much interesting physics remains there, e.g. high-redshift AGNs and GRBs, which are not observable in the TeV range
because of absorption by Extra-galactic Background Light (EBL), new categories of sources such as LBLs, which have a lower inverse
Compton peak than HBLs in general, and the pulsed emission from galactic pulsars.
The MAGIC (Major Atmospheric Gamma-Imaging Cherenkov) telescope\cite{magic}, with a reflector diameter of
17m, is the world's largest IACT. Since fall 2003 it has been in operation on the Canary Islands of La Palma
(28.75$^{\circ}$ N, 17.90$^{\circ}$ W and 2200 m a.s.l). In order to further lower the threshold energy and
increase the sensitivity, a second 17-m diameter telescope, located at 85-m distance from the first
telescope, is being constructed. We call this stereoscopic observation by two telescopes the MAGIC-II project.
In the MAGIC-II project, in addition to the gain of stereoscopic observation, we are planning to use
high quantum efficiency (QE) Hybrid PhotoDetectors (HPDs) with GaAsP photocathodes \cite{Icrc2005}\cite{gaasp2} \cite{gaasp1} as alternative photo
sensors to PMTs, which are used in IACTs.
An HPD R9792U-40 consists of a GaAsP photocathode and of an Avalanche Diode (AD) serving as an anode.
When applying
a high tension of $\sim$8 kV to the photocathode, photoelectrons are accelerated in the high electric field and bombarded onto the AD. This electron bombardment produces $\sim 1600$ electron-hole pairs per photoelectron. These electrons subsequently induce avalanches in the active volume of the AD and provide an additional
gain of 30-50 with a bias voltage of a few hundred volts, which is called avalanche amplification.
As shown in figure \ref{QE}, the Quantum Efficiency (QE) is well over 50\% at around 500 nm. Coating with a wavelength shifter (WLS) increases the QE in the UV. One can see a large difference between the HPDs and a PMT of the type used in the current MAGIC camera.
Rise time, fall time and FWHM of output signals are $\sim 0.8$ nsec, $\sim 1.6$ nsec and $\sim 1.6$ nsec, respectively, when applying photocathode voltage of -8 kV and AD bias voltage of +439V.
Multi-photoelectron peaks are well resolved as shown in figure \ref{single}.
In addition to the favorable characteristics mentioned above, the afterpulsing probability must also be kept low
in order to lower the trigger threshold. Temperature compensation of the avalanche gain is also important for stable operation. Here we report the results of the afterpulsing probability measurement and the development of a temperature compensation circuit.
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.45\textwidth]{QE_unify_2.eps}
\caption{Q.E. curves of four HPDs without WLS coating, an HPD with WLS coating and a PMT with WLS coating.}\label{QE}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.4\textwidth]{single2.eps}
\caption{Signal amplitude resolution with overall gain of $\sim$78000.}\label{single}
\end{center}
\end{figure}
\section{Afterpulsing Probability}
We measured afterpulsing probability by using a LED pulser (603nm) and a FADC (Acqiris cc103).
An HPD with a photocathode voltage of --8 kV (bombardment gain of 1550) and with an AD bias voltage of 370 V (avalanche gain of 30) was put in a dark box and the LED pulser illuminated it through an optical fiber.
The LED pulser created a trigger and the output signals from the HPD were recorded by the FADC with a 2 GHz sampling rate
for 500 nsec after the main pulse to search for afterpulses. Two different LED light intensities were used (3 ph.e. level and 90 ph.e. level).
Figure \ref{approb} shows the results of afterpulsing probability as a function of threshold level. The afterpulsing probability ($P_{AP}$) is defined as follows,
\begin{equation}
P_{AP} = \frac{N_{AP}}{N_{MP}\times M_{MP}}
\end{equation}
where $N_{AP}$, $N_{MP}$ and $M_{MP}$ are the number of afterpulses, the number of main pulses, and the number of photoelectrons in the main pulse, respectively.
Open and red circles show HPD results with
3 ph.e. and 90 ph.e. LED light intensity, respectively, and blue triangles show the results of a PMT of the same type used in the camera of the first MAGIC telescope. The probability of HPDs to
produce afterpulses of a level above 2 ph.e. is more than 300 times less than that of the PMT. \\
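As an illustration (not part of the actual analysis chain), the definition above can be evaluated directly from the recorded counts, e.g. in Python; the numbers below are placeholders, not measured values.
\begin{verbatim}
def afterpulsing_probability(n_afterpulses, n_main_pulses, phe_per_main_pulse):
    # P_AP = N_AP / (N_MP * M_MP): afterpulses per photoelectron in the main pulse
    return n_afterpulses / (n_main_pulses * phe_per_main_pulse)

# placeholder counts for a run at the 90 ph.e. light intensity
print(afterpulsing_probability(n_afterpulses=18,
                               n_main_pulses=100000,
                               phe_per_main_pulse=90.0))
\end{verbatim}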
The arrival time distribution of afterpulses of a level above 2 ph.e. is shown in figure \ref{aptime}.
Several peaks can be seen ($\sim$45, $\sim$60, $\sim$90, $\sim$135, and $\sim$180 nsec), although some of them are not clear. They can be well explained as follows. Molecules on the surface of the AD are ionized by the impingement of photoelectrons with a certain probability. The ions are accelerated back and hit the photocathode, resulting in additional electron emission.
The delay time from the main pulse to the afterpulse can be roughly estimated since the dimension of the HPD and the applied
voltage are known. Assuming a uniform electric field with --8 kV of potential difference and a distance of 2.8 cm between the photocathode and the AD, the delay time can be estimated as $\sim$45$\sqrt{\frac{M/M_p}{Z}}$ nsec, where $M, M_p, Z$ are the mass of the ion, the mass of the proton, and the charge of the ion, respectively. The peaks seen in
figure \ref{aptime} may reflect feedbacks of ions with $\frac{M/M_p}{Z} = 1, 2, 4, 8, 16$, where protons, hydrogen molecular ions, helium ions, and methane ions can be among the candidates.
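For a uniform field this estimate follows from $t = d\sqrt{2M/(ZeV)}$; the short Python check below (for illustration only) reproduces the $\sim$45 nsec scale for protons and scales it to the candidate ions.
\begin{verbatim}
import math

E_CHARGE = 1.602e-19   # C
M_PROTON = 1.673e-27   # kg

def ion_delay_ns(mass_ratio, charge=1, voltage=8000.0, distance=0.028):
    # time for an ion starting at rest to cross the gap in a uniform field
    mass = mass_ratio * M_PROTON
    t = distance * math.sqrt(2.0 * mass / (charge * E_CHARGE * voltage))
    return t * 1e9

for label, m_over_mp in [('H+', 1), ('H2+', 2), ('He+', 4), ('CH4+', 16)]:
    print(label, round(ion_delay_ns(m_over_mp), 1), 'nsec')
\end{verbatim}
The resulting delays of about 45, 64, 90 and 181 nsec are close to the peak positions quoted above.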
If the trigger threshold of afterpulse is set at a 1 ph.e. level the afterpulsing probability increases
by two orders of magnitude.
Apart from ion-feedbacks, there may be another mechanism which
always produces single-ph.e.-level afterpulses.
One possible explanation of this additional mechanism is a photon-feedback. Scintillation emission from the ceramics inside the HPD is one of the possible candidates for the origin
of the photon-feedback. The arrival time distribution of 1-ph.e.-level afterpulses does not show any peak structure,
but rather an exponential decay, which supports this explanation.
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.4\textwidth]{rate_fin2.eps}
\end{center}
\caption{Afterpulsing probability of an HPD R9792U-40 and a PMT which is used in the current MAGIC camera as a function of threshold level. Above two ph.e., the probability of the HPD is more than 300 times lower than that of the PMT. Apart from ion-feedbacks, photon-feedbacks may exist which cause afterpulses 100 times more often than ion-feedbacks but always produce 1 ph.e. afterpulses. See text.}\label{approb}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.48\textwidth]{timehist_fin2_2.eps}
\end{center}
\caption{Distribution of time difference between a main pulse and afterpulses of a level above 2 ph.e.. Several peaks can be seen at $\sim$45, $\sim$60, $\sim$90, $\sim$135, and $\sim$180 nsec, some of them are not clear, though. Candidate ions are written over the peaks. }\label{aptime}
\end{figure}
It is widely known that GaAsP photocathodes have a higher Q.E. but a shorter lifetime than e.g. bialkali ones.
The cesium layer on the GaAsP crystal degrades faster under ion-feedback. However, since the HPD R9792U-40s have a low ion-feedback rate, their lifetime is long enough for them to be used
for IACTs. The lifetimes of 5 HPDs have been measured; if we define the lifetime as the period after which the Q.E. has degraded by 20\% relative, and assume 1000 hours of operation per year under a 300 MHz night sky background photon rate, all of them have lifetimes of more than 10 years.
\section{Temperature Compensation}
Avalanche amplification strongly depends on the working temperature. We measured $\sim-2\%/^{\circ}$C (see figure \ref{temp-comp}) at a gain of 30 at 25$^{\circ}$C. This dependence is one order of magnitude stronger than that of PMTs and should be compensated. We developed a temperature compensation circuit with
three resistors, a DC/DC converter (APD 5P501201, Systems Development \& Solutions), and a thermistor (103AT-2, Ishizuka Electronic Corporation). As the temperature goes higher, a higher bias voltage is applied to the AD through the circuit.
An HPD with the compensation circuit was put in a temperature regulation chamber and the LED pulser illuminated it with a light intensity of several ph.e. The output charge distribution was recorded at different temperatures and the change of the gain was estimated by using the single ph.e. peak. In order to make sure that the temperature in the chamber was well
stabilized and there was no hysteresis, we measured it twice, i.e. first the temperature was raised from about 20$^{\circ}$C to about 40$^{\circ}$C and then lowered to about 20$^{\circ}$C.
Figure \ref{temp-comp} shows the result. Blue and red points show the temperature dependence of the avalanche gain without and with the compensation circuit, respectively. A green line denotes the simulation result.
The temperature dependence was suppressed to the level of $\sim$0.3\%/$^{\circ}$C from 25$^{\circ}$C to 35$^{\circ}$C, which is the same level as that of PMTs.
It should be noted that we tuned the system for a mean temperature of 30 degrees this time, but that we can easily shift it
by changing the resistors of the circuit.
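The principle can be sketched as follows (illustration only: the resistor values, the thermistor parameters and the divider topology below are assumed placeholders, not the actual circuit design): an NTC thermistor in a resistive divider makes the control voltage of the DC/DC converter, and hence the AD bias, rise with temperature, counteracting the $-2\%/^{\circ}$C gain drift.
\begin{verbatim}
import math

def ntc_resistance(temp_c, r25=10e3, beta=3435.0):
    # standard NTC model; r25 and beta are assumed placeholder values
    t = temp_c + 273.15
    return r25 * math.exp(beta * (1.0 / t - 1.0 / 298.15))

def control_voltage(temp_c, v_ref=5.0, r_series=10e3):
    # simple divider: the output rises as the NTC resistance drops with temperature
    r_ntc = ntc_resistance(temp_c)
    return v_ref * r_series / (r_series + r_ntc)

for t in (20, 25, 30, 35, 40):
    print(t, round(control_voltage(t), 3))
\end{verbatim}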
\begin{figure}[t]
\begin{center}
\includegraphics [width=0.4\textwidth]{temp_comp.eps}
\end{center}
\caption{Temperature dependence of the avalanche gain. The blue, red and green lines denote the change of the gain without a compensation circuit ($\sim-2$\%/$^{\circ}$C), with the compensation circuit ($\sim -0.3$\%/$^{\circ}$C from 25$^{\circ}$C to 35$^{\circ}$C), and the simulated temperature compensation, respectively.}\label{temp-comp}
\end{figure}
\section{Summary}
The advantage of the HPD R9792U-40 from Hamamatsu compared to conventional PMTs is not only a higher quantum efficiency but also a 300 times lower afterpulsing probability. The extremely low afterpulsing probability leads to a long
lifetime, estimated to be more than 10 years under standard conditions of IACTs. The temperature dependence of the avalanche gain can be reduced to the same level as that of a PMT by using a simple compensation circuit based on a thermistor.
A field test of the HPDs is scheduled using the second telescope.
\section{Acknowledgements}
The MAGIC project is supported by the MPG (Max-Planck-Society in Germany), and the BMBF (Federal Ministry of Education and Research in Germany), the INFN (Italy), the CICYT (Spain) and the IAC (Instituto de Astrophysica de Canarias).
\nocite{ref4}
\nocite{ref5}
\nocite{ref6}
\nocite{ref7}
\section{Introduction}
Symbiotic binaries, or symbiotic stars, are interacting binaries consisting of a hot compact object, usually a white dwarf (WD), though neutron stars are also possible, accreting material from a cool evolved giant. The donor star can be either a normal red giant (S-type) or a Mira type variable embedded in an optically thick dust shell (D-type). Symbiotic binaries have the longest orbital separations and periods among the interacting binaries, with the periods ranging up to tens of years in D-type binaries \citep[e.g. 43.6 years for R Aqr;][]{Mikolajewska09}.
To date, there are 257 confirmed symbiotic binaries in the Milky Way and 66 extra-galactic objects \citep{Akras19}, far fewer than predicted by theoretical estimates, which range from $10^3$ \citep[e.g.][]{Lu06, Yungelson10} to a few times $10^5$ \citep{Magrini03}.
Symbiotic binaries are surrounded by a complex circumstellar environment, the result of the hot ionizing compact object being embedded in the dense neutral wind of the giant star, with both ionized and neutral regions present as well as dust forming regions, accretion discs and possibly jets. The vast array of differing conditions makes symbiotic binaries excellent test cases of late stages of stellar evolution and binary interactions. In addition, symbiotic binaries have been discussed in the context of the problem of progenitors of Type Ia supernovae (SNe Ia) \citep[see][for a review]{Maoz14}, both in single and double degenerate channels, and also as a possible channel of neutron star formation via accretion-induced collapse \citep[AIC;][]{Nomoto91, Wang18}. For a thorough review of symbiotic binaries, see \citet{Mikolajewska12}.
Symbiotic binaries may also exhibit other phenomena usually associated with accreting white dwarfs, such as thermonuclear novae, either slow or recurrent, where the hydrogen accreted from the giant onto the WD burns into heavier elements, either in a long outburst or in a short flash. These novae typically cause some amount of matter to be ejected from the WD, which may prevent the WD from growing in mass \citep{Wolf13}, except in the case of short recurrence time novae, where the WD can retain a significant amount of the accreted material due to the higher interior temperature and thus less explosive burning \citep{Hillman16}.
If the WD in the binary is accreting material steadily, the system may appear as a super-soft X-ray source (SSS), which is characterised by effective temperatures of $10^{5-6}$ K and luminosities of $10^{37-38}$ erg s$^{-1}$ \citep{Greiner00}. In the SSS phase the accreted hydrogen is burned steadily on the surface of the white dwarf, which allows the mass of the WD to grow efficiently \citep{vandenHeuvel92}.
LIN 358, also known as RX J0059.1-7505, is an S-type symbiotic binary consisting of a WD and an asymptotic giant branch (AGB) star and located in the outskirts of the Small Magellanic Cloud (SMC) at coordinates RA = 00h 59m 12.3s, Dec = -75$^{\circ}$ 05$'$ 17.6$''$. It was first discovered by \citet{Lindsay61} and characterised as a symbiotic binary by \citet{Walker83} using optical observations. \citet{Muerset97} analysed the ROSAT PSPC observations of LIN 358 and classified it as a super-soft X-ray source based on its X-ray spectrum. \citet{Kahabka06} observed LIN 358 with XMM-Newton and obtained an effective temperature of $T_h = 227.5 \pm 30$ kK and luminosity $L_h = 1.0 \times 10^{38}$ erg s$^{-1}$ for the hot component of the binary from their black-body fit to the super-soft component (0.13 -- 1.0 keV). LIN 358 was also studied by \citet{Skopal15a}, who used multi-wavelength modelling of the spectral energy distribution to determine the effective temperature $T_h = 250 \pm 10$ kK and luminosity $\mathrm{log_{10}(}L_h\mathrm{)} = 38.03 \pm 0.11$ erg s$^{-1}$ for the WD, in agreement with the previous X-ray analysis.
In addition, \citet{Skopal15a} derived the effective temperature $T_{\mathrm{g}} = 4000 \pm 200$ K, bolometric luminosity $L_g = (2.8 \pm 0.8) \times 10^{37} \, (d/\mathrm{60 kpc})^2$ erg s$^{-1}$, and the radius $R_g = 178 \, (d/\mathrm{60 kpc})^2$ $\mathrm{R_{\odot}}$ for the giant star. \citet{Skopal15a} determined the properties of the giant by matching the photometric \textit{BVJHK} flux points with a synthetic spectrum calculated for $T_{\mathrm{g}} = 4000 \pm 200$ K from a grid of giant atmosphere models calculated by \citet{Hauschildt99}. This is possible due to the fact that in the near-IR the emission from the AGB star dominates over the emission from both the WD, which peaks in the UV, and the ionized nebula, which is most prominent in the optical lines \citep{Skopal05}. Using these parameters \citet{Skopal15a} classified the giant star in LIN 358 to be a type K5 Ib supergiant.
Apart from the temperatures and luminosities, not much is known about this system. In particular, the mass accretion rate and the mass of the WD, which are the most important parameters in determining the much speculated viability of symbiotic binaries as Type Ia supernova progenitors, are still unknown. In this work we determine the temperature, mass, and mass accretion rate of LIN 358 by using our new optical spectroscopic observations together with photoionization calculations performed with the spectral synthesis code \textsc{Cloudy}.
This paper is organised as follows: In Sec.~\ref{sec:ov} we review the overall properties and geometry of LIN 358. In Sec.~\ref{sec:obs} we describe our observations and the data reduction procedure, in Sec.~\ref{sec:data} we present the data, and in Sec.~\ref{sec:sim} we describe the \textsc{Cloudy} simulations. In Sec.~\ref{sec:results} we present our results and discuss the possible implications.
\section{LIN 358 overview}\label{sec:ov}
\subsection{Mass estimation}\label{sec:mass}
\citet{Skopal15a} classified the cool giant in LIN 358\ to be a K5 Ib type supergiant by matching the photometric BVJHK flux points of \citet{Muerset1996} to the spectral models calculated by \citet{Hauschildt99}. These models were calculated for a mass of 5 M$_{\odot}$, which is a quite typical mass for a cool supergiant AGB star of this spectral type \citep{Hohle10}, so we have adopted 5 M$_{\odot}$ to be the mass of the giant star in LIN 358 .
The mass of the giant star is important in estimating a lower limit for the WD mass, assuming that both of the stars in the binary were born at the same time. If the current giant has a mass of 5 M$_{\odot}$, then the progenitor star of the WD should have had a larger initial mass in order to have evolved before the current giant star. According to the initial mass -- WD mass relationship of \citet{Cummings18}, a star with an initial mass \textgreater 5 M$_{\odot}$ should create a WD with mass \textgreater 1 M$_{\odot}$. In addition, the X-ray spectral fits of LIN 358 by \citet{Orio07} indicate a WD mass $> 1.18 $ M$_{\odot}$. In the rest of the paper we have assumed a mass of 1 M$_{\odot}$ for the WD. We note, however, that our results are only weakly sensitive to the adopted masses, because they affect only the orbital separation of the binary (Sec.~\ref{sec:orb}), which in turn does not affect our results significantly (see Sec.~\ref{sec:discussion}).
The mass of the WD has important implications for its composition, because the maximum mass of a newborn carbon-oxygen (CO) rich WD is $\sim 1.2$ M$_{\odot}$. AGB stars with masses $\gtrsim 6$ M$_{\odot}$ produce higher mass WDs, which are believed to be formed as oxygen, neon, and magnesium (ONeMg) rich. This in turn affects the possible end results of the binary, because only CO WDs are thought to produce SNe Ia, whereas ONeMg WDs are believed to form neutron stars via AIC \citep{Nomoto84, Nomoto91}.
The mass estimates can also be affected by the pre-WD binary evolution. When the current WD went through the AGB phase, it lost the majority of its mass through stellar winds, and a fraction of the mass lost may have been accreted by the second star, thus skewing the current mass ratio \citep{vandenHeuvel94}. A careful modelling of the binary properties would be required to examine this problem, which is thus outside the scope of this paper. This effect, however, is likely not significant for our results, because the typical accreted mass is only $\sim 10$ \% of the mass lost by the former AGB star. In the rest of the paper we have assumed that there were no significant interactions before the current evolutionary stage.
\subsection{Orbital parameters}\label{sec:orb}
Despite the large orbital periods, the WDs in symbiotic binaries are often accreting material efficiently from the AGB star.
However, in most symbiotic binaries, the orbital separation is too large for the standard Roche lobe overflow (RLOF) scenario. In addition, interaction via RLOF from an AGB star with a deep convective envelope may often lead to unstable mass transfer \citep[][though see e.g., \citealt{WI11}]{Hjellming87, Chen08} and a common envelope phase \citep{Paczynski76}.
AGB stars are known to have strong stellar winds with mass-loss rates on the order of $10^{-8} - 10^{-5}$ M$_{\odot}$ yr$^{-1}$ and low velocities on the order of $5 - 30$ km s$^{-1}$ \citep{Hoefner18}. Therefore, instead of RLOF, the WD is assumed to accrete material from the wind of the donor star.
However, the standard Bondi-Hoyle-Lyttleton \citep[BHL;][]{Hoyle39, Bondi44} wind accretion scenario often fails to explain the required mass accretion rates.
The BHL description is a good approximation of the wind accretion when the outflow velocity is fast compared to the orbital velocity, which is not the case for typical AGB winds.
Recent simulations suggest a new mode of mass transfer, called wind Roche lobe overflow \citep[WRLOF;][]{Mohamed07, Mohamed12}, where the star itself does not fill the Roche lobe, but the stellar wind is confined in the Roche lobe, because the wind acceleration radius $R_{d}$ is larger than the Roche lobe radius $R_{L,1}$. In this situation the wind is focused towards the orbital plane \citep{deValBorro09}, allowing an efficient mass transfer through the Lagrangian 1 (L1) point. In WRLOF the mass-transfer rate may exceed the estimated rates from the simple BHL accretion by up to 2 orders of magnitude.
The conditions necessary for WRLOF can be estimated with the ratio $R_{d} / R_{L,1}$. The Roche-lobe radius of the donor star $R_{L,1}$ depends on the mass ratio $q$ and the binary separation $a$, and can be estimated as \citep{Eggleton83}:
\begin{equation}\label{eq:roche}
R_{L,1} \,\, = \,\, a \, \times \, \frac{0.49 q^{2/3}}{0.6 q^{2/3} + \mathrm{ln}(1 + q^{1/3})}.
\end{equation}
In AGB stars, the stellar winds are driven by dust \citep{Hoefner15, Hoefner18}, which means that the wind acceleration radius coincides with the dust condensation radius, i.e. the radius where the gas is cooled enough to form dust grains.
This radius can be estimated as \citep{Lamers99, Hoefner07}:
\begin{equation}\label{eq:rd}
R_d \,\, = \,\, \frac{1}{2} R_* \left( \frac{T_{\mathrm{g}}}{T_{\mathrm{cond}}} \right) ^{\frac{4+p}{2}},
\end{equation}
where $R_*$ is the stellar radius and $T_g$ is the temperature of donor star. The condensation temperature $T_{\mathrm{cond}}$ and the exponent $p$ are characteristics of the grain material and depend on the chemical composition.
LIN 358\ is an S-type symbiotic binary \citep{Muerset1996}, which means the infrared emission is dominated by the stellar continuum and not the dust emission as in D-type binaries. The dust grains are important in driving the stellar wind \citep{Hoefner18}, but the emission from the WD will later destroy the dust grains and ionize most of the wind (see Sec.~\ref{sec:csmabs}).
The atmosphere of the AGB star in LIN358 is O-rich \citep{Muerset1996}, meaning the condensing grains are mainly various silicates. The exact nature of the O-rich condensates is still debated, but following \citet{Bladh12, Bladh15, Hoefner16, Hoefner18}, for the most efficient silicate grains $p \approx -1.0$ and $T_{\mathrm{cond}} \approx 1100$ K. Using these values in Eq.~(\ref{eq:rd}), we get $R_d = 617.2 $ R$_{\odot}$ ($\approx 2.9$ AU).
Next, we can estimate the orbital separation $a$ in LIN 358\ by using the ratio of the wind acceleration radius and the Roche lobe radius of the donor $R_{d} / R_{L,1}$. Based on the hydrodynamical simulations of \citet{Mohamed12, Abate13}, the maximal accretion efficiency is reached at $R_{d} / R_{L,1} = 1.5$. Using this with Eq.~(\ref{eq:roche}), we can calculate the semi-major axis to be $a = 3.7$ AU, which corresponds to an orbital period of $\approx 2.9$ years.
This is well within the range of typical periods for S-type symbiotic binaries ($\sim$ 1 -- 6 years; \citealt{Gromadzki13}).
In the rest of the paper we have used $a = 3.7$ AU, but we note that our results are not particularly sensitive to this chosen value (see Sec.~\ref{sec:discussion}).
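For reproducibility, this estimate can be written out as a short Python calculation (a sketch of Eqs.~(\ref{eq:rd}) and (\ref{eq:roche}) with the parameters adopted above; the mass ratio assumes the 5 M$_{\odot}$ giant and 1 M$_{\odot}$ WD discussed in Sec.~\ref{sec:mass}):
\begin{verbatim}
import math

R_SUN_AU = 0.00465                       # solar radius in AU

def dust_radius(r_star, t_giant, t_cond=1100.0, p=-1.0):
    # Eq. (2): wind acceleration / dust condensation radius
    return 0.5 * r_star * (t_giant / t_cond) ** ((4.0 + p) / 2.0)

def roche_fraction(q):
    # Eq. (1): Eggleton approximation for R_L / a, with q = M_giant / M_WD
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

r_d = dust_radius(r_star=178.0, t_giant=4000.0)     # ~617 R_sun
r_l1 = r_d / 1.5                                    # WRLOF condition R_d / R_L1 = 1.5
a = r_l1 / roche_fraction(q=5.0) * R_SUN_AU         # semi-major axis in AU
period = math.sqrt(a ** 3 / (5.0 + 1.0))            # Kepler's third law, in years
print(round(r_d), round(a, 1), round(period, 1))    # 617, 3.7, 2.9
\end{verbatim}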
\begin{figure*}
\centering
\includegraphics[width=0.97\textwidth]{Figs/LIN358_spectrum.pdf}
\caption{The spectrum of LIN 358\ as observed with WiFeS. }
\label{fig:spectrum}
\end{figure*}
\section{Observations}\label{sec:obs}
We observed LIN 358\ on the nights of 2018 November 04--05 (P.I.: Seitenzahl; Proposal ID: 4180034) with the Wide Field Spectrograph (WiFeS) mounted on the Australian National University 2.3\,m telescope at the Siding Spring Observatory. We present only a short summary of the data reduction method, which is described in detail by \citet{Dopita16} and \citet{Ghavamian17}.
The WiFeS is a double-beam spectrograph which provides simultaneous and independent channels for both the blue (3500--5700 \AA ) and red (5300--7000 \AA ) wavelength ranges. We used the B3000 and R7000 gratings which means the spectral resolution in the blue wavelength range is R = 3000 ($\Delta v \approx$ 100 km s$^{-1}$) and in the red R = 7000 ($\Delta v \approx$ 45 km s$^{-1}$).
The observations were performed in the `binned mode', which provided us with a field of view of 25 $\times$ 35 spatial pixels (or spaxels), each of them $1'' \times 1''$ in angular size. This corresponds to a field of view of 7.3 pc $\times$ 10.2 pc assuming a distance of 60 kpc to the SMC.\footnote{\citet{Scowcroft16} found the distance to the SMC to be $62.0 \pm 0.3$ kpc, but for easier comparison with previous results we have assumed a 60 kpc distance in this paper.}
LIN 358 was observed in one pointing, which was offset from the source by $15''$ along the axis of the wider side of the WiFeS field of view (35 spaxels, or $35''$). The observations consisted of $2 \times 1800$s on-source exposures and $2 \times 900$s blank sky exposures, which were scaled and subtracted from the two co-added frames.
The data were reduced with the \textsc{pywifes} v0.7.3 pipeline \citep{Childress14ascl,Childress14}, which provided us a wavelength calibrated, sensitivity corrected, and photometrically calibrated data cube.
The data were dereddened using the extinction curves for SMC bar of \citet{Weingartner01} with an assumed carbon abundance of zero and using the column density $N_H = 7.6 \times 10^{20}$ cm$^{-2}$ obtained by \citet{Kahabka06} from blackbody fits to the \textit{XMM-Newton} data.
LIN 358 is located in the outskirts of the SMC, not in the bar, but given the lack of an available extinction curve for this region, we use the curve for the SMC bar. This does not affect the results significantly because the dereddening factor for e.g. the He \textsc{ii} 4686 \AA\ line is only $\approx 1.04 $, so our measured fluxes may be underestimated at most by a few per cent, which is of the same order as the measured flux errors.
In addition, we corrected the data for the redshift of 185.2 km s$^{-1}$, which was measured from several narrow emission lines (SMC heliocentric radial velocity is 145.6 km s$^{-1}$; \citealt{McConnachie12}), and subtracted the continuum emission using the Locally Weighted Scatterplot Smoothing algorithm (LOWESS; \citealt{Cleveland79}) similarly to \citet{Vogt17a, Vogt17b}.
\section{Data}\label{sec:data}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Figs/four_lines.pdf}
\caption{The observed line profiles, from left to right: He \textsc{ii} 4686 \AA , H$\beta$ , H$\alpha$ , and the Raman scattered O \textsc{vi} 6830 \AA .}
\label{fig:Ha_Hb}
\end{figure*}
After the data reduction process described in Sec.~\ref{sec:obs} we extracted the total source spectrum from the blue and red data cubes. The overall spectrum is shown in Fig.~\ref{fig:spectrum}. The spectrum consists of various emission lines: the H Balmer series from H$\alpha$ to the $n = 10 \rightarrow 2$ transition; various He \textsc{i} and He \textsc{ii} lines, most notably He \textsc{ii} 4686 \AA , He \textsc{i} 5876 \AA , and He \textsc{i} 6678 \AA ; [Fe \textsc{x}] 6374 \AA , and the O \textsc{vi} 6830 \AA\ Raman scattered feature. The usual nebular lines, such as [O \textsc{iii}] 4959, 5007 \AA\ and [S \textsc{ii}] 6716, 6731 \AA , are notably absent in LIN 358.
In our analysis we will focus mainly on the brightest observed lines, whose line luminosities are listed in Table~\ref{table:lums} and whose properties are outlined below.
\begin{table}
\caption{List of the luminosities of the brightest observed emission lines.}
\begin{tabular}{lc}
\multicolumn{1}{c}{Emission line}&\multicolumn{1}{c}{Luminosity $\times$ 10$^{33}$ (erg s$^{-1}$)}\\ \hline
He \textsc{ii} 4686 & 38.0 $\pm$ 1.2 \\
H $\beta$ & 23.1 $\pm$ 2.7 \\
He \textsc{i} 5876 & 6.2 $\pm$ 0.3 \\
$[$Fe \textsc{x}$]$ 6374 & 7.4 $\pm$ 0.4 \\
H $\alpha$ & 112.3 $\pm$ 8.1 \\
He \textsc{i} 6678 & 4.3 $\pm$ 0.2 \\ \hline
\end{tabular}
\label{table:lums}
\end{table}
\subsection{He II 4686 \AA}
The He \textsc{ii} 4686 \AA\ line is the second brightest emission line in the spectrum with a luminosity L = $3.8 \pm 0.12 \times 10^{34}$ erg s$^{-1}$. This line comes from the $n = 4 \rightarrow 3$ transition and is the brightest He \textsc{ii} line in the optical. The high ionization potential of He \textsc{ii} (54.4 eV) requires a hot ($\gtrsim$ 10$^{5}$ K) ionizing source, which makes this emission line a clear and important signature of an accreting white dwarf with steady nuclear burning \citep{Rappaport94}. For this reason this emission line has been used extensively in previous accreting WD and Type Ia supernova studies \citep{Woods13, Johansson14, Chen15, Woods16, Kuuttila19}.
Other observed He \textsc{ii} lines are at 4199 \AA , 4541 \AA , and 5411 \AA .
\subsection{He I lines}
The two brightest He \textsc{i} lines are at 5876 \AA\ and 6678 \AA , which are produced by the triplet transition 3$^3$D $\rightarrow$ 2$^3$P$^0$ and the singlet transition 3$^1$D $\rightarrow$ 2$^1$P$^0$, respectively.
Other observed He \textsc{i} lines include the triplet lines at 3889 \AA\ and 4471 \AA , and the singlet lines at 3965 \AA , 4922 \AA , and 5016 \AA .
Due to the high meta-stability of the He \textsc{i} excited level 2$^3$S, and to a lesser extent 2$^1$S, collisional effects play a significant role in the production of He \textsc{i} emission lines. Of the recombinations to excited levels of He \textsc{i}, approximately one fourth are to the singlet levels and three fourths are to the triplet levels, all of which eventually cascade down to the meta-stable 2$^3$S level through radiative transitions \citep{Osterbrock06}. The 2$^3$S level can decay to the ground state by emitting a photon, but at densities $\gtrsim 10^4$ cm$^{-3}$ most of the 2$^3$S states are depopulated by collisional transitions, for example to the singlet level 2$^1$S and triplet 2$^3$P$^0$ \citep{Bray00}. These collisional effects become even more important in the high densities and temperatures of symbiotic binaries. Calculating the luminosities of these lines thus requires a full treatment of all radiative and collisional processes.
\subsection{Balmer lines}
The H$\alpha$\ and H$\beta$\ emission lines are among the most important astrophysical lines and typically are very well understood. However, in the case of LIN 358 , the high density makes the treatment of these lines quite difficult.
In lower density environments \citep[e.g., n $\ll 10^{6} $ cm$^{-3}$, see ][]{Hummer1987},
H$\alpha$\ and H$\beta$\ emissivities can be treated using the simple Case B approximation \citep{Osterbrock06}. In the high density environment of symbiotic binaries, however, this is not the case, which is evident from the observed line ratio H$\alpha$ /H$\beta$\ $\approx$ 4.9, compared to the Case B line ratio H$\alpha$ /H$\beta$\ $\sim$ 3. The high densities and large optical depths in the nebulae around symbiotic binaries can cause the gas to become optically thick in the Balmer lines and self-absorption to occur
This will drastically change the Balmer line ratios, as has been observed in some active galactic nuclei \citep{Netzer75} and also other symbiotic binaries \citep{Davidsen77}.
These conditions require a full radiative transfer treatment to fully model the line luminosities and ratios.
The observed H$\alpha$\ and H$\beta$\ line profiles are shown in Fig.~\ref{fig:Ha_Hb}. H$\alpha$\ requires four Gaussian components to be explained well: two narrow peaks, an intermediate component, and very broad ($\sim 2000$ km/s) wings. Such emission line profiles have been previously observed in many planetary nebulae and symbiotic binaries \citep[e.g.][]{Arrieta03, Chang18}.
There are several different possible formation mechanisms for the broad wings, e.g. optically thin stellar winds from the hot component \citep{Skopal06}, Thomson scattering by free electrons \citep{Sekeras12}, and Raman scattering of Ly$\beta$ photons \citep{Nussbaumer89, Lee00}. The Raman scattering is favoured by the fact that there are no clear broad components in the other hydrogen lines, because with Thomson wings H$\alpha$\ and H$\beta$\ would have the same width that is proportional to $T_e ^{1/2}$, which is not the case in LIN 358. However, according to simulations of \citet{Chang18}, the Raman wings of H$\alpha$\ are about three times wider than the Raman wings of H$\beta$\ due to the different cross sections for Ly$\beta$ and Ly$\gamma$, which fits the picture of LIN 358. In addition, the observed O \textsc{vi} 6830 \AA\ Raman feature shows that the conditions for Raman scattering are met, so it is reasonable to assume that the Balmer lines include a contribution from Raman scattering as well.
In order to estimate the Balmer emission coming from the ionized region around the WD, we modelled the Raman scattered component with the simulated line profile of \citet[][Fig. 6]{Chang18} for N$_{\mathrm{H\textsc{i}}}$ = 10$^{20}$ cm$^{-2}$. After subtracting the Raman scattered component, the resulting line profile can be well fitted with two Gaussians, one emission line and one absorption line, see Fig.~\ref{fig:Ha_fit}.
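A schematic version of this decomposition is sketched below in Python (for illustration only: the Raman wing is approximated here by a broad Gaussian placeholder rather than the simulated profile of \citet{Chang18}, and all parameter values are invented for the example rather than fitted to our data).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, centre, sigma):
    return amp * np.exp(-0.5 * ((v - centre) / sigma) ** 2)

def model(v, a_em, s_em, a_abs, c_abs, s_abs, a_raman):
    # narrow emission + blueshifted absorption + broad Raman-like wing placeholder
    return (gauss(v, a_em, 0.0, s_em)
            + gauss(v, -a_abs, c_abs, s_abs)
            + gauss(v, a_raman, 0.0, 900.0))

v = np.linspace(-2500.0, 2500.0, 500)          # velocity grid in km/s
true = (1.0, 60.0, 0.3, -40.0, 30.0, 0.05)     # invented parameters
rng = np.random.default_rng(1)
flux = model(v, *true) + rng.normal(0.0, 0.01, v.size)

popt, _ = curve_fit(model, v, flux, p0=(0.8, 50.0, 0.2, -30.0, 20.0, 0.03))
print(np.round(popt, 2))
\end{verbatim}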
\subsection{[Fe X] 6374 \AA}\label{sec:fex}
The coronal [Fe \textsc{x}] 6374 \AA\ line is the only forbidden line present in our data. This line comes from the $^2$P$_{1/2} \rightarrow ^2$P$_{3/2}$ transition of Fe$^{9+}$. The ionisation energy of Fe$^{9+}$ is 233.6 eV, which makes this line very dependent on the temperature and thus a good, almost model independent test case. The complicated and poorly known wind and accretion structures of symbiotic binaries make simulating the low energy lines, e.g. He \textsc{i} and Balmer lines, a very difficult task, but the high ionisation energy of Fe$^{9+}$ means that the emission originates from very close to the WD, so this line is insensitive to the large scale wind structure.
\subsection{O VI 6830 \AA\ Raman feature}\label{sec:raman}
The broad O \textsc{vi} 6830 \AA\ feature is due to inelastic scattering of O \textsc{vi} 1032 \AA\ photons by hydrogen atoms. In this Raman scattering process the O $\textsc{vi}$ photons are absorbed by a hydrogen atom at the ground state ($1s^2$S), which is then excited to an intermediate state. A photon with $ \lambda \sim 6830$ \AA\ is then emitted and the hydrogen is left in an excited state ($2s^2$S) \citep{Schmid89}. This Raman scattering process requires an ionising source hot enough to produce O \textsc{vi} in the vicinity of a large amount of neutral hydrogen. The Raman scattering cross-section is $7.5 \times \sigma _{\mathrm{T}} \approx 5 \times 10^{-24}$ cm$^{2}$, where $\sigma _{\mathrm{T}}$ is the Thomson cross-section \citep{Schmid89, Lee97}. For an optical depth of 1, column densities of $N_H \approx 1/\sigma \approx 2 \times 10^{23}$ cm$^{-2}$ are needed. Such densities are typically reached only in the innermost parts of the wind, or in the photosphere of the donor star (see also Fig.~\ref{fig:columnDensity}).
The Raman scattered feature is observed almost exclusively in symbiotic binaries ($\approx $ 55\% of them in the Milky Way; \citealt{Akras19}), together with the 7088 \AA\ feature, which comes from Raman scattering of O $\textsc{vi}$ 1038 \AA\ (this is outside of the WiFeS R7000 wavelength range). Other Raman scattered lines, e.g. He $\textsc{ii}$ 1025 \AA\ $\rightarrow$ 6545 \AA\ \citep{Sekeras15}, have also been detected in some symbiotic systems, but these are mostly very faint and thus not detectable in our observations of LIN 358 .
\section{\textsc{Cloudy} simulations setup}\label{sec:sim}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/Ha_fit.pdf}
\caption{The H$\alpha$\ line profile with fitted components. The black line is the observed data, green dashed line is the simulated line profile of \citet[][Fig. 6]{Chang18} for N$_{\mathrm{H\textsc{i}}}$ = 10$^{20}$ cm$^{-2}$, in cyan and blue dashed lines are the two Gaussian components, and in the red is the sum of all three. }
\label{fig:Ha_fit}
\end{figure}
As described in Sec.~\ref{sec:data}, many of the observed lines in the high density nebulae around symbiotic binaries are products of complicated radiative and collisional processes, and thus explaining the source properties via these emission lines requires a detailed, simultaneous and self-consistent treatment of all the necessary complex processes. To tackle this problem, we used the open source spectral synthesis code \textsc{Cloudy}\footnote{www.nublado.org} version 17.01 \citep{Ferland17} to simulate the LIN 358\ system.
We calculated a grid of photoionization models while assuming that the central ionizing source (the WD) emitted a blackbody spectrum, which is a reasonable approximation for the ionizing emission of nuclear burning WDs \citep{Woods16}.
We have ignored the radiation from the giant star, because of its low effective temperature (4000 K). While the giant star dominates the infrared emission, its contribution to the optical line emission is very small compared to the white dwarf.
In our simulations the metallicity of the gas was set to one fifth of solar metallicity from \citet{Lee05, Grevesse10}, corresponding roughly to a typical SMC metallicity.
A diffuse background radiation field was included in the calculations in the standard way it is implemented in \textsc{Cloudy}, where the radiation field shape and intensity were set to describe the cosmic radio to X-ray background \citep{Ostriker83, Ikeuchi86, Vedel94} with the cosmic microwave background included, and the extra heating and ionisation by cosmic rays were included in the calculations according to the mean ionisation rate of \citet{Indriolo07}.
\subsection{Density structure}
As mentioned in Sec.~\ref{sec:ov}, the ionising source in symbiotic binaries sits in the dense wind emitted by the giant donor star, which makes the density structure around the WD asymmetric. This makes simulating these complex objects with \textsc{Cloudy} problematic, because \textsc{Cloudy} is a one-dimensional code. LIN 358\ and other symbiotic binaries have been previously studied with \textsc{Cloudy} \citep[see][for LIN 358]{Orio07}. However, our initial attempts to compute the nebular spectrum with 1D \textsc{Cloudy} calculations failed to explain our observations. This is perhaps not surprising, given that the density profile towards the donor star is very different from the density profile in the opposite direction.
For this reason, we constructed a model where we calculate the ionized gas structure with \textsc{Cloudy} along a number of paths from the WD and combine the results to get the 2D gas structure. We assume that the WD and the giant star are separated by 3.7 AU (see Sec.~\ref{sec:ov}) and that the mass-loss from the giant is spherically symmetric, so that the ionization structure becomes rotationally symmetric about the axis between the WD and the giant star, and we can restrict the calculations to a 2D plane.
The real situation is naturally more complex. The wind in AGB stars is driven by stellar pulsations and is thus variable both in direction and time. However, the long-term average mass-loss is still often well approximated by a spherically symmetric formula \citep{Hoefner18}. Furthermore, hydrodynamical simulations \citep[e.g.][]{Mohamed07, Mohamed12,deValBorro09} show that the wind in symbiotic binaries is focused towards the binary orbital plane. Accurately accounting for this effect would require full 3D radiative transfer simulations, which is outside the scope of this paper. We note, however, that the wind focusing in this system appears to be moderate, given that our 2D simulations were able to correctly reproduce the global ionisation structure of the wind, as confirmed by the good consistency of the simulated spectra with observations, despite the richness and high statistical quality of the observational data. On the other hand, our initial experiments showed that these effects escape 1D ionisation simulations, which grossly fail to explain the observed spectrum of LIN 358. Our calculations presented here are a significant improvement on the earlier 1D calculations, and we have achieved consistent results at a significantly lower cost than would be required for 3D simulations.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/system.pdf}
\caption{The geometrical configuration of our simulations. The giant star and the centre of the spherically symmetric density distribution is marked with the black open circle. The white dwarf is marked with the grey star at the distance $r_c$ from the density centre. A test particle at distance $r$ from the centre of the density distribution will have coordinates $(R, \theta)$ in the WD-centred reference system.}
\label{fig:system}
\end{figure}
We assume the mass-loss to be of the form
\begin{equation}\label{eq:massloss}
\dot{M} = 4 \, \pi \, \mu \, m_{\mathrm{H}} \, v \, r^2 \, n(r) \, \left( 1 - \frac{R_*}{r} \right) ,
\end{equation}
where $\mu$ is the mean molecular weight, $m_{\mathrm{H}}$ is the mass of a hydrogen atom, $v = 15$ km s$^{-1}$ is the assumed constant wind velocity \citep{Chen17}, $R_{*} = 178$ R$_{\odot}$ is the origin of the stellar wind, i.e. the radius of the giant star, $r$ is the distance from the centre of the giant star, and $n(r)$ is the number density at distance $r$. The main consequence of Eq.~(\ref{eq:massloss}) is that the wind density structure follows a power-law ($r^{-2}$) distribution when $r \gg R_{*}$, but at the surface of the giant star the density becomes very high. We set the maximum density to $n = 10^{14}$ cm$^{-3}$ to avoid the infinite density at $r = R_{*}$.
With this wind structure the ionising source sits off-centre at a distance $r_c$, but the density distribution from the white dwarf's point of view can easily be calculated following \citet{Arthur07}. First, we can write the density distribution in the form:
\begin{equation}\label{eq:nr}
n(r) = n_c \, \left( \frac{r}{r_c} \right)^{-2} \, \left( 1 - \frac{R_*}{r} \right)^{-1} ,
\end{equation}
where $n_c$ is the number density at the distance $r_c$, i.e. at the position of the WD. We can now change the coordinates and centre the reference system to the position of the WD so that we can write
\begin{equation}\label{eq:rR}
r^2 = R^2 + r_c^2 - 2 R r_c \mathrm{cos}(\pi - \theta) \, ,
\end{equation}
where $R$ is the distance measured from the WD in the direction of $\theta$, where the angle $\theta$ is measured from the symmetry axis between the WD and the giant star, and the giant star is in the direction $\theta = \pi$; see Fig.~\ref{fig:system}. Combining equations (\ref{eq:nr}) and (\ref{eq:rR}), we can write the density distribution from the white dwarf's point of view as
\begin{equation}\label{eq:nRT}
n(R, \theta) = \frac{n_c \, r_c^2}{R^2 + r_c^2 + 2 R r_c \mathrm{cos}\theta } \left( 1 - \frac{R_*}{\sqrt{R^2 + r_c^2 + 2 R r_c \mathrm{cos}\theta}} \right)^{-1}.
\end{equation}
With Eq.~(\ref{eq:nRT}) we can calculate the density distribution along any line of sight, which is illustrated in Fig.~\ref{fig:denstheta} with $n_c = 10^7$ cm$^{-3}$ and $r_c = 3.7$ AU.
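For reference, the following Python sketch (our own illustrative implementation, not part of the \textsc{Cloudy} input itself; the physical constants and the handling of the region at and inside the giant's surface are assumptions based on the values quoted above) evaluates Eq.~(\ref{eq:nRT}) along a given direction, which is how the density profiles for the individual rays can be tabulated.
\begin{verbatim}
import numpy as np

# Constants and parameter values quoted in the text (illustrative)
AU = 1.496e13            # cm
R_SUN = 6.957e10         # cm
r_c = 3.7 * AU           # WD - donor separation
R_star = 178.0 * R_SUN   # radius of the giant star
n_c = 1.0e7              # cm^-3, density at the position of the WD
n_max = 1.0e14           # cm^-3, density cap near the giant's surface

def density(R, theta):
    """Wind number density (cm^-3) at distance R (cm) from the WD,
    in the direction theta (rad) from the WD-donor axis, following
    the off-centre density distribution derived in the text."""
    r = np.sqrt(R**2 + r_c**2 + 2.0 * R * r_c * np.cos(theta))
    with np.errstate(divide="ignore", invalid="ignore"):
        n = n_c * (r / r_c) ** -2 / (1.0 - R_star / r)
    # cap the density at (and formally inside) the giant's surface
    return np.where((r <= R_star) | (n < 0.0) | (n > n_max), n_max, n)

# Density profiles along a few example rays
R_grid = np.logspace(13.0, 17.0, 200)    # cm
profiles = {t: density(R_grid, np.radians(t)) for t in (0.0, 90.0, 175.0)}
\end{verbatim}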
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/thetas.pdf}
\caption{The density distribution as seen by the WD with $n_c = 10^7$ cm$^{-3}$ and $r_c$ = 3.7 AU calculated for various angles. }
\label{fig:denstheta}
\end{figure}
\section{Results}\label{sec:results}
Using the setup explained in Sec.~\ref{sec:sim} we performed simulations to see if we can reproduce the observed characteristics of LIN 358\ by varying the main parameters of the problem: the temperature $T_h$ and luminosity $L$ of the WD and the mass-loss rate $\dot{M}_{\mathrm{loss}}$ of the donor star. Strictly speaking, the WD luminosity is not an independent parameter. It is determined by the mass accretion rate modulo the regimes of the nuclear burning on the WD surface, whereas for the given binary system parameters, the mass accretion rate depends on the mass-loss rate of the donor star. On the other hand, line luminosities scale with the gas density, which in turn is proportional to the mass-loss rate. Due to the non-linear nature of the problem, we varied $\dot{M}_{\mathrm{loss}}$, $L$ and $T_h$ in an ad hoc iterative procedure to reach the values that best described the observed properties of LIN 358. In this procedure, we considered only the high excitation lines of He \textsc{ii} 4686 and [Fe \textsc{x}] 6374 \AA, for which the line luminosities are robustly predicted in our simulations. As, in the relevant parameter range, the line luminosities are monotonic functions of temperature, WD luminosity and the donor mass-loss rate, the solution found in this way is unique. Other significantly detected lines were used for an a posteriori consistency check.
\subsection{Colour temperature of the white dwarf}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/FeX_temp.pdf}
\caption{Simulated [Fe \textsc{x}] 6374 \AA\ luminosity as a function of the WD blackbody temperature calculated with $L = 10^{38}$ erg s$^{-1}$ and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$.}
\label{fig:FeXtemp}
\end{figure}
First, we determined the temperature of the ionising source by matching the observed [Fe \textsc{x}] 6374 \AA\ emission line to the simulations. As mentioned in Sec.~\ref{sec:fex}, this emission line is produced close to the white dwarf and is very sensitive to the temperature of the ionizing radiation but insensitive to the structure of the stellar wind at large distances from the white dwarf (see also Fig.~\ref{fig:massloss}). The simulations and the comparison to the observed line luminosity are shown in Fig.~\ref{fig:FeXtemp}. In this figure, the WD luminosity and the wind mass-loss rate are fixed at the values determined in the above-mentioned iterative procedure. From this plot we derive the temperature:
\begin{equation}\label{eq:fextemp}
T_h \, = \, ( \, 2.22 \pm 0.03 \, ) \, \times \, 10^5 \,\, \mathrm{K} \,\,\, \approx \, 19 \,\, \mathrm{eV},
\end{equation}
where the error corresponds to the statistical error of the [Fe \textsc{x}] 6374 \AA\ line luminosity. The WD colour temperature obtained in this way is close to the temperatures $T_h = (2.275 \pm 0.3) \times 10^5$ K and $T_h = (2.50 \pm 0.1) \times 10^5$ K derived previously by \citet{Kahabka06} and \citet{Skopal15a}, respectively. In addition, this temperature is in agreement with the simple formula of \citet{Iijima81} used to estimate the effective temperature of a central source from the nebular emission line fluxes:
\begin{equation}
T (10^4 K) = 19.38 \sqrt{\frac{2.22 F_{4686}}{4.16 F_{H\beta} + 9.94 F_{4471}}} + 5.13,
\end{equation}
where $F_{4686}$, $F_{H\beta}$, and $F_{4471}$ are the fluxes of He $\textsc{ii}$ 4686, H $\beta$, and He $\textsc{i}$ 4471 emission lines, respectively. Using this formula for LIN 358, we get $T_{\mathrm{eff}} = 2.36 \times 10^5$ K.
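For completeness, a direct transcription of this formula into Python is given below; the example fluxes are placeholders only, while the actual measured line fluxes of LIN 358\ enter the estimate quoted above.
\begin{verbatim}
def iijima_temperature(F_4686, F_Hbeta, F_4471):
    """Effective temperature (K) of the central source from the
    He II 4686, H beta and He I 4471 line fluxes (Iijima 1981)."""
    T4 = 19.38 * (2.22 * F_4686 / (4.16 * F_Hbeta + 9.94 * F_4471)) ** 0.5
    return (T4 + 5.13) * 1.0e4

# Placeholder relative fluxes -- NOT the measured values for LIN 358
print(iijima_temperature(F_4686=1.0, F_Hbeta=1.1, F_4471=0.05))
\end{verbatim}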
In all of the following calculations we have used the temperature derived from the [Fe \textsc{x}] 6374 \AA\ emission line in Eq.~(\ref{eq:fextemp}).
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/luminosity.pdf}
\caption{Simulated [Fe \textsc{x}] 6374 \AA\ (red) and He \textsc{ii} 4686 \AA\ (blue) line luminosity as a function of the WD luminosity for $T = 19$ eV and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$.}
\label{fig:luminosity}
\end{figure}
\subsection{White dwarf luminosity}
To justify the choice of the WD luminosity we used the two high-excitation lines [Fe \textsc{x}] 6374 \AA\ and He \textsc{ii} 4686 \AA. The comparison between simulations and observations is shown in Fig.~\ref{fig:luminosity}, demonstrating that the two lines give consistent estimates of the WD luminosity. From this we can derive the WD luminosity:
\begin{equation}
L \, = \, ( \, 1.02 \, \pm \, 0.15 \,) \, \times \, 10^{38} \,\, \mathrm{erg} \,\, \mathrm{s}^{-1}.
\end{equation}
This value is consistent with the previous values of \citet{Kahabka06} and \citet{Skopal15a}.
\subsection{Mass-loss rate of the donor star}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figs/massloss.pdf}
\caption{The observed and simulated line luminosities for the main emission lines in the spectrum of LIN 358\ shown as a function of the mass-loss rate of the giant star. In the left panel we show the high excitation lines of He \textsc{ii} 4686 and [Fe \textsc{x}] 6374 \AA, whose luminosities are robustly predicted in our simulations. In the right panel we show the H$\alpha$ , H$\beta$ , He \textsc{i} 5875, and He \textsc{i} 6678 \AA\ lines, which are more sensitive to collisional effects, self-absorption and the details of the wind structure near the donor star. The solid lines are predicted by the \textsc{Cloudy} simulations and the dashed horizontal lines show the observed values from Table~\ref{table:lums}. The mass-loss rate on the x-axis is from Eq.~(\ref{eq:massloss}) with $v = 15$ km s$^{-1}$, $r=3.7$ AU, and $R_* = 178$ R$_{\odot}$. }
\label{fig:massloss}
\end{figure*}
The density structure of the wind is related to the mass-loss rate from the donor star via Eq.~(\ref{eq:massloss}) with $v = 15$ km s$^{-1}$, $r=3.7$ AU, and $R_* = 178$ R$_{\odot}$. Simulated and observed line luminosities of the two high excitation lines of He \textsc{ii} 4686 and [Fe \textsc{x}] 6374 \AA\ are shown in the left panel of Fig.~\ref{fig:massloss}.
From this figure one can see that the luminosities of our two main diagnostic lines can be explained with a consistent mass-loss rate of
\begin{equation*}
\dot{M}_{\mathrm{loss}} \approx 1.2 \times 10^{-6} \,\, \mathrm{M_{\odot} \, yr^{-1}}.
\end{equation*}
The right hand panel in Fig.~\ref{fig:massloss} shows luminosities of other principal emission lines. As one can see from this plot, the He \textsc{i} lines are consistent with the high excitation lines, but the Balmer lines show some scatter in mass-loss rate values, which however is not dramatic and its origin is reasonably well understood, as discussed in Sec. \ref{sec:discussion}.
\subsection{Structure of the emission regions of principal lines}
\begin{figure*}
\centering
\vbox{
\includegraphics[width=0.8\textwidth]{Figs/emissivity_Halpha.pdf}
\vspace{0.5cm}
\includegraphics[width=0.8\textwidth]{Figs/emissivity_HeFe.pdf}
}
\caption{Structure of the emission region in our best-fitting simulation (L = $10^{38}$ erg s$^{-1}$ and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$) for different emission lines. The top row shows the emissivity of the H$\alpha$ line at two different length scales of $10^{16}$ cm (left) and $10^{15}$ cm (right). The lower panels show the emissivity of He \textsc{i} on the left and of [Fe \textsc{x}] 6374 \AA\ on the right at the length scale of $10^{15}$ cm. The colours represent the line volume emissivity in erg s$^{-1}$ cm$^{-3}$ according to the colour-bars. In this representation the WD is located at the centre of the image at coordinates (0, 0) and the donor star is to the left of the WD at a distance of 3.7 AU (5.5 $\times$ $10^{13}$ cm). }
\label{fig:emissivities}
\end{figure*}
With our simulations we can investigate where the majority of the nebular emission in a symbiotic binary originates. This is illustrated in the top two panels in Fig.~\ref{fig:emissivities}, where we show the volume emissivity of several emission lines. The emissivity is shown in an arbitrary plane, because we have assumed an azimuthal symmetry. From these figures one can see that the emissivity distribution for low excitation lines is quite asymmetric along the line connecting the white dwarf and the donor star. Indeed, in the simple Str\"omgren sphere case, the radius of the ionized region in a constant density nebula containing only hydrogen is:
\begin{equation}\label{eq.stromgren}
\mathrm{R_S} = \left( \frac{3}{4 \pi} \frac{\mathrm{\dot{N}_{ph}}}{\mathrm{n^2} \alpha} \right) ^{\frac{1}{3}}
\approx 155 \mathrm{AU} \left( \frac{\mathrm{\dot{N}_{ph}}}{10^{48} \mathrm{s}^{-1}} \right) ^{\frac{1}{3}} \left( \frac{\mathrm{n}}{10^7 \mathrm{cm} ^{-3}} \right) ^{-\frac{2}{3}},
\end{equation}
where $\alpha$ is the recombination coefficient, $\mathrm{\dot{N}_{ph}}$ is the number of ionizing photons per second, and $\mathrm{n}$ is the number density of the gas. The lower density away from the giant star (i.e. increasing x-coordinate in Fig.~\ref{fig:emissivities}) causes the ionized region to be more extended in this direction than towards the giant star. The dark cone to the left is the ``shadow'' of the giant star, which blocks the emission from the WD propagating to the left.
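The numerical scaling of Eq.~(\ref{eq.stromgren}) is easily verified, as in the short sketch below; the adopted case-B recombination coefficient is an assumption, and its exact value at the relevant temperature accounts for the small difference with respect to the 155 AU normalisation quoted above.
\begin{verbatim}
import numpy as np

N_ph = 1.0e48        # ionizing photons per second
n = 1.0e7            # cm^-3
alpha_B = 2.6e-13    # cm^3 s^-1, case-B coefficient at ~1e4 K (assumed)
AU = 1.496e13        # cm

R_S = (3.0 * N_ph / (4.0 * np.pi * n**2 * alpha_B)) ** (1.0 / 3.0)
print(f"R_S = {R_S:.2e} cm = {R_S / AU:.0f} AU")   # ~140 AU
\end{verbatim}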
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Figs/diffLum.pdf}
\caption{Distribution of line luminosity around the white dwarf as a function of angle $\theta$ (see Fig.~\ref{fig:system}). The lines are calculated for L = $10^{38}$ erg s$^{-1}$ and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$.}
\label{fig:diffLum}
\end{figure}
From the Fig.~\ref{fig:emissivities} one can see that most of the emission originates from the vicinity of the giant star, where the density is higher. This is further illustrated by Fig.~\ref{fig:diffLum} where we show the simulated line luminosity as a function of the angle $\theta$ (see Fig.~\ref{fig:system}) for various emission lines.
These figures also highlight the differences between different lines: the [Fe \textsc{x}] 6374 \AA\ line originates roughly spherically from the immediate surroundings of the ionizing source, as already mentioned in Sec.~\ref{sec:fex}, whereas virtually all of the He \textsc{i} emission comes from a small region near the giant star, where the gas is not too highly ionized and the density is high enough that the collisional processes dominate the emission mechanisms.
The emissivity as a function of the distance from the ionising source for several angles is shown in Fig.~\ref{fig:em_radius}. In this figure one can again clearly see the differences between the emission lines: the [Fe \textsc{x}] 6374 \AA\ emissivity peaks much closer to the WD than the H and He \textsc{i} lines, which all peak roughly at the same distance, close to the giant star. For the He \textsc{i} lines the emissivity peak is very sharp near the donor star, while for the others the emissivity is more extended.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figs/emissivity_threeAngles.pdf}
\caption{The emissivity of various emission lines shown as a function of the distance from the ionising source with three different angles (see Fig.~\ref{fig:system}): $\theta = 0^{\circ}$ (left), $\theta = 90^{\circ}$ (middle), and $\theta = 175^{\circ}$ (right). The lines are calculated for L = $10^{38}$ erg s$^{-1}$ and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$. }
\label{fig:em_radius}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
We have used 2D radiative transfer calculations based on the \textsc{Cloudy} photoionization code to study ionization of the wind from the AGB donor star by the emission from the nuclear burning white dwarf in the LIN 358 system. As discussed in Sec.~\ref{sec:sim}, the real geometry of the wind in symbiotic binaries is not spherically symmetric and strictly speaking requires 3D simulations. However, our 2D simulations do capture correctly the main geometrical aspects of the photoionization nebula around the white dwarf -- significant asymmetry of the density distribution towards the donor star, away from it and in the direction normal to the orbital plane of the system. With this, our simulations successfully explain all the main emission lines except for the Balmer lines, which are modified by self-absorption and have complicated line profiles. Our approach enabled us to significantly improve on the 1D calculations (which in our runs failed to explain the observed spectrum) while also keeping the computational effort manageable. Based on our calculations we provide a self-consistent description of the observed optical spectrum and derived the main parameters of the problem -- wind mass loss rate of the AGB donor star, luminosity of the white dwarf and the colour temperature of its emission.
The orbital parameters of LIN 358 are not known whereas the orbital separation of the system is an important parameter in our calculations. To this end, we used estimates based on numerical modelling and an analytical description of the wind Roche Lobe overflow accretion regime, as described in Sec.~\ref{sec:orb}. As these estimates bear some uncertainty we investigated the dependence of our results on the assumed binary separation $a$ and found that they are relatively insensitive to $a$ in the relevant range of values of $a$. In particular, we have found that increasing $a$ by a factor of two increases the derived mass-accretion rate by only $\sim$20\%, and the temperature and luminosity change by $\sim$2\%. Similarly, decreasing the binary separation to 2 AU on the other hand decreases the mass accretion rate by $\sim$15\%.
\subsection{Low excitation lines}
Although the absolute luminosities of the H$\alpha$\ and H$\beta$\ lines can be roughly accounted for in our simulations (Fig. \ref{fig:massloss}), their observed line ratio is not reproduced. The line ratio in our simulations remains constant at $\approx 3$, as expected from Case B recombination \citep{Osterbrock06}, but the observed line ratio is higher, H$\alpha$ /H$\beta$\ $\approx$ 4.9. This higher Balmer decrement could have been caused by ISM dust absorption, but this is unlikely, given the small value of interstellar reddening towards the source (see Sec.~\ref{sec:obs}). The most likely reason for the high Balmer decrement is self-absorption, which is not fully captured by our simple simulation setup.
High H$\alpha$ /H$\beta$\ line ratios have been previously observed in e.g. AGN and other symbiotic binaries, and our observed value can be explained by e.g. the calculations done by \citet{Netzer75}, but the complicated high density structure near the surface of the giant star and inside the wind acceleration radius, where the collisional effects play a major role in the production of Balmer lines, cannot be fully described by our simple simulation setup. The wind structure is likely more complicated than a smooth power-law profile and there can be clumps, inhomogeneities, and shock waves. In addition, the wind can be gravitationally focused towards the orbital plane of the binary \citep{deValBorro09, Shagatova16}, and simulating this would require full 3D calculations.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/massloss_temp.pdf}
\caption{The photospheric temperature (black lines) and radius (blue lines) of the white dwarf shown as a function of the mass-accretion rate. The dashed, solid, and dash-dotted lines show the results of \citet{Hachisu99b} for WD masses of 0.8, 1.0, and 1.3 M$_{\odot}$, respectively. The red areas show our results for LIN 358, and the blue horizontal line shows the photospheric WD radius derived by \citet{Skopal15a}. }
\label{fig:massloss_temp}
\end{figure}
\subsection{Mass accretion rate}
Steady nuclear burning on the white dwarf surface can occur only in a rather narrow range of mass-accretion rates, below which the WD exhibits nova outbursts, and above which the WD is believed to have an expanded photosphere and lose the excess mass via high velocity winds \citep{Hachisu96, Nomoto07, Wolf13}.
Our results suggest that the giant star in the LIN 358\ system is losing mass through stellar winds with a mass-loss rate of $\dot{M}_{\mathrm{loss}} \approx 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$. This value is well within the range of typical mass-loss rates for O-rich AGB stars like LIN 358\ \citep{Ramstedt09, Groenewegen18}.
How much of this material ends up on the WD is not fully clear, but using the same method as described by e.g. \citet{Abate18, Ilkiewicz20, Belloni20}
we can estimate the accretion efficiency to be $\approx 0.5$, which means the WD would accrete material at a rate of $\dot{M}_{\mathrm{acc}} \approx 6 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$. Recalling that for a 1 M$_{\odot}$ white dwarf stable nuclear fusion can occur in the range of mass accretion rates from $\approx 2 \times 10^{-7}$ M$_\odot$ yr$^{-1}$ to $\approx 4 \times 10^{-7}$ M$_\odot$ yr$^{-1}$ \citep{Nomoto07, Wolf13}, we conclude that the white dwarf in LIN 358\ accretes above the stability strip. In this regime, only a fraction of the accreted material can be retained, the maximal rate given by the upper boundary of the stability strip, $\approx 4 \times 10^{-7}$ M$_\odot$ yr$^{-1}$ for a 1 M$_{\odot}$ white dwarf, the rest being blown away in a radiation-driven wind \citep{Hachisu96, Nomoto07, Wolf13}. The high velocity WD wind will further complicate the geometry of the photoionization problem.
Interestingly, the upper boundary of the stability strip, $\dot{M} = 4 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$, corresponds to the bolometric luminosity of $L = 1.1 \times 10^{38}$ erg s$^{-1}$, which is very close to the luminosity of the white dwarf derived in our photoionisation calculations, $L = ( 1.02 \pm 0.15) \times 10^{38}$ erg s$^{-1}$. We emphasise that this value of WD bolometric luminosity was obtained in Sec.~\ref{sec:results} on completely different grounds.
Conversely, we can use the bolometric luminosity of the WD derived from the photoionisation calculations to constrain the accretion efficiency in LIN 358. Indeed, given the efficiency of nuclear burning and assuming solar abundances, a luminosity of $1.02 \times 10^{38}$ erg s$^{-1}$ requires a mass accretion rate of $\dot{M} = 3.7 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$. Given the mass-loss rate of $\dot{M}_{\mathrm{loss}} \approx 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$ in LIN 358, the accretion efficiency is $0.31$. Taking into account that the bolometric luminosity is close to the upper boundary of the stability strip for a 1 M$_{\odot}$ WD, this value should be considered as a lower limit. We also note that if LIN 358\ harbours a more massive white dwarf, as suggested e.g. by \citet{Orio07}, the conclusion that the WD is accreting above the stable nuclear burning limit would not change, as the value of $\dot{M} = 3.7 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$, corresponding to the luminosity of $1.02 \times 10^{38}$ erg s$^{-1}$, exceeds the lower boundary of the stability strip for any WD mass up to about 1.3 M$_{\odot}$. On the other hand, we argue that the mass of the WD in LIN 358\ is larger than 0.9 M$_{\odot}$, as for smaller masses the luminosity of $10^{38}$ erg s$^{-1}$ could not be maintained for an extended period.
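The arithmetic behind this estimate can be reproduced with the short sketch below; the hydrogen mass fraction and the mass-to-energy efficiency of hydrogen burning are the usual assumed values.
\begin{verbatim}
C = 2.998e10          # speed of light, cm s^-1
M_SUN = 1.989e33      # g
YEAR = 3.156e7        # s

X_H = 0.7             # hydrogen mass fraction (assumed, ~solar)
eps_H = 0.007         # mass-to-energy efficiency of hydrogen burning

L_wd = 1.02e38        # erg s^-1, WD luminosity from the fit
M_loss = 1.2e-6       # Msun/yr, donor mass-loss rate from the fit

energy_per_gram = X_H * eps_H * C**2             # ~4.4e18 erg g^-1
M_acc = L_wd / energy_per_gram * YEAR / M_SUN    # ~3.7e-7 Msun/yr
print(f"M_acc = {M_acc:.2e} Msun/yr, "
      f"accretion efficiency >= {M_acc / M_loss:.2f}")
\end{verbatim}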
Furthermore, we use the wind solution from \citet{Hachisu99b} to plot in Fig.~\ref{fig:massloss_temp} the photospheric temperature and radius of an accreting WD as a function of the mass-accretion rate. In this plot we also show our best fit WD colour temperature, our estimate of the mass accretion rate and the WD photospheric radius measurement from \citet{Skopal15a}. As one can see, the lines cross between the 0.8 and 1 $M_{\odot}$ curves derived from \citet{Hachisu99b} calculations.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/columnDensity.pdf}
\caption{The neutral hydrogen column density around the WD as a function of the angle $\theta$ (see Fig.~\ref{fig:system}) calculated for L = $10^{38}$ erg s$^{-1}$ and $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$. The black line shows the simulation results, red line shows the upper limit from \citet[][see text]{Kahabka06}, and the blue dashed line shows the column density at which the optical depth for Raman scattering equals unity (see Sec.~\ref{sec:raman}). }
\label{fig:columnDensity}
\end{figure}
Notably, any high velocity winds ($\sim$ 100's -- 1000 km s$^{-1}$) driven from the WD, as is predicted for accretion rates above the stable accretion regime \citep{Hachisu99b}, should prove strongly supersonic within the red giant wind medium. This would produce a strong shock, in a manner analogous to the colliding winds found in some high-mass binaries \citep[e.g.,][]{Dougherty2005}. Detecting such emission in LIN 358\ should be feasible; indeed, shock emission from colliding winds has previously been detected in a massive binary in the SMC \citep{Naze2007}. Modelling the additional emission expected from such a shock is, however, beyond the present scope; we address this, as well as prospects for constraining WD accretion physics, in a subsequent effort.
\subsection{Circumstellar absorption}
\label{sec:csmabs}
X-ray spectral fitting of LIN 358\ gave a column density of neutral material intrinsic to the SMC of $(3.9 \pm 0.6) \, \times 10^{20}$ cm$^{-2}$ (on top of Galactic absorption of $3.7 \times 10^{20}$ cm$^{-2}$) \citep{Kahabka06}. This number includes contributions from the neutral ISM in the SMC as well as the neutral circumstellar material (CSM) around LIN 358. In a more general context, attenuation by the dense wind from the giant star is often proposed to explain the paucity of observed symbiotic binaries. However, in many cases the wind should be too highly ionized to provide a significant level of attenuation \citep{Nielsen15}.
Our calculations show that this is also true for LIN 358, where the circumstellar material is mostly ionized except for a narrow cone towards the donor star.
We show in Fig.~\ref{fig:columnDensity} the neutral hydrogen column density $\mathrm{N_H}$ obtained from our simulations as a function of the viewing angle from the WD. From this figure one can see that the wind is highly ionized everywhere except at angles $\theta \gtrsim 150^{\circ}$, where a large amount of neutral gas is present with column densities in excess of $\mathrm{N_H} \gtrsim 10^{22}$ cm$^{-2}$. However, the solid angle subtended by the regions of large column density is only about $\sim 0.07$ of $4\pi$. Therefore, even for a favourable binary inclination, large $\mathrm{N_H}$ values will only be observed in a narrow range of binary orbital phases, when the donor star is close to the line of sight.
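For illustration, the angular trend in Fig.~\ref{fig:columnDensity} can be compared with the total gas column obtained by integrating the wind density law along a ray, as in the sketch below (using, for simplicity, the illustrative density normalisation of Fig.~\ref{fig:denstheta}); the neutral column shown in the figure additionally requires the ionisation structure computed by \textsc{Cloudy} along each ray, so this integral only provides an upper limit.
\begin{verbatim}
import numpy as np

AU, R_SUN = 1.496e13, 6.957e10
r_c, R_star = 3.7 * AU, 178.0 * R_SUN
n_c, n_max = 1.0e7, 1.0e14     # cm^-3 (illustrative normalisation)

def density(R, theta):
    r = np.sqrt(R**2 + r_c**2 + 2.0 * R * r_c * np.cos(theta))
    with np.errstate(divide="ignore", invalid="ignore"):
        n = n_c * (r / r_c) ** -2 / (1.0 - R_star / r)
    return np.where((r <= R_star) | (n < 0.0) | (n > n_max), n_max, n)

def total_column(theta, R_min=1.0e12, R_max=1.0e17, n_pts=200000):
    """Total hydrogen column (cm^-2) from the WD out to R_max;
    an upper limit to the neutral column discussed in the text."""
    R = np.linspace(R_min, R_max, n_pts)
    n = density(R, theta)
    return np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(R))   # trapezoidal sum

for t_deg in (0.0, 90.0, 150.0, 175.0):
    print(t_deg, f"{total_column(np.radians(t_deg)):.2e}")
\end{verbatim}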
We thus conclude that in the case of LIN 358, the circumstellar material can not provide any notable attenuation of the white dwarf emission, and that the excess absorption (above the Galactic value) observed in the X-ray spectrum of the source is due to the ISM in the SMC. Given the low colour temperature of LIN 358, even a modest amount of absorption by the neutral gas in the Milky Way and SMC is sufficient to attenuate its emission by a large factor.
Although in LIN 358\ the white dwarf emission freely escapes the surrounding CSM, we did not detect any extended ($\gtrsim 1$ pc) ionized nebula around the source. Super-soft X-ray sources are expected to ionize the surrounding ISM and create a distinct H \textsc{II} region around them \citep{Rappaport94, Remillard95, Woods16}. The expected presence of such ionized nebulae around accreting WDs has been used to constrain their accretion history and their role in producing Type Ia supernovae \citep{Woods16, Woods17, Woods18, Graur19, Kuuttila19, Farias20}, but the ambient ISM density around LIN 358\ appears to be too low for a nebula to be detectable.
\subsection{LIN 358 in the context of the origin of SN Ia}
Our results suggest that the white dwarf in LIN 358\ is growing in mass with a rate of $\approx 4 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$. For this reason LIN 358\ and symbiotic systems in general have been considered as prospective SN Ia progenitors \citep[see][for a review]{Maoz14}. One of the well known problems of this scenario, limiting the possible contribution of symbiotic systems to SN Ia production, is the short lifetime of an AGB star, of the order of $\sim 10^5$ years \citep{Yungelson98}.
Indeed, even a WD accreting above the steady burning limit and growing its mass at the maximum rate, similar to LIN 358, would gain only $\sim 0.04$ M$_{\odot}$ in the course of the symbiotic phase. The WD would have to be initially very massive in order to reach the Chandrasekhar mass, but such massive WDs are expected to be ONeMg-rich, and these are thought to form a neutron star via accretion-induced collapse (AIC) rather than explode \citep{Nomoto84, Nomoto91}. The explosion and collapse mechanisms are, however, not yet fully understood, and recent simulations show that ONeMg-rich WDs can explode \citep[see e.g.][]{Marquardt15, Jones16, Jones19}.
Nevertheless, symbiotic binaries are thought of as possible progenitor candidates for some peculiar SNe Ia, especially those exhibiting signatures of interaction with hydrogen-rich circumstellar material \citep[SNe Ia-CSM; e.g. SN 2002ic][]{Hamuy03}. These supernovae exhibit strong early time H$\alpha$\ emission, consistent with the supernova ejecta interacting with dense circumstellar material \citep{Silverman13a}. In addition, SNe Ia-CSM show large H$\alpha$ / H$\beta$\ ratios \citep[e.g. $> 7$ for PTF11kx;][]{Silverman13b} caused by collisional excitation and Balmer self-absorption in a high density gas, similar to LIN 358\ and many other symbiotic binaries.
While an AIC is a more likely outcome of a symbiotic binary than a SN Ia \citep{Yungelson10}, given that the highest mass WDs formed are ONeMg-rich,
the formation of neutron stars via AIC from AGB+WD binaries faces the same issue as SNe Ia progenitors, which is the short lifetime of the AGB star. The number of AIC progenitors with an AGB donor in the Galaxy has recently been estimated to be $\sim 30$ by \citet{Ruiter19}, who assumed a Bondi--Hoyle--Lyttleton (BHL) wind description and a time-averaged accretion rate of $\sim 10^{-8}$ M$_{\odot}$ yr$^{-1}$. However, as shown here, the accretion rate can be much higher than that, meaning that AGB+WD binaries like LIN 358\ can form neutron stars more efficiently.
As mentioned above, however, symbiotic binaries can be difficult to detect, especially in X-rays, while in other wavelengths they can be difficult to separate from other astrophysical sources. For example, in the infrared the symbiotic binaries are dominated by emission from the AGB star and thus they can be difficult if not impossible to separate from e.g. single AGB stars. The best wavelength range to unambiguously detect symbiotic binaries may be the optical spectrum, because symbiotic binaries like LIN 358\ are often very bright in the He \textsc{ii} 4686 \AA\ emission line, which is a clear signature of an accreting white dwarf. This line, together with some possible high excitation state forbidden lines such as [Fe \textsc{x}] 6374 \AA, can be used to identify possible symbiotic candidates, but perhaps the most important identifiers are the Raman scattered O \textsc{vi} features at 6830 \AA\ and 7088 \AA . These features are observed almost exclusively in symbiotic binaries, and in the Milky Way the presence of the Raman scattered lines is confirmed in about 55\% of the symbiotic population; in the SMC the percentage is 92\% \citep{Akras19}. Future surveys focusing on these Raman features, like the RAMSES II search \citep{Angeloni19}, can shed more light on the true population of symbiotic binaries and therefore provide some constraints on the birthrates of SNe Ia and AICs from the symbiotic channel.
\section{Conclusions}
We have examined the properties of the SMC symbiotic binary LIN 358\ by comparing our optical spectroscopic observations with 2D radiative transfer simulations performed with the help of the \textsc{Cloudy} photoionization code. Comparing the results of our simulations and observations, we have determined the colour temperature of the WD to be $T = (2.22 \pm 0.03) \times 10^5$ K, its bolometric luminosity to be $L = (1.02 \pm 0.15) \times 10^{38}$ erg s$^{-1}$, and the mass-loss rate of the donor star to be $\dot{M}_{\mathrm{loss}} = 1.2 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$.
Assuming a solar H to He ratio in the wind material, a lower limit to the accreted mass fraction in LIN 358\ is 0.31.
We also determined the accretion rate onto the white dwarf to be $\dot{M}_{\mathrm{acc}} \approx 6 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$. These results indicate that the WD in LIN 358\ is accreting material at a rate above the stability strip of hydrogen fusion and may be losing a fraction of the accreted mass via a high velocity wind. At these high accretion rates the photosphere of the white dwarf expands to a fraction of the solar radius, thus explaining the low colour temperature of the white dwarf emission. We speculate that many symbiotic systems may be operating in this regime, which may explain the paucity of detected systems. For a mass of $\approx 1$ M$_{\odot}$, the white dwarf in LIN 358\ is growing its mass at the maximum possible rate of $\approx 4 \times 10^{-7}$ M$_{\odot}$ yr$^{-1}$. It is however unlikely that the white dwarf in LIN 358\ will ever reach the Chandrasekhar limit due to the short lifetime of the AGB donor star. Our calculations show that the circumstellar material in LIN 358\ is nearly completely ionized everywhere except for a narrow cone around the donor star with an opening angle of 30 degrees, and the radiation of the white dwarf freely escapes from the system. The low-energy absorption detected in the X-ray spectrum of this system is due to the neutral ISM in the Milky Way and in the SMC.
\section*{Acknowledgements}
IRS and AJR are supported by the Australian Research Council through grant numbers FT160100028 and FT170100243, respectively. MG acknowledges partial support by the RSF grant 14-22-00271. TEW acknowledges support from the NRC-Canada Plaskett fellowship.
\section*{Data Availability Statement}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
|
2,869,038,156,757 | arxiv | \section{Introduction}
Rumors may mislead people's judgments, affect people's lives, and cause adverse effects on society. For example, a mass of rumors about COVID-19 propagated on the Internet while COVID-19 was spreading globally. One such rumor, \emph{Drinking alcohol can kill the new coronavirus in the body}\footnote{https://time.com/5828047/methanol-poisoning-iran/}, indeed caused many people to drink large amounts of alcohol and even be hospitalized. Therefore, it is necessary to detect rumors at an early stage. With the development of deep learning, many researchers have tried to train deep learning models for social media rumor detection. The mainstream of deep learning is based on neural networks, although there are also some non-neural deep learning models \cite{b0}. In this paper, deep learning refers specifically to deep neural networks.
Deep learning has been successfully applied to many natural language processing (NLP) tasks. The general rumor detection task is essentially a binary text classification task \cite{b1}. With the rapid development of NLP technology, current models have achieved surprising performance on many rumor datasets. Under normal circumstances, it is difficult for humans to judge the authenticity of rumors, yet current deep learning models can easily reach nearly 90\% accuracy on a rumor dataset \cite{b2,b3,b4,b5,b6,b7}. Does this mean that the models have learned the real ability to detect rumors from the rumor dataset? Obviously, this question needs further research and confirmation, and high accuracy on a specific test set does not mean that the model has really learned to detect rumors.
In this work, we ask the question: do models learn to detect rumors? We subdivide this question into four sub-questions to study separately:
(1) Does performance on individual rumor datasets generalize to new datasets? (2) Can models detect common-sense rumors? (3) Are the predictions of models credible and consistent? (4) What do models learn from rumor datasets?
To answer these questions, we first evaluate the generalization ability of models to out-of-domain examples by fine-tuning BERT-based models on five real-world datasets and evaluating them against all five test sets. The experimental results indicate that model performance does not generalize well to other, unseen datasets. Second, we create a dataset of common-sense rumors to test the trained models and find that the models cannot effectively detect common-sense rumors. Third, we analyze some specific cases, and the results show that the predictions of the models are inconsistent; therefore, the predictions of models may be correct but not credible. Finally, we find that there are serious data pitfalls in the Twitter15 and Twitter16 datasets. Those pitfalls lead models to learn absurd knowledge and rules. Based on the experiments and research in this work, we make a number of recommendations on how to better create rumor datasets and evaluate rumor detection models at the end of this paper.
\section{Related Works}
Existing rumor detection works mainly focus on improving the performance of models on the test set. In contrast, our work focuses on the behavior of models and the real capabilities that models learn from the relevant datasets. Our work is inspired by related research and analysis of models for other NLP tasks. \cite{b8} showed through experiments that BERT's high performance on the Argument Reasoning Comprehension Task is entirely accounted for by the exploitation of spurious statistical cues in the dataset. \cite{b9} explored five QA datasets with six tasks and indicated that models did not learn to generalize well, remained suspiciously robust to wrong data, and failed to handle variations in questions. \cite{b10} found that QA deep learning models often ignore important question terms. \cite{b11} analyzed the behavior of visual question answering models. There are existing perturbation methods meant to evaluate specific behavioral capabilities of NLP models such as logical consistency \cite{b12} and robustness to noise \cite{b13}, name changes \cite{b14}, or adversaries \cite{b15}. Based on behavioral testing in software engineering, \cite{b16} proposed an NLP model testing tool, CheckList, which includes a matrix of general linguistic capabilities and test types.
\section{Datasets and Model}
\textbf{Datasets:} We use five rumor and fake news datasets in our experiments: Twitter15, Twitter16, PHEME, GossipCop, and PolitiFact. The size and label distribution of the datasets are shown in Table~\ref{t1}. In order to cross-test the datasets, we use only the original rumor text (the original tweet text or news headline) for all datasets, and keep only the two labels "true" and "false". Below we describe each dataset used in our experiments:
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.15}
\caption{Overview of the datasets used in this paper.}
\label{t1}
\setlength{\tabcolsep}{3mm}{
\begin{tabular}{|l|r|r|r|c|}
\hline
Datasets & \# True & \# False & \# Total & Label: false \% \\ \hline
Twitter15 & 372 & 370 & 742 & 49.87\% \\ \hline
Twitter16 & 205 & 205 & 410 & 50.00\% \\ \hline
PHEME & 3,830 & 1,972 & 5,802 & 33.98\% \\ \hline
GossipCop & 16,817 & 5,323 & 22,140 & 24.04\% \\ \hline
PolitiFact & 624 & 432 & 1,056 & 40.90\% \\ \hline
\end{tabular}}
\end{table}
Twitter15 and Twitter16: Two well-known datasets compiled by \cite{b17}. Each dataset contains a collection of source tweets, along with their corresponding sequences of retweet users. We use only the source tweet text, with the "true" and "false" labels as the ground truth. Note that the ultimate goal of rumor detection is to detect rumors effectively at an early stage of their spread, when a rumor is usually only a short piece of text (no user comments, no reposting information). Therefore, our work is carried out in this scenario.
PHEME\cite{b19}:
This dataset contains a collection of Twitter rumors and non-rumors posted during breaking news. The five breaking news provided with the dataset are as follows:
(1) Charlie Hebdo
(2) Ferguson
(3) Germanwings Crash
(4) Ottawa Shooting
(5) Sydney Siege.
GossipCop and PolitiFact:
The GossipCop and PolitiFact datasets contain fake and real news collected from the fact-checking websites \emph{GossipCop.com} and \emph{PolitiFact.com}, respectively \cite{b20}. In order to keep the text length similar to that of the other datasets, we use only the news headlines as training and test data in the experiments.
\textbf{Model}: Note that the purpose of this paper is not to achieve high accuracy on these datasets. In order to facilitate experiments and unify standards, we choose a typical deep learning model, BERT \cite{b21}, which takes advantage of the self-attention mechanism and pre-training on a large-scale corpus to learn general language knowledge, and achieves state-of-the-art results in many NLP tasks. In this work, all models are initialized from a pre-trained BERT-Base uncased model with 110M parameters. Moreover, each dataset is divided into a 70\% training set and a 30\% test set. The hyperparameters for fine-tuning the models are: batch size = 64; learning rate = 1e-5; epochs = 8; max sequence length = 50; hidden size = 768. The process of using BERT to predict rumors is shown in Fig. \ref{f1}: we feed the model a rumor text, and the model outputs a true or false label.
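A minimal fine-tuning sketch using the HuggingFace \texttt{transformers} library is given below for reference; it is our own illustrative implementation rather than the exact training script, and the placeholder texts, labels and label convention are assumptions that merely indicate the expected input format.
\begin{verbatim}
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizer, BertForSequenceClassification

MAX_LEN, BATCH_SIZE, LR, EPOCHS = 50, 64, 1e-5, 8

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Placeholder data; the label convention (0 = true, 1 = false) is ours
train_texts = ["example source tweet 1", "example source tweet 2"]
train_labels = [0, 1]

enc = tokenizer(train_texts, truncation=True, padding="max_length",
                max_length=MAX_LEN, return_tensors="pt")
dataset = list(zip(enc["input_ids"], enc["attention_mask"],
                   torch.tensor(train_labels)))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=LR)

model.train()
for _ in range(EPOCHS):
    for input_ids, attention_mask, labels in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids.to(device),
                    attention_mask=attention_mask.to(device),
                    labels=labels.to(device))
        out.loss.backward()
        optimizer.step()
\end{verbatim}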
\begin{figure}[t]
\centering
\includegraphics[width=0.90\linewidth]{f1.png}
\caption{ Detecting a rumor with BERT. Feed a rumor text to the model to learn the representation vector. The final [CLS] vector is then
passed to a linear layer to predict the label: True or False. }
\label{f1}
\end{figure}
\begin{table}[]
\centering
\caption{F1-score of each fine-tuned model evaluated on each test set. The model-on-self baseline is highlighted in \textbf{bold}.}
\label{t2}
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{0.9mm}{
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{5}{c|}{Evaluated on} \\ \cline{3-7}
\multicolumn{2}{|c|}{} & Twitter15 & Twitter16 & PHEME & GossipCop & PolitiFact \\ \hline
\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Trained\\ on\end{tabular}} & Twitter15 & \textbf{89.56}& 60.59&38.53&50.34&43.44 \\ \cline{2-7}
& Twitter16 & 47.74&\textbf{91.13}&40.54&24.19&28.85 \\\cline{2-7}
& PHEME & 45.65&25.81&\textbf{84.34}&50.85&56.32 \\\cline{2-7}
& GossipCop & 43.43&47.06&44.29&\textbf{81.57}&57.86 \\\cline{2-7}
& PolitiFact & 37.31&32.67&47.48&37.46&\textbf{86.41}\\ \hline
\end{tabular}
}
\end{table}
\section{Do Models Learn to Detect Rumors?}
\subsection{Does Performance on Individual Rumor Datasets Generalize to New Datasets?}
For our first experiment, we evaluate the generalizability of models to out-of-domain examples. At present, the vast majority of deep learning works are tested on only a single test set, which provides only limited insight into a model's generalization ability. However, generalization ability is important when a model is applied to real-world tasks.
We test generalizability by fine-tuning BERT-based models on each dataset and evaluating against all five test sets. Datasets such as PHEME and GossipCop are unbalanced. Therefore, we choose the more informative F1-score to measure the performance of the models, and the results are reported in Table~\ref{t2}. The rows show a single model’s performance across all five datasets, and the columns show the performance of all the models on a single test set. The model-on-self baseline is indicated in bold.
All models show a large drop in performance when evaluated on an out-of-domain test set; in many cases the performance is even worse than random prediction. This shows that a model's rumor detection performance on an individual dataset does not generalize across datasets. However, the generalization ability of each model is different. The model trained on Twitter15 has an F1-score of 60.59\% when tested on Twitter16. Similarly, the model trained on Twitter16 achieves an F1-score of 47.74\% when tested on Twitter15, which is better than the models trained on other datasets. One possible reason is that the fields and topics of the rumors in the Twitter15 and Twitter16 datasets are relatively similar. The model trained on GossipCop generalizes best among the five models. Note that GossipCop and PHEME are the largest and second largest of the five datasets, and the generalization ability of the models trained on these two datasets is also the first and second best. These results suggest that more training data can improve the generalization ability of a model.
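For reference, the cross-evaluation loop can be written as in the following sketch; the dictionaries \texttt{models} and \texttt{test\_sets}, the helper function and the label convention are our own illustrative assumptions, and the fine-tuned models are assumed to have been prepared as in the previous sketch.
\begin{verbatim}
import torch
from sklearn.metrics import f1_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
models = {}      # e.g. {"Twitter15": (model, tokenizer), ...}
test_sets = {}   # e.g. {"Twitter15": (list_of_texts, list_of_labels), ...}

@torch.no_grad()
def evaluate_f1(model, tokenizer, texts, labels, max_len=50):
    """F1-score of one fine-tuned model on one test set."""
    model.eval()
    preds = []
    for i in range(0, len(texts), 64):
        enc = tokenizer(texts[i:i + 64], truncation=True, padding=True,
                        max_length=max_len, return_tensors="pt").to(device)
        preds.extend(model(**enc).logits.argmax(dim=-1).cpu().tolist())
    return f1_score(labels, preds)   # pos_label=1 ("false" in our convention)

for trained_on, (model, tokenizer) in models.items():
    for tested_on, (texts, labels) in test_sets.items():
        f1 = evaluate_f1(model, tokenizer, texts, labels)
        print(f"trained on {trained_on}, tested on {tested_on}: F1 = {f1:.4f}")
\end{verbatim}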
\begin{table*}[h]
\centering
\renewcommand\arraystretch{1.1}
\caption{Examples from the common-sense rumors dataset. There are a total of 200 samples, with 100 rumors and 100 non-rumors whose contents correspond to each other.}
\label{t3}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{|c|l|l|} \hline
\# num& Rumors (\# 100) & Non-rumors (\# 100) \\ \hline
1&The pet dog next door gave birth to a cat! & Dogs can only give birth to dogs, cats can only give birth to cats. \\ \hline
2&Human blood is generally blue. & Human blood is generally red. \\ \hline
3&In nature, penguins live in the Africa. & In nature, penguins live in Antarctica. \\ \hline
4&Signals from 5G signal towers can spread COVID-19. & The 5G signal tower does not spread COVID-19. \\ \hline
\end{tabular}}
\end{table*}
\subsection{Can Models Detect Common-Sense Rumors?}
From the experimental results in the previous section, the models cannot generalize their high performance to out-of-domain data. We wondered whether the reason is that the rumors in these datasets are difficult to detect, making it difficult for the models to learn this ability. Therefore, we create a common-sense rumor dataset to verify the models' ability to detect simple rumors. The rumors in this dataset are easy for humans to distinguish as true or false. Two Ph.D. students and two master students manually collected and constructed more than 500 common-sense rumors and corresponding non-rumors, and we then invited 10 people aged 16--40 (5 males and 5 females) to classify them. We deleted the items misjudged by more than two people, and finally kept 200 samples in our common-sense rumor dataset. Examples of common-sense rumors are shown in Table~\ref{t3}.
The common-sense rumors are used to evaluate whether the above models have the ability to detect common-sense rumors. The performance of those models on the original rumor test set and the common-sense rumor test set is shown in Table~\ref{t4}. It can be observed that the accuracy of all models on common-sense rumors is about 50\%, which is basically equivalent to guessing. We further check the \emph{precision}, \emph{recall} and \emph{f1-score} of the models, and find that although the accuracies of the models are similar on the common-sense dataset, the other metrics of each model are different. The model fine-tuned on Twitter16 has a recall of 98\% in the common-sense rumor test, whereas the \emph{precision}, \emph{recall} and \emph{f1-score} of the models fine-tuned on PHEME and GossipCop are all very low. This result shows that the model fine-tuned on Twitter16 is more inclined to predict that samples are false, while the models fine-tuned on PHEME and GossipCop are more inclined to predict that samples are true.
The models in Table~\ref{t4} performed very well on the original test sets, with an average accuracy of about 87\%. But is the ability of these models to detect rumors really good? The above experimental results show that these models not only cannot generalize their high performance to other rumor test sets, but also basically do not have the ability to detect common-sense rumors.
\begin{table}[h]
\centering
\renewcommand\arraystretch{1.1}
\caption{The performance of the models on the original rumor test set and the common-sense rumor test set. Precision, Recall and F1-score are computed with respect to the label False.}
\label{t4}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{|c|cccc|cccc|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Bert\\ fine-tuned on\end{tabular}} & \multicolumn{4}{c|}{\begin{tabular}[c]{@{}c@{}}Test on\\ original rumors\end{tabular}} & \multicolumn{4}{c|}{\begin{tabular}[c]{@{}c@{}}Test on\\ common-sense rumors\end{tabular}} \\ \cline{2-9}
& Acc & Pre & Recall & F1 & Acc & Pre & Recall & F1 \\ \hline
Twitter15 & 89.69& 88.89 & 88.00 & 88.44 & 48.00 & 48.63 & 71.00 & 57.72 \\ \hline
Twitter16 & 91.13 & 91.94 & 90.48 & 91.20 & 49.00 & 49.49 & 98.00 & 65.77 \\ \hline
PHEME & 85.22 & 79.77 & 77.63 & 78.68 & 49.50 & 21.23 & 15.32 & 08.66 \\ \hline
GossipCop &87.87 & 78.51 & 68.89 & 73.36 & 48.50& 18.34 & 10.12 & 06.52 \\ \hline
PolitiFact & 87.03& 84.55 & 82.54 & 83.53 & 52.00 & 51.30 & 79.00 & 62.20 \\ \hline
\end{tabular}}
\end{table}
\subsection{Are the Predictions of Models Credible and Consistent?}
We feed a rumor text to a model, and the model outputs a label “true” or “false”. The prediction results of a model may be correct, but the results are not necessarily credible, because almost all deep learning models are black boxes. In order to understand the ability of the models to detect rumors more clearly, the prediction results for each sample are shown in Table~\ref{t5}. We can see that the prediction results of many models are inconsistent. For example, the models PH and GC predict that “\emph{Dogs can only give birth to dogs, cats can only give birth to cats.}” is true, but they also believe that “\emph{The pet dog next door gave birth to a cat!}” is true. In addition, models T15 and T16 predict that the label of “\emph{Human blood is generally blue.}” is false, and it seems that the models are correct; however, they predict that the label of “\emph{Human blood is generally red.}” is also false. Obviously, the labels of some sample pairs cannot be the same. These examples clearly show that the prediction results of the models are not necessarily credible: the models just output the label “true” or “false” according to spurious rules they have learned. Such a model is like a “mouth” without a “brain”, producing output without thinking.
\begin{figure}[t]
\centering
\includegraphics[width=0.92\linewidth]{f11.png}
\caption{Comparison of standard evaluation method and the proposed PairT evaluation method. Standard evaluation accuracy = 5/8 = 62.5\%; PairT evaluation accuracy = 1/4 = 25\%.}
\label{f2}
\end{figure}
\begin{table*}[]
\centering
\renewcommand\arraystretch{1.1}
\caption{Case analysis of the models' prediction results. The models trained on the five datasets Twitter15, Twitter16, PHEME, GossipCop, and PolitiFact are referred to as T15, T16, PH, GC, and PF, respectively. The test samples come from the common-sense rumors dataset. The correct prediction results are \underline{underlined}.}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{5}{c|}{Prediction of models} \\ \cline{2-6}
\multicolumn{1}{|c|}{\multirow{-2}{*}{Test samples}} & T15 & T16 & PH & GC & PF \\ \hline
The pet dog next door gave birth to a cat! (False) & True & \underline{False} & True & True & \underline{False} \\
Dogs can only give birth to dogs, cats can only give birth to cats. (True) & False & False & \underline{True} & \underline{True} & False \\ \hline
Human blood is generally blue. (False) & \underline{False} & \underline{False} & True & True & True \\
Human blood is generally red. (True) & False & False & \underline{True} & \underline{True} & \underline{True} \\ \hline
In nature, penguins live in the Africa. (False) & \underline{False} & \underline{False} & True & True & \underline{False} \\
In nature, penguins live in Antarctica. (True) & \underline{True} & False & \underline{True} & \underline{True} & False \\ \hline
Signals from 5G signal towers can spread COVID-19. (False) & True & \underline{False} & True & True & \underline{False} \\
The 5G signal tower does not spread COVID-19. (True) & \underline{True} & False & \underline{True} & \underline{True} & False \\ \hline
\end{tabular}}
\label{t5}
\end{table*}
Model misjudgment may cause unintended consequences, so we need to set a higher bar for rumor detection models. The results in Table~\ref{t5} indicate that a model predicting a sentence to be a rumor does not mean that the model really understands the meaning of the sentence. For example, a model may think that the sentence "\emph{Dogs can only give birth to dogs, cats can only give birth to cats.}" is true, but the model does not understand or learn the true meaning of this sentence. Therefore, the model mistakenly believes that the rumor "\emph{The pet dog next door gave birth to a cat!}" is also true.
In order to evaluate the performance of rumor detection models more realistically, we propose a new evaluation method called the paired test (\textbf{PairT}). In this evaluation method, models are tested on samples in pairs such as [A \& A$’$], where A is the original rumor text and A$’$ is a new text created manually. The important point is that the hidden knowledge and label contained in sample A are opposed to those of sample A$’$, and the model needs to predict both A and A$’$ correctly at the same time. If a model predicts that the labels of A and A$’$ are the same, the model is unreliable, because its predictions are inconsistent. As shown in Fig.~\ref{f2}, there are eight test samples, that is, four pairs A = [\#1 \& \#2], B = [\#3 \& \#4], C = [\#5 \& \#6], D = [\#7 \& \#8]. Assuming that a model predicts the five samples \#1, \#2, \#4, \#5 and \#8 correctly, and the three samples \#3, \#6 and \#7 incorrectly, the accuracy calculated according to the standard evaluation method is 62.5\%. But according to our new evaluation method \textbf{PairT}, only one pair, Pair A = [\#1 \& \#2], has been predicted correctly, so the accuracy of the model is 25\%. We hope that the outputs of a model will not contradict each other, and that the model makes inferences based on the true knowledge it has learned as much as possible, rather than on ridiculous shortcut rules. This poses a higher challenge to models, so that they can be better applied in the real world.
\begin{table}[]
\centering
\renewcommand\arraystretch{1.1}
\caption{By calculating the s and b of each word, we found out the words “Obama”, “Paul” and “Sydney” that satisfy $s\geq s_{min} = 0.8, b\geq b_{min} = 5\%$.}
\label{t6}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{Twitter15} & \multicolumn{2}{c|}{Twitter16} \\ \cline{2-5}
& Obama & Paul & Obama & Sydney \\ \hline
Strength $s$ & 0.95 & 0.92 & 0.96 & 0.89 \\ \hline
Breadth $b$ & 5.4\% & 11.2\% & 5.0\% & 11.7\% \\ \hline
\end{tabular}}
\end{table}
\sethlcolor{yellow}
\begin{table*}[]
\centering
\renewcommand\arraystretch{1.03}
\caption{Original and Adversarial data. The label of the adversarial data relative to the original data must be changed. The place where the text has changed is \hl{highlighted}.}
\label{t7}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|m{6cm}|l|m{6cm}|l|}
\hline
\multicolumn{2}{|c|}{Original} & \multicolumn{2}{c|}{Adversarial} \\ \hline
\multicolumn{1}{|c|}{Text} & label & \multicolumn{1}{c|}{Text} & label \\ \hline
Obama \hl{says}, “America doesn't want any stay-at-home-moms! Enough is enough!” & False & Obama \hl{does not say}, “America doesn't want any stay-at-home-moms! Enough is enough!” & True \\ \hline
r.i.p to the driver that died with Paul that \hl{no one cares} about because he \hl{wasn't} famous. & True & r.i.p to the driver that died with Paul that \hl{many poeple care} about because he \hl{was} famous. & False \\ \hline
Hostage situation erupts in Sydney cafe, Australian prime minister says it \hl{may be} “politically motivated” & True & Hostage situation erupts in Sydney cafe, Australian prime minister says it \hl{could not be} “politically motivated” & False \\ \hline
\end{tabular}}
\end{table*}
\begin{table}[h]
\renewcommand\arraystretch{1.1}
\caption{Results for BERT on the adversarial test. O, P and S stand for “Obama”, “Paul” and “Sydney” respectively. The dataset Twitter15 contains the words “Obama” and “Paul”; the dataset Twitter16 contains the words “Obama” and “Sydney”. Declining accuracy are highlighted in \textbf{bold}.}
\label{t8}
\centering
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|m{3.5cm}|c|c|}
\hline
& Twitter15 & Twitter16 \\ \hline
Original test set & 89.69\% & 91.13\% \\ \hline
Adversarial (O) & 84.75\% & 81.45\% \\ \hline
Adversarial (O \& P or S) & 73.09\% & 68.55\% \\ \hline
Declining accuracy & \textbf{16.60\%} & \textbf{22.58\%} \\ \hline
\end{tabular}}
\end{table}
\subsection{What Does Model Learn from Rumor Datasets?}
The above experimental results indicate that the models do not learn to detect rumors well. In order to know what the model has learned, we use the word-level attention mechanism to analyze the words that models mainly focus and breadth $b$ of its own signal, where the strength $s$ is the average attention weight of the word, and the breadth $b$ is the proportion of the data containing the word to all the data. In order to find the clue words that have a serious impact on the model, we set the thresholds of $s$ and $b$ to $s_{min} = 0.8, b_{min} = 5\%$. Finally, three clue words were found on Twitter15 and Twitter16, as shown in Table~\ref{t6}. A single word occupies about 90\% of the attention weight in a sentence, which is obviously unreasonable. After checking the original datasets, we found that the reason is that the distribution of these words is very uneven. For example, the word “Obama” almost only appears in the samples whose label is “False”. Therefore, the model trained on these two datasets is likely to take shortcuts to achieve high performance.
For the model trained on Twitter15, it tends to predict “false” when a test sentence contains the word “Obama”; In contrast, the model tends to predict “true” while a test sentence includes the phrase “Paul”. And for the model trained on Twitter16, it also tends to predict “false” when a test sentence contains the word “Obama”; In contrast, the model tends to predict “true” while a test sentence contains the word “Sydney”. This phenomenon is obviously illogical and unreasonable. In order to verify this finding, an adversarial dataset is established. We reverse the meaning of the data containing “Obama”, “Paul” and “Sydney” in the Twitter15 and Twitter16 datasets, and change the label of the data. The adversarial data is shown in Table~\ref{t7}. We test the performance of the models on the adversarial dataset, and the experimental results are shown in Table~\ref{t8}. When only the data containing “Obama” is modified, the accuracy of the model on the two datasets is 84.75\% and 81.45\%, which is about 5 and 10\% lower than the original accuracy, respectively. By further modifying the sentences containing “Paul” and “Sydney”, the accuracy of the model drops to 73.09\% and 68.55\%, respectively, and the accuracy drops by 16.60\% and 22.58\% compared to the original accuracy, respectively. Those accuracy drops are huge and alarming, because the words “Obama”, “Paul” and “Sydney” have great strength $s$ and breadth $b$ at the same time. Just because of the uneven distribution of a word, the results of the model will be severely affected. That is a cautionary tale for us that we need to be more cautious when creating a rumor dataset.
In addition, we analyzed three specific cases to point out the absurd knowledge and rules learned by the model. As we can see from Fig.~\ref{fig:2}, the label predicted by the model for the sentence “\emph{Human blood is generally blue.}” is False. If the sentence is changed to “\emph{Human blood is generally blue in Sydney.}”, the label predicted by the model will become True, which is obviously unreasonable. The model will output the label “True” as long as it sees the word “Sydney”. Similarly, we change the sentence “\emph{Some people can live forever and be young and healthy forever.}” to “\emph{Paul Walker can live forever and be young and healthy forever.}”, the model prediction result will also change from “False” to “True”. In short, if the text contains “Sydney” or “Paul”, the model will predict “True” with a high probability. On the contrary, if the text includes “Obama”, the model will output “False” with a high probability. The root cause of the model taking shortcuts or deception is that those two datasets, Twitter15 and Twitter16, left data pitfalls in the collection and creation process. There are two main reasons for serious data pitfalls. First, the range of events when collecting data is small, which leads to high coverage of certain (“Obama”, “Paul”, “Sydney”) words in the data set in all data. Second, the data containing these words (“Obama”, “Paul”, “Sydney”) is seriously unbalanced. For example, in the process of data collection, more than 90\% of the data containing “Obama” are labeled as false. In the data collection process, these issues are not difficult to avoid.
\begin{figure}[]
\centering
\includegraphics[width=0.95\linewidth]{f22.png}
\caption{Misleading the model to make wrong predictions through spurious clues. The test model is Bert fine-tuned on dataset Twitter15, and the text modification is highlighted.}
\label{fig:2}
\end{figure}
\section{Conclusion}
In this work, we conducted a series of experiments using five public datasets and the pre-training model BERT, and found that the seemingly high-performance models do not really learn to detect rumors. When dealing with a task, deep learning models are actually more likely to take shortcuts and cheat than humans. The good performance of a model may be because it learns some hidden clues and simple rules in the dataset. The shortcomings in datasets and evaluation methods make it difficult to judge whether models learn to detect rumors or not. Based on our work, we offer the following recommendations to researchers who create rumor datasets or train and evaluate rumor detection models:
\textbf{Do not just focus on the accuracy, precision, recall and f1-score of models.} Training a rumor detection model should focus not only on improving the accuracy, precision, recall and f1-score of the model but also on the behavior of the model, the credibility of model predictions, and the interpretability of model results.
\textbf{Test out-of-domain examples for the generalization ability of models.} A model has better generalization ability, and its application in the real world will be more valuable. A new rumor detection model should evaluate the performance across multiple related datasets.
\textbf{Challenge the models and use stricter evaluation criteria for them.} Evaluating on some easy test sets will exaggerate our judgments about the real capabilities learned by models. Rumor detection models can be evaluated using our proposed new evaluation method PairT.
\textbf{Create datasets carefully to avoid data pitfalls and ensure that models do not take shortcuts and cheat.} When creating a new rumor dataset, prevent a large number of unbalanced data clues, such as “Obama”, “Paul” and “Sydney” found in the paper. Try to eliminate such data pitfalls to avoid model taking shortcuts and cheat.
\section*{Acknowledgment}
This work was funded in part by Qualcomm through a Taiwan University Research Collaboration Project and in part by the Ministry of Science and Technology, Taiwan, under grant MOST 110-2221-E-006-001 and NCKU B109-K027D. We thank to National Center for High-performance Computing (NCHC) for providing computational and storage resources.
\section{Introduction}
Rumors may mislead people's judgments, affect people's lives, and cause adverse effects on society. For example, a mass of rumors about COVID-19 propagated on the Internet while the virus was spreading globally. One such rumor claimed that \emph{drinking alcohol can kill the new coronavirus in the body}\footnote{https://time.com/5828047/methanol-poisoning-iran/}, and it caused many people to drink large amounts of alcohol and even be hospitalized. It is therefore necessary to detect rumors at an early stage. With the development of deep learning, many researchers have trained deep learning models for social media rumor detection. The mainstream of deep learning is based on neural networks, although some non-neural deep learning models also exist \cite{b0}. In this paper, deep learning refers specifically to deep neural networks.
Deep learning has been successfully applied to many natural language processing (NLP) tasks. The general rumor detection task is essentially a binary text classification task \cite{b1}. With the rapid development of NLP technology, current models achieve surprising performance on many rumor datasets. Under normal circumstances it is difficult for humans to judge the authenticity of rumors, yet current deep learning models can easily reach nearly 90\% accuracy on a rumor dataset \cite{b2,b3,b4,b5,b6,b7}. Does this mean that the models have learned the real ability to detect rumors from the rumor dataset? This question needs further research and confirmation, because high accuracy on a specific test set does not mean that a model has really learned to detect rumors.
In this work, we ask the question: do models learn to detect rumors? We subdivide this question into four sub-questions, studied separately:
(1) Does performance on individual rumor datasets generalize to new datasets? (2) Can models detect common-sense rumors? (3) Are the predictions of the model credible and consistent? (4) What do models learn from rumor datasets?
To answer these questions, we first evaluate the generalization ability of models to out-of-domain examples by fine-tuning BERT-based models on five real-world datasets and evaluating each against all five test sets. The experimental results indicate that model performance does not generalize well to unseen datasets. Second, we create a dataset of common-sense rumors to test the trained models and find that the models cannot effectively detect common-sense rumors. Third, we analyze some specific cases, and the results show that the predictions of the models are inconsistent; the predictions of models may therefore be correct but not credible. Finally, we find that there are serious data pitfalls in the Twitter15 and Twitter16 datasets, and these pitfalls lead models to learn absurd knowledge and rules. Based on the experiments and research in this work, we make a number of recommendations on how to better create rumor datasets and evaluate rumor detection models at the end of this paper.
\section{Related Works}
Existing rumor detection works mainly focus on improving model performance on the test set. In contrast, our work focuses on the behavior of models and the real capabilities that models learn from the relevant datasets. Our work is inspired by related research and analysis of models for other NLP tasks. \cite{b8} showed through experiments that the high performance BERT achieved on the Argument Reasoning Comprehension Task is entirely accounted for by exploitation of spurious statistical cues in the dataset. \cite{b9} explored five QA datasets with six tasks and indicated that models did not learn to generalize well, remained suspiciously robust to wrong data, and failed to handle variations in questions. \cite{b10} found that QA deep learning models often ignore important question terms. \cite{b11} analyzed the behavior of visual question answering models. There are existing perturbation methods meant to evaluate specific behavioral capabilities of NLP models, such as logical consistency \cite{b12} and robustness to noise \cite{b13}, name changes \cite{b14}, or adversaries \cite{b15}. Based on behavioral testing in software engineering, \cite{b16} proposed an NLP model testing tool, CheckList, which includes a matrix of general linguistic capabilities and test types.
\section{Datasets and Model}
\textbf{Datasets:} We use five rumor or fake news datasets in our experiments: Twitter15, Twitter16, PHEME, GossipCop, and PolitiFact. The size and label distribution of the datasets are shown in Table~\ref{t1}. In order to cross-test the datasets, we use only the original rumor text (original tweet text or news headline) for all datasets, and keep only the two labels “true” and “false”. Below we describe each dataset used in our experiments:
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.15}
\caption{Overview of the datasets used in this paper.}
\label{t1}
\setlength{\tabcolsep}{3mm}{
\begin{tabular}{|l|r|r|r|c|}
\hline
Datasets & \# True & \# False & \# Total & Label: false \% \\ \hline
Twitter15 & 372 & 370 & 742 & 49.87\% \\ \hline
Twitter16 & 205 & 205 & 410 & 50.00\% \\ \hline
PHEME & 3,830 & 1,972 & 5,802 & 33.98\% \\ \hline
GossipCop & 16,817 & 5,323 & 22,140 & 24.04\% \\ \hline
PolitiFact & 624 & 432 & 1,056 & 40.90\% \\ \hline
\end{tabular}}
\end{table}
Twitter15 and Twitter16: Two well-known datasets compiled by \cite{b17}. Each dataset contains a collection of source tweets, along with their corresponding sequences of retweet users. We use only the source tweet text, with the “true” and “false” labels as ground truth. Note that the ultimate goal of rumor detection is to detect rumors effectively in the early stages of their spread, when a rumor is usually only a short paragraph of text (no user comments, no reposting information). Our work is therefore carried out in this early-detection scenario.
PHEME \cite{b19}:
This dataset contains a collection of Twitter rumors and non-rumors posted during breaking news. The five breaking news events covered by the dataset are as follows:
(1) Charlie Hebdo
(2) Ferguson
(3) Germanwings Crash
(4) Ottawa Shooting
(5) Sydney Siege.
GossipCop and PolitiFact:
The GossipCop and PolitiFact datasets contain fake news and real news collected from the fact-checking websites \emph{GossipCop.com} and \emph{PolitiFact.com}, respectively \cite{b20}. To keep the text length similar to the other datasets, we use only news headlines as training and test data in the experiments.
\textbf{Model}: Note that the purpose of this paper is not to achieve high accuracy on these datasets. To facilitate experiments and unify standards, we choose a typical deep learning model, BERT \cite{b21}, which takes advantage of the self-attention mechanism and pre-training on a large-scale corpus to learn general language knowledge, and which achieves state-of-the-art results in many NLP tasks. In this work, all models are initialized from a pre-trained BERT-Base uncased model with 110M parameters. Each dataset is divided into a 70\% training set and a 30\% test set. The hyperparameters for fine-tuning the models are: batch size = 64; learning rate = 1e-5; epochs = 8; maximum sequence length = 50; hidden size = 768. The process of using BERT to predict rumors is shown in Fig.~\ref{f1}: we feed the model a rumor text, and the model outputs a true or false label.
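For concreteness, a minimal sketch of this fine-tuning setup is given below. It assumes the PyTorch and HuggingFace \texttt{transformers} libraries; the function names and data handling are illustrative and do not reproduce our exact experimental code.
\begin{verbatim}
# Minimal sketch (assumed setup): fine-tune BERT-Base uncased for binary
# rumor classification with the hyperparameters reported above.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def make_loader(texts, labels, batch_size=64, max_len=50):
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                            torch.tensor(labels))
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

def fine_tune(train_texts, train_labels, epochs=8, lr=1e-5):
    # Fresh BERT-Base model with a two-way classification head (true/false).
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    loader = make_loader(train_texts, train_labels)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            out = model(input_ids=input_ids,
                        attention_mask=attention_mask, labels=labels)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
\end{verbatim}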
\begin{figure}[t]
\centering
\includegraphics[width=0.90\linewidth]{f1.png}
\caption{Detecting a rumor with BERT. We feed a rumor text to the model to learn the representation vector. The final [CLS] vector is then passed to a linear layer to predict the label: True or False.}
\label{f1}
\end{figure}
\begin{table}[]
\centering
\caption{F1-score of each fine-tuned model evaluated on each test set. The model-on-self baseline is highlighted in \textbf{bold}.}
\label{t2}
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{0.9mm}{
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{5}{c|}{Evaluated on} \\ \cline{3-7}
\multicolumn{2}{|c|}{} & Twitter15 & Twitter16 & PHEME & GossipCop & PolitiFact \\ \hline
\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Trained\\ on\end{tabular}} & Twitter15 & \textbf{89.56}& 60.59&38.53&50.34&43.44 \\ \cline{2-7}
& Twitter16 & 47.74&\textbf{91.13}&40.54&24.19&28.85 \\\cline{2-7}
& PHEME & 45.65&25.81&\textbf{84.34}&50.85&56.32 \\\cline{2-7}
& GossipCop & 43.43&47.06&44.29&\textbf{81.57}&57.86 \\\cline{2-7}
& PolitiFact & 37.31&32.67&47.48&37.46&\textbf{86.41}\\ \hline
\end{tabular}
}
\end{table}
\section{Do Models Learn to Detect Rumors?}
\subsection{Does Performance on Individual Rumor Datasets Generalize to New Datasets?}
For our first experiment, we evaluate the generalizability of models to out-of-domain examples. At present, the vast majority of deep learning works are tested only on a single test set, which gives little insight into a model's generalization ability. However, generalization ability is important when a model is applied to real-world tasks.
We test generalizability by fine-tuning BERT-based models on each dataset and evaluating them against all five test sets. Because datasets such as PHEME and GossipCop are unbalanced, we choose the more informative F1-score to measure the performance of the models; the results are reported in Table~\ref{t2}. The rows show a single model's performance across all five datasets, and the columns show the performance of all the models on a single test set. The model-on-self baseline is indicated in bold.
All models suffer a large drop in performance when evaluated on an out-of-domain test set; in some cases the performance on out-of-domain data is even worse than random prediction. This shows that a model's rumor detection performance on an individual dataset does not generalize across datasets. However, the generalization ability of each model differs. The model trained on Twitter15 has an F1-score of 60.59\% when tested on Twitter16. Similarly, the model trained on Twitter16 achieves an F1-score of 47.74\% when tested on Twitter15, better than the models trained on the other datasets. One possible reason is that the fields and topics of the rumors in the Twitter15 and Twitter16 datasets are relatively similar. The model trained on GossipCop has the best generalization ability among the five models. Note that GossipCop and PHEME are the first and second largest of the five datasets, and the models trained on these two datasets also rank first and second in generalization ability. These results suggest that more training data can improve the generalization ability of a model.
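The cross-evaluation procedure itself is straightforward. The sketch below outlines it, assuming the \texttt{fine\_tune} routine sketched earlier and scikit-learn for the F1-score; the \texttt{predict} helper and data layout are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: train-on-one, test-on-all F1 matrix.
# `datasets` maps a name to (train_texts, train_labels, test_texts, test_labels).
import torch
from sklearn.metrics import f1_score

def predict(model, tokenizer, texts, max_len=50):
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return logits.argmax(dim=-1).tolist()

def cross_f1_matrix(datasets, fine_tune, tokenizer):
    matrix = {}
    for train_name, (tr_x, tr_y, _, _) in datasets.items():
        model = fine_tune(tr_x, tr_y)            # fine-tune on one dataset
        model.eval()
        for test_name, (_, _, te_x, te_y) in datasets.items():
            preds = predict(model, tokenizer, te_x)   # evaluate on every test set
            matrix[(train_name, test_name)] = f1_score(te_y, preds)
    return matrix
\end{verbatim}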
\begin{table*}[h]
\centering
\renewcommand\arraystretch{1.1}
\caption{Examples from the common-sense rumor dataset. There are 200 samples in total, 100 rumors and 100 non-rumors, with the content of each rumor corresponding to a non-rumor.}
\label{t3}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{|c|l|l|} \hline
\# num& Rumors (\# 100) & Non-rumors (\# 100) \\ \hline
1&The pet dog next door gave birth to a cat! & Dogs can only give birth to dogs, cats can only give birth to cats. \\ \hline
2&Human blood is generally blue. & Human blood is generally red. \\ \hline
3&In nature, penguins live in Africa. & In nature, penguins live in Antarctica. \\ \hline
4&Signals from 5G signal towers can spread COVID-19. & The 5G signal tower does not spread COVID-19. \\ \hline
\end{tabular}}
\end{table*}
\subsection{Can Models Detect Common-Sense Rumors?}
The experimental results in the previous section show that the models cannot generalize their high performance to out-of-domain data. We wondered whether the reason is that the rumors in these datasets are difficult to detect, making it difficult for the models to learn this ability. Therefore, we create a common-sense rumor dataset to verify the models' ability to detect simple rumors. The rumors in this dataset are easy for humans to classify as true or false. Two Ph.D. students and two master's students manually collected and wrote more than 500 common-sense rumors and corresponding non-rumors, and we then invited 10 people aged 16-40 (5 males and 5 females) to classify them. We deleted the items misjudged by more than two people, and finally kept 200 items in our common-sense rumor dataset. Examples of common-sense rumors are shown in Table~\ref{t3}.
The common-sense rumors are used to evaluate whether the above models can detect simple rumors. The performance of the models on the original rumor test sets and the common-sense rumor test set is shown in Table~\ref{t4}. The accuracy of all models on common-sense rumors is about 50\%, essentially equivalent to guessing. We further examine the \emph{precision}, \emph{recall} and \emph{F1-score} of the models and find that, although the accuracies of the models on the common-sense dataset are similar, the other metrics differ between models. The model fine-tuned on Twitter16 has a recall of 98\% on the common-sense rumor test, whereas the \emph{precision}, \emph{recall} and \emph{F1-score} of the models fine-tuned on PHEME and GossipCop are all very low. This shows that the model fine-tuned on Twitter16 is more inclined to predict that samples are false, while the models fine-tuned on PHEME and GossipCop are more inclined to predict that samples are true.
The models in Table~\ref{t4} perform very well on the original test sets, with an average accuracy of about 87\%. But is the ability of these models to detect rumors really good? The above experimental results show that these models not only fail to generalize their high performance to other rumor test sets, but also essentially lack the ability to detect common-sense rumors.
\begin{table}[h]
\centering
\renewcommand\arraystretch{1.1}
\caption{The performance of the models on the original rumor test set and the common-sense rumor test set. The three criteria of Precision, Recall and F1-score are under the label-False.}
\label{t4}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{|c|cccc|cccc|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Bert\\ fine-tuned on\end{tabular}} & \multicolumn{4}{c|}{\begin{tabular}[c]{@{}c@{}}Test on\\ original rumors\end{tabular}} & \multicolumn{4}{c|}{\begin{tabular}[c]{@{}c@{}}Test on\\ common-sense rumors\end{tabular}} \\ \cline{2-9}
& Acc & Pre & Recall & F1 & Acc & Pre & Recall & F1 \\ \hline
Twitter15 & 89.69& 88.89 & 88.00 & 88.44 & 48.00 & 48.63 & 71.00 & 57.72 \\ \hline
Twitter16 & 91.13 & 91.94 & 90.48 & 91.20 & 49.00 & 49.49 & 98.00 & 65.77 \\ \hline
PHEME & 85.22 & 79.77 & 77.63 & 78.68 & 49.50 & 21.23 & 15.32 & 08.66 \\ \hline
GossipCop &87.87 & 78.51 & 68.89 & 73.36 & 48.50& 18.34 & 10.12 & 06.52 \\ \hline
PolitiFact & 87.03& 84.55 & 82.54 & 83.53 & 52.00 & 51.30 & 79.00 & 62.20 \\ \hline
\end{tabular}}
\end{table}
\subsection{Are the Predictions of Models Credible and Consistent?}
We feed a rumor text to a model, and the model outputs the label “true” or “false”. The prediction of a model may be correct, but it is not necessarily credible, because almost all deep learning models are black boxes. To examine the models' rumor detection ability more closely, the prediction results for individual samples are shown in Table~\ref{t5}. We can see that the predictions of many models are inconsistent. For example, the models PH and GC predict that “\emph{Dogs can only give birth to dogs, cats can only give birth to cats.}” is true, but they also believe that “\emph{The pet dog next door gave birth to a cat!}” is true. In addition, models T15 and T16 predict that the label of “\emph{Human blood is generally blue.}” is false, and it seems that the models are correct; however, they predict that the label of “\emph{Human blood is generally red.}” is also false. Obviously, the labels of such sample pairs cannot be the same. These examples clearly show that the predictions of the models are not necessarily credible; the models just output the label “true” or “false” according to the incorrect rules they have learned. Such a model is like a “mouth” without a “brain”: it only outputs without thinking.
\begin{figure}[t]
\centering
\includegraphics[width=0.92\linewidth]{f11.png}
\caption{Comparison of standard evaluation method and the proposed PairT evaluation method. Standard evaluation accuracy = 5/8 = 62.5\%; PairT evaluation accuracy = 1/4 = 25\%.}
\label{f2}
\end{figure}
\begin{table*}[]
\centering
\renewcommand\arraystretch{1.1}
\caption{Case analysis of the models' prediction results. The models trained on the five datasets Twitter15, Twitter16, PHEME, GossipCop, and PolitiFact are referred to as T15, T16, PH, GC, and PF, respectively. The test samples come from the common-sense rumor dataset. Correct predictions are \underline{underlined}.}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{5}{c|}{Prediction of models} \\ \cline{2-6}
\multicolumn{1}{|c|}{\multirow{-2}{*}{Test samples}} & T15 & T16 & PH & GC & PF \\ \hline
The pet dog next door gave birth to a cat! (False) & True & \underline{False} & True & True & \underline{False} \\
Dogs can only give birth to dogs, cats can only give birth to cats. (True) & False & False & \underline{True} & \underline{True} & False \\ \hline
Human blood is generally blue. (False) & \underline{False} & \underline{False} & True & True & True \\
Human blood is generally red. (True) & False & False & \underline{True} & \underline{True} & \underline{True} \\ \hline
In nature, penguins live in Africa. (False) & \underline{False} & \underline{False} & True & True & \underline{False} \\
In nature, penguins live in Antarctica. (True) & \underline{True} & False & \underline{True} & \underline{True} & False \\ \hline
Signals from 5G signal towers can spread COVID-19. (False) & True & \underline{False} & True & True & \underline{False} \\
The 5G signal tower does not spread COVID-19. (True) & \underline{True} & False & \underline{True} & \underline{True} & False \\ \hline
\end{tabular}}
\label{t5}
\end{table*}
Model misjudgment may cause unintended consequences, so we need to hold rumor detection models to a higher standard. The results in Table~\ref{t5} indicate that a model predicting a sentence to be a rumor does not mean that the model really understands the meaning of the sentence. For example, a model may judge the sentence “\emph{Dogs can only give birth to dogs, cats can only give birth to cats.}” to be true without understanding or learning the true meaning of this sentence, and therefore mistakenly believe that the rumor “\emph{The pet dog next door gave birth to a cat!}” is also true.
To evaluate the performance of rumor detection models more realistically, we propose a new evaluation method called the paired test (\textbf{PairT}). In this evaluation method, models are tested on samples in pairs [A \& A$'$], where A is the original rumor text and A$'$ is a new, manually created text. The important point is that the hidden knowledge and label contained in sample A are opposed to those of sample A$'$, and the model needs to predict samples A and A$'$ correctly at the same time. If a model predicts the same label for A and A$'$, the model is unreliable, because its predictions are inconsistent. As shown in Fig.~\ref{f2}, there are eight test samples, i.e., four pairs A = [\#1 \& \#2], B = [\#3 \& \#4], C = [\#5 \& \#6], and D = [\#7 \& \#8]. Assuming that a model predicts the five samples \#1, \#2, \#4, \#5 and \#8 correctly and the three samples \#3, \#6 and \#7 incorrectly, the accuracy calculated by the standard evaluation method is 62.5\%. According to our new evaluation method \textbf{PairT}, however, only one pair, pair A = [\#1 \& \#2], is predicted correctly, so the accuracy of the model is 25\%. We hope that the outputs of a model will not contradict each other, and that the model will make inferences based on true learned knowledge rather than absurd shortcut rules. This poses a greater challenge to models, so that they can be better applied in the real world.
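As a concrete illustration, the sketch below computes PairT accuracy from per-sample predictions; the function name and data layout are illustrative assumptions rather than a fixed interface.
\begin{verbatim}
# Illustrative sketch: PairT accuracy over paired samples (A, A').
# `pairs` is a list of ((label_A, label_Ap), (pred_A, pred_Ap)) tuples.

def pairt_accuracy(pairs):
    correct_pairs = 0
    for (label_a, label_ap), (pred_a, pred_ap) in pairs:
        # A pair counts as correct only if BOTH members are predicted
        # correctly; since the true labels are opposed, this also rules
        # out the model giving both samples the same label.
        if pred_a == label_a and pred_ap == label_ap:
            correct_pairs += 1
    return correct_pairs / len(pairs)

# Hypothetical example matching Fig. f2: samples #1,#2,#4,#5,#8 correct,
# #3,#6,#7 wrong -> standard accuracy 5/8, PairT accuracy 1/4.
labels = [("F", "T"), ("F", "T"), ("F", "T"), ("F", "T")]   # pairs A-D
preds  = [("F", "T"), ("T", "T"), ("F", "F"), ("T", "T")]
print(pairt_accuracy(list(zip(labels, preds))))             # -> 0.25
\end{verbatim}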
\begin{table}[]
\centering
\renewcommand\arraystretch{1.1}
\caption{By calculating the strength $s$ and breadth $b$ of each word, we identified the words “Obama”, “Paul” and “Sydney”, which satisfy $s\geq s_{min} = 0.8$ and $b\geq b_{min} = 5\%$.}
\label{t6}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{Twitter15} & \multicolumn{2}{c|}{Twitter16} \\ \cline{2-5}
& Obama & Paul & Obama & Sydney \\ \hline
Strength $s$ & 0.95 & 0.92 & 0.96 & 0.89 \\ \hline
Breadth $b$ & 5.4\% & 11.2\% & 5.0\% & 11.7\% \\ \hline
\end{tabular}}
\end{table}
\sethlcolor{yellow}
\begin{table*}[]
\centering
\renewcommand\arraystretch{1.03}
\caption{Original and adversarial data. The label of each adversarial sample is flipped relative to the original sample. The places where the text has changed are \hl{highlighted}.}
\label{t7}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|m{6cm}|l|m{6cm}|l|}
\hline
\multicolumn{2}{|c|}{Original} & \multicolumn{2}{c|}{Adversarial} \\ \hline
\multicolumn{1}{|c|}{Text} & label & \multicolumn{1}{c|}{Text} & label \\ \hline
Obama \hl{says}, “America doesn't want any stay-at-home-moms! Enough is enough!” & False & Obama \hl{does not say}, “America doesn't want any stay-at-home-moms! Enough is enough!” & True \\ \hline
r.i.p to the driver that died with Paul that \hl{no one cares} about because he \hl{wasn't} famous. & True & r.i.p to the driver that died with Paul that \hl{many people care} about because he \hl{was} famous. & False \\ \hline
Hostage situation erupts in Sydney cafe, Australian prime minister says it \hl{may be} “politically motivated” & True & Hostage situation erupts in Sydney cafe, Australian prime minister says it \hl{could not be} “politically motivated” & False \\ \hline
\end{tabular}}
\end{table*}
\begin{table}[h]
\renewcommand\arraystretch{1.1}
\caption{Results for BERT on the adversarial test. O, P and S stand for “Obama”, “Paul” and “Sydney”, respectively. The Twitter15 dataset contains the words “Obama” and “Paul”; the Twitter16 dataset contains the words “Obama” and “Sydney”. The decline in accuracy is highlighted in \textbf{bold}.}
\label{t8}
\centering
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|m{3.5cm}|c|c|}
\hline
& Twitter15 & Twitter16 \\ \hline
Original test set & 89.69\% & 91.13\% \\ \hline
Adversarial (O) & 84.75\% & 81.45\% \\ \hline
Adversarial (O \& P or S) & 73.09\% & 68.55\% \\ \hline
Declining accuracy & \textbf{16.60\%} & \textbf{22.58\%} \\ \hline
\end{tabular}}
\end{table}
\subsection{What Does Model Learn from Rumor Datasets?}
The above experimental results indicate that the models do not learn to detect rumors well. To understand what the models have learned, we use the word-level attention mechanism to analyze the words that the models mainly focus on, measuring the strength $s$ and breadth $b$ of each word's signal, where the strength $s$ is the average attention weight of the word and the breadth $b$ is the proportion of the data containing the word among all the data. To find the clue words that have a serious impact on the model, we set the thresholds of $s$ and $b$ to $s_{min} = 0.8$ and $b_{min} = 5\%$. Three clue words were found in Twitter15 and Twitter16, as shown in Table~\ref{t6}. A single word occupying about 90\% of the attention weight in a sentence is obviously unreasonable. After checking the original datasets, we found the reason: the distribution of these words is very uneven. For example, the word “Obama” appears almost only in samples whose label is “False”. Therefore, a model trained on these two datasets is likely to take shortcuts to achieve high performance.
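The sketch below illustrates one way to compute the strength $s$ and breadth $b$ statistics from word-level attention weights; the data layout is an assumption for illustration.
\begin{verbatim}
# Illustrative sketch: strength s (mean attention weight of a word over the
# sentences containing it) and breadth b (fraction of all sentences that
# contain the word). `attn_per_sentence` is a list with one dict per
# sentence, mapping each word to its attention weight in that sentence.
from collections import defaultdict

def strength_and_breadth(attn_per_sentence):
    totals, counts = defaultdict(float), defaultdict(int)
    n_sentences = len(attn_per_sentence)
    for word_weights in attn_per_sentence:
        for word, weight in word_weights.items():
            totals[word] += weight
            counts[word] += 1
    stats = {}
    for word in totals:
        s = totals[word] / counts[word]      # strength: mean attention weight
        b = counts[word] / n_sentences       # breadth: coverage of the data
        stats[word] = (s, b)
    return stats

def clue_words(stats, s_min=0.8, b_min=0.05):
    return [w for w, (s, b) in stats.items() if s >= s_min and b >= b_min]
\end{verbatim}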
The model trained on Twitter15 tends to predict “false” when a test sentence contains the word “Obama”; in contrast, it tends to predict “true” when a test sentence includes the word “Paul”. The model trained on Twitter16 also tends to predict “false” when a test sentence contains the word “Obama”, and tends to predict “true” when a test sentence contains the word “Sydney”. This behaviour is obviously illogical and unreasonable. To verify this finding, we construct an adversarial dataset: we reverse the meaning of the data containing “Obama”, “Paul” and “Sydney” in the Twitter15 and Twitter16 datasets, and change the labels of the data accordingly. Examples of the adversarial data are shown in Table~\ref{t7}. We test the performance of the models on the adversarial dataset, and the experimental results are shown in Table~\ref{t8}. When only the data containing “Obama” is modified, the accuracy of the model on the two datasets is 84.75\% and 81.45\%, about 5 and 10 percentage points lower than the original accuracy, respectively. After further modifying the sentences containing “Paul” and “Sydney”, the accuracy drops to 73.09\% and 68.55\%, a decline of 16.60 and 22.58 percentage points from the original accuracy, respectively. These drops are large and alarming, and they occur because the words “Obama”, “Paul” and “Sydney” have high strength $s$ and breadth $b$ at the same time. Simply because of the uneven distribution of a single word, the results of the model are severely affected. This is a cautionary tale: we need to be more careful when creating rumor datasets.
In addition, we analyze three specific cases to illustrate the absurd knowledge and rules learned by the model. As shown in Fig.~\ref{fig:2}, the label predicted by the model for the sentence “\emph{Human blood is generally blue.}” is False. If the sentence is changed to “\emph{Human blood is generally blue in Sydney.}”, the predicted label becomes True, which is obviously unreasonable: the model outputs the label “True” as long as it sees the word “Sydney”. Similarly, if we change the sentence “\emph{Some people can live forever and be young and healthy forever.}” to “\emph{Paul Walker can live forever and be young and healthy forever.}”, the model's prediction also changes from “False” to “True”. In short, if the text contains “Sydney” or “Paul”, the model predicts “True” with high probability; conversely, if the text includes “Obama”, the model outputs “False” with high probability. The root cause of the model taking shortcuts is that the two datasets, Twitter15 and Twitter16, contain data pitfalls introduced during their collection and creation. There are two main reasons for these pitfalls. First, the range of events covered when collecting the data is small, which leads to high coverage of certain words (“Obama”, “Paul”, “Sydney”) across the whole dataset. Second, the labels of the data containing these words are seriously unbalanced; for example, more than 90\% of the data containing “Obama” are labeled as false. These issues are not difficult to avoid during data collection.
\begin{figure}[]
\centering
\includegraphics[width=0.95\linewidth]{f22.png}
\caption{Misleading the model into wrong predictions through spurious clues. The test model is BERT fine-tuned on the Twitter15 dataset, and the text modifications are highlighted.}
\label{fig:2}
\end{figure}
\section{Conclusion}
In this work, we conducted a series of experiments using five public datasets and the pre-trained model BERT, and found that seemingly high-performance models do not really learn to detect rumors. When dealing with a task, deep learning models are actually more likely than humans to take shortcuts and cheat. The good performance of a model may arise because it learns hidden clues and simple rules in the dataset. Shortcomings in datasets and evaluation methods make it difficult to judge whether models learn to detect rumors or not. Based on our work, we offer the following recommendations to researchers who create rumor datasets or train and evaluate rumor detection models:
\textbf{Do not just focus on the accuracy, precision, recall and f1-score of models.} Training a rumor detection model should focus not only on improving the accuracy, precision, recall and f1-score of the model but also on the behavior of the model, the credibility of model predictions, and the interpretability of model results.
\textbf{Test out-of-domain examples for the generalization ability of models.} The better a model's generalization ability, the more valuable its application in the real world. A new rumor detection model should be evaluated across multiple related datasets.
\textbf{Challenge the models and use stricter evaluation criteria for them.} Evaluating on some easy test sets will exaggerate our judgments about the real capabilities learned by models. Rumor detection models can be evaluated using our proposed new evaluation method PairT.
\textbf{Create datasets carefully to avoid data pitfalls and ensure that models do not take shortcuts and cheat.} When creating a new rumor dataset, avoid introducing large numbers of unbalanced data clues, such as the words “Obama”, “Paul” and “Sydney” found in this paper. Try to eliminate such data pitfalls so that models cannot take shortcuts and cheat.
\section*{Acknowledgment}
This work was funded in part by Qualcomm through a Taiwan University Research Collaboration Project and in part by the Ministry of Science and Technology, Taiwan, under grants MOST 110-2221-E-006-001 and NCKU B109-K027D. We thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources.
|
2,869,038,156,758 | arxiv | \chapter{Investigating bias in the Antarctic Circumpolar Current within HadGEM-GC3.1 models} \label{TJ_ACC}
Inspection of Antarctic Circumpolar Current (ACC) transport in HadGEM-GC3.1 models at different spatial resolutions reveals large differences in estimates for total transport through the Drake Passage. In this chapter, the overturning streamfunction decomposition diagnostic developed for the AMOC in Chapter \ref{TJ_TM} is modified for decomposition of the zonal ACC transport at the Drake Passage, the point of narrowest constriction of the ACC. ACC transport is decomposed into the sum of terms involving only boundary properties, and an additional $\beta$ term arising from a meridionally-varying Coriolis parameter. Decomposing the cumulative ACC transport in this way helps us to investigate the role of model spatial resolution, and to identify possible physical mechanisms which may give rise to resolution dependence. The boundary densities along the Antarctic coastline are mapped using the algorithm from Chapter \ref{TJ_Bdry}, and their spatial characteristics are discussed for various models, reanalysis and climatology.
\section{Background} \label{Sct_ACC_Bck}
The Southern Ocean is a crucial part of the global ocean circulation. Its unique bathymetry and lack of meridional boundary (i.e. zonally-unblocked latitudes), in conjunction with an equator-to-pole temperature gradient and strong westerly winds, results in an eastward flowing ACC. The ACC is the strongest ocean current in the world ($173.3\pm10.7$Sv, \citealt{Donohue2016}), vital in transporting heat, carbon and nutrients between the major ocean basins (see Section \ref{I_SO}).
The ACC is strongly associated with steeply meridionally-tilted isopycnals in the Southern Ocean, with isopycnals of lighter water masses outcropping at lower latitudes. Using geostrophy (Equation \ref{GB}), the isopycnal slope can be related to the perpendicular transport: a steep negative gradient from south to north results in eastward flow. The slope of the isopycnals is also related to the strength of the westerly winds encircling Antarctica. These winds result in large-scale coastal Ekman upwelling, with surface waters moving northward and a geostrophic return flow at depth. In the absence of a zonal land boundary, the zonal pressure gradient is only supported at great depths, resulting in a relatively deep geostrophic return flow (see Section \ref{I_dynL}). NADW waters are found to upwell before the Antarctic shelf to replace the northward-moving surface waters. The upwelling not only steepens isopycnals, but returns old, nutrient-rich waters to the surface. This cycle in the Southern Ocean is commonly known as the Deacon cell, (\citealt{Doos1994}). In general, in regions of strong currents (such as the ACC, Gulf Stream and Kuroshio), baroclinic instabilities are prominent, giving rise to eddies which act to flatten the isopycnals, opposing the steepening from the wind-driven Deacon cell.
Early estimates of ACC transport covered a wide range of values, in part due to the use of different reference depths for hydrographic calculations, and lack of knowledge of near-bottom current characteristics. \cite{Bryden1977} showed a level of no-motion near the sea-floor would not be appropriate, due to bottom currents through the Drake Passage. In the late 1970s and early 1980s, the International Southern Ocean Studies (ISOS) programme (\citealt{Whitworth1982}, \citealt{Whitworth1983}, \citealt{Whitworth1985}) led to significantly improved understanding of the Southern Ocean. \cite{Cunningham2003} reanalysed the ISOS data and estimated a mean ACC transport of $134$Sv with uncertainty of between $15$Sv and $27$Sv.
\cite{Meredith2011} review hydrographic estimates for baroclinic transport through the Drake Passage ($SR1b$ section) for the period $1993-2009$ (including data from the World Ocean Circulation Experiment). The baroclinic transport through the Drake Passage is found to be $136.7\pm6.9\text{Sv}$ ($\pm$ indicates standard deviation of transport), a relatively consistent value throughout the period of observation. Using a combination of CPIES and current meters at a location slightly west of $SR1b$ ($cDrake$ experiment), \cite{Chidichimo2014} estimate the average baroclinic transport over a 4-year period ($2007-2011$) through the Drake Passage to be $127.7\pm8.1\text{Sv}$. \cite{Donohue2016} quotes an additional depth-independent (or barotropic) transport component of $45.6$Sv, calculated using bottom current recorders as part of the $cDrake$ experiment. This updated combined (baroclinic plus barotropic) estimate of $173.3\pm10.7$Sv for the transport through the Drake Passage represents an increase of approximately $30\%$ over the benchmark interval of $130$ to $140$Sv typically expected for ACC transport in climate models (see Section \ref{I_obsACC} for further information regarding observation of the ACC).
\cite{Allison2009} decomposed the ACC into its boundary components within the pre-industrial control integration of the GFDL CM2.1 model. She found a relatively stable ACC of $130$Sv using zonal velocities; however, her estimate of ACC transport, using boundary components and a $\beta$ term, fell 20Sv short of this value. We speculate that the discrepancy might be accounted for by contributions from additional cells. The dominant terms contributing to the time-mean ACC in \cite{Allison2009} are the northern density (83Sv) and depth-independent components (30Sv).
The transport accumulated from the bottom to the surface, calculated using zonal velocities (see Equation \ref{e_ACC_def} below) at $66.5^\circ$W, is shown in Figure \ref{F_Tru_Trsp} for the three HadGEM-GC3.1 models (with $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolutions) considered in this chapter. The timeseries shown include the initial 30-year ``spin-up'' phase for each model. Model resolution has a dramatic effect on the corresponding strength of the ACC. The figure also shows estimates for $T_{ACC}$ from observations, given by \cite{Meredith2011} and \cite{Donohue2016}; the $1/4^\circ$ model resolution in particular underestimates the observed ACC transport.
The work conducted in this chapter is part of a wider effort to understand the ACC and Southern Ocean characteristics in the current generation of UK Met Office GCMs.
\begin{figure}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Pres_ResC_Tot_66_5W.png}}
\caption[Timeseries of volume transport $T_{ACC}$ integrated up to the surface through the Drake Passage (Section: $66.5^\circ$W) calculated using Equation \ref{e_T_ACC_math} for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ HadGEM-GC3.1 spatial resolutions.]{Timeseries of volume transport $T_{ACC}$ integrated up to the surface through the Drake Passage (Section: $66.5^\circ$W) calculated using Equation \ref{e_T_ACC_math} for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ HadGEM-GC3.1 spatial resolutions. The shaded red region indicates the baroclinic transport estimate using hydrographic data from \cite{Meredith2011} (without bottom flow); the blue region indicates the estimate quoted by \cite{Donohue2016} using cDrake data (including depth-independent transport).}
\label{F_Tru_Trsp}
\end{figure}
\section{Method} \label{Sct_ACC_Mth}
Following the method derived for decomposing the AMOC's overturning streamfunction into boundary components in Chapter \ref{TJ_TM}, we now outline the analogous decomposition of the ACC zonal cumulative transport into Ekman, depth-independent (bottom), northern and southern boundary density and $\beta$ terms. The $\beta$ term is introduced to account for the contribution made by a varying Coriolis parameter across the meridional section (south to north). Much of the derivation here is similar to that presented in detail in Chapter \ref{TJ_TM}; for this reason, we present the current derivation concisely, referring the reader to Chapter \ref{TJ_TM} for details as necessary.
The zonal volume transport (with east as positive) below a given depth $z$ through the meridional section corresponding to the Drake Passage is defined as the meridional and vertical integral of the zonal velocity
\begin{equation}
T_{ACC}(z;x) = \int_{S(z;x)}^{N(z;x)} \int_{-H(y;x)}^{z} u_E(y,z';x) \mathrm{d} z' \mathrm{d} y
\label{e_ACC_def}
\end{equation}
where $S(z;x)$ and $N(z;x)$ are the southern and northern boundaries respectively at depth $z$ for zonal coordinate $x$. $H(y;x)$ is the maximum depth of the fluid column at meridional coordinate $y$. The zonal velocity $u_E$ can be partitioned into the sum of (a) thermal wind ($u_{th}$), (b) depth-independent ($u_{bot}$) and (c) Ekman ($u_{Ekm}$) components, in an analogous manner to that shown in Equation \ref{v} for the AMOC. Hence the corresponding cumulative transport components can be calculated.
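As a simple illustration of Equation \ref{e_ACC_def} in discrete form, the sketch below accumulates cell transports over the section from the bottom upwards; the array layout, with uniform cell thicknesses and widths, is an assumption for illustration rather than the NEMO diagnostic itself.
\begin{verbatim}
# Illustrative sketch: cumulative zonal transport T_ACC(z) through a
# meridional section, i.e. Equation (e_ACC_def) in discrete form.
# u[k, j]      : zonal velocity (m/s) at depth level k and meridional cell j,
#                with k = 0 the deepest level; land/masked cells set to 0.
# dz[k], dy[j] : vertical thickness (m) and meridional width (m) of cells.
import numpy as np

def cumulative_transport(u, dz, dy):
    cell_transport = u * dz[:, None] * dy[None, :]   # m^3/s per cell
    per_level = cell_transport.sum(axis=1)           # integrate south to north
    return np.cumsum(per_level)                      # integrate bottom up to z

# Example with synthetic data; 1 Sv = 1e6 m^3/s.
u = 0.1 * np.random.randn(75, 120)
dz = np.full(75, 50.0)
dy = np.full(120, 8.0e3)
T_ACC = cumulative_transport(u, dz, dy) / 1.0e6      # in Sverdrups
\end{verbatim}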
The depth-independent cumulative transport component $T_{bot}(z;x)$ can be thought of as introducing a geostrophic reference into the calculation. We write
\begin{equation}
T_{bot}(z;x) = \int_{S(z;x)}^{N(z;x)} Z_{bot}(y,z;x) u_{bot}(y;x) \mathrm{d} y
\label{e_ACC_int_1}
\end{equation}
for bottom zonal velocity $u_{bot}(y;x)$, where bottom depth $Z_{bot}(y,z;x)$ is given by
\begin{align}
Z_{bot}(y,z; x) = & H(y; x) + z, & z > -H(y; x), \\
Z_{bot}(y,z; x) = & 0, & z \leq -H(y; x) .
\label{e_Z_{bot}_ACC}
\end{align}
The thermal wind cumulative transport component $T_{th}(z;x)$ is calculated (following a rationale similar to that outlined in Equations \ref{Tc_1}-\ref{Tc_2} for the AMOC) as a meridional and vertical integral of the zonal thermal wind velocity $u_{th}(y,z; x)$. The integral is simplified using integration by parts, taking $u_{th}(y,z; x) = 0$ when $z = -H(y; x)$. In marked contrast to the calculation of meridional transport for the AMOC, the Coriolis parameter ($f(y)$, introduced via the thermal wind relationship, Equation \ref{TW}) is not constant with latitude in the meridional integral for zonal ACC transport; hence we cannot simply move $f(y)$ ``outside'' the south-north integral in the current case. The resulting expression for $T_{th}(z;x)$ is
\begin{equation}
T_{th}(z;x) = \frac{g}{\rho_0} \left(\int_{S(z;x)}^{N(z;x)} \left[ z\int_{-H(y;x)}^{z} \frac{1}{f(y)}\frac{\partial \rho}{\partial y}(y,z';x)dz' - \int_{-H(y;x)}^{z} z' \frac{1}{f(y)}\frac{\partial \rho}{\partial y}(y,z';x) dz' \right] \mathrm{d}y \right),
\label{e_ACC_TW_1}
\end{equation}
where $\rho$ is neutral density, and $\rho_0$ is the nominal density of seawater. Equation \ref{e_ACC_TW_1} is re-arranged in a manner analogous to Equation \ref{Tc_2}, to obtain
\begin{align}
\label{e_ACC_TW_2}
T_{th}(z; x) &= \frac{g}{\rho_0} \left[ \int_{-H_{max}(x)}^{z} (z-z') \left[ \sum_{d=1}^{n_D(z;x)} \left( \frac{\rho_{N_d}(z';x)}{f_{N_d}(x)} - \frac{\rho_{S_d}(z';x)}{f_{S_d}(x)} \right) \right] \mathrm{d}z' \right. \nonumber \\
&+ \left. \int_{-H_{max}(x)}^{z} (z-z') \int_{S(z;x)}^{N(z;x)} \frac{\beta(y) \rho(y,z';x)}{f(y)^2} \mathrm{d}y \mathrm{d}z' \right],
\end{align}
where $\beta(y)=\frac{\partial f}{\partial y}(y)$, and $\rho_S(z;x)$ and $\rho_N(z;x)$ are the southern and northern boundary densities at each $z$ for a given $x$. Further, $f_S(x)$ and $f_N(x)$ are Coriolis parameter values at the southern and northern boundary density points for a given $x$. $H_{\max}(x)$ is the maximum basin depth at zonal coordinate $x$, and we are careful to acknowledge the possible presence of $n_D \ge 1$ pairs of southern and northern boundaries for depth $z$ and zonal coordinate $x$, with meridional intervals $[S_d(z,x),N_d(z,x)]$, $d=1,2,...,n_D(z;x)$, as illustrated in Figure \ref{F_bdryACC}.
It can be seen from Equation \ref{e_ACC_TW_2} that the resulting expression for the zonal ACC thermal wind decomposition takes a similar form to that of the meridional AMOC thermal wind decomposition, with the addition of a $\beta$ term. The thermal wind contribution of the ACC transport can therefore be written as the sum of southern and northern boundary density terms $T_S(z;x)$ and $T_N(z;x)$, and a $\beta$ term ($T_{\beta}$) accounting for the role of a varying Coriolis parameter across the section. In the presence of $n_D(z;x) \ge 1$ pairs of southern and northern boundaries, the boundary terms consist of sums over the individual independent meridional intervals
\begin{equation}
T_{N}(z; x) = \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \left[ \sum_{d=1}^{n_D(z;x)} \left( \frac{\sigma_{N_d}(z';x)}{f_{N_d}(x)} \right) \right] \mathrm{d}z'
\label{T_Nrt}
\end{equation}
\begin{equation}
T_{S}(z; x) = - \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \left[ \sum_{d=1}^{n_D(z;x)} \left( \frac{\sigma_{S_d}(z';x)}{f_{S_d}(x)} \right) \right] \mathrm{d}z'
\label{T_Sth}
\end{equation}
\begin{equation}
T_{\beta}(z; x) = \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \int_{S(z;x)}^{N(z;x)} \frac{\beta(y) \sigma(y,z';x)}{f(y)^2} \mathrm{d}y \mathrm{d}z'.
\label{T_Beta}
\end{equation}
The difference between the sloping boundary density components is what sets the thermal wind condition. Therefore, any constant can be added to the boundary density components in these equations. Hence, in Equations \ref{T_Nrt}-\ref{T_Beta}, to better distinguish the properties of the northern and southern density components, the neutral density of the deepest point $\rho_{deep}$ is subtracted from the neutral densities at the boundaries, yielding a referenced density or neutral density anomaly $\sigma$. Thus, for the northern density term corresponding to the $d^\text{th}$ boundary, the anomaly with respect to the density at the deepest point is
\begin{equation}
\sigma_{N_d}(z;x) = \rho_{N_d}(z;x,\mathcal{S}, \theta, P=0) - \rho_{deep}(z;x,\mathcal{S}, \theta, P=0)
\end{equation}
with similar definitions for $\sigma_{S_d}(z;x)$ and $\sigma(y,z;x)$, for salinity $\mathcal{S}$, potential temperature $\theta$ and pressure $P$. We note that $\rho_{deep}$ is time-dependent, varying from year to year. The calculation of $T_{\beta}$ requires interior information for neutral densities within the section, and therefore the ACC decomposition is no longer solely a function of boundary information. Theoretically, in-situ densities should be used within the thermal wind relationship, as is done in Chapters \ref{TJ_TM} and \ref{TJ_Var}. However, for improved visualisation of the local density structure, neutral densities are used for the decomposition calculation; this has a negligible impact on the resulting ACC transport. We calculate the neutral densities for the section using the method outlined by \cite{Jackett1997} and \cite{McDougall2020}.
The final component of the cumulative ACC transport decomposition, the Ekman contribution, is estimated using the meridional wind stress ($\tau^y$), in a similar fashion to that described in Equations \ref{Tek} and \ref{e_h} for the AMOC. The Ekman contribution to the cumulative ACC transport is given by
\begin{eqnarray}
T_{Ekm}(z; x) &=& - \frac{1}{\rho_0}\int_{S(z; x)}^{N(z; x)} \int_{-h(y; x)}^{z} \frac{1}{h(y; x)}\frac{\tau_s^y(y; x)}{f(y)} \mathrm{d}z'\mathrm{d}y \nonumber \\
&=& - \frac{1}{\rho_0}\int_{S(z; x)}^{N(z; x)} \frac{(h(y;x)+z)}{h(y; x)}\frac{\tau_s^y(y; x)}{f(y)} \mathrm{d}y
\label{e_ACC_Tek}
\end{eqnarray}
where $h(y; x)$ is the depth of the Ekman layer given by
\begin{align}
h(y; x) = & H(y; x), & H(y; x) \leq 50m, \nonumber \\
h(y; x) = & 50m, & H(y; x) > 50m,
\label{e_ACC_h}
\end{align}
and $\tau_s^y(y; x)$ is the meridional wind stress on the ocean's surface.
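In discrete form, this Ekman contribution can be evaluated as in the sketch below; the 50m layer depth follows Equation \ref{e_ACC_h}, while the array layout and the clipping of contributions below the Ekman layer are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: Ekman contribution T_Ekm(z) of Equation (e_ACC_Tek).
# tau_y[j] : meridional surface wind stress (N/m^2) at meridional cell j.
# H[j]     : water-column depth (m, positive); f[j] : Coriolis parameter (1/s).
# dy[j]    : meridional cell width (m); z : (negative) depth at which the
#            cumulative transport is evaluated.
import numpy as np

rho0 = 1026.0   # nominal reference density of seawater

def ekman_contribution(z, tau_y, H, f, dy):
    h = np.minimum(H, 50.0)                     # Ekman layer depth (Eq. e_ACC_h)
    weight = np.clip((h + z) / h, 0.0, 1.0)     # fraction of the layer above z;
                                                # zero for depths below the layer
    return -np.sum(weight * tau_y / (rho0 * f) * dy)
\end{verbatim}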
Therefore the full decomposition diagnostic of the zonal cumulative volume transport for a given $x$ with respect to depth, illustrated in Figure \ref{F_bdryACC}, is given by
\begin{equation}
\tilde{T}_{ACC}(z; x) = T_N(z; x) + T_S(z; x) + T_{\beta}(z;x) + T_{bot}(z; x) + T_{Ekm}(z; x)
\label{T_smp_ACC}
\end{equation}
or
\begin{equation}
\begin{multlined}
\tilde{T}_{ACC}(z; x) = \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \left[ \sum_{d=1}^{n_D(z;x)} \left( \frac{\sigma_{N_d}(z';x)}{f_{N_d}(x)} \right) \right] \mathrm{d}z'\\ - \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \left[ \sum_{d=1}^{n_D(z;x)} \left( \frac{\sigma_{S_d}(z';x)}{f_{S_d}(x)} \right) \right] \mathrm{d}z' \\
+ \frac{g}{\rho_0} \int_{-H_{max}(x)}^{z} (z-z') \int_{S(z;x)}^{N(z;x)} \frac{\beta(y) \sigma(y,z';x)}{f(y)^2} \mathrm{d}y \mathrm{d}z' + \int_{S(z;x)}^{N(z;x)} Z_{bot}(y,z;x) u_{bot}(y;x) \mathrm{d} y \\
- \frac{1}{\rho_0}\int_{S(z; x)}^{N(z; x)} \frac{(h(y;x)+z)}{h(y; x)}\frac{\tau_s^y(y; x)}{f(y)} \mathrm{d}y .
\end{multlined}
\label{e_tilde_T_ACC}
\end{equation}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Figure_boundary_ACC_New.png}}
\caption[Schematic of the boundary decomposition and the contributing components to the ACC transport through the Drake Passage.]{Schematic of the boundary decomposition and the contributing components to the ACC transport through the Drake Passage. We note in this illustration that multiple pairs of southern and northern boundaries are present at depth, whereas only one pair is present at the surface. The grey circle indicates eastward-flowing ACC.}
\label{F_bdryACC}
\end{figure*}
\section{Decomposition of the ACC within the HadGEM3-GC3.1 general circulation model}
The decomposition is applied to data from the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ oceanic spatial resolution control runs of the HadGEM-GC3.1 HighResMIP project, with HadGEM-GC3.1 model configurations as described in Chapter \ref{TJ_TM}. We include $30$-year spin-up phases at each model resolution, incorporating 1950s forcing and N216 atmospheric resolution. The spin-up phases are included to investigate the initial temporal trends within the models, where the greatest change in ACC transport is found, as illustrated in Figure \ref{F_Tru_Trsp}. All three model spin-ups begin from the same EN4 conditions.
For the present zonal cumulative transport decomposition, the calculation is centred upon the $v$-points of the NEMO computational grid shown in Figure \ref{NEMO_grid} and \ref{var_grid} (rather than the $u$-points used for meridional transport decomposition of the AMOC). The method outlined in Equations \ref{e_ACC_int_1}, \ref{T_Nrt}, \ref{T_Sth}, \ref{T_Beta} and \ref{e_ACC_Tek} is then applied as follows for interior cells (cells containing full field information, Figure \ref{F_Sch_Cells}) only. In the notation of Chapter \ref{TJ_TM}, the southern and northern boundary density contributions at depth $z_k$, time $t_\ell$ and longitude section $x_i$ take the form
\begin{equation}
T_{S}(z_k, t_{\ell}; x_i) = -\frac{g}{\rho_{0}} \sum_{m=k}^{M}\left(z_{k-1 / 2}-z_{m}\right) \left[ \sum_{d=1}^{n_D(z_k;x_i)} \frac{\sigma_{S_d}(z_m, t_{\ell}; x_i) \Delta z_{m}}{f_{S_d}(x_i)} \right]
\label{T_S_math}
\end{equation}
and
\begin{equation}
T_{N}(z_k, t_{\ell}; x_i) = +\frac{g}{\rho_{0}} \sum_{m=k}^{M}\left(z_{k-1 / 2}-z_{m}\right) \left[ \sum_{d=1}^{n_D(z_k;x_i)} \frac{\sigma_{N_d}(z_m, t_{\ell}; x_i) \Delta z_{m}}{f_{N_d}(x_i)} \right]
\label{T_N_math}
\end{equation}
where $k$ indexes the centre of a T-cell, with values running from $k=M$ at the deepest interior cell to $k=1$ at the surface. $\Delta z_m = z_{m-1/2} - z_{m+1/2}$ is the thickness of the $m^{\text{th}}$ level, and $f_{S_d}(x_i)$ and $f_{N_d}(x_i)$ are the Coriolis parameters for the $d^\text{th}$ southern and northern boundaries at longitude section $x_i$. Further, $\sigma_{S_d}(z_m, t_{\ell}; x_i)$ and $\sigma_{N_d}(z_m, t_{\ell}; x_i)$ are the corresponding neutral density anomalies at the $m^{\text{th}}$ depth level.
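For concreteness, a minimal numerical sketch of Equations \ref{T_S_math} and \ref{T_N_math} is given below, written in Python/NumPy. The array layout, the bottom-up accumulation and the variable names are illustrative assumptions of the sketch rather than part of the NEMO diagnostics; the boundary density anomalies and Coriolis parameters are assumed to have been extracted beforehand for a single time $t_\ell$ and longitude section $x_i$.
\begin{verbatim}
import numpy as np

g, rho0 = 9.81, 1026.0   # gravitational acceleration and reference density

def boundary_transport(sigma_bdry, f_bdry, z_c, dz, sign):
    """Cumulative boundary density contribution T(z_k) (Eqs T_S_math / T_N_math).

    sigma_bdry : list over depth levels; sigma_bdry[m] holds the neutral density
                 anomalies on the n_D boundaries present at level m
    f_bdry     : matching list of Coriolis parameters f_d at each level
    z_c, dz    : cell-centre depths z_m (negative downwards) and thicknesses
    sign       : -1 for the southern boundaries, +1 for the northern boundaries
    """
    M = len(z_c)
    T = np.zeros(M)
    for k in range(M):                       # accumulate from the bottom up to level k
        z_top = z_c[k] + 0.5 * dz[k]         # upper cell face z_{k-1/2}
        total = 0.0
        for m in range(k, M):
            total += (z_top - z_c[m]) * np.sum(sigma_bdry[m] / f_bdry[m]) * dz[m]
        T[k] = sign * (g / rho0) * total
    return T
\end{verbatim}
The same routine yields $T_S$ and $T_N$ by switching the sign argument and supplying the southern or northern boundary arrays respectively.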
The $\beta$ cumulative transport component $T_{\beta}$ takes the form
\begin{equation}
T_{\beta}(z_k, t_{\ell}; x_i) = \sum_{m=k}^{M}\left(z_{k-1 / 2}-z_{m}\right) \sum_{j\in \mathcal{I}(z_m; x_i)} \frac{\beta(y_j;x_i) \bar{\sigma}(y_j, z_m, t_{\ell}; x_i) }{f(y_j; x_i)^2} \Delta y_{j} \Delta z_{m} ,
\label{T_Beta_math}
\end{equation}
where $\bar{\sigma}(y_j, z_m, t_{\ell}; x_i)$ represents the neutral density anomaly (relative to $\rho_{deep}$) averaged onto the $v$-point, since the density is located at the centre of the T-cell rather than at the $v$-point. $\mathcal{I}(z_k; x_i)$ is the index set for meridional coordinates between the southern and northern boundaries $S(z_k; x_i)$ and $N(z_k; x_i)$, and $\Delta y_j=y_{j-1/2} - y_{j+1/2}$ is the meridional cell width. The bottom component $T_{bot}$ is given by
\begin{equation}
T_{bot}(z_k, t_{\ell}; x_i) = \sum_{j\in \mathcal{I}(z_k; x_i)} (H(y_j; x_i) + z_{k-1/2}) \overline{u}_{bot}(y_j, t_{\ell}; x_i) \Delta y_j ,
\label{Ta_b_math}
\end{equation}
where $\overline{u}_{bot}(y_j, t_{\ell}; x_i)$ is the sum of (a) the $4$-point average of the local zonal velocities (averaged onto the $v$-point) and (b) the vertical shear in zonal velocity, obtained using the meridional density gradient across the bottom cell and the thermal wind relationship (see Equation \ref{T_b_math} and related text in Chapter \ref{TJ_TM}). The Ekman component takes the form
\begin{equation}
T_{Ekm}(z_k, t_{\ell}; x_i) = - \frac{1}{\rho_0} \sum_{j\in \mathcal{I}(z_k; x_i)} \frac{(h(y_j; x_i) + z_k)}{h(y_j; x_i)} \frac{\tau_s^y(y_j, t_{\ell}; x_i)}{f(y_j)} \Delta y_j ,
\label{Ta_Ek_math}
\end{equation}
where $h(y_j; x_i)$ is defined in Equation \ref{e_ACC_h}, and $\tau_s^y(y_j, t_{\ell}; x_i)$ is the meridional surface wind stress on the NEMO grid as before.
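The remaining interior terms follow the same pattern. A hedged Python/NumPy sketch of Equations \ref{T_Beta_math}, \ref{Ta_b_math} and \ref{Ta_Ek_math} is shown below; the two-dimensional array layout, the logical mask encoding the index sets $\mathcal{I}$, and the clipping of the Ekman factor to zero below the Ekman layer are assumptions of the sketch, not of the diagnostic itself.
\begin{verbatim}
import numpy as np

def beta_transport(sigma_bar, beta, f, dy, z_c, dz, interior):
    """Cumulative beta contribution T_beta(z_k), Equation (T_Beta_math).
    sigma_bar : (M, J) density anomalies averaged onto v-points
    interior  : (M, J) True between the southern and northern boundaries
    """
    inner = np.where(interior, beta * sigma_bar / f**2 * dy, 0.0).sum(axis=1)
    M = len(z_c)
    T = np.zeros(M)
    for k in range(M):
        z_top = z_c[k] + 0.5 * dz[k]
        T[k] = np.sum((z_top - z_c[k:]) * inner[k:] * dz[k:])
    return T

def bottom_transport(u_bot_bar, H, dy, z_c, dz, interior):
    """Depth-independent contribution T_bot(z_k), Equation (Ta_b_math)."""
    M = len(z_c)
    T = np.zeros(M)
    for k in range(M):
        z_top = z_c[k] + 0.5 * dz[k]
        T[k] = np.sum(np.where(interior[k], (H + z_top) * u_bot_bar * dy, 0.0))
    return T

def ekman_transport(tau_y, f, H, dy, z_c, interior, rho0=1026.0):
    """Ekman contribution T_Ekm(z_k), Equation (Ta_Ek_math)."""
    h = np.minimum(H, 50.0)                  # Ekman layer depth, Equation (e_ACC_h)
    M = len(z_c)
    T = np.zeros(M)
    for k in range(M):
        frac = np.clip((h + z_c[k]) / h, 0.0, 1.0)   # zero below the Ekman layer
        T[k] = -np.sum(np.where(interior[k], frac * tau_y / f * dy, 0.0)) / rho0
    return T
\end{verbatim}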
Near boundaries and bathymetry, additional cells are present, referred to as ``sidewall'' and ``partial'' cells respectively. The latter cells replicate the original boundary as closely as possible given model resolution. Unfortunately, potential temperature and salinity fields are not available in the additional cells, since the width of sidewall cells and depths of partial cells vary; hence the decomposition calculation cannot be replicated reliably for these cells. This is an unavoidable limitation of our approach. Sidewall and partial cells are illustrated in Figure \ref{F_Sch_Cells}. However, using zonal velocities present in these incomplete cells, we are able to estimate their transport contributions, referred to as $T_{AC}(z_k, t_{\ell}; x_i)$. $T_{AC}$ is calculated using Equation \ref{e_ACC_def} by directly integrating zonal velocities vertically and meridionally within the additional cells. Therefore,
\begin{equation}
T_{AC}(z_k, t_{\ell}; x_i) = \sum_{m=k}^{M+1} \sum_{j\in \mathcal{I}(z_m; x_i)} \overline{u}_{AC}(y_j,z_m, t_{\ell}; x_i) \Delta y_j \Delta z_m ,
\label{e_T_AC_math}
\end{equation}
where $M+1$ is the vertical cell index for the additional cell in the basin, and $\overline{u}_{AC}(y_j,z_m, t_{\ell}; x_i)$ is the averaged zonal velocity within depth-latitude combinations $z_m,y_j$ corresponding to additional cells only, and set to zero otherwise.
Using the transport components defined above, the estimated total zonal transport $\tilde{T}_{ACC}$, for each depth, time and longitude section, is then
\begin{eqnarray}
\label{e_tilde_T_ACC_math}
\tilde{T}_{ACC}(z_k, t_{\ell}; x_i) &=& T_{S}(z_k, t_{\ell}; x_i) + T_{N}(z_k, t_{\ell}; x_i) + T_{\beta}(z_k, t_{\ell}; x_i) \\
&+& T_{bot}(z_k, t_{\ell}; x_i) + T_{Ekm}(z_k, t_{\ell}; x_i) + T_{AC}(z_k, t_{\ell}; x_i)\nonumber .
\end{eqnarray}
To validate this decomposition, the cumulative ACC transport is also calculated directly using Equation \ref{e_ACC_def} by vertically and meridionally integrating the zonal velocity (see Figure \ref{F_Tru_Trsp}). For this calculation, we use a $4$-point average velocity $\overline{u}$ at each NEMO $u$-point. The ``true'' or ``expected'' cumulative transport $T_{ACC}$ is then
\begin{equation}
T_{ACC}(z_k, t_{\ell}; x_i) = \sum_{m=k}^{M+1} \sum_{j\in \mathcal{I}(z_m; x_i)} \overline{u}(y_j,z_m, t_{\ell}; x_i) \Delta y_j \Delta z_m ,
\label{e_T_ACC_math}
\end{equation}
where $M$ is the vertical cell index for the bottommost interior cell in the basin, and level $M+1$ indicates an additional cell (similar to Equation \ref{e_T_math}). The resulting total cumulative transport $T_{ACC}(z_k, t_{\ell}; x_i)$ includes additional cell contributions, and can thus be compared directly to the total estimated transport $\tilde{T}_{ACC}(z_k, t_{\ell}; x_i)$ calculated as the sum of the boundary and $\beta$ components (Equation \ref{e_tilde_T_ACC_math}).
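For completeness, a minimal sketch of this validation step is given below; it accumulates the $4$-point-averaged zonal velocities from the bottom upwards and compares the result with the sum of the decomposition terms. Array names are illustrative only.
\begin{verbatim}
import numpy as np

def true_transport(u_bar, dy, dz, interior):
    """Expected cumulative transport T_ACC(z_k), Equation (e_T_ACC_math).
    u_bar    : (M+1, J) 4-point-averaged zonal velocity, including additional cells
    interior : (M+1, J) True for wet cells within the section
    """
    layer = np.where(interior, u_bar * dy, 0.0).sum(axis=1) * dz   # transport per level
    return np.cumsum(layer[::-1])[::-1]      # accumulate from the bottom up to level k

# T_tilde = T_S + T_N + T_beta + T_bot + T_Ekm + T_AC   (Equation e_tilde_T_ACC_math)
# discrepancy_Sv = (T_tilde[0] - true_transport(u_bar, dy, dz, interior)[0]) / 1e6
\end{verbatim}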
\section{Interrogating ACC transport within HadGEM-GC3.1 models at the Drake Passage}
In this section, we explore various aspects of cumulative ACC transport characteristics at $66.5^\circ$W. Section \ref{Sct_Inter_TT} compares the decomposition estimate $\tilde{T}_{ACC}$ with the expected transport $T_{ACC}$ as a function of HadGEM-GC3.1 model resolution. Section \ref{Sct_Inter_CS} examines properties along a meridional section including salinity, potential temperature and neutral densities, for the initial model spin-up phase. Section \ref{Sct_Inter_Dcm} describes results for the components of the cumulative transport decomposition analysis. Finally, Section \ref{Sct_Inter_Wdd} considers possible physical rationales for the weakening of the ACC during and after the spin-up phase in the $1/4^\circ$ model. Note that hereafter, ACC transport refers to the ACC volume transport integrated from the sea floor up to the surface.
\subsection{Total transport at $66.5^\circ$W} \label{Sct_Inter_TT}
Disparities in volume transport $T_{ACC}$ through the Drake Passage, calculated directly from zonal velocities using Equation \ref{e_ACC_def}, have already been illustrated in Figure \ref{F_Tru_Trsp}. These differences between transports $T_{ACC}$ in models with spatial resolutions of $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) raise questions about the resolution dependence of the ACC within HadGEM-GC3.1 models, and indeed whether the models at any resolution provide realistic estimates of the ACC transport. The red and blue shaded regions of Figure \ref{F_Tru_Trsp} indicate recent observations of $T_{ACC}$ by \cite{Meredith2011} and \cite{Donohue2016}, as discussed in Section \ref{Sct_ACC_Bck}.
These literature values are calculated near or on the $SR1b$ repeat hydrography line, close to $61^\circ$W longitude. However, we choose to perform all our direct transport and decomposition calculations at $66.5^\circ$W, the nearest meridional section for which northern and southern boundaries of the Drake Passage are present for all model resolutions.
Figure \ref{F_Tru_Trsp} indicates an initial weakening of $T_{ACC}$ for all spatial resolutions. This is especially evident for the $1/4^\circ$ model, exhibiting a reduction of almost $60$Sv within the first $30$-year spin-up phase of model integration; corresponding weakening of around $20$Sv and $30$Sv is observed for $1^\circ$ and $1/12^\circ$ model resolutions, respectively. None of the model-calculated transports lie within the values given by \cite{Donohue2016}. Transport calculated from the $1^\circ$ model stabilises at $\approx 157$Sv, mid-way between the literature estimates. Surprisingly, $T_{ACC}$ calculated from the $1/4^\circ$ model continues to weaken post $30$-year spin-up, eventually stabilising at around $65$Sv. Clearly the behaviour of $T_{ACC}$ during the initial $30$-year spin-up phase for each model run is critical to understanding the ACC weakening.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Pres_ResC_TruEstSmp_66_5W.png}}
\caption[Timeseries of cumulative volume transports: expected $T_{ACC}$, estimated $\tilde{T}_{ACC}$ and simplified boundary estimate $\tilde{T}_{ACC_{SP}}$ at the surface through the Drake Passage for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolution at $66.5^\circ$W.]{Timeseries of cumulative volume transports: expected $T_{ACC}$, estimated $\tilde{T}_{ACC}$ (dashed) and simplified boundary estimate $\tilde{T}_{ACC_{SP}}$ (line with circles) at the surface through the Drake Passage for the $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) spatial resolution at $66.5^\circ$W. Dashed lines indicate estimated volume transport comprising multiple northern and southern boundary densities, $\beta$, depth-independent flow and Ekman components. Lines with circles indicate the estimated volume transport using a simplified boundary density term, where a single density value is used for each boundary at each depth. The shaded red region indicates the estimate from \cite{Meredith2011}, blue that from \cite{Donohue2016} (including a depth-independent component of transport).}
\label{F_SB_Trsp}
\end{figure*}
Timeseries of the three ACC decomposition estimates $\tilde{T}_{ACC}$ (Equation \ref{e_tilde_T_ACC}) at $66.5^\circ$W are shown as dashed lines alongside directly-calculated $T_{ACC}$ in Figure \ref{F_SB_Trsp}. The discrepancy between $\tilde{T}_{ACC}$ and ${T}_{ACC}$ for the $1^\circ$ and $1/4^\circ$ models is small, with the $1/4^\circ$ estimate in particular performing extremely well in capturing the expected ACC transport. The corresponding discrepancy for the $1/12^\circ$ model is large. We explain this observation as follows. As shown in e.g. Equations \ref{T_S_math} and \ref{T_N_math}, calculation of the decomposition diagnostic can involve multiple pairs of southern and northern boundaries. The number of pairs is greatest for the $1/12^\circ$ model resolution, which exhibits greatest bathymetric detail, leading to numerous small trenches being incorporated into the calculation. We hypothesise that density representation in these small trenches is poor, resulting in an increased estimate $\tilde{T}_{ACC}$ relative to ${T}_{ACC}$ for this model resolution.
To explore this effect further, a decomposition using a single pair (of southernmost and northernmost boundaries) at each depth to calculate the boundary density contributions was performed, ignoring the contributions of all intermediate boundaries. The resulting estimated transport $\tilde{T}_{ACC_{SP}}$ for the $1/12^\circ$ model at $66.5^\circ$W gives much improved agreement with ${T}_{ACC}$ as shown also in Figure \ref{F_SB_Trsp} (as lines with circles). The coarser nature of the $1^\circ$ and $1/4^\circ$ model bathymetries leads to little difference between $\tilde{T}_{ACC_{SP}}$ and $\tilde{T}_{ACC}$ for those model resolutions.
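In practice, the single-pair calculation simply discards all intermediate boundaries at each depth before the sums in Equations \ref{T_S_math} and \ref{T_N_math} are formed. A small sketch, assuming the per-level boundary arrays are ordered from south to north, is:
\begin{verbatim}
def keep_single_pair(sigma_S, sigma_N, f_S, f_N):
    """Retain only the southernmost southern and northernmost northern boundary
    at each depth level, as used for the simplified estimate T_ACC_SP."""
    sigma_S_sp = [s[:1] for s in sigma_S]    # southernmost boundary only
    f_S_sp     = [f[:1] for f in f_S]
    sigma_N_sp = [s[-1:] for s in sigma_N]   # northernmost boundary only
    f_N_sp     = [f[-1:] for f in f_N]
    return sigma_S_sp, sigma_N_sp, f_S_sp, f_N_sp
\end{verbatim}
The reduced arrays can then be passed to the boundary transport sketch given earlier in the chapter.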
The improvement found for the $1/12^\circ$ model at $66.5^\circ$W using the single boundary pair decomposition is not replicated for other meridional sections (e.g. between Antarctica and South Africa, New Zealand, Malvinas Islands) exhibiting large-scale intermediate bathymetry effects (such as islands, large ridges and trenches). This highlights the strong influence of bathymetry upon the diagnostics. In general we find the use of multiple southern and northern boundaries to give the best representation of $\tilde{T}_{ACC}$ across resolutions for varying meridional sections, discussed further later in the chapter.
\subsection{Cross-sectional properties at the Drake Passage} \label{Sct_Inter_CS}
We investigate the characteristics of underlying physical quantities, available in HadGEM-GC3.1 model output, which may be related to the apparent weakening of transport at the Drake Passage. Depth-latitude cross-sectional comparisons at $66.5^\circ$W are made for potential temperature $\theta$, salinity $\mathcal{S}$, and neutral densities $\rho_{neut}$ for each of the three model resolutions. Our focus lies on the initial 30-year spin-up phase of the model integrations, where significant changes in transport are found across all resolutions.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/plt_RC_LRT30.png}}
\caption[Cross-sections of trend in potential temperature for the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolutions.]{Cross-sections of trend in potential temperature $\theta$ for the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ spatial resolutions. Red indicates increasing potential temperature.}
\label{F_T_Trnd}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/plt_RC_LRSal30.png}}
\caption[Cross-sections of the trend in salinity during the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions.]{Cross-sections of the trend in salinity $\mathcal{S}$ during the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model resolutions. Blue indicates decreasing salinity.}
\label{F_S_Trnd}
\end{figure*}
Figures \ref{F_T_Trnd} and \ref{F_S_Trnd} show the initial trends found for $\theta$ and $\mathcal{S}$ within the spin-up phase. The trend was estimated using linear regression over the first 30 years of output, independently at each depth-latitude pair available. Figure \ref{F_T_Trnd} suggests that temperatures at each model resolution generally exhibit a warming trend within the interior of the section, and a slight cooling near the Antarctic shelf. The $1/4^\circ$ model in Panel (b) shows greater warming at depth and significant cooling in the upper 500m near $63.5^\circ$S. Similar cooling characteristics are found in the $1/12^\circ$ model. The $\theta$ trends in the northern part of the section show a significant warming in both higher-resolution models, indicating a lightening of waters near the northern boundary within the spin-up phase.
At high latitudes, density variations are driven mainly by changes in salinity rather than temperature. We find a general freshening or reduction in salinity in Figure \ref{F_S_Trnd} throughout a large portion of the interior, for all 3 model resolutions considered. This general spatial characteristic is enhanced for the $1/4^\circ$ model in Panel (b). The large decrease in salinity observed near the southern boundary would indicate a similar large change in density, possibly due to freshwater input from melting sea-ice or a freshening of the along-coast boundary current. The trends found in $\theta$ and $\mathcal{S}$ (Figures \ref{F_T_Trnd} and \ref{F_S_Trnd}) suggest a general lightening of densities across the section, with greater density reduction along the southern boundary in the $1/4^\circ$ model output as shown in Figure \ref{F_ND_Trnd}.
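The trends shown in Figures \ref{F_T_Trnd}-\ref{F_ND_Trnd} are obtained from an independent least-squares fit at each depth-latitude point. A brief sketch of this calculation, assuming annual-mean fields with land points already masked or filled, is:
\begin{verbatim}
import numpy as np

def spinup_trend(field, years):
    """Linear trend (per year) at each depth-latitude point over the spin-up phase.
    field : (T, M, J) annual-mean field (e.g. theta, S or rho_neut) over T years
    years : (T,) year index, e.g. np.arange(30)
    """
    T, M, J = field.shape
    X = np.vstack([years, np.ones(T)]).T                      # model y = a*t + b
    coeffs, *_ = np.linalg.lstsq(X, field.reshape(T, M * J), rcond=None)
    return coeffs[0].reshape(M, J)                            # slope a per grid point
\end{verbatim}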
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/LR30_ND_66_5_South_America.png}}
\caption[Cross-sections of trend in neutral densities during the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions.]{Cross-sections of trend in neutral densities $\rho_{neut}$ during the first 30 years (spin-up phase) of model run, estimated using linear regression. Section taken at $66.5^\circ$W at the Drake Passage for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model resolutions. Blue indicates decreasing neutral density.}
\label{F_ND_Trnd}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Yr30_ND_66_5_South_America.png}}
\caption[Cross-sections of neutral densities $\rho_{neut}$ for the 30$^{\text{th}}$ year of model spin-up. Section taken at $66.5^\circ$W at the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions.]{Cross-sections of neutral densities $\rho_{neut}$ for the 30$^{\text{th}}$ year of model spin-up. Section taken at $66.5^\circ$W longitude at the Drake Passage for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model resolutions.}
\label{F_ND_Avg}
\end{figure*}
The different trends in $\theta$, $\mathcal{S}$ and $\rho_{neut}$ in each model simulation result in material differences in cross-sectional $\rho_{neut}$ after the first $30$ years. These differences are most prominent near the southern boundary. Panel (a) of Figure \ref{F_ND_Avg} shows $\rho_{neut}$ in the 30$^{\text{th}}$ year of model spin-up for the $1^\circ$ model, in reasonable agreement with the expected isopycnal structure inferred from current climatological, reanalysis and observational data shown in Figure \ref{F_ND_ObsAvg}. Specifically, isopycnals rise from the northern towards the southern boundary, with many outcropping before the southern boundary. $\rho_{neut}$ for the spin-up phase of the $1/12^\circ$ model shown in Panel (c) exhibits similar spatial features to the $1^\circ$ model output, except for a few oscillations in isopycnals near the Antarctic coast, also present in the GloSea5 reanalysis dataset (Figure \ref{F_ND_ObsAvg}(b)). These features could signify re-circulations within the ocean interior, for example via standing eddies.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Avg_ND_GK_GS_Obs.png}}
\caption[Cross-sections of average neutral densities $\rho_{neut}$ for the \cite{GouretskiViktorandKoltermann2004} climatology, GloSea5 reanalysis and SR1b section shipboard observations. Section taken at $66.5^\circ$W at the Drake Passage for GK and GloSea5, observations from SR1b section located close to $61^\circ$W.]{Cross-sections of average neutral densities $\rho_{neut}$ for the (a) \cite{GouretskiViktorandKoltermann2004} climatology, (b) GloSea5 reanalysis and (c) SR1b section shipboard observations. Section taken at $66.5^\circ$W at the Drake Passage for (a) GK and (b) GloSea5. Panel (c) shows observations from the SR1b section located close to $61^\circ$W; note the change in latitudinal extent of the section.}
\label{F_ND_ObsAvg}
\end{figure*}
From Figure \ref{F_Tru_Trsp}, we know that significant weakening of ACC transport is observed for the $1/4^\circ$ model simulation. The corresponding $\rho_{neut}$ after the spin-up phase (shown in Figure \ref{F_ND_Avg}(b)), shows slumping of isopycnals near the southern boundary, as opposed to outcropping as expected. We attribute this to the previously observed freshening on the Antarctic shelf. Near the Antarctic coastline, the local north-south isopycnal gradient, in conjunction with geostrophy, implies the presence of a local return flow where isopycnals slump, in contrast to the main eastward ACC flow. A local westward flow along the coastline through the Drake Passage weakens the net transport. There is also evidence of similar slumping of southern boundary isopycnals in the $1/12^\circ$ model, particularly between 800m and 1200m.
\subsection{Decomposition of ACC transport into boundary and $\beta$ components} \label{Sct_Inter_Dcm}
The decomposition estimate $\tilde{T}_{ACC}$, shown as dashed lines in Figure \ref{F_SB_Trsp}, is the sum of six components (Equation \ref{e_tilde_T_ACC_math}). Three components correspond to northern, southern and $\beta$ density contributions. The depth-independent component corresponds to zonal bottom velocities, and the Ekman component to meridional wind stresses. The final component arises from the transport contributions of additional cells. Figure \ref{F_Dcmp_All_Qtd} illustrates the evolution of each of these components in time, for the transport accumulated up to the surface from the $1/4^\circ$ model. For reference, timeseries of ${T}_{ACC}$ and $\tilde{T}_{ACC}$ are also given.
We observe that $T_{\beta}$, $T_{Ekm}$ and $T_{AC}$ are small throughout the period. The biggest contribution from any of these components, of approximately $-5$Sv, is due to $T_{\beta}$ within the first $50$ years. $T_{\beta}$ subsequently decreases in magnitude and stabilises near $-2$Sv. This pattern of initial increase and subsequent decrease and stabilisation of $T_{\beta}$ is also seen in $1^\circ$ and $1/12^\circ$ model outputs, which stabilise at $-5.5$Sv and $-4$Sv respectively. $T_{Ekm}$ is found to oscillate between $\pm 0.4$Sv throughout the period for all model resolutions (not shown). $T_{AC}$ lies at approximately $2$Sv for both higher-resolution models; for the 1$^\circ$ model, however, its value is near $10$Sv. We attribute the larger value of $T_{AC}$ at 1$^\circ$ to larger model grid cell size and hence greater relative importance of additional cells in the transport calculation. The results for $T_{\beta}$ and $T_{Ekm}$ are similar to those found by \cite{Allison2009}; however, our $T_{AC}$ contribution is smaller than the residual term they find.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Pres_QTD_AllCmp_66_5W.png}}
\caption[Timeseries of expected $T_{ACC}$, estimated $\tilde{T}_{ACC}$ and contributing components through the Drake Passage at $66.5^\circ$W for the $1/4^\circ$ model.]{Timeseries of expected $T_{ACC}$ (black), estimated $\tilde{T}_{ACC}$ (red dashed) and contributing component cumulative volume transports at the surface through the Drake Passage at $66.5^\circ$W for the $1/4^\circ$ model. Contributing components are (a) northern boundary density ($T_N$, green), (b) southern boundary density ($T_S$, dark blue), (c) bottom (depth-independent $T_{bot}$, light blue), (d) Ekman ($T_{Ekm}$, yellow), (e) beta ($T_{\beta}$, orange) and (f) additional cell ($T_{AC}$, purple). Shaded red and blue regions indicate estimates from \cite{Meredith2011} and \cite{Donohue2016} respectively.}
\label{F_Dcmp_All_Qtd}
\end{figure*}
The major contributors to $\tilde{T}_{ACC}$ are $T_S$, $T_N$ and $T_{bot}$. Timeseries of $T_S$ and $T_N$ for each model resolution are shown in Figure \ref{F_Dcmp_NS}. $T_S$ and $T_N$ generally show high negative correlation for each model resolution. This negative dependence is similar to that observed for western and eastern boundary contributions in the Atlantic overturning streamfunction. That is, for the ACC, $T_S$ and $T_N$ tend to compensate each other, except for the early years.
For the $1/4^\circ$ model, the magnitude of $T_S$ increases more quickly than that of $T_N$ within the first 40 years. We have observed from Figures \ref{F_T_Trnd}-\ref{F_ND_Trnd} a large freshening of the southern boundary for the $1/4^\circ$ model; this freshening appears to increase the magnitude of the southern boundary transport contribution by $70$Sv. This increase in southern boundary contribution is partially compensated by an increase in northern contribution; however, southern boundary freshening reduces ACC transport by approximately 60Sv in the first 40 years for the $1/4^\circ$ model. A similar but weaker trend is observed within the $1/12^\circ$ model. In contrast, timeseries of $T_S$ and $T_N$ in the $1^\circ$ model remain relatively stable, with some initial strengthening of both components within the first 50 years. This coincides with slight weakening in $\tilde{T}_{ACC}$ at this time (see Figure \ref{F_SB_Trsp}). Differences in magnitudes of $T_S$ and $T_N$ for the $1/12^\circ$ resolution are attributed to the greater number of boundary pairs found at that resolution. The contribution made by $T_S$ is considerably larger within HadGEM-GC3.1 models than found by \cite{Allison2009}; in their study, $T_S$ has a magnitude of 5Sv only. Of course, we note that the magnitudes of $T_N$ and $T_S$ are arbitrary and dependent on the choice of reference density used.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Pres_ResC_NC_66_5W.png}}
\caption[Timeseries of northern and southern boundary density contributions to the estimated volume transport $\tilde{T}_{ACC}$ through the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolutions. Section taken at $66.5^\circ$W.]{Timeseries of northern ($T_N$, solid) and southern ($T_S$, dashed) boundary density contributions to the estimated volume transport $\tilde{T}_{ACC}$ integrated up to the surface through the Drake Passage for the $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) spatial resolutions. Section taken at $66.5^\circ$W.}
\label{F_Dcmp_NS}
\end{figure*}
The initial strengthening of $T_S$ for the $1/4^\circ$ model reverses after the first 40 years. That is, for a period of approximately 150 years following the initial phase, $T_S$ weakens and returns eventually to a value similar to its initial value. During the same 150 year period, $T_N$ exhibits a slow weakening, compensating for the change in $T_S$. Hence, the overall estimated cumulative transport $\tilde{T}_{ACC}$ does not exhibit a strong trend during this 150 year period; for the same period, no strong trend is seen in the expected transport $T_{ACC}$ either. A similar pattern of behaviour is observed for the $1/12^\circ$ model, but with greater interannual temporal variability.
The final large contributor to $\tilde{T}_{ACC}$ is the bottom or depth-independent flow component $T_{bot}$, shown by \cite{Donohue2016} to contribute over a quarter of the observed ACC transport through the Drake Passage. Figure \ref{F_Dcmp_bot} shows the \cite{Donohue2016} estimate as a blue shaded region, alongside the decomposition estimates for the $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) models. Only the $1^\circ$ model provides estimates for $T_{bot}$ within the range of values suggested by \cite{Donohue2016}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/Pres_Bot_ResC_66_5W.png}}
\caption[Timeseries of the depth-independent flow contribution $T_{bot}$ to the estimated cumulative volume transport $\tilde{T}_{ACC}$ through the Drake Passage for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models. Section taken at $66.5^\circ$W.]{Timeseries of the depth-independent flow contribution $T_{bot}$ to the estimated cumulative volume transport $\tilde{T}_{ACC}$ through the Drake Passage for the $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) models. Section taken at $66.5^\circ$W. Blue shaded region indicates the range of observed values found by \cite{Donohue2016} using current meters.}
\label{F_Dcmp_bot}
\end{figure*}
Higher resolution models exhibit a starting value for $T_{bot}$ which is already some 10Sv smaller than the range of values suggested by \cite{Donohue2016}; the value of $T_{bot}$ subsequently weakens further. The end value of $T_{bot}$ for the $1/4^\circ$ simulation is around $35$Sv weaker than that suggested by \cite{Donohue2016}. The final value for $T_{bot}$ from the $1/12^\circ$ model is also found to be around $25$Sv weaker than expected, accounting for almost half the difference between the literature value and the expected $T_{ACC}$ from that model.
A possible future investigation to help explain discrepancies in estimates for $T_{bot}$ would compare bottom currents measured by \cite{Donohue2016} with those generated by GCMs, confirming the extent to which GCMs are able to resolve bottom flows within the Drake Passage at different model resolutions. In particular, this may reveal why the coarser-resolution model apparently outperforms both higher-resolution models here.
\subsection{The Weddell gyre} \label{Sct_Inter_Wdd}
Sections \ref{Sct_Inter_TT}-\ref{Sct_Inter_Dcm} suggest a significant change in physical properties along the southern boundary of the Drake Passage within the $1/4^\circ$ model over the spin-up phase, leading to a weakening ACC transport. The freshening of the Antarctic shelf leads to a stronger return flow along the shelf. Here we consider whether these changes might be associated with changes in the characteristics of the Weddell Gyre, formed by wind stresses and Coriolis effects in the Weddell Sea.
\subsubsection*{Sea surface height in Weddell Sea and the Drake Passage}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/jtplt_RC_Yr30_DP.png}}
\caption[Weddell Sea surface height above the geoid for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolution models for the $30^{\text{th}}$-year of spin-up.]{Weddell Sea surface height above the geoid for (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ resolution models for the $30^{\text{th}}$-year of spin-up. Region displayed is from $80^\circ$W to $30^\circ$E and $85^\circ$S to $40^\circ$S.}
\label{F_SSH_sp}
\end{figure*}
Figure \ref{F_SSH_sp} shows the annual-mean sea surface height (SSH) from the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ resolution models for the 30$^{\text{th}}$ year of model spin-up. We observe large differences in SSH between models at different resolutions near the Antarctic shelf region. We note that SSH is generally negative with respect to the geoid. The $1^\circ$ model shows good agreement with the SSH structure observed using satellite altimetry data ($Rio$ MDT dataset from AVISO, not shown, \textit{personal correspondence with Pat Hyder}). The smaller (less negative) SSH observed in the $1/4^\circ$ (and to a lesser extent $1/12^\circ$) model suggests a fresher and less dense shelf region. Panel (b) shows a tongue-like structure protruding westward into the Drake Passage, suggesting possible changes in the structure of the Weddell gyre in the region. In fact, the smaller SSH observed near the Antarctic coastline in the $1/4^\circ$ model extends all the way around the Antarctic land mass.
There is a mainly positive trend in SSH over the first 30 years, shown in Figure \ref{F_SSH_spt} for all model resolutions. The $1/4^\circ$ model exhibits the strongest trend, especially along the Antarctic shelf, suggesting significant changes to the properties of the shelf current.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/plt_RC_Trnd30_DP.png}}
\caption[Trend in Weddell Sea surface height above the geoid for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolution models for the initial $30$-year spin-up phase.]{Trend in sea surface height above the geoid for (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ resolution models for the initial $30$-year spin-up phase. Region displayed is from $80^\circ$W to $30^\circ$E and $85^\circ$S to $40^\circ$S.}
\label{F_SSH_spt}
\end{figure*}
\subsubsection*{Characteristics of the Weddell Gyre}
Two possible mechanisms for a freshening Antarctic shelf in the $1/4^\circ$ model are (a) excessive sea-ice melt around Antarctica, and (b) leakage of the Weddell gyre. \cite{Meijers2016} discuss the strong correlation between the curl of the wind stress integrated over the Weddell Gyre and the water properties of the westward flowing shelf current near Elephant Island. Using observations from hydrographic cruise CTDs and bottom landers, they hypothesise that the westward flowing shelf current is an extension of the Antarctic slope current, with flow across the South Scotia Ridge between the Weddell and Scotia Seas being controlled by changes in the strength of the Weddell Gyre.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/BrtStrmSO_Yr30_RC.png}}
\caption[Barotropic streamfunction $\psi_u(x,y)$ around Antarctica for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolution models for the $30^{\text{th}}$-year of model spin-up.]{Barotropic streamfunction $\psi_u(x,y)$ around Antarctica for (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ resolution models for the $30^{\text{th}}$-year of model spin-up. Only positive values of streamfunction are shown, to emphasise the location and strength of the Weddell and Ross Sea gyres (in the upper and lower halves of each panel, respectively).}
\label{F_BrtS_TM}
\end{figure*}
Figure \ref{F_BrtS_TM} shows the barotropic streamfunction $\psi_u$ for the 30$^{\text{th}}$ year at each model resolution, defined by
\begin{equation}
\psi_u(x,y) = \int_{S(x)}^{y}\int_{-H(x,y')}^{0}u_E(x,y',z) \ \mathrm{d} z \ \mathrm{d} y'
\label{e_psi_u}
\end{equation}
for zonal velocity $u_E$, southern boundary $S$ and sea floor depth $H$. That is, $\psi_u$ is the cumulative sum from the southern boundary at a given $x$ to a meridional coordinate $y$ of interest, of the vertical sum of zonal velocities from the ocean floor to the surface. We choose to take the integral from the southern boundary northward at a given $x$, since the starting values at the southernmost location for the longitudes of interest correspond to land, and hence $\psi_u(x,S(x))=0$. Figure \ref{F_BrtS_TM} shows stronger Weddell and Ross Sea gyres for the $1/4^\circ$ model. Here, the Weddell Gyre is found to protrude into the Drake Passage, leading to a weaker overall eastward transport through the passage.
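A minimal sketch of Equation \ref{e_psi_u} for a single longitude, with illustrative array names, is:
\begin{verbatim}
import numpy as np

def barotropic_streamfunction(u_E, dy, dz, wet):
    """Barotropic streamfunction psi_u(x, y) at one longitude, Equation (e_psi_u).
    u_E : (M, J) zonal velocity; wet : (M, J) True for ocean cells
    dy  : (J,) cell widths ordered from the southern boundary S(x) northwards
    """
    depth_integral = np.where(wet, u_E * dz[:, None], 0.0).sum(axis=0)
    return np.cumsum(depth_integral * dy)    # psi_u vanishes at the southern boundary
\end{verbatim}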
\subsection*{Ocean-only GCMs and $1/4^\circ$ sensitivity experiments} \label{Sct_Inter_Oth}
In Appendix \ref{Sct_Inter_Oth_A} we summarise preliminary investigations into the impact of model parameterisation, and atmospheric coupling, on the ACC transport at the Drake Passage. We note that further investigations into ACC transport have recently been conducted by the UK MetOffice, showing clear ACC transport sensitivity to characteristics of the Southern Ocean bathymetry.
\subsection{Decomposition analysis for other ACC longitude sections} \label{Sct_LngSct}
In addition to the decomposition of transport through the Drake Passage reported in Section \ref{Sct_Inter_Dcm}, we also estimate total cumulative transport $T_{ACC}$ at the surface and its decomposition estimate $\tilde{T}_{ACC}$ for six other longitude sections, as illustrated in Figure \ref{F_LngSct}. The sections correspond to (a) South Africa - Antarctica, (b) Madagascar - Antarctica, (c) western Australia - Antarctica, (d) Tasmania - Antarctica, (e) New Zealand South Island - Antarctica and (f) New Zealand North Island - Antarctica.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=14cm]{Fig_ACC/LongSections.png}}
\caption[Locations of seven meridional sections, at which total cumulative ACC transports up to the surface $T_{ACC}$ and $\tilde{T}_{ACC}$ are calculated. ]{Locations of seven meridional sections, at which total cumulative ACC transports $T_{ACC}$ and $\tilde{T}_{ACC}$ are calculated. Meridional sections are from Antarctica to South America (green), South Africa (black), Madagascar (red), western Australia (pink), Tasmania (orange), New Zealand South Island (brown) and New Zealand North Island (blue).}
\label{F_LngSct}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/TruEst_LngSct.png}}
\caption[Timeseries of expected $T_{ACC}$ and estimated $\tilde{T}_{ACC}$ cumulative volume transports up to the surface through each of 6 meridional sections for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models.]{Timeseries of expected $T_{ACC}$ (solid) and estimated $\tilde{T}_{ACC}$ (dashed) cumulative volume transports up to the surface through each of 6 meridional sections for the $1^\circ$ (orange), $1/4^\circ$ (blue) and $1/12^\circ$ (green) models. Meridional sections shown are (a) South Africa - Antarctica, (b) Madagascar - Antarctica, (c) western Australia - Antarctica, (d) Tasmania - Antarctica, (e) New Zealand South Island - Antarctica and (f) New Zealand North Island - Antarctica.}
\label{F_TE_LngSct}
\end{figure*}
Figure \ref{F_TE_LngSct} gives timeseries of $T_{ACC}$ and $\tilde{T}_{ACC}$ through each section for all model resolutions. The figure suggests that many of the findings of the decomposition analysis at the Drake Passage are common to all meridional sections considered. In particular, the values of transports $T_{ACC}$ and $\tilde{T}_{ACC}$ from GCM outputs at different model resolutions are ordered: values from the $1^\circ$ model tend to be larger than those from the $1/12^\circ$ model, which themselves are larger than those from the $1/4^\circ$ model. Moreover, the values of $T_{ACC}$ for the Madagascar - Antarctica section are typically the largest at each model resolution; we attribute this to the wider section at this longitude and the incorporation of flows which are not part of the ACC at latitudes near Madagascar (e.g. Agulhas Return Flow); similarly, values of $T_{ACC}$ and $\tilde{T}_{ACC}$ at the South America - Antarctica (Drake Passage) section are relatively small. We note the weaker transport observed for the South Africa - Antarctica section (Panel (a), across model resolutions) is due to the westward-flowing Agulhas current at the northern edge of this meridional section. We find varying levels of agreement between $T_{ACC}$ and $\tilde{T}_{ACC}$ across model resolution and section; generally, the $1/4^\circ$ model performs best in this respect. For the $1/12^\circ$ model, $\tilde{T}_{ACC}$ is always larger than $T_{ACC}$, with best agreement found for the South Africa and Madagascar sections. The Madagascar section also yields best agreement between $T_{ACC}$ and $\tilde{T}_{ACC}$ for the $1^\circ$ and notably the $1/4^\circ$ model. Together, these results appear to emphasise the important role of local bathymetric characteristics on the quality of estimates of ACC transport decomposition.
Results of the transport decomposition diagnostic applied to these additional meridional sections highlight the importance of $T_{\beta}$ for wider sections (not shown). The increased magnitude of $T_{\beta}$ for wider sections is attributed to greater density variations across those sections, and the incorporation of density contributions (otherwise accounted for by northern and southern boundary densities for narrower sections), also resulting in smaller $T_S$ and $T_N$. Interestingly, the magnitude of $T_{bot}$ from the decomposition analysis is approximately three times larger for the Drake Passage and Madagascar - Antarctica sections than for any of the other sections. This is likely to be due to strong bottom currents in these meridional sections, and is worthy of further investigation.
\section{Mapping along-boundary densities for the Antarctic}
\label{ACC_BdryD}
The investigations of ACC transport through the Drake Passage, and meridional cross-sections of potential temperature and salinity there, indicate significant changes in southern boundary properties. Here, we apply the boundary mapping algorithm developed previously for the Atlantic and neighbouring continuous boundaries to the Antarctic continent. The objective is to examine along-boundary properties within the control $1^\circ$ and $1/4^\circ$ HadGEM-GC3.1 models, GloSea5 reanalysis (\citealt{MacLachlan2015}) and GK climatology (\citealt{GouretskiViktorandKoltermann2004}).
\subsection{Along-boundary mapping}
The mapping algorithm is applied as described in Sections \ref{CntMpBnd.1}-\ref{CntMpBnd.2}, with minor modifications. For the Antarctic boundary, we use a reference contour at depth $k=46$ (947m), and an initialisation point offshore of Antarctica near $80^\circ$E. Figure \ref{F_MapAnt_1pth} shows the reference contour for the $1^\circ$ model together with the common along-boundary distance scale used to reference boundary contours at all depth levels. Reference nodes (red circles) are located every 400km, for the clockwise pathway.
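The common along-boundary distance scale is obtained by accumulating great-circle distances between successive contour points, with reference nodes then placed at fixed 400km intervals. A hedged sketch using the haversine formula, with illustrative variable names, is:
\begin{verbatim}
import numpy as np

def along_boundary_distance(lon, lat, R=6.371e6):
    """Cumulative distance (m) along an ordered boundary contour (degrees)."""
    lam, phi = np.radians(lon), np.radians(lat)
    dlam, dphi = np.diff(lam), np.diff(phi)
    a = np.sin(dphi / 2)**2 + np.cos(phi[:-1]) * np.cos(phi[1:]) * np.sin(dlam / 2)**2
    step = 2.0 * R * np.arcsin(np.sqrt(a))
    return np.concatenate(([0.0], np.cumsum(step)))

# distance = along_boundary_distance(lon_ref, lat_ref)
# node_idx = np.searchsorted(distance, np.arange(0.0, distance[-1], 400e3))
\end{verbatim}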
\begin{figure*}[ht!]
\centerline{\includegraphics[width=14cm]{Fig_ACC/BdryRef_GC3_1C_ap466_kRef_46_Ant_int.png}}
\caption[Reference contour at depth level $k=46$ (947m) for the Antarctic coastline with nodes, distance markers and cumulative distance.]{Reference contour (black dots) at depth level $k=46$ (947m) for the Antarctic coastline with nodes (red circles), distance markers (pink stars, every $4 \times 10^3$km) and cumulative distance (blue numbers, $\times 10^3$km). Contours at all other depths are referenced to this contour.}
\label{F_MapAnt_1pth}
\end{figure*}
\subsection{Boundary neutral densities for the $1^\circ$-$LM$ model}
Section \ref{Sct_Inter_TT} shows that the best estimate for ACC transport is provided by the HadGEM-GC3.1 $1^\circ$ model with medium atmospheric resolution ($LM$). Section \ref{Sct_Inter_CS} demonstrates good agreement of cross-sectional properties at the Drake Passage between the $1^\circ$-$LM$ model and climatological, observational and reanalysis datasets. Therefore, we begin by characterising the time-mean boundary density structure around Antarctica in the $1^\circ$-$LM$ model. To reduce potential issues with model drift, only the first 100 years of output from each GCM are considered; GloSea5 time-averaging is performed over all 23 years of data available.
Figure \ref{F_BD_1LM} shows neutral densities on the sloping boundary around Antarctica for the $1^\circ$-$LM$ model, for (a) the upper 400m and (b) all depths, with an adjusted colour bar to highlight isopycnal structure at depth.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/BdryDensGrad_neutD_GC3_1C_ap466_Ant_int.png}}
\caption[Along-boundary time-average neutral densities for the first 100 years of the $1^\circ$-$LM$ model for the upper 400m and for all depths.]{Along-boundary time-average neutral densities for the first 100 years of the $1^\circ$-$LM$ model for (a) the upper 400m and (b) all depths. Dashed white lines indicate locations of longitudes $90^\circ$E, $180^\circ$, $90^\circ$W, $0^\circ$ and that of Elephant Island.}
\label{F_BD_1LM}
\end{figure*}
The upper few hundred metres are dominated by local wind-stress variability, and therefore along-boundary continuity in isopycnals is neither found nor expected. However, at depth, we might expect some continuity in density structure, since the ACC is reasonably longitudinally coherent, and any major changes in Antarctic coastline densities would require compensatory effects elsewhere to maintain a constant ACC.
Following the boundary clockwise around Antarctica, from $80^\circ$E we find that isopycnal depths vary with along-boundary distance throughout the fluid column for the first $4.5 \times 10^3$km, with a small upward slope. At approximately $5 \times 10^3$km (near longitude $180^\circ$), on entering the Ross Sea (where deep water formation is known to occur), we observe a large change in isopycnal depths. Near the surface, a region of significantly denser water is found. Below this, there is a sudden deepening of along-boundary isopycnals. Connecting to this feature, at along-boundary distances of $4.5-5.0 \times 10^3$km and a depth of approximately 300m, we find a horizontal tongue of deep water which extends along the boundary.
After the Ross Sea region of dense surface water, we enter the Amundsen and Bellinghausen Seas at distances $6-10\times 10^3$km and find well-stratified boundary densities. Flat isopycnals are observed at depth, along with a gentle upward slope in isopycnals for depths 100-400m.
At approximately $10\times 10^3$km we pass through the Drake Passage, where there is a shallowing of all isopycnals. Passing Elephant Island, the southernmost point of the SR1b hydrographic section, a steep gradient in isopycnals is found for depths between 500m and 2500m, which continues into the Weddell Sea (at $11-13 \times 10^3$km). Like the Ross Sea, this sheltered location is a key region of deep water formation, and is a prominent source of AABW. At $12\times 10^3$km, dense water dominates the entire fluid column, a clear indicator not only of dense water formation near the surface but also of convective sinking of dense waters along the shelf and continental slope. Following the Weddell Sea, we find that isopycnals below 500m slope gently upwards along East Antarctica.
Findings from Figure \ref{F_BD_1LM} agree with the \cite{Thompson2018} characterisation of regions of dense water formation in both the Weddell and Ross Seas. However, \cite{Thompson2018} discuss a third region of dense water formation near the Adelie coast (at $17.6 \times 10^3$km) which is not immediately visible within Figure \ref{F_BD_1LM} for the $1^\circ$-$LM$ model; on close inspection, however, a small region of denser water can be found in Panel (b) at around 250m. Section \ref{GS5_S} below considers the Adelie coast region further, and the impact of seasonality on deep water formation.
\subsection{Boundary neutral densities in other datasets}
Figure \ref{F_BD_Ant_Rest} presents the time-average along-boundary neutral densities for the (a) $1^\circ$-$LL$ model, (b) $1/4^\circ$ model, (c) GloSea5 reanalysis and (d) GK climatology. Figure \ref{F_SB_Trsp} shows weaker ACC transport than expected for the $1/4^\circ$ model, suggesting model inaccuracies in representing Southern Ocean dynamics.
Using a common colour scale for both Figures \ref{F_BD_1LM} and \ref{F_BD_Ant_Rest}(b), we find a stark difference in densities on the boundary for the corresponding models. The 27.8kgm$^{-3}$ (dark pink) density contour penetrates down to 1500m for the $1/4^\circ$ model, whereas in the $1^\circ$ model its maximum depth is only about 500m. Lighter densities throughout Figure \ref{F_BD_Ant_Rest}(b) indicate clearly warmer and fresher Southern Ocean and Antarctic coastline bottom waters in the $1/4^\circ$ model. Boundary potential temperature and salinity properties (not shown) indicate a generally fresh upper 1500m, leading to colder surface waters. However, at depth, significantly warmer waters are found in the $1/4^\circ$ model.
The $1/4^\circ$ model does show small regions of dense surface waters in the Weddell and Ross Seas. No connectivity to deeper water is found, suggesting no pathway for dense waters sinking down the shelf and continental slope. Isopycnal structure below 1500m is relatively stable, except for regions prior to the Ross Sea (Figure \ref{F_BD_Ant_Rest}(b), at $5\times 10^3$km) and after the Weddell Sea (at $14\times 10^3$km). These features suggest the presence of regions of warming or freshening at depth, supported by significant unexplained warming (not shown) found along East Antarctica (at $13.5$-$16 \times 10^3$km) at 2000m depth.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{Fig_ACC/BdryDensGrad_neutD_GC3_LL_ar766_Ant_int_sA.png}\\
\includegraphics[width=\textwidth]{Fig_ACC/BdryDensGrad_neutD_GC3_025C_aj368_Ant_int_sA.png}\\
\includegraphics[width=\textwidth]{Fig_ACC/BdryDensGrad_neutD_GloSea5_Ant_int_sA.png}\\
\includegraphics[width=\textwidth]{Fig_ACC/BdryDensGrad_neutD_GKclim_Ant_int_sA.png}
\caption[Neutral densities along the sloping Antarctic coastline for the (a) $1^\circ$-$LL$ model, (b) $1/4^\circ$ model, (c) GloSea5 reanalysis and (d) GK climatology, for all depths.]{Neutral densities along the sloping Antarctic boundary for the (a) $1^\circ$-$LL$ model, (b) $1/4^\circ$ model, (c) GloSea5 reanalysis and (d) GK climatology, for all depths. Model outputs are averaged over the first 100 years, and GloSea5 over 23 years of run. Dashed white lines indicate locations of longitudes $90^\circ$E, $180^\circ$, $90^\circ$W, $0^\circ$ and that of Elephant Island.}
\label{F_BD_Ant_Rest}
\end{figure*}
Figure \ref{F_BD_Ant_Rest}(c) shows time-average along-boundary neutral densities for the GloSea5 reanalysis dataset, for the period $1995$-$2018$. The spatial structure of boundary densities is very similar to that exhibited by the $1^\circ$-$LM$ model in Figure \ref{F_BD_1LM}. Weddell and Ross Sea deep water formation regions are clearly defined, despite a slightly different bathymetry, which leads to minor differences in contour location and to some surface data being rejected. Within GloSea5, there is more evidence of deep water formation near the Adelie coast (at $17.6 \times 10^3$km). Relatively flat or slightly tilted isopycnals are found along the western Antarctic Peninsula, between the notable regions of deep water formation in the Ross and Weddell Seas. The slopes of isopycnals along eastern Antarctica (at $12.5-17.5 \times 10^3$km) for GloSea5 are somewhat steeper than those for the $1^\circ$-$LM$ model.
Comparison of the coarser atmospheric resolution $1^\circ$-$LL$ GCM (Figure \ref{F_BD_Ant_Rest}(a), discussed in Section \ref{Bdry_Mod}) and the \cite{GouretskiViktorandKoltermann2004} climatology data (Figure \ref{F_BD_Ant_Rest}(d)), again reveals differing boundary density structures.
For the GK dataset, much denser waters at depth are present, with three clear pockets of dense water near the surface in the Ross Sea (at $3.5$, $5.2\times 10^3$km) and the Weddell Sea (at $12.1\times 10^3$km). Prior to the Ross Sea, isopycnal depths vary significantly with distance, with less clear connectivity between surface and deep dense water in comparison to the Weddell Sea. Between the Ross and Weddell Seas, isopycnals slope downward (in contrast to all other datasets considered), and post Weddell Sea we observe relatively flat isopycnals. Near the Adelie coast, a small region of dense water near 500m is found. In terms of boundary density magnitudes, the GK dataset is closest to the GloSea5 and the $1^\circ$-$LM$ model; however its detailed isopycnal structure differs from all datasets considered. We note that observational data for the Antarctic coastline is sparse. Since the GK methodology uses interpolation in the absence of data, there is some doubt as to the veracity of the detailed density structure estimated.
Density structure for the $1^\circ$-$LL$ model (Figure \ref{F_BD_Ant_Rest}(a)) exhibits localised regions of deep water formation, with no connectivity along the boundary from dense surface water to the deep ocean. $1^\circ$-$LL$ model waters are generally lighter than those of the GloSea5, $1^\circ$-$LM$ and GK datasets, with relatively flat isopycnals below 700m, except for some variability prior to the Ross and Weddell Seas. The general density structure is similar to that of the $1/4^\circ$ model in Figure \ref{F_BD_Ant_Rest}(b), except that waters are somewhat denser, and that dense surface regions are clear but isolated. There is no evidence for dense surface water near the Adelie coast. The lack of connectivity between dense surface and deep water along the boundary in the $1^\circ$-$LL$ and $1/4^\circ$ models would suggest mixing through the interior via open ocean convection, instead of down-slope flow of dense waters.
When comparing first and last years of model runs (not shown), densities become generally lighter for both the $1/4^\circ$ and $1^\circ$-$LL$ models, suggesting model drift. With fresher surface waters and warmer water at intermediate depths, we find isolated regions of dense surface water near the Weddell and Ross Seas for both models; again, no connectivity between the surface and depth is found at these locations.
The large differences in density structure found between model, climatology and reanalysis datasets emphasise the difficulty in representing Southern Ocean and Antarctic coastline properties, and the importance of direct observation of these remote and hostile waters.
\subsection{Mechanisms underlying observed isopycnal structure}
We hypothesise that the isopycnal structure on the Antarctic boundary for the $1^\circ$-$LM$ model and GloSea5 reanalysis dataset is influenced by the presence of an Antarctic Slope Current (ASC). The ASC is a near-circumpolar current flowing westward along Antarctica's shelf and slope regions, disrupted only by the western Antarctic Peninsula between the Pacific and Atlantic Oceans. There is uncertainty regarding the formation location of the ASC. The ASC behaves like a shelf break current, typically formed by wind stresses aligned parallel to the coastline, acting as a barrier to mixing between shelf and open-ocean waters. The formation area of the ASC spans from the Bellingshausen Sea to the western Ross Sea ($6$-$10 \times 10^3$km along our boundary). \cite{Thompson2018} show the ASC disappears near the western Antarctic Peninsula; here instead, there is an eastward flowing slope current along with the southern edge of the ACC.
Concentrating on Figures \ref{F_BD_1LM} and \ref{F_BD_Ant_Rest}(c), starting at the Weddell Sea (at along-boundary distance $12 \times 10^3$km), we find dense water near the surface, caused by local bathymetric and atmospheric conditions. Moving further along the boundary, isopycnals are then found to slope upwards to the east along East Antarctica until the Ross Sea. However, following the Ross Sea (at $5 \times 10^3$km, again exhibiting dense water throughout the fluid column), the pattern of gently sloping isopycnals is not repeated as we move along the boundary back to the Weddell Sea. Instead, we find that isopycnals are relatively flat.
We speculate that this feature may be related to the disconnect between Bellinghausen and Weddell basins, and the absence of ASC here. On the Antarctic boundary, we hypothesise that the down-slope movement of the ASC transports denser surface waters westwards (anti-clockwise) from the Ross Sea along the sloping boundary of East Antarctica to the Weddell Sea, causing a shallow downwards isopycnal slope in the direction of the ASC. In contrast, there is no ASC westwards from the Weddell Sea, through the Drake Passage, along the western Antarctic Peninsula to the Ross Sea. As a result, we observe relatively flat isopycnals here.
Similarities can be drawn between the Antarctic boundary isopycnal structure in Figures \ref{F_BD_1LM} and \ref{F_BD_Ant_Rest}(c), and that observed along western boundaries of the Atlantic discussed in Section \ref{CntMpBnd}. The sloping isopycnals found between Ross and Weddell Seas along East Antarctica are consistent with a boundary current (the ASC here) propagating (anti-clockwise) around Antarctica and slowly moving down slope to maintain kinetic energy as outlined by \cite{MacCready1994} (see Section \ref{Bdry_WMec} for more detail; inspection of boundary and bottom velocities might reveal the location and characteristics of the ASC and other deep along-boundary return currents). In contrast, isopycnals along the western Antarctic Peninsula behave similarly to those of the western boundary of the Indian Ocean. There is no strong boundary current, and hence no clear pathway for moving dense Weddell Sea water around Elephant Island into the Bellinghausen and Amundsen Seas.
\subsection{Seasonal features of along-boundary densities from GloSea5 reanalysis}
\label{GS5_S}
Analysis of the monthly GloSea5 data reveals the impact of seasonality on along-boundary neutral densities, as shown in Figure \ref{F_BD_GS5_S}. Panels correspond to time-average seasonal boundary densities for the three-month periods (a) DJF, (b) MAM, (c) JJA and (d) SON.
Panels (c) and (d) of Figure \ref{F_BD_GS5_S} exhibit denser surface waters and deeper mixed layers, caused by wind-driven mixing and brine rejection during ice formation in preceding stormy winter months. In autumn, Figure \ref{F_BD_GS5_S}(b) shows distinct stratification and lighter, fresher near-surface waters due to ice melt throughout the preceding summer months. The lightest and warmest year-round surface waters are found along the western Antarctic Peninsula, in agreement with \cite{Thompson2018}, as seen from Figure \ref{F_BD_GS5_M}(e, f) below.
Figure \ref{F_BD_GS5_M} shows the monthly-average boundary density variation with depth, at nine along-boundary locations given in Figure \ref{F_BD_GS5_S}(d). In the Weddell Sea (Figure \ref{F_BD_GS5_M}(g)) and western Ross Sea (Figure \ref{F_BD_GS5_M}(d)), dense water is present throughout the year, with very shallow regions of light near-surface water in the first few months of the year. However, there is clear seasonality in deep water formed in the eastern Ross Sea ($3.2 \times 10^3$km) and near the Adelie coast ($17.8 \times 10^3$km), highlighted by Figure \ref{F_BD_GS5_M}(b) and (i). Both regions show lighter surface conditions during the first 4-5 months of the year, proceeding to dense surface waters on the boundary later in the year.
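For reference, a minimal sketch of how seasonal (DJF, MAM, JJA, SON) and monthly climatologies of this kind could be formed from a monthly along-boundary density series is given below, using \texttt{xarray}. The variable names, dimensions and random values are illustrative only and do not correspond to the actual GloSea5 file structure.
\begin{verbatim}
import numpy as np
import pandas as pd
import xarray as xr

# Illustrative monthly along-boundary neutral-density field with
# dimensions (time, depth, along-boundary distance in km).
time = pd.date_range("1993-01-01", periods=240, freq="MS")
depth = np.linspace(0.0, 5000.0, 50)
dist = np.linspace(0.0, 20.0e3, 200)
rng = np.random.default_rng(0)
gamma = xr.DataArray(
    27.0 + rng.normal(scale=0.2, size=(time.size, depth.size, dist.size)),
    coords={"time": time, "depth": depth, "distance": dist},
    dims=("time", "depth", "distance"),
    name="neutral_density",
)

# Seasonal climatology (DJF, MAM, JJA, SON), as in the seasonal panels.
seasonal_mean = gamma.groupby("time.season").mean("time")

# Monthly climatology at selected along-boundary distances (km).
monthly_mean = gamma.groupby("time.month").mean("time")
selected = monthly_mean.sel(distance=[3.2e3, 12.0e3, 17.8e3], method="nearest")
\end{verbatim}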
\begin{figure*}[ht]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/GloSea5_SeasonsDems_neutD_GloSea5_Ant_int.png}}
\caption[Seasonally-averaged neutral densities on the Antarctic boundary in the GloSea5 reanalysis dataset for DJF, MAM, JJA and SON. Dashed white lines indicate along-boundary distances at which month-depth illustrations are given in Figure \ref{F_BD_GS5_M}. ]{Seasonally-averaged neutral densities on the Antarctic boundary in the GloSea5 reanalysis dataset for (a) DJF, (b) MAM, (c) JJA and (d) SON. Dashed white lines indicate along-boundary distances at which month-depth illustrations are given in Figure \ref{F_BD_GS5_M}.}
\label{F_BD_GS5_S}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_ACC/GloSea5_Months_neutD_GloSea5_Ant_int.png}}
\caption[Monthly-averaged neutral densities on the Antarctic boundary in the GloSea5 reanalysis dataset for nine different along-boundary distances (given in Figure \ref{F_BD_GS5_S}(d)).]{Monthly-averaged neutral densities on the Antarctic boundary in the GloSea5 reanalysis dataset for nine different along-boundary distances (given in Figure \ref{F_BD_GS5_S}(d)). The first month is January.}
\label{F_BD_GS5_M}
\end{figure*}
Seasonality found near the Adelie coast impacts the resulting gradient in isopycnals along the boundary (Figure \ref{F_BD_GS5_S}). During periods of deep water formation, more steeply-sloping isopycnals are found, whereas during the rest of the year isopycnals have a gentler along-boundary slope. However, along the western Antarctic Peninsula ($6$-$10 \times 10^3$km) we find seasonality has minimal effect on the density structure. As a result, along-boundary isopycnals there remain relatively flat.
\section{Summary}
This chapter has developed the cumulative transport decomposition diagnostic for the Antarctic Circumpolar Current (ACC), attributing total zonal transport through various sections to contributions on ocean boundaries together with a Coriolis-related $\beta$ term. The decomposition diagnostic has been applied to output from the HadGEM3-GC3.1 model at $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolutions across the Drake Passage and other ACC meridional sections.
Analysis (Section \ref{Sct_Inter_TT}) reveals a large resolution dependence of the expected ACC volume transport $T_{ACC}$. Estimates of $T_{ACC}$ from the $1/4^\circ$ model are poor, stabilising at a mere $60$Sv, compared with an observational estimate of $173$Sv (\citealt{Donohue2016}). Estimates of $T_{ACC}$ from the $1^\circ$ and $1/12^\circ$ models are better, at approximately $150$Sv and $120$Sv respectively, but nevertheless underestimate the observed transport. A large weakening in $T_{ACC}$ with time is observed within the initial $30$-year ``spin-up'' phase of the model run for all three model resolutions, where the initial ocean conditions are taken from the EN4 dataset.
Reconstructed estimates $\tilde{T}_{ACC}$ of the total ACC transport have been found to perform well for the Drake Passage within the $1^\circ$ and $1/4^\circ$ models. Discrepancies within the reconstructed $1/12^\circ$ model estimate are attributed to the influence of greater bathymetric detail at this high resolution. Specifically, the presence of a large number of small trenches and ridges near the sea floor, where physical properties are not well represented, leads to an overestimate of $T_{ACC}$. We find that a simplified decomposition, incorporating the northernmost and southernmost boundary contributions only, results in an improved estimate for $\tilde{T}_{ACC}$ from the $1/12^\circ$ model.
We analysed properties of the Drake Passage section (Section \ref{Sct_Inter_CS}), including potential temperature, salinity and neutral density for the initial 30-year spin-up phase of the model run, to investigate possible causes for the large weakening in transport. Trends in these properties include large changes in salinity and neutral density near the southern boundary in the $1/4^\circ$ model. The $1/4^\circ$ model output also indicates slumping rather than outcropping of isopycnals near the southern boundary. In contrast, the neutral density structure of the $1^\circ$ and $1/12^\circ$ models is found to correspond well with climatology (\citealt{GouretskiViktorandKoltermann2004}) and GloSea5 reanalysis.
We used the decomposition diagnostic to calculate the boundary and $\beta$ transport components contributing to the estimated cumulative transport $\tilde{T}_{ACC}$ (Section \ref{Sct_Inter_Dcm}). Large changes in the southern density component $T_S$ for the $1/4^\circ$ simulation explain the initial weakening observed for $\tilde{T}_{ACC}$. Gradual recovery of $T_S$ later in the timeseries is compensated by gradual weakening of the northern density component $T_N$; hence little impact on $\tilde{T}_{ACC}$ is observed after the spin-up phase. Transport contributions of additional cells, Ekman and $\beta$ components are negligible.
We compared the bottom (or depth-independent) component $T_{bot}$ calculated from the decomposition diagnostic with an estimate for $T_{bot}$ from \cite{Donohue2016}, calculated using observations of bottom velocities. The decomposition-based $T_{bot}$ is in good agreement with observations for the $1^\circ$ model, but found to be smaller than expected for both of the higher resolution models.
Analysis of sea surface height (SSH) data for the Weddell Sea and Drake Passage (Section \ref{Sct_Inter_Wdd}) reveals smaller SSH near the Antarctic coastline for the $1/4^\circ$ model. Indeed, at this model resolution, a strong positive trend in SSH (getting smaller) with time is observed around the entire Antarctic coastline, suggesting changes in the Antarctic Slope Current (ASC) during model spin-up. Large-scale sea-ice melt near the coastline, and changes in the Weddell Gyre leakage due to a stronger Weddell Gyre observed in the $1/4^\circ$ model, are noted as possible explanatory mechanisms.
The transport decomposition analysis has been performed for other meridional sections across the ACC in Section \ref{Sct_LngSct}. Many of the characteristics of the decomposition are common to all meridional sections. Expected ${T}_{ACC}$ and its decomposition estimate $\tilde{T}_{ACC}$ are found to be relatively consistent across sections, except for the wider Madagascar - Antarctica section, where non-ACC currents are also present. We find weaker $T_{bot}$ for all other sections except Madagascar - Antarctica and the Drake Passage. $T_{\beta}$ is found to increase in magnitude for the wider sections, for which it represents a larger proportion of the overall density contribution.
Investigation of along-boundary neutral densities reveals good agreement between boundary density structures for the HadGEM3-GC3.1 $1^\circ$-$LM$ model and the GloSea5 reanalysis dataset. More generally, there are considerable differences in isopycnal structure between different model, climatology and reanalysis datasets. Boundary density values, and locations of surface dense water along the Antarctic coastline, are comparable for the GloSea5, GK and $1^\circ$-$LM$ model datasets. However, GK does not replicate (a) the presence of flat isopycnals between the Weddell and Ross Seas along the western Antarctic Peninsula, and (b) sloping isopycnals along East Antarctica between the Ross and Weddell Seas. In contrast, the $1/4^\circ$ and $1^\circ$-$LL$ models show significantly warmer waters at depth and fresher near-surface layers, resulting in lighter densities throughout. Both $1/4^\circ$ and $1^\circ$-$LL$ models exhibit weaker isolated regions of dense water near the surface, with a lack of connectivity along the boundary to the deep ocean. The contrasting density features of model, climatology and reanalysis datasets emphasise the difficulty in characterising along-boundary density, and the physical mechanisms underlying its structure, especially given sparse direct observational data from this remote region.
We hypothesise that the presence of an ASC is important in setting the sloping isopycnal structure between the Ross and Weddell Seas along East Antarctica. The absence of such a current leads to flatter isopycnals along the western Antarctic Peninsula. Investigation of GloSea5 data reveals that regions of deep water formation in the Weddell and western Ross Seas exhibit dense surface water on the boundary throughout the year. However, deep water formation regions of the eastern Ross Sea and Adelie coast show seasonal characteristics, with dense water throughout the fluid column only from June to December.
\chapter{MOC decomposition and time-mean AMOC characteristics} \label{TJ_TM}
\renewcommand{\arraystretch}{1.5}
\setlength{\arrayrulewidth}{0.5mm}
In this chapter, the theoretical basis and practical application of the decomposition diagnostic for the MOC are outlined. Using HadGEM3-GC3.1 model data, the overturning streamfunction of the AMOC for any given latitude is decomposed into its constituent boundary components. The time-mean AMOC overturning streamfunction reconstructed from these boundary components is compared to the overturning streamfunction calculated directly from time-mean meridional velocities; further, the time-mean contributions of the four boundary components are analysed. Discrepancies are investigated and possible explanations are discussed. Finally, the time-mean maximum overturning streamfunction and contributing boundary components are characterised across the full range of Atlantic latitudes.
\section{Background}
\label{S3_Bck}
A decomposition of the overturning (introduced in Sections \ref{I_dynL} and \ref{I_oAMOC}) was first performed by \cite{Lee1998}, and formed the basis of the RAPID methodology. They decomposed the meridional velocity into Ekman, external mode and vertical shear components to highlight the relative importance of various dynamical processes within the seasonal cycle of heat transport in the Indian Ocean.
\cite{Marotzke1999} suggested the possibility of ``boundary monitoring'' the MOC based on a study using the adjoint of the Massachusetts Institute of Technology general circulation model (MITgcm). They showed that the dynamical sensitivity of heat transport variability is generally greatest to density perturbations near meridional boundaries of the basin. Further, the net meridional transport is influenced by density anomalies only once they reach the boundary. \cite{Hirschi2003} and \cite{Baehr2004} used two eddy-permitting numerical ocean models to test the practical feasibility of the \cite{Marotzke1999} method. \cite{Baehr2004} note within the FLAME model at $26^\circ$N that the dominant contributors to the overturning streamfunction are from the boundary density terms and the Ekman transport, with sometimes significant but generally smaller contributions from other terms. The feasibility and performance of a hypothetical monitoring array in reconstructing the AMOC depends upon the strength of bottom velocities; in regions where strong currents hit sloping boundaries, the contribution of a ``depth-averaged'' velocity (or ``external mode'') term becomes significant. In such regions, the MOC cannot be reliably estimated using density and surface wind stresses alone.
The work of these authors led to the deployment in $2004$ of the RAPID-MOCHA mooring array (Section \ref{I_obsAMOC}) at a latitude of $26.5^\circ$N to monitor overturning volume and heat transports. The main components of the overturning streamfunction estimate are: (a) the geostrophic interior flow, (b) Ekman transport due to wind-stresses and (c) transport through Florida straits, measured using a telephone cable (\citealt{Cunningham2007}, \citealt{Kanzow2007}).
\cite{Sime2006} note the external mode (i.e. the component arising due to depth-independent flow) becomes important in regions of strong western boundary currents and overflows. They find for latitudes between $25^\circ$N and $32^\circ$N, and at $62^\circ$N that the external mode is a dominant contributor to the time-mean overturning streamfunction.
\section{Method for decomposition of the MOC}
\label{S3_Mth}
Here we discuss theoretical principles underlying the decomposition diagnostic and the mathematical formulation of the boundary components contributing to the overturning streamfunction.
Using approximations of the Navier-Stokes equation appropriate for large-scale ocean dynamics (Section \ref{I_dynL}), we can relate the zonal pressure gradient to a meridional velocity using the geostrophic relationship (Equation \ref{GB}). It is reasonable to assume geostrophic balance holds even within the western boundary current for a flow parallel to the coastline (\citealt{Bingham2008}, \citealt{Bell2011}). In conjunction with hydrostatic balance (Equation \ref{HB}), geostrophy can be used to establish an expression for the vertical structure of the northward velocity dependent upon the zonal density gradients. This is known as the thermal wind relationship (Equation \ref{TW}). The overturning streamfunction $\Psi(z; y)$, equivalent to the cumulative meridional volume transport up to a chosen depth $z$ for a given meridional coordinate $y$, is defined by Equation \ref{T}. We partition the meridional transport into three main components: (a) depth-independent (barotropic, sea-level or surface-pressure-change-driven) flow, (b) baroclinic flow (related to density gradients) and (c) wind-stress driven (i.e. Ekman) flow. Using Equation \ref{T}, the meridional velocity $v_{N}(x,z; y)$ (with north as positive) can be decomposed as
\begin{equation}
v_{N}(x,z; y) = v_{bot}(x; y) + v_{th}(x,z;y) + v_{Ekm}(x,z;y),
\label{v}
\end{equation}
where $v_{bot}(x; y)$ is the meridional velocity adjacent to the bathymetry, a single value for each fluid column (indexed by zonal and meridional coordinates $x$ and $y$, and independent of depth $z$). $v_{th}(x,z;y)$ represents the density or thermal wind contribution to the velocity, zero adjacent to the bottom and defined to be in thermal wind balance with the east-west gradient of the in-situ density, $\rho(x,z; y)$. $v_{Ekm}(x,z;y)$ represents the Ekman or wind-stress contribution to the velocity, found within upper layers of the ocean.
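As a concrete illustration of this three-way split, the following sketch builds the decomposed meridional velocity on a single zonal-depth section from an invented density field, an assumed bottom velocity and a uniform zonal wind stress; the constants, grid and field values are purely illustrative, and sign conventions follow Equations \ref{TW} and \ref{Tek}.
\begin{verbatim}
import numpy as np

g, rho0, f = 9.81, 1026.0, 1.0e-4          # assumed constants (SI units)
nx, nz = 20, 30
dx, dz = 1.0e5, 100.0                      # grid spacing (m)
z = -dz * (np.arange(nz) + 0.5)            # cell-centre depths, surface first

rng = np.random.default_rng(1)
rho = 1027.0 + 0.5 * rng.random((nz, nx))  # illustrative in-situ density

# Thermal wind part: zero at the bottom, obtained by integrating
# dv/dz = -(g / (f * rho0)) * drho/dx upwards from the bathymetry.
dv_dz = -(g / (f * rho0)) * np.gradient(rho, dx, axis=1)
v_th = np.flip(np.cumsum(np.flip(dv_dz * dz, axis=0), axis=0), axis=0)
v_th -= v_th[-1, :]                        # enforce v_th = 0 adjacent to the bottom

# Depth-independent (bottom) part: one value per fluid column.
v_bot = 0.01 * rng.standard_normal(nx)

# Ekman part: zonal wind stress spread uniformly over the top 50 m.
tau_x, h_ek = 0.1 * np.ones(nx), 50.0
v_ekm = np.where(z[:, None] >= -h_ek, -tau_x / (rho0 * f * h_ek), 0.0)

v_total = v_bot[None, :] + v_th + v_ekm    # decomposed meridional velocity
\end{verbatim}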
The total estimated overturning streamfunction $\tilde{\Psi}(z; y)$, equivalent to the total estimated cumulative MOC transport up to depth $z$, calculated from the decomposition diagnostic, will be written in the form
\begin{equation}
\tilde{\Psi}(z; y) = \Psi_{bot}(z; y) + \Psi_{th}(z; y) + \Psi_{Ekm}(z; y),
\end{equation}
where the terms correspond to contributions from bottom velocities, boundary densities, and surface wind-stresses. As will be shown below, $\Psi_{th}$ can further be expressed in terms of the eastern and western boundary density contributions. We use the notation $\tilde{\Psi}$ for the total estimated overturning streamfunction from the boundary decomposition, in order to distinguish it from the (expected total) overturning streamfunction $\Psi$ calculated directly using Equation \ref{T}.
For derivation of boundary components contributing to the overturning streamfunction, we assume a local Cartesian coordinate system, for simplicity of presentation only. Actual calculations, discussed in Section \ref{S_App_DD}, exploit the NEMO Arakawa ``C'' model grid, incorporating appropriate metric terms. The depth-independent flow component of the overturning streamfunction, $\Psi_{bot}$, can be simplified to
\begin{eqnarray}
\Psi_{bot}(z; y) &=& \int_{W(z; y)}^{E(z; y)} \int_{-H(x; y)}^{z} v_{bot}(x; y) dz' dx \\
&=& \int_{W(z; y)}^{E(z; y)} \left[ \int_{-H(x; y)}^{z} dz'\right]v_{bot}(x; y) dx \\
&=& \int_{W(z; y)}^{E(z; y)} Z_{bot}(x,z; y) v_{bot}(x; y) dx,
\label{Q_{bot}}
\end{eqnarray}
where $W(z; y)$ and $E(z; y)$ represent the western and eastern boundaries at each $z$ for a given $y$, and $Z_{bot}(x,z; y)$ is the depth of the water below the height $z$,
\begin{align}
Z_{bot}(x,z; y) = & H(x; y) + z, & z > -H(x; y), \\
Z_{bot}(x,z; y) = & 0, & z \leq -H(x; y) ,
\label{Z_{bot}}
\end{align}
where $H(x; y)$ is the height of the fluid column, i.e. within the fluid column $Z_{bot}=H+z$, and outside the fluid column $Z_{bot} =0$. The depth-independent flow component can be thought of as a compensatory component which introduces a geostrophic reference velocity into the calculation, accounting for pressure variations within the interior of the ocean across the basin.
The thermal wind component $\Psi_{th}(z; y)$ of the overturning streamfunction is calculated using integration by parts, noting that $v_{th}(x,z; y) = 0$ when $z = -H(x; y)$; i.e. the thermal wind velocity is zero at the bathymetry. Integration by parts gives
\begin{equation}
\begin{split}
\Psi_{th}(z; y) = & \int_{W(z; y)}^{E(z; y)} \int_{-H(x; y)}^{z} v_{th}(x,z'; y) dz'dx \\
= & \int_{W(z; y)}^{E(z; y)} \left( [z'v_{th}(x,z'; y)]_{-H(x; y)}^z - \int_{-H(x; y)}^{z} z' \frac{\partial v_{th}}{\partial z'} (x,z'; y) dz' \right) dx .
\end{split}
\label{Tc_1}
\end{equation}
Since $v_{th}(x,z'; y) = 0$ when $z' = -H(x; y)$, we see that $[z'v_{th}(x,z'; y)]_{-H(x; y)}^z = zv_{th}(x,z; y)$. Further, $zv_{th}(x,z; y)$ can be written in terms of the vertical gradient in velocity
\begin{equation}
zv_{th}(x,z; y) = z\int_{-H(x; y)}^{z} \frac{\partial v_{th}}{\partial z'} (x,z'; y) dz',
\label{e_zvth}
\end{equation}
which in turn can be related to the horizontal density gradient using the thermal wind relationship (Equation \ref{TW}), as follows. Using Equation \ref{TW}, and substituting the value of $zv_{th}(x,z; y)$ from Equation \ref{e_zvth} into Equation \ref{Tc_1},
\begin{align}
\label{Tc_2}
&\hspace{-20pt} \Psi_{th}(z; y)\\ \nonumber
&= \int_{W(z; y)}^{E(z; y)} \left( z\int_{-H(x; y)}^{z} \frac{\partial v_{th}}{\partial z'}(x,z'; y) dz' - \int_{-H(x; y)}^{z} z' \frac{\partial v_{th}}{\partial z'}(x,z'; y) dz' \right) dx \\ \nonumber
&= \int_{W(z; y)}^{E(z; y)} \int_{-H(x; y)}^{z} (z-z') \frac{\partial v_{th}}{\partial z'}(x,z'; y) dz' dx \\ \nonumber
&= - \frac{g}{f(y) \rho_0} \int_{W(z; y)}^{E(z; y)} \int_{-H(x; y)}^{z} (z-z') \frac{\partial \rho }{\partial x} (x,z'; y) \mathrm{d} z' \mathrm{d} x \\ \nonumber
&= - \frac{g}{f(y) \rho_0} \int_{-H_{\max}(y)}^{z} (z-z') \left[ \int_{W(z'; y)}^{E(z'; y)} \frac{\partial \rho }{\partial x} (x,z'; y) \mathrm{d} x \right] \mathrm{d} z' \\ \nonumber
&= - \frac{g}{f(y) \rho_0} \int_{-H_{\max}(y)}^{z} ( z - z' ) \left[ \sum_{d=1}^{n_D(z';y)} ( \rho_{E_d}(z'; y) - \rho_{W_d}(z'; y) ) \right] \mathrm{d} z' .
\end{align}
Equation \ref{Tc_2} states that the thermal wind contribution to the overturning streamfunction can be written as the sum of western and eastern boundary density contributions. In the third line of the equation we use the thermal wind relationship (Equation \ref{TW}) to introduce density $\rho(x,z; y)$ and the latitude-dependent Coriolis parameter $f(y)$. In the fourth line the order of integration is reversed, noting that $H_{\max}(y) = \max_x H(x; y)$, the maximum basin depth at meridional coordinate $y$. In the fifth line we introduce boundary densities $\rho_E(z; y) = \rho(E(z; y),z; y)$ and $\rho_W(z; y) = \rho(W(z; y),z; y)$. Here, we are careful to acknowledge the possible presence of $n_D \ge 1$ pairs of western and eastern boundaries for $z$ and $y$, with zonal coordinate intervals $[W_d(z,y),E_d(z,y)]$, $d=1,2,...,n_D(z;y)$, as illustrated in Figure \ref{F_bdryAMOC}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/AMOC_schem_Psi.png}}
\caption[Schematic of boundary decomposition and contributing components to the AMOC overturning streamfunction.]{Schematic of the boundary decomposition and the contributing components to the AMOC overturning streamfunction. We note in this illustration that multiple pairs of western and eastern boundaries are present at depth, whereas only one pair is present at the surface.}
\label{F_bdryAMOC}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/ExplainInt.png}}
\caption[Schematic of vertical and horizontal integration taking place in calculation of overturning streamfunction up to depth $z$ in Equation \ref{Tc_2}.]{Vertical and horizontal integration taking place in calculation of overturning streamfunction up to depth $z$ in Equation \ref{Tc_2}.}
\label{f_Integral}
\end{figure*}
Intuitively Equation \ref{Tc_2} is explained as follows, and illustrated in Figure \ref{f_Integral}. For each $y$, the original integral requires a ``vertical'' integration of $\partial \rho / \partial x$ from the bathymetry to depth $z$ (shaded orange in Figure \ref{f_Integral}), followed by ``horizontal'' integration from the western to eastern boundaries. After reversal of the order of integration, the integral involves horizontal integration for each $z'$ from the western to eastern boundaries (shaded blue in Figure \ref{f_Integral}), followed by vertical integration from the maximum water depth to $z$; importantly the result of the horizontal integration can be written purely in terms of western and eastern boundary densities.
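This reordering can be checked numerically. The following sketch, for an idealised flat-bottomed basin with a single pair of boundaries and an invented smooth density field, computes $\Psi_{th}(z)$ both directly (thermal wind velocity followed by the double integral) and from the boundary-density expression of Equation \ref{Tc_2}; the two routes agree up to discretisation error. All values and grid choices are illustrative only.
\begin{verbatim}
import numpy as np

g, rho0, f = 9.81, 1026.0, 1.0e-4
nx, nz = 200, 400
H, L = 4000.0, 5.0e6                       # flat-bottomed box: depth and width (m)
x = np.linspace(0.0, L, nx)
z = np.linspace(-H, 0.0, nz)               # bottom first, surface last
dx, dz = x[1] - x[0], z[1] - z[0]

# Smooth illustrative density field with a zonal gradient.
rho = 1027.0 + 0.5 * np.exp(z[:, None] / 1000.0) * np.cos(np.pi * x[None, :] / L)

# Route 1: thermal wind velocity, then the double integral defining Psi_th.
drho_dx = np.gradient(rho, dx, axis=1)
v_th = -(g / (f * rho0)) * np.cumsum(drho_dx, axis=0) * dz   # integral from the bottom
psi_direct = np.cumsum(v_th.sum(axis=1) * dx) * dz           # then over z' and x

# Route 2: boundary-density expression, using only rho_W and rho_E.
rho_W, rho_E = rho[:, 0], rho[:, -1]
zz, zzp = np.meshgrid(z, z, indexing="ij")                   # (z, z') pairs
kernel = np.where(zzp <= zz, zz - zzp, 0.0)                  # (z - z') for z' <= z
psi_boundary = -(g / (f * rho0)) * (kernel @ (rho_E - rho_W)) * dz

print(np.max(np.abs(psi_direct - psi_boundary)))             # small discretisation error
\end{verbatim}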
The individual western and eastern boundary density contributions from Equation \ref{Tc_2} are
\begin{equation}
\Psi_W(z; y) = \frac{g}{f(y) \rho_0} \int_{-H_{\max}(y)}^{z} ( z - z' ) \left[ \sum_{d=1}^{n_D(z;y)} \rho_{W_d} (z'; y) \right] \mathrm{d} z',
\end{equation}
\begin{equation}
\Psi_E(z; y) = - \frac{g}{f(y) \rho_0} \int_{-H_{\max}(y)}^{z} ( z - z' ) \left[ \sum_{d=1}^{n_D(z;y)} \rho_{E_d}(z'; y) \right] \mathrm{d} z',
\end{equation}
so that $\Psi_{th}(z; y) = \Psi_W(z; y) + \Psi_E(z; y)$.
The Ekman contribution to the overturning streamfunction is estimated using surface wind stress data. The total contribution of the wind stress for all $x$ at the given $y$ is calculated and then evenly distributed over the upper $50$m of the ocean, or the depth of the ocean if shallower than $50$m. We recognise the depth of the wind-mixed Ekman layer varies with latitude, longitude and time. However, given our interest lies in the MOC at greater depth, the choice of $50$m has minimal influence on our results. Therefore the Ekman contribution to the overturning streamfunction is defined as
\begin{eqnarray}
\Psi_{Ekm}(z; y) &=& - \int_{W(z; y)}^{E(z; y)} \int_{-h(x; y)}^{z} \frac{1}{h(x; y)}\frac{\tau_S^x(x; y)}{\rho_0 f(y)} \mathrm{d}z'\mathrm{d}x \nonumber\\
&=& - \int_{W(z; y)}^{E(z; y)} \frac{(h(x;y)+z)}{h(x; y)}\frac{\tau_S^x(x; y)}{\rho_0 f(y)} \mathrm{d}x,
\label{Tek}
\end{eqnarray}
where $h(x; y)$ is the depth of the Ekman layer given by
\begin{align}
h(x; y) = & H(x; y), & H(x; y) \leq 50m, \nonumber \\
h(x; y) = & 50m, & H(x; y) > 50m ,
\label{e_h}
\end{align}
and $\tau_S^x(x; y)$ is the wind stress on the ocean surface in the zonal direction. The decomposition diagnostic for the overturning streamfunction at a given $y$ up to a given $z$ is therefore
\begin{equation}
\tilde{\Psi}(z; y) = \Psi_W(z; y) + \Psi_E(z; y) + \Psi_{bot}(z; y) + \Psi_{Ekm}(z; y),
\label{T_smp}
\end{equation}
which equates to
\begin{equation}
\begin{multlined}
\tilde{\Psi}(z; y) = \frac{g}{\rho_0 f(y)} \int_{-H_{\max}(y)}^{z} ( z - z' ) \left[ \sum_{d=1}^{n_D(z';y)} \rho_{W_d}(z'; y) \right] \mathrm{d} z' \\
- \frac{g}{f(y) \rho_0} \int_{-H_{\max}(y)}^{z} ( z - z' ) \left[ \sum_{d=1}^{n_D(z';y)} \rho_{E_d}(z'; y) \right] \mathrm{d} z' \\
+ \int_{W(z; y)}^{E(z; y)} Z_{bot}(x,z; y) v_{bot}(x; y) dx - \frac{1}{\rho_0 f(y)} \int_{W(z; y)}^{E(z; y)} \int_{-h(x; y)}^{z} \frac{\tau_S^x(x; y)}{h(x; y)} \mathrm{d}z'\mathrm{d}x.
\end{multlined}
\label{T_bdry}
\end{equation}
We aim to investigate and further understand the contribution of these boundary components in relation to: (a) the time-mean overturning streamfunction, (b) temporal variability of the overturning streamfunction and (c) possible driving mechanisms for the observed variability of the overturning streamfunction at different timescales. In order to fulfil these objectives, we diagnose the boundary components using data from a GCM.
\section{Numerical model description}
\label{S3_ModDes}
In this section the general circulation model (GCM) used to investigate the overturning streamfunction decomposition is introduced. An outline of the GCM configuration is given first, followed by a detailed description of the $3$-D lattice structure used for GCM computation. Then we describe the calculation of the overturning streamfunction decomposition components in light of the lattice.
The GCM model configuration used to diagnose the overturning streamfunction into its boundary components is the UK Met Office coupled HadGEM3-GC3.1 model (described by \citealt{Williams2018}, \citealt{Roberts2019}). The GCM incorporates oceanic, atmospheric, land and sea-ice components. The oceanic component of the model (Met Office Global Ocean $6$, $GO6$) is based upon the Nucleus for European Modelling of the Ocean (NEMO $3.6$) model with $75$ vertical levels (\citealt{Storkey2018}). The thickness (vertically) of tracer (``T'') cells increases from 1m at the surface to $\approx250$m at depth. There are 3 model spatial resolutions of particular interest based upon the ``ORCA'' family within NEMO: ORCA12, ORCA025 and ORCA1 at $1/12^\circ$, $1/4^\circ$ and $1^\circ$ zonal grid spacing, respectively, at the equator. The isotropic Mercator grid used leads to a reduction in the meridional grid spacing together with reducing zonal grid spacing when approaching the poles. The ocean model is based upon a tripolar grid with quasi-isotropic grid spacing in the Northern Hemisphere, accommodating two poles in Siberia and Canada. NEMO GCM models incorporate a free surface by means of a ``terrain-following'' vertical s- (or z*-) coordinate system. The atmospheric component of the model consists of a Met Office Unified Model Global Atmosphere, of resolution $N216$ ($\approx60$km, \citealt{Walters2019}) with $85$ vertical levels. Finally, a GSI$8.1$ sea-ice component (\citealt{Ridley2018}) is used at the same resolution as the ocean component.
Short-term temporal variability of the overturning streamfunction is relatively well understood due to the availability of observational data. The main body of research reported here involves analysis of longer-term trends using annual mean data from a GCM with $1/4^\circ$ oceanic spatial resolution, and a $N216$ atmospheric model using 1950s forcing. This $657$-year control run is one of the longest available. A long control run is essential when investigating temporal variability of the overturning streamfunction on annual and longer timescales. For testing of the AMOC decomposition diagnostics and comparison purposes, the $1^\circ$ and $1/12^\circ$ oceanic spatial resolutions with identical $N216$ atmospheric components are also considered. Each model run is initially spun-up for 30 years using 1950s greenhouse gas forcing before commencement of the control run. All three models form part of the High Resolution Model Intercomparison Project (HighResMIP) for CMIP6 (\citealt{Haarsma2016}). The simulation lengths for the $1^\circ$ and $1/12^\circ$ models are, respectively, $104$ and $176$ years.
\subsection{Application of AMOC decomposition diagnostics within HadGEM3-GC3.1 global climate model}
\label{S_App_DD}
Data fields in NEMO are located on an Arakawa ``C'' grid as shown in Figure \ref{NEMO_grid}, where the centre of each T-cell denotes the location of potential temperature $\theta$, salinity $S$ and pressure $P$, with Cartesian velocity components $u$, $v$, $w$ located at the centre of (and normal to) each cell face. Planetary and relative vorticities, both denoted by $f$ here, are located at the corner of each T-cell at the same vertical level as $u$ and $v$. The grid indices $i$, $j$, $k$ used for the decomposition diagnostic are not aligned with longitude and latitude everywhere; we find at high latitudes the alignment changes, therefore calculations are performed using the model grid rather than lines of constant latitude. In the following discussion, for ease of description, we nevertheless refer to the $i$ and $j$ indices, and the corresponding grid locations, as ``longitude'' and ``latitude''.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=12cm]{Fig_AMOC/NEMO_grid.png}}
\caption[Arrangement of Arakawa C grid variables (reproduced from \citealt{Madec2016}).]{ Arrangement of Arakawa C grid variables. $T$ indicates a scalar point where temperature, salinity, density and pressure are defined. $(u,v,w)$ indicates vector points, and $f$ indicates vorticity points where both relative and planetary vorticities are defined (reproduced from \citealt{Madec2016}).}
\label{NEMO_grid}
\end{figure*}
The geostrophic relationship (Equation \ref{GB}), on which the thermal wind relationship (Equation \ref{TW}) of relevance here is based, relates to the $x$-component of the momentum equation. For this reason we centre calculations at the $u$-point, in order to obtain the best representation of the geostrophic relationship within the NEMO framework. Centering calculations at the $u$-point also allows us to compare our calculations with relevant momentum diagnostics output by the model. However, calculation of the geostrophic and thermal wind relationships at the $u$-point causes some difficulties: (a) the calculation requires averaging of terms such as the planetary vorticity $f$ and northward velocity $v$ to calculate the Coriolis acceleration $fv$; (b) density points are located at the centre of the T-cell (Figure \ref{var_grid}) rather than at the $u$ or $v$-points and therefore, at a boundary, the closest density point is half a grid cell away from the boundary; and (c) partial cells present at the bottom bathymetry have differing depths; thus averaging across two cells can cause bias. Issues arising due to these sidewall cells (incomplete cells adjacent to eastern and western boundaries) and partial cells (incomplete cells adjacent to bottom bathymetry) are discussed further in Section \ref{S_TM_est}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth, angle = 0]{Fig_AMOC/Fig_grid.png}}
\caption[Locations of variables used within the decomposition diagnostics.]{Locations of variables (for two T-cells) used within the decomposition diagnostics, using $i,j,k$ notation to indicate horizontal $(i,j)$ and vertical ($k$) indices. }
\label{var_grid}
\end{figure*}
Using time-mean potential temperature and practical salinity fields, the in-situ density is calculated at the centre of each T-cell using the $1980$ equation of state ($eos80$ seawater toolbox, \citealt{Millero1980}), in line with the NEMO GCMs.
The boundary contributions to the overturning streamfunction, $\Psi_W$, $\Psi_E$, $\Psi_{bot}$ and $\Psi_{Ekm}$, described in Equation \ref{T_bdry}, are adapted for diagnostic calculation from the GCM data as follows. We assume a locally Cartesian lattice of locations $x_i$, $y_j$, $z_k$ on which $\rho$ is defined for each year indexed by time $t_{\ell}$. Then the western and eastern boundary density terms are calculated relative to a mean depth-dependent basin density $\overline{\rho}(z_k)$, with mean taken over longitude, latitude and time,
\begin{equation}
\rho^*(x_i, y_j, z_k, t_{\ell}) = \rho(x_i, y_j, z_k, t_{\ell}) - \overline{\rho}(z_k).
\end{equation}
In cases where there are obstacles such as islands or ridges along the sea floor, there will be multiple pairs of western and eastern boundaries, and the sum of their contributions relative to $\overline{\rho}(z_k)$ is taken, as specified by the sum in e.g. Equation \ref{T_W_math} for $\Psi_W$.
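As an indication only, the anomaly field could be formed as follows; the array shape, the absence of a land mask and the random values are purely illustrative.
\begin{verbatim}
import numpy as np

# Illustrative in-situ density rho(t, z, y, x); averaging over time, latitude
# and longitude at each depth gives the reference profile rho_bar(z), and the
# anomaly rho_star is the quantity used in the boundary density terms.
rng = np.random.default_rng(2)
rho = 1027.0 + 0.5 * rng.random((10, 75, 40, 60))   # (time, depth, lat, lon)

rho_bar = rho.mean(axis=(0, 2, 3))                  # depth-dependent basin mean
rho_star = rho - rho_bar[None, :, None, None]       # anomaly entering Psi_W and Psi_E
\end{verbatim}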
Our interest lies in the depth structure of the boundary components contributing to the overturning streamfunction. We calculate this by first calculating the transport at each depth, and then summing the transport from the sea floor to the depth of interest. Equation \ref{Tc_2} is replicated within the model by
\begin{equation}
\Psi_{W}(z_k, t_{\ell}; y_j) = -\frac{g}{f(y_j) \rho_{0}} \sum_{m=k}^{M}\left(z_{k-1 / 2}-z_{m}\right) \left[ \sum_{d=1}^{n_D(z_m;y_j)} \rho^*_{W_d}(z_m, t_{\ell}; y_j) \right] \Delta z_{m},
\label{T_W_math}
\end{equation}
\begin{equation}
\Psi_{E}(z_k, t_{\ell}; y_j) = + \frac{g}{f(y_j) \rho_{0}} \sum_{m=k}^{M}\left(z_{k-1 / 2}-z_{m}\right) \left[ \sum_{d=1}^{n_D(z_m;y_j)} \rho^*_{E_d}(z_m, t_{\ell}; y_j) \right] \Delta z_{m},
\label{T_E_math}
\end{equation}
where $k$ indexes the centre of a T-cell, with values running from $k=M$ at the deepest cell to $k=1$ at the surface cell. $\Delta z_m = z_{m-1/2} - z_{m+1/2}$ is the thickness of the $m^{\text{th}}$ level and $f(y_j)$ is the Coriolis parameter at latitude $y_j$. $\rho^*_{W}(z_m, t_{\ell}; y_j)$ and $\rho^*_{E}(z_m, t_{\ell}; y_j)$ are the western and eastern boundary anomaly densities at the $m^{\text{th}}$ level at latitude $y_j$ for year $t_{\ell}$.
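A minimal sketch of Equations \ref{T_W_math} and \ref{T_E_math} for a single latitude, year and boundary pair is given below; the grid, level convention (level 1 at the surface) and input values are illustrative, and the handling of multiple boundary pairs and land masking is omitted.
\begin{verbatim}
import numpy as np

def psi_west_east(rho_star_W, rho_star_E, z_c, z_u, dz, f, g=9.81, rho0=1026.0):
    # rho_star_W, rho_star_E : boundary density anomalies at cell centres (M levels)
    # z_c : cell-centre depths z_m (negative), surface level first
    # z_u : upper-interface depths z_{k-1/2} of each cell
    # dz  : cell thicknesses Delta z_m
    M = len(z_c)
    psi_W, psi_E = np.zeros(M), np.zeros(M)
    for k in range(M):
        weights = (z_u[k] - z_c[k:]) * dz[k:]        # (z_{k-1/2} - z_m) * Delta z_m
        psi_W[k] = -(g / (f * rho0)) * np.sum(weights * rho_star_W[k:])
        psi_E[k] = +(g / (f * rho0)) * np.sum(weights * rho_star_E[k:])
    return psi_W, psi_E

# Illustrative use on a uniform 75-level column.
dz = np.full(75, 50.0)
z_u = -np.concatenate(([0.0], np.cumsum(dz[:-1])))   # upper interfaces
z_c = z_u - 0.5 * dz                                 # cell centres
rng = np.random.default_rng(3)
psi_W, psi_E = psi_west_east(0.1 * rng.standard_normal(75),
                             0.1 * rng.standard_normal(75),
                             z_c, z_u, dz, f=1.0e-4)
\end{verbatim}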
At each latitude, the bottom component of the overturning streamfunction $\Psi_{bot}$ is calculated using the velocity $\overline{v}_{bot}$ at the bottom of the deepest full cell adjacent to the bathymetry, half a grid point below the lowest $u$-point considered, for each longitude. This is then multiplied by the cell thickness and width for each longitude, integrated over longitude, and integrated vertically upwards through the fluid column to the depth of interest. $\overline{v}_{bot}$ is calculated by taking the $4$-point average of the local velocities onto the $u$-point, and using the zonal density gradient across the bottom cell to obtain, via thermal wind, the vertical gradient in meridional velocity. These terms are summed to give the velocity at the bottom of the deepest cell, $\overline{v}_{bot}$. Therefore, mathematically,
\begin{equation}
\Psi_{bot}(z_k, t_{\ell}; y_j) = \sum_{i\in \mathcal{I}(z_k; y_j)} (H(x_i; y_j) + z_{k-1/2}) \overline{v}_{bot}(x_i, t_{\ell}; y_j) \Delta x_i
\label{T_b_math}
\end{equation}
where $\mathcal{I}(z_k; y_j)$ is an index set for longitudes between the western and eastern boundaries $W(z_k; y_j)$ and $E(z_k; y_j)$ at depth $z_k$ for latitude $y_j$, and $\Delta x_i$ is the width of the cell $\Delta x_i = x_{i+1/2} - x_{i-1/2}$.
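A sketch of Equation \ref{T_b_math} for one latitude is given below, with the $4$-point averaging onto the $u$-point and the thermal wind adjustment within the bottom cell omitted for brevity; the bottom velocities, widths and depths are invented for illustration.
\begin{verbatim}
import numpy as np

def psi_bottom(v_bot, H, dx, z_u):
    # v_bot : bottom velocity for each longitude (one value per fluid column)
    # H     : fluid-column depth for each longitude (positive, metres)
    # dx    : cell widths for each longitude
    # z_u   : upper-interface depths z_{k-1/2} for each level (negative)
    M = len(z_u)
    psi = np.zeros(M)
    for k in range(M):
        thickness = np.clip(H + z_u[k], 0.0, None)   # water column below z_{k-1/2}
        psi[k] = np.sum(thickness * v_bot * dx)
    return psi

# Illustrative use: 30 longitudes, 75 levels.
rng = np.random.default_rng(4)
nx, dz = 30, np.full(75, 50.0)
z_u = -np.concatenate(([0.0], np.cumsum(dz[:-1])))
psi_bot = psi_bottom(0.01 * rng.standard_normal(nx),
                     3000.0 + 500.0 * rng.random(nx),
                     np.full(nx, 2.0e4), z_u)
\end{verbatim}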
The Ekman component of the overturning streamfunction is estimated by calculating the total Ekman transport for each latitudinal section from the wind, and then redistributing this transport as currents over the upper 50m, before integrating vertically and zonally. It was confirmed that the initially-calculated total Ekman transport was equal to the eventual Ekman cumulative transport up to the surface (or overturning streamfunction contribution at the surface) from the redistributed currents. The resulting expression for $\Psi_{Ekm}$ is
\begin{equation}
\Psi_{Ekm}(z_k, t_{\ell}; y_j) = \frac{1}{\rho_0 f(y_j)} \sum_{i\in \mathcal{I}(z_k; y_j)} \frac{(h(x_i; y_j) + z_k)}{h(x_i; y_j)} \tau_S^x(x_i, t_{\ell}; y_j) \Delta x_i
\label{T_Ekm_math}
\end{equation}
where $h(x_i; y_j)$ is defined in Equations \ref{e_h}, and $\tau_S^x(x_i, t_{\ell}; y_j)$ is the zonal surface wind stress as before.
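A corresponding sketch of Equation \ref{T_Ekm_math} is given below, again for one latitude and with the interior index set reduced to all wet columns; the wind stresses and grid are illustrative, and the contribution is set to zero below the Ekman layer, consistent with redistributing the transport over the upper $50$m only.
\begin{verbatim}
import numpy as np

def psi_ekman(tau_x, H, dx, z_c, f, rho0=1026.0, h_ekman=50.0):
    # tau_x : zonal surface wind stress for each longitude (N m^-2)
    # H     : fluid-column depth for each longitude (positive, metres)
    # dx    : cell widths for each longitude
    # z_c   : cell-centre depths z_k (negative), surface level first
    h = np.minimum(H, h_ekman)                        # Ekman-layer depth (Equation e_h)
    M = len(z_c)
    psi = np.zeros(M)
    for k in range(M):
        frac = np.clip((h + z_c[k]) / h, 0.0, None)   # zero below the Ekman layer
        psi[k] = np.sum(frac * tau_x * dx) / (rho0 * f)
    return psi

# Illustrative use with a uniform easterly wind stress of -0.05 N m^-2.
nx, dz = 30, np.full(75, 50.0)
z_c = -(np.cumsum(dz) - 0.5 * dz)
psi_ekm = psi_ekman(np.full(nx, -0.05), np.full(nx, 3000.0),
                    np.full(nx, 2.0e4), z_c, f=1.0e-4)
\end{verbatim}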
Using the overturning streamfunction components defined above, the estimated overturning streamfunction $\tilde{\Psi}$ can be calculated using
\begin{eqnarray}
\tilde{\Psi}(z_k, t_{\ell}; y_j) =\Psi_{W}(z_k, t_{\ell}; y_j) + \Psi_{E}(z_k, t_{\ell}; y_j) + \Psi_{bot}(z_k, t_{\ell}; y_j) + \Psi_{Ekm}(z_k, t_{\ell}; y_j) .
\label{e_T_tilda_math}
\end{eqnarray}
In order to assess the accuracy of this decomposition, the MOC overturning streamfunction is also calculated by vertically and zonally integrating the meridional velocity (Equation \ref{T}). For this calculation, we use a $4$-point average velocity $\overline{v}$ at each $u$-point, and a 2-point average velocity for cells adjacent to the bathymetry (for which the 4-point average is not available, discussed further in Section \ref{S_TM_est}). The ``true'' or ``expected'' overturning streamfunction $\Psi$ is therefore calculated using
\begin{equation}
\Psi(z_k, t_{\ell}; y_j) = \sum_{m=k}^{M+1} \sum_{i\in \mathcal{I}(z_m; y_j)} \overline{v}(x_i,z_m, t_{\ell}; y_j) \Delta x_i \Delta z_m
\label{e_T_math}
\end{equation}
where $M$ is again the vertical cell index for the bottommost interior cell in the basin, and level $M+1$ indicates an additional partial cell. The resulting overturning streamfunction $\Psi(z_k, t_{\ell}; y_j)$ can thus be compared directly to the total estimated overturning streamfunction $\tilde{\Psi}(z_k, t_{\ell}; y_j)$ calculated as the sum of the boundary components (Equation \ref{T_smp}).
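For comparison, a sketch of the direct calculation of Equation \ref{e_T_math} at a single latitude and year is given below; the velocity field, cell sizes and ocean mask are illustrative, and the special treatment of partial and sidewall cells is not represented.
\begin{verbatim}
import numpy as np

def psi_expected(v_bar, dx, dz, ocean_mask):
    # v_bar      : averaged meridional velocity (M levels x nx longitudes),
    #              surface level first
    # dx, dz     : cell widths (nx) and thicknesses (M)
    # ocean_mask : boolean (M, nx), True for wet cells
    per_level = np.sum(np.where(ocean_mask, v_bar, 0.0) * dx[None, :],
                       axis=1) * dz                  # transport per level (m^3/s)
    return np.cumsum(per_level[::-1])[::-1]          # cumulative from the sea floor up

# Illustrative use: 75 levels, 30 longitudes, all cells wet.
rng = np.random.default_rng(5)
M, nx = 75, 30
psi = psi_expected(0.05 * rng.standard_normal((M, nx)),
                   np.full(nx, 2.0e4), np.full(M, 50.0),
                   np.ones((M, nx), dtype=bool))
\end{verbatim}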
\section{Time-mean characteristics of the overturning streamfunction}
\label{S_TM_est}
In this section the estimated time-mean overturning streamfunction, calculated as the sum of boundary components, is compared to the expected overturning streamfunction calculated directly from meridional velocities. Contributions made by boundary components to the time-mean overturning streamfunction are also analysed. Conservation of volume is applied at each latitude in the form of a compensatory term, and the role of additional cells, not included in the decomposition diagnostic, is considered.
The time-mean overturning streamfunction $\Psi(z_k; y_j)$ at depth $z_k$ for latitude $y_j$, is calculated using the temporal average
\begin{equation}
\Psi(z_k; y_j) = \frac{1}{n_Y}\sum_{\ell = 1}^{n_Y} \Psi(z_k, t_{\ell}; y_j)
\label{e_T_avg}
\end{equation}
where $n_Y$ is the number of years in the model run. Time-mean $\tilde{\Psi}(z_k; y_j)$ (Equation \ref{e_T_tilda_math}) and the corresponding time-mean boundary components are calculated in a similar fashion.
At each latitude $y_j$ individually throughout the Atlantic basin, for latitude interval ($34^\circ$S, $67^\circ$N), we calculate the overturning streamfunction to the surface, $\Psi(0; y_j)$. To conserve volume we expect that this value should be near to zero. However, it is known that a small volume inflow occurs through the Bering Strait, estimated by \cite{McDonagh2015} to be of order 1.1Sv. In addition, freshwater via melting sea ice and river outflow provide other sources of water into the Atlantic. For simplicity, we assume basin volume is conserved for each latitude and therefore impose no net meridional transport at each latitude ($\Psi(0;y_j) = 0$).
At the RAPID monitoring array (\citealt{Cunningham2007}, \citealt{McCarthy2015}), depth-independent interior flows, other than those within the western boundary wedge, are represented using a uniformly distributed volume conservation term. In our work, depth-independent interior flows are primarily allocated to the bottom component of the decomposition, with volume compensation being used subsequently to close any remaining imbalances in the volume budget. We impose volume conservation throughout the basin by redistributing $\Psi(0; y_j)$ uniformly as a velocity $v_c$ in the opposite direction across the longitude-depth cross-section for each individual latitude.
For any individual overturning streamfunction component $\Psi_{\eta}$, the compensated component $\Psi_{\eta}^c$ is given by
\begin{eqnarray}
\Psi_{\eta}^c(z_k, t_{\ell}; y_j) = \Psi_{\eta}(z_k, t_{\ell}; y_j) - C_{\eta}(z_k, t_{\ell}; y_j) , \label{e_etaC_T}
\end{eqnarray}
where
\begin{eqnarray}
C_{\eta}(z_k, t_{\ell}; y_j) = \sum_{m=k}^{M}\left( E(z_m; y_j) - W(z_m; y_j) \right) v_c (t_{\ell}; y_j) \Delta z_m
\label{e_etaC_C}
\end{eqnarray}
and the compensation velocity in this case is
\begin{eqnarray}
v_c(t_{\ell}; y_j) = \frac{\Psi_{\eta}(0, t_{\ell}; y_j)}{\sum_{m=1}^{M} \left( E(z_m; y_j) - W(z_m; y_j) \right) \Delta z_m } .
\label{e_etaC_vc}
\end{eqnarray}
In Equation \ref{e_etaC_vc}, the denominator represents the total basin cross-sectional area. This compensation is applied independently at each latitude for each individual overturning streamfunction component, and the expected $\Psi$ and estimated $\tilde{\Psi}$ overturning streamfunction. Note that the estimated compensated overturning streamfunction is related to the individual compensated boundary components, as expected, by
\begin{eqnarray}
\tilde{\Psi}^c(z_k, t_{\ell}; y_j) =\Psi_{W}^c(z_k, t_{\ell}; y_j) + \Psi_{E}^c(z_k, t_{\ell}; y_j) + \Psi_{bot}^c(z_k, t_{\ell}; y_j) + \Psi_{Ekm}^c(z_k, t_{\ell}; y_j) .
\label{e_etaC_Ttilda}
\end{eqnarray}
The compensation term corresponding to the expected overturning streamfunction $\Psi$ (Equation \ref{e_T_avg}) is denoted by $\Psi_{vol}$, and is considered further in Chapter \ref{TJ_Var}. Compensated overturning streamfunctions $\Psi^c$ and $\tilde{\Psi}^c$ at each latitude are shown in Figure \ref{F_TM_strm}, with $\Psi^c = \tilde{\Psi}^c = 0$ at the surface. Hereafter, when we discuss the expected and estimated overturning streamfunctions, we are in fact referring to the expected and estimated ``compensated'' overturning streamfunctions ($\Psi^c$, $\tilde{\Psi}^c$).
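A minimal sketch of this compensation step for a single component, latitude and year is given below, following Equations \ref{e_etaC_T} to \ref{e_etaC_vc}; the profile, basin widths and cell thicknesses are illustrative. By construction, the compensated profile vanishes at the surface.
\begin{verbatim}
import numpy as np

def compensate(psi, widths, dz):
    # psi    : component profile Psi_eta(z_k), surface value first (psi[0])
    # widths : basin width E(z_m) - W(z_m) at each level (metres)
    # dz     : cell thicknesses Delta z_m (metres)
    area = np.sum(widths * dz)                  # basin cross-sectional area
    v_c = psi[0] / area                         # compensation velocity (Equation e_etaC_vc)
    # C_eta(z_k) = sum_{m=k}^{M} width_m * v_c * dz_m, accumulated from the bottom up.
    C = np.cumsum((widths * v_c * dz)[::-1])[::-1]
    return psi - C

# Illustrative use: a profile with a 2 Sv surface imbalance is driven to zero there.
M = 75
dz = np.full(M, 50.0)
widths = np.full(M, 5.0e6)
psi = np.linspace(2.0e6, 0.0, M)                # m^3/s, surface value first
psi_c = compensate(psi, widths, dz)             # psi_c[0] == 0 up to rounding
\end{verbatim}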
In Figure \ref{F_TM_strm}, the gradient of the overturning streamfunction at a particular depth indicates the magnitude and direction of volume transport. Flow around positive maxima (deep red) is clockwise, whereas flow around negative minima (deep blue) is anticlockwise. Thus, the figure indicates northward flow near the surface and a southward flow at roughly $1000$-$4000$m. Latitudes near to the equator between $7^\circ$S and $7^\circ$N are omitted since the Coriolis parameter $f$ (present as $1/f$ in $\Psi_W$ and $\Psi_E$) vanishes at the equator, and thus geostrophic balance is no longer a good approximation.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_TM_025C_aj_TotEst.png}}
\caption[Comparison of expected and estimated time-mean overturning streamfunction for the $1/4^\circ$ model.]{Comparison of time-mean overturning streamfunction for the $657$ year-long $1/4^\circ$ dataset calculated directly from meridional velocities ($\Psi^c$, Equation \ref{e_T_math}) and the estimated overturning streamfunction $\tilde{\Psi}^c$ reconstructed from the boundary components (Equation \ref{e_etaC_Ttilda}).}
\label{F_TM_strm}
\end{figure*}
The reconstructed Atlantic overturning streamfunction $\tilde{\Psi}^c$ (Figure \ref{F_TM_strm}(b)) compares reasonably well at leading order to the expected overturning streamfunction $\Psi^c$ (Panel (a)). In terms of spatial variance (explained in Appendix \ref{App_SpatVar}), $\tilde{\Psi}^c$ explains $95.8\%$ of the spatial structure of $\Psi^c$. However, there are clear differences between the two streamfunctions. Potential reasons for these differences will be discussed in the remainder of the chapter.
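The precise definition of spatial variance explained is given in Appendix \ref{App_SpatVar} and is not reproduced here; as a rough stand-in only, the sketch below uses the common choice of one minus the ratio of the variance of the difference field to the variance of the reference field, optionally weighted by cell area. This is an assumed formula for illustration and may differ in detail from the Appendix definition.
\begin{verbatim}
import numpy as np

def spatial_variance_explained(psi_ref, psi_est, weights=None):
    # Assumed illustrative metric: 1 - Var(psi_ref - psi_est) / Var(psi_ref),
    # computed over the latitude-depth plane with optional area weights.
    if weights is None:
        weights = np.ones_like(psi_ref)
    w = weights / np.sum(weights)
    def wvar(a):
        mean = np.sum(w * a)
        return np.sum(w * (a - mean) ** 2)
    return 1.0 - wvar(psi_ref - psi_est) / wvar(psi_ref)

# Illustrative use with invented fields.
rng = np.random.default_rng(6)
ref = rng.standard_normal((50, 120))
est = ref + 0.2 * rng.standard_normal((50, 120))
print(spatial_variance_explained(ref, est))      # close to 0.96 for this example
\end{verbatim}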
A region of clear difference between $\tilde{\Psi}^c$ and $\Psi^c$ is the latitude range $25^\circ$N to $30^\circ$N. Here strong western boundary currents are present, and a possible reason for weaker transport within $\tilde{\Psi}^c$ is model formulation. As discussed in Section \ref{S3_ModDes}, primary variables (e.g. $\rho$, $\theta$, $S$) are located at the centre of a T-cell. Cells containing full field information are known as ``complete'' cells or interior cells. Near boundaries and bathymetry, additional cells are present, referred to as ``sidewall'' and ``partial'' cells, respectively. The latter cells replicate the original boundary as closely as possible given model resolution. Unfortunately, partial cells are of varying vertical thickness relative to adjacent interior cells at the same $k$-level. It is therefore challenging to calculate unambiguously the density gradient across a $u$-cell within a partial cell. Hence the decomposition calculation cannot be replicated routinely using densities for these cells. This is an unavoidable limitation of our approach. Sidewall and partial cells are illustrated in Figure \ref{F_Sch_Cells}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=16cm]{Fig_AMOC/Fig_AddCells.png}}
\caption[A longitude-depth cross-section of cells, including interior, partial and sidewall cells.]{A longitude-depth cross-section of cells, including interior, partial and sidewall cells. Sidewall cell contributions are calculated using $\overline{v}_n$ located at the centre of the full T-cell adjacent to boundary (same location as $\rho_W$ and $\rho_E$ in figure). Partial cell contributions are calculated using the northward velocity in the partial cell, $\overline{v}_{pc}$.}
\label{F_Sch_Cells}
\end{figure*}
However, using northward velocities present in these incomplete cells, we are able to estimate their compensated overturning streamfunction contributions, referred to as $\Psi_{AC}^c(z_k, t_{\ell}; y_j)$. This calculation is similar to that performed for the expected overturning streamfunction $\Psi$ (Equation \ref{e_T_math}, without compensation applied), but for additional cells (i.e. sidewall and partial cells) only. Figure \ref{F_TM_BC} shows boundary component and additional cell contributions to the total estimated compensated overturning streamfunction $\tilde{\Psi}^c$.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_TM_025C_aj_BdryCmpt.png}}
\caption[Time-mean contributions from boundary components to the estimated overturning streamfunction $\tilde{\Psi}^c$ for $1/4^\circ$ model.]{Time-mean contributions from boundary components to the estimated overturning streamfunction $\tilde{\Psi}^c$ shown in Figure \ref{F_TM_strm} for $1/4^\circ$ model. Panels are; (a) thermal wind contribution, $\Psi_W^c+\Psi_E^c$, (b) additional cell contribution $\Psi_{AC}^c$ from partial and sidewall cells, (c) depth-independent flow contribution $\Psi_{bot}^c$ and (d) Ekman contribution $\Psi_{Ekm}^c$.}
\label{F_TM_BC}
\end{figure*}
The thermal wind component is equivalent to the aggregated contribution $\Psi_W^c+\Psi_E^c$ of western and eastern boundary density components. $\Psi_W^c$ and $\Psi_E^c$ have similar large magnitudes but opposite signs. Very little is gained from visual inspection of $\Psi_W^c$ and $\Psi_E^c$ individually. The thermal wind component dominates the northward transport over most of the basin, except in the surface region between $7^\circ$N and $25^\circ$N where strong easterly winds result in northward Ekman transport, and $25^\circ$N to $30^\circ$N where strong boundary currents through the Florida Straits result in a large depth-independent flow contribution, $\Psi_{bot}^c$. \cite{Sime2006} also observe a strong contribution of the ``external mode'' component, similar to our depth-independent component, in HadCM$3$ data.
The contribution of additional cells to the time-mean overturning streamfunction is relatively large in magnitude. At the current spatial resolution of $1/4^\circ$, the width of sidewall cells near the coastline can vary with longitude and latitude. Similarly, the depths of partial cells can vary from a few metres to approximately $260$m deep. Unsurprisingly, in regions of strong western boundary currents and bottom flows, the contribution made by these cells to the overturning streamfunction throughout the water column is significant and cannot be ignored. We find that by adding $\Psi_{AC}^c$ to the decomposition estimate $\tilde{\Psi}^c$, the spatial variance of ${\Psi}^c$ explained by $\tilde{\Psi}^c + \Psi_{AC}^c$ rises from $95.8\%$ to $97.9\%$. Further analysis of the additional cells and how to minimise their neglected contribution is discussed in Section \ref{S3_ResCmp}.
\section{Impact of GCM model resolution on $\Psi_{AC}^c$}
\label{S3_ResCmp}
Issues related to sidewall and partial cells arise because of the finite resolution of the NEMO computational grid. For coarse spatial resolution, the area corresponding to additional cells in a longitude-depth cross-section is relatively large. Therefore, increasing grid resolution should reduce the contribution of additional cells. However, as spatial resolution increases, the number of additional cells also increases to describe the bathymetry more precisely, and hence the total area corresponding to partial and sidewall cells may reduce relatively slowly. Therefore, it is interesting to consider the effect of grid resolution on the importance of additional cell contributions to the overturning streamfunction.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_ResComp_AC_DiffTot.png}}
\caption[Comparing time-mean contribution from additional cells and the difference between expected $\Psi$ and decomposition estimate $\tilde{\Psi}^c$ of the overturning streamfunction.]{Left: time-mean contribution from additional cells ($\Psi_{AC}^c$) for $1^\circ$ (a), $1/4^\circ$ (c) and $1/12^\circ$ (e) grid resolutions. Right: difference between expected $\Psi^c$ and decomposition estimate $\tilde{\Psi}^c$ (b,d and f).}
\label{F_AC_c}
\end{figure*}
To test the role of ocean model grid resolution in determining the additional cell contribution, the decomposition diagnostic is applied to GCM data corresponding to $1^\circ$ and $1/12^\circ$ spatial resolutions, and compared with the decomposition using $1/4^\circ$ data discussed previously. Figure \ref{F_AC_c} shows the contribution $\Psi_{AC}^c$ of additional cells in comparison to the difference $\Psi^c-\tilde{\Psi}^c$ between the estimated $\tilde{\Psi}^c$ and expected overturning streamfunction $\Psi^c$. The figure suggests that the additional cell contribution is at least partly responsible for the structure of $\Psi^c-\tilde{\Psi}^c$; this will shortly be quantified. Further, increasing spatial resolution reduces the additional cell contribution in some areas. However, increasing resolution does not bring better agreement between $\Psi^c$ and $\tilde{\Psi}^c$ at all latitudes for the resolutions considered; we still find regions poorly explained by $\tilde{\Psi}^c$ even when additional cells are considered.
For the same underlying initial settings (e.g. atmospheric resolution, spin-up period) applied to the three spatial resolutions, we find that for volume compensated overturning streamfunctions, the spatial variance (Appendix \ref{App_SpatVar}, Table \ref{Tab_SV}) of $\Psi^c$ explained by $\tilde{\Psi}^c$ (without additional cells) increases with increasing resolution. Adding the additional cell contribution $\Psi_{AC}^c$ into the decomposition estimate $\tilde{\Psi}^c$ for all resolutions in turn results in a significant improvement in spatial variance explained for the $1^\circ$ and $1/4^\circ$ models, and a smaller improvement for the $1/12^\circ$ model.
\begin{table}[h!]
\centering
\begin{tabular}{ |c||c|c|c| }
\hline
\multicolumn{4}{|c|}{Spatial variance of $\Psi^c$ explained} \\
\hline
& $1^\circ$ &$1/4^\circ$&$1/12^\circ$\\
\hline
$\tilde{\Psi}^c$ & $93.1\%$ &$95.8\%$& $98.3\%$\\
$\tilde{\Psi}^c+\Psi_{AC}^c$ & $98.3\%$ & $97.9\%$ &$98.6\%$\\
\hline
\end{tabular}
\caption[Quantifying the role of additional cell contributions to the overturning streamfunction, using spatial variation of $\Psi^c$ explained by $\tilde{\Psi}^c$ and $\tilde{\Psi}^c+\Psi_{AC}^c$.]{Investigating the role of additional cell contributions to the overturning streamfunction: spatial variation of $\Psi^c$ explained by $\tilde{\Psi}^c$ and $\tilde{\Psi}^c+\Psi_{AC}^c$. }
\label{Tab_SV}
\end{table}
We note that for the $1/4^\circ$ model, as shown in Appendix \ref{App_SpatVar}, $\tilde{\Psi}^c$ provides a nearly unbiased estimate for the expected overturning streamfunction $\Psi^c$. It is interesting, however, that $\tilde{\Psi}^c+\Psi_{AC}^c$ alone also provides an estimate for $\Psi^c$ with increased bias but considerably reduced uncertainty. We conclude that inclusion of additional cells introduces considerable extra information into the expected overturning streamfunction relative to our estimate based on boundary quantities, at the expense of increased bias (seen by comparing the value of $\sigma$ with $\sigma^*$, and $\mu$ with $\mu^*$ in Equation \ref{e_A_BiasVar}).
Previous work by \cite{Allison2009} and \cite{Burton2010} used similar decomposition methods to investigate the variability of the ACC through the Drake Passage, and the overturning in the North Atlantic, respectively. Both studies were able to capture the temporal variability well, but capturing the time-mean value of the ACC or AMOC overturning streamfunction was more problematic. \cite{Burton2010} found her estimates for the time-mean overturning streamfunction for a specific depth and latitude were double the value expected from direct application of Equation \ref{T}. No explanation for this discrepancy, similar to that encountered by \cite{Allison2009}, was provided.
The ability of the decomposition diagnostic to capture the spatial structure of $\Psi^c$ is promising. Discrepancies shown in Figure \ref{F_TM_strm} are partially explained by the contribution of the additional cells, but further improvement might be possible. We explore potential sources of discrepancies between estimated and expected overturning streamfunctions in subsequent sections. Direct comparison of the observed overturning streamfunction timeseries at RAPID and SAMBA with corresponding GCM-based timeseries is made in Section \ref{S3_RS}.
We note that contributions from additional cells are henceforth incorporated into the definition of the estimated overturning streamfunction, namely
\begin{eqnarray}
\tilde{\Psi}^c(z_k, t_{\ell}; y_j) &=& \Psi_{W}^c(z_k, t_{\ell}; y_j) + \Psi_{E}^c(z_k, t_{\ell}; y_j) + \Psi_{bot}^c(z_k, t_{\ell}; y_j) \nonumber \\
&+& \Psi_{Ekm}^c(z_k, t_{\ell}; y_j) + \Psi_{AC}^c(z_k, t_{\ell}; y_j). \quad \quad
\label{e_tilde_TcF}
\end{eqnarray}
\section{Exploring the discrepancy between ${\Psi}^c$ and $\tilde{\Psi}^c$}
One motivation for performing decomposition diagnostic calculations at $u$-points on the model grid is that equivalent momentum trends and their constituents for these exact same locations can be accessed directly.
The nature of discrepancies between $\Psi^c$ and $\tilde{\Psi}^c$ (Figure \ref{F_AC_c}) raises questions regarding possible theoretical oversights in constructing the decomposition diagnostic. In this section we will show that the reconstructed time-mean overturning streamfunction calculated using underlying momentum contributions is better than that achieved using the decomposition diagnostic. We will show that the main reason for the discrepancy is the non-linear dependence of in-situ density on potential temperature, and the use of annual-mean potential temperature fields within the in-situ density calculation. By adopting a correctly time-averaged density, we will demonstrate a marked improvement in the performance of the decomposition diagnostic.
First, we will discuss the contribution of the Coriolis acceleration and bottom drag to the differences between decomposition-based $\tilde{\Psi}^c$ and expected overturning streamfunction $\Psi^c$.
\subsection{The effect of the Coriolis acceleration calculation}
Analysis of the uncompensated overturning streamfunctions shows a depth-dependent difference between the estimated $\tilde{\Psi}$ and expected $\Psi$ streamfunctions for numerous latitudes (not shown), which increases as the surface is approached. This suggests the possibility that small errors near the bottom of the fluid column are propagated upwards due to vertical integration in overturning streamfunction calculations.
Possible sources of error could be the simplification of the Coriolis acceleration ($fv$) calculation near bathymetry, or lack of consideration of bottom friction and drag within the diagnostic framework. An investigation, reported in Appendix \ref{App_Coriolis}, suggests that the greater number of calculation locations within the Energy and Enstrophy (EEN) scheme used to calculate the Coriolis acceleration ($fv$) by NEMO improves the resulting overturning streamfunction estimates, reducing errors caused by bathymetry in places where the simplified 4-point velocity method used in the decomposition performs poorly. Improvement in estimated overturning streamfunction is found especially in regions of rough bathymetry; these improvements are, however, small in comparison to the differences $\Psi^c - \tilde{\Psi}^c$ in time-mean overturning streamfunction.
\subsection{Direct calculation of the overturning streamfunction using momentum trend terms}
We now examine differences between momentum trend terms output directly by the NEMO model and equivalent trends calculated using information (e.g. $\theta$, $S$) exploited in the construction of the decomposition diagnostics. Each momentum term indicates the contribution made by a different physical process to the $u$-component of fluid momentum (Equation \ref{GB}).
Annual-mean momentum trend terms will be considered for a single year, output after a $30$-year spin-up (for the $1/4^\circ$ model run u-bx$950$). For fair comparison, the compensated overturning streamfunction using the decomposition diagnostics is also recalculated using data from the same run. In addition to further understanding the contributions made by various momentum trend terms and explaining the lack of closure of the decomposition diagnostics, we hope to quantify the optimal theoretical performance of our decomposition diagnostics using only those momentum terms which relate to processes accounted for within the decomposition.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MomTrnds_-34.png}}
\caption[One year average contribution made by each of the momentum trend terms for a single latitude ($34^\circ$S).]{One year average contribution made by each of the momentum trend terms for a single latitude ($34^\circ$S). (a) total momentum trend, (b) horizontal pressure gradient trend, (c) Coriolis acceleration or $fv$, (d) vertical diffusion, (e) momentum advection, or sum of relative vorticity and kinetic energy gradient, (f) lateral diffusion, (g) vertical advection and (h) bottom friction. Colour scale chosen in order to see features in momentum trends at the expense of saturation found in Panels (b) and (c).}
\label{MomDiag_all}
\end{figure*}
Momentum trend terms output by the NEMO model (shown in Figure \ref{MomDiag_all}) are: the horizontal pressure gradient, planetary vorticity (Coriolis acceleration), relative vorticity (vortex force), vertical diffusion, lateral diffusion, vertical advection, bottom friction, gradient of the kinetic energy and total momentum trend. The sum of all momentum trend contributions is equal to the ``total momentum'' trend (Panel (a)), which at longer timescales tends to zero (i.e. ${\partial u}/{\partial t} \rightarrow 0$). Visible in Figure \ref{MomDiag_all} are the dominant magnitudes of the horizontal pressure gradient (b) and the Coriolis acceleration (c) terms.
The decomposition diagnostic, in its thermal wind term $\Psi_W^c + \Psi_E^c$, takes into account the role of the depth-dependent part of the pressure gradient force, encapsulated by the horizontal pressure gradient momentum term. In addition, the depth-independent flow term $\Psi_{bot}^c$ can be viewed as incorporating the depth-independent component of the pressure gradient force (when bottom friction is negligible and geostrophy accurate); this is equivalent to the Coriolis acceleration term $fv$. The Ekman term $\Psi_{Ekm}^c$ is equivalent to the vertical diffusion momentum trend term. The optimal theoretical capability of the decomposition diagnostic is therefore equivalent to the overturning streamfunction calculated using the sum of these three momentum trend terms only. Additional, more localised, processes such as momentum advection (i.e. the sum of the gradient of kinetic energy, and relative vorticity terms), lateral diffusion and bottom friction are neglected within the decomposition diagnostics; to describe the overturning streamfunction fully, these additional contributions would need to be included.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_Diff_MaxStrm_MT.png}}
\caption[Investigating role of momentum trend terms, not contributing to the decomposition diagnostic, in explaining the total overturning streamfunction calculated using all momentum trend terms.]{ Investigation into the role of momentum trend terms, not contributing to the decomposition diagnostic, in explaining the total overturning streamfunction calculated using all momentum trends, for a yearly average. The figure shows differences in maximum overturning streamfunction as a function of latitude, between the expected overturning streamfunction and streamfunctions estimated using (a) only momentum trends used in boundary decomposition (Dcmp M.T., red), (b) momentum trends used in boundary decomposition and momentum advection (orange), (c) momentum trends used in boundary decomposition, momentum advection and lateral diffusion (green), (d) momentum trends used in boundary decomposition, momentum advection, lateral diffusion and vertical advection (blue) and (e) all momentum trends (black).}
\label{MD_RMS_Trsp}
\end{figure*}
Figure \ref{MD_RMS_Trsp} emphasises the role of each of the additional momentum trend contributions not taken into account within the decomposition diagnostic. Improvement at certain latitudes would be found if additional information such as the momentum advection contribution were to be included within the decomposition diagnostic. However the inclusion of additional terms would require a radically different approach to decomposition, detracting from the relative simplicity of the current diagnostics. Further, the added complexity of an amended decomposition would require information from the ocean interior, inconsistent with the aim of understanding and quantifying the overturning streamfunction using boundary information only. We note that the total momentum trend line (black) should display smallest discrepancy values in the figure, but surprisingly we find large values between $20^\circ$N and $30^\circ$N. We speculate that this feature is due to the large number of additional cells at these latitudes due to numerous islands and ridges, exacerbating inaccuracies in representing bottom friction effects within this model run.
In Figure \ref{MD_AMOC_Trsp}, we compare different estimates for the total compensated overturning streamfunction: (a) $\Psi^c$, using meridional velocities from the model (Equation \ref{T}), (b) the equivalent overturning streamfunction calculated using the $fv$ momentum trend term, by dividing by $f$ and using Equation \ref{T}, (c) $\tilde{\Psi}^c$, using the decomposition diagnostic, and (d) the overturning streamfunction calculated using only momentum trend terms accounted for in the decomposition diagnostic. Figure \ref{MD_Trsp_Diff} shows the differences between the estimates of total overturning streamfunction in Figure \ref{MD_AMOC_Trsp}. Comparing Panels (a), (b), (c) and (d) of Figure \ref{MD_Trsp_Diff} reveals that the decomposition diagnostic (Figure \ref{MD_AMOC_Trsp} (c)) performs less well than would be expected from consideration of its constituent momentum trend terms. This is particularly evident in Figure \ref{MD_Trsp_Diff}(c), where large differences are found between the decomposition estimate ($\tilde{\Psi}^c$) and the equivalent calculated using only momentum trends accounted for in the decomposition diagnostic. Figure \ref{MD_AMOC_Trsp}(a,b) and Figure \ref{MD_Trsp_Diff}(d) show that the overturning streamfunctions (Equation \ref{T}) calculated using meridional velocities and using the Coriolis acceleration $fv$ momentum trend are similar.
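A minimal Python/NumPy sketch of the $fv$-based estimate in (b) is given below, assuming the $fv$ trend is available as a three-dimensional array and that Equation \ref{T} amounts to a zonal integral followed by a cumulative vertical integral; array names, grid metrics and the direction of the vertical integral are illustrative assumptions rather than the model code.
\begin{verbatim}
import numpy as np

def overturning_from_fv(fv_trend, f, dx, dz):
    # fv_trend: Coriolis momentum trend, shape (nz, ny, nx), in m s^-2.
    # f: Coriolis parameter per latitude, shape (ny,); near-equatorial
    #    points would need special treatment (f -> 0).
    # dx: zonal cell widths (ny, nx); dz: cell thicknesses (nz,).
    v_equiv = fv_trend / f[None, :, None]                 # recover v from f*v
    zonal_transport = np.nansum(v_equiv * dx[None, :, :], axis=2)   # (nz, ny)
    # cumulative vertical integral (here taken from the surface downwards)
    psi = np.nancumsum(zonal_transport * dz[:, None], axis=0)
    return psi / 1.0e6                                    # m^3 s^-1 -> Sv
\end{verbatim}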
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MT_Dcmp_Tot.png}}
\caption[Comparison of overturning streamfunctions calculated using definition, decomposition diagnostic and momentum trend terms.]{Comparison of overturning streamfunctions: (a) using meridional velocities ($v^{4pt}_{north}, $Equation \ref{T}) (b) using Coriolis acceleration $fv$ momentum trend (and Equation \ref{T}) (c) estimate from the decomposition diagnostic ($\tilde{\Psi}^c$) and (d) using only momentum trend terms accounted for in the decomposition diagnostic.}
\label{MD_AMOC_Trsp}
\end{figure*}
Detailed analysis of the main decomposition contributions ($\Psi_W+\Psi_E$, $\Psi_{bot}$ and $\Psi_{Ek}$), compared with the equivalent contributions calculated using momentum trends, localises the source of error to the density term, and the thermal wind component $\Psi_W+\Psi_E$ (not shown). Specifically, for a simple ocean box away from any bathymetry, there is a clear difference between the $\Psi_W+\Psi_E$ term calculated from density and the equivalent calculated using momentum trend terms; the latter is calculated using the horizontal pressure gradient momentum trend with bottom pressure removed from each fluid column, and then integrated vertically and zonally.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MT_Dcmp_Diff.png}}
\caption[Comparison of the differences between overturning streamfunctions shown in Figure \ref{MD_AMOC_Trsp}.]{Comparison of the differences between overturning streamfunctions shown in Figure \ref{MD_AMOC_Trsp}: (a) using meridional velocities - estimate from the decomposition diagnostic ($\tilde{\Psi}^c$), (b) using Coriolis acceleration $fv$ momentum trend - using momentum trends contributing to the decomposition diagnostic only, (c) using momentum trends contributing to the decomposition diagnostic only - estimate from the decomposition diagnostic ($\tilde{\Psi}^c$) and (d) using Coriolis acceleration $fv$ momentum trend - using meridional velocities.}
\label{MD_Trsp_Diff}
\end{figure*}
Similar comparisons of the vertical diffusion trend with the Ekman component $\Psi_{Ek}$, and of the equivalent $fv$ terms using the (a) EEN scheme, (b) $4$-point (4pt) averaging method $\Psi_{bot}$ and (c) the Coriolis acceleration $fv$ momentum trend, show little to no differences. This suggests strongly that the source of transport discrepancy lies in the boundary density terms $\Psi_W$ and $\Psi_E$.
\subsection{Influence of time-averaging input fields upon the horizontal pressure gradient momentum trend}
\subsubsection*{Motivating investigations}
One possible reason for the differences found when comparing the main decomposition components and their momentum trend equivalents could be the assumption made within the decomposition of a rigid lid, and hence the absence of a free surface. Using the rigid lid approximation, T-cell depths are constant with latitude and longitude since surface waves are neglected. A free surface, and hence an s- rather than z-coordinate system, was therefore introduced for this investigation, to improve the correspondence between the diagnostic model framework and NEMO. An s-coordinate (or $\sigma$-coordinate) system (\citealt{SONG1994228}, \citealt{Song1998}) can be thought of as a terrain-following coordinate system. Its definition exploits the ratio of the pressure at any point in the ocean to the pressure at the bottom of the ocean.
Further, the horizontal pressure gradient (hpg) trend calculated using input fields used for the decomposition diagnostic was compared directly with the horizontal pressure gradient trend output for a single year from the u-bx$950$ run. This reduces the number of integrations (avoids vertical integration) and the possibility for error propagation throughout the calculation. Initial attempts at recalculating the horizontal pressure gradient trend using annually averaged input fields gave poor agreement throughout the basin and worsened with depth. The significant depth-dependence of the difference suggested possible issues with accuracy of the depths used for the centre of T-cells, and W-levels (water depth at top of each T-cell) within the calculation.
In an attempt to obtain full agreement for the horizontal pressure gradient trend, the explicit NEMO method for calculating the in-situ density anomaly $\rho_d$,
\begin{equation}
\rho_d = \frac{\rho - \rho_0}{\rho_0},
\label{rhd}
\end{equation}
was also introduced. Here $\rho_0$ is the nominal density of seawater ($1026~\text{kg}\,\text{m}^{-3}$) and $\rho$ is the in-situ density. However, agreement between the momentum trend fields output by NEMO and the equivalent calculated using annually time-average input fields (e.g. potential temperature, salinity and cell depths) was again poor. The same calculation based on monthly data also gave poor agreement, possibly attributed to internal gravity waves. Agreement is only found when instantaneous input fields are used.
\subsubsection*{The role of time-averaging}
\label{TM_TA}
In the NEMO model, the horizontal pressure gradient trend is calculated from instantaneous $\theta$, $S$ and other fields (shown in Figure \ref{hpg_terms}). In the overturning streamfunction decomposition diagnostic, we use time-average fields for $\theta$, $S$, etc.
The influence of time-averaging horizontal pressure gradient trends is analysed using monthly averaged data for June $1996$ and for December $1996$, from the $1/4^\circ$ u-bj$748$ dataset, illustrated here for location $55.21^\circ$S, $38.75^\circ$W. The horizontal pressure gradient trend is first calculated for each of the two months separately, as a function of four main input parameters $\theta_i, S_i, SSH_i, e3t_i$, namely potential temperature, salinity, sea surface height and the depth of T-cell fields for the $i^{th}$ month ($i=1,2$). This yields $hpg(\theta_1, S_1, SSH_1, e3t_1)$ and $hpg(\theta_2, S_2, SSH_2, e3t_2)$ for the individual months. Then the time-average horizontal pressure gradient trend over the two months is calculated using
\begin{equation}
\overline{hpg}_{AA} = \frac{hpg(\theta_1, S_1, SSH_1, e3t_1) + hpg(\theta_2, S_2, SSH_2, e3t_2)}{2}.
\label{TM_c}
\end{equation}
The influence of time-averaging is quantified by comparing $\overline{hpg}_{AA}$ with the equivalent horizontal pressure gradient trend $\overline{hpg}_{AB}$ calculated using time-average input fields (averaged over the two months)
\begin{equation}
\overline{hpg}_{AB}= hpg(\overline{\theta}, \overline{S}, \overline{SSH}, \overline{e3t}),
\label{TM_ic}
\end{equation}
where, for example, $\overline{\theta}$ is the average potential temperature over June and December 1996. In simple terms, there are two methods for calculating the time-mean horizontal pressure gradient trend. Both are dependent on time-averaging, but the point at which the average is taken is crucial to obtaining correct results for time-average horizontal pressure gradient. The accurate method is to calculate the horizontal pressure gradient trend initially for each month independently and then time-average (Equation \ref{TM_c}). We denote this estimate as $hpg_{AA}$, with subscript referring to ``averaging after''. The second method estimates $hpg_{AB}$ using time-average inputs, with subscript referring to ``averaging before''; this is the method used in the decomposition diagnostic.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/hpg_schem.png}}
\caption[Schematic of terms used to calculate horizontal pressure gradient ($hpg$) momentum trend term.]{Schematic of the terms used to calculate the horizontal pressure gradient ($hpg$) momentum trend. Percentage difference between the two ways of calculating each term (Equation \ref{e_PrcDff}) is calculated to investigate the role of time-averaging. Terms include the potential temperature ($\theta$), salinity ($S$), width of u-cells ($e1u$), sea surface height ($\Delta h$), depth of T-cells ($e3t$), in-situ density ($\rho$), depth of W-levels ($e3w$), depth of T-cells with respect to geoid ($gdept$), in-situ density anomaly ($\rho_d$), depth of W-levels with respect to geoid ($gdept3w$), hydrostatic pressure gradient along s-surfaces ($zhpi$) and s-coordinate pressure gradient correction ($zuap$).}
\label{hpg_terms}
\end{figure*}
We find that $hpg_{AB} \neq hpg_{AA}$. According to Jensen's inequality (\citealt{Jensen1906}), equality between $hpg_{AB}$ and $hpg_{AA}$ occurs in general when the function $hpg$ is linear. The observed inequality suggests that the horizontal pressure gradient trend may be non-linear with respect to at least one of its input fields.
One method to quantify the influence of time-averaging of input fields is to calculate the percentage difference between estimates of the horizontal pressure gradient trend computed using the two methods:
\begin{equation}
\text{Percentage Difference} = \frac{\overline{hpg}_{AA} - \overline{hpg}_{AB}}{0.5 \times (\overline{hpg}_{AA} + \overline{hpg}_{AB})} \times 100\% .
\label{e_PrcDff}
\end{equation}
This metric is used to assess the non-linearity of the horizontal pressure gradient trend with respect to each of its input arguments, described graphically in more detail in Figure \ref{hpg_terms}. From the figure, we see that the horizontal pressure gradient trend is calculated as the sum of two terms $zuap$ and $zhpi$, explained further in the figure caption. These terms are themselves functions of other variables including $\rho_d$ as described in the figure. Ultimately, we see that the horizontal pressure gradient trend is a function of all the inputs on the left-hand side of the figure.
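The difference between the two averaging orders, and the percentage-difference metric of Equation \ref{e_PrcDff}, can be illustrated with a small Python sketch in which a deliberately non-linear toy function stands in for the full $hpg$ calculation; the quadratic form and the numerical values are purely illustrative, not the NEMO pressure-gradient code.
\begin{verbatim}
def hpg_toy(theta, S):
    # Toy, deliberately non-linear stand-in for the hpg calculation;
    # the quadratic theta-dependence mimics the non-linearity of
    # in-situ density with potential temperature.
    return -0.2 * theta**2 + 0.8 * S

theta_1, S_1 = 4.0, 34.5     # "June" fields (illustrative values)
theta_2, S_2 = 10.0, 35.0    # "December" fields (illustrative values)

# Averaging after (AA): evaluate each month, then average the results.
hpg_AA = 0.5 * (hpg_toy(theta_1, S_1) + hpg_toy(theta_2, S_2))
# Averaging before (AB): average the inputs, then evaluate once.
hpg_AB = hpg_toy(0.5 * (theta_1 + theta_2), 0.5 * (S_1 + S_2))

pct_diff = (hpg_AA - hpg_AB) / (0.5 * (hpg_AA + hpg_AB)) * 100.0
print(hpg_AA, hpg_AB, pct_diff)  # hpg_AA != hpg_AB: hpg_toy is non-linear in theta
\end{verbatim}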
Figure \ref{Mhpg}(a) explores $h_{AA}$ and $h_{AB}$ for $h=hpg, zuap$ and $zhpi$, as a function of water depth. Panel (b) of Figure \ref{Mhpg} then shows the difference $h_{AA} - h_{AB}$ with depth. Finally, Panel (c) gives the percentage difference (Equation \ref{e_PrcDff}) with depth. The figure shows clearly that the source of the difference found in the horizontal pressure gradient trend can be attributed to the $zhpi$ term, which itself is a function of $\rho_d$, shown in red in Panel (c) of Figure \ref{Mhpg}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/TMinit.png}}
\caption[Investigating influence of time-averaging on main components of horizontal pressure gradient momentum trend term calculation.]{Exploring the role of time-averaging. (a) main components of horizontal pressure gradient momentum trend calculation, horizontal pressure gradient momentum trend ($hpg$, black), hydrostatic pressure gradient along s-surfaces ($zhpi$, green) and s-coordinate pressure gradient correction ($zuap$, blue). Solid lines represent averages of each individual monthly contribution. Dashed lines represent contributions calculated using time-average input fields. (b) difference in the main components of horizontal pressure gradient momentum trend calculation (shown in (a)). (c) percentage difference between time-averaging approaches for different input field variables of horizontal pressure gradient momentum trend.}
\label{Mhpg}
\end{figure*}
Figures \ref{hpg_terms} and \ref{Mhpg} indicate a key time-averaging issue stemming from the calculation of the in-situ density anomaly. The method of time-averaging of other input terms, including sea surface height, depth of T-cells etc. has a negligible effect. It is interesting to note, however, that the difference and percentage difference between the two calculations for the s-coordinate pressure gradient correction term $zuap$ is negligible, even though it is nominally $\rho_d$-dependent.
Given the importance of in-situ density $\rho_d$ to estimation of the horizontal pressure gradient trend and hence to our overturning decomposition, we proceed to investigate the main time-dependent input components contributing to its calculation. Referring to Figure \ref{hpg_terms}, these are the potential temperature $\theta$ and salinity $S$. The contributions of these terms are now individually analysed by averaging them (e.g. $\overline{\theta}$, Equation \ref{e_ATh}) whilst allowing each other input field in the horizontal pressure gradient trend calculation to vary with time, then calculating the time-average horizontal pressure gradient trend between both months using
\begin{equation}
\overline{hpg}_{\theta AB}= \frac{hpg(\overline{\theta}, S_1, SSH_1, e3t_1) + hpg(\overline{\theta}, S_2, SSH_2, e3t_2)}{2}
\label{e_ATh}
\end{equation}
for potential temperature, and
\begin{equation}
\overline{hpg}_{SAB} = \frac{hpg(\theta_1, \overline{S}, SSH_1, e3t_1) + hpg(\theta_2, \overline{S}, SSH_2, e3t_2)}{2}
\label{e_AS}
\end{equation}
for salinity.
\begin{figure*}[ht!]
\includegraphics[width=\textwidth]{Fig_AMOC/plt_Inv_TnSn_Diff_Lat-55_Lon-38.png}
\caption[Investigating role of time-average potential temperature and salinity on horizontal pressure gradient trend term.]{Left: horizontal pressure gradient momentum trend for $\overline{hpg}_{AA}$ (black, Equation \ref{TM_c}), $\overline{hpg}_{AB}$ (dashed navy, Equation \ref{TM_ic}; calculated from time-average input fields, labelled $taf$ in the legend), $\overline{hpg}_{\theta AB}$ (dotted red, Equation \ref{e_ATh}) and $\overline{hpg}_{SAB}$ (dotted green, Equation \ref{e_AS}). $\overline{hpg}_{SAB}$ overlies $\overline{hpg}_{AA}$, $\overline{hpg}_{\theta AB}$ overlies $\overline{hpg}_{AB}$. Right: differences in in-situ density $\rho_d$ from the expected $\overline{\rho_d}_{AA}$ for each experiment. Navy denotes $\overline{\rho_d}_{AA}-\overline{\rho_d}_{AB}$, dotted red $\overline{\rho_d}_{AA}-\overline{\rho_d}_{\theta AB}$ and dotted green $\overline{\rho_d}_{AA}-\overline{\rho_d}_{SAB}$.}
\label{TS_hpg}
\end{figure*}
Figure \ref{TS_hpg} indicates that the dependence of $\rho_d$ on potential temperature is problematic for horizontal pressure gradient and overturning calculations using time-average input. This is shown by the almost identical behaviour of the $\overline{hpg}_{AB}$ (dashed navy) and $\overline{hpg}_{\theta AB}$ (dotted red) curves in Panel (a) of Figure \ref{TS_hpg}, and the corresponding curves in Panel (b). The influence of time-average salinity, SSH and depths of T-cells on the horizontal pressure gradient trend, whilst not zero, is negligible in comparison.
In fact, it can be shown (Figure \ref{Lin_TS}, Panel (a)) that for fixed values of salinity $S$ and T-cell depth $e3t$, in-situ density anomaly $\rho_d$ is a non-linear function of potential temperature $\theta$. In comparison, the variation of $\rho_d$ with $S$ (for fixed $\theta$ and $e3t$), and of $\rho_d$ with $e3t$ (for fixed $\theta$ and $S$) is effectively linear. However, at low (polar) temperatures the influence of salinity on setting the density structure is greater than that of temperature. Hence, at these low temperatures, incorrectly time-averaged salinities may also play a larger role. More generally, we might expect regional variations in the relative importance of salinity and potential temperature to the discrepancy in density calculation when using time-averaged inputs.
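The consequence of this non-linearity for time-averaging can be illustrated with a short Python sketch using a simplified, purely illustrative equation of state (quadratic in $\theta$, linear in $S$); this is not the NEMO or EOS-80 formulation, and the coefficients are placeholders.
\begin{verbatim}
import numpy as np

def rho_simplified(theta, S, rho0=1026.0, a=0.2, b=0.011, c=0.78):
    # Simplified, illustrative equation of state: quadratic in potential
    # temperature, linear in salinity. NOT the NEMO / EOS-80 formulation.
    return rho0 * (1.0 - a*1e-4*(theta - 10.0) - b*1e-4*(theta - 10.0)**2
                   + c*1e-3*(S - 35.0))

theta_months = np.array([2.0, 12.0])   # two months, illustrative values (deg C)
S_months = np.array([34.8, 35.2])

rho_AA = rho_simplified(theta_months, S_months).mean()         # average of densities
rho_AB = rho_simplified(theta_months.mean(), S_months.mean())  # density of averages
print(rho_AA - rho_AB)  # non-zero: the quadratic theta term makes rho
                        # non-linear in theta, so averaging order matters
\end{verbatim}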
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Linearity_TSD_80.png}}
\caption[Behaviour of in-situ density with varying potential temperature, salinity and depth of T-cell.]{The behaviour of in-situ density with varying potential temperature, salinity and depth of T-cell.}
\label{Lin_TS}
\end{figure*}
A further experiment was conducted to examine the usefulness of a time-average $\rho$ given by
\begin{equation}
\overline{\rho}_{AA} = \frac{\rho(\theta_1, S_1) + \rho(\theta_2, S_2)}{2},
\end{equation}
calculated as the average of $\rho$ for different instantaneous input fields, in the calculation of the horizontal pressure gradient momentum trend. It is found that a properly time-average density field $\overline{\rho}_{AA}$ provides a much improved horizontal pressure gradient momentum trend calculation, as discussed further in Section \ref{S3_Ndens}. The agreement between the horizontal pressure gradient trend calculated using instantaneous fields and that obtained using a correctly time-average density field shows promise as a route to a more accurate MOC overturning streamfunction decomposition into boundary components.
In summary, in the standard overturning streamfunction decomposition diagnostic we estimate the in-situ density $\rho$ using time-average quantities involving $\overline{\theta}$ and $\overline{S}$, rather than accumulating the time-average value of $\rho$ from instantaneous density fields. This approach has been shown to be inaccurate, since $\rho$ is a non-linear function of $\theta$. In order to account correctly for the contribution of $\rho$ to volume transport, a time-average estimate of $\rho$ based on direct accumulation of instantaneous $\rho$ fields is required.
\subsection{Overturning streamfunction estimated using correct NEMO time-mean density}
\label{S3_Ndens}
Now we explore the usefulness of an improved decomposition diagnostic incorporating a correctly time-average density field $\overline{\rho}_{NEMO}$ output directly from NEMO, but unfortunately currently unavailable as a standard output. For a trial case, a single year of correctly time-average instantaneous density fields is output from run u-bx950. This time-average density field is used within the overturning streamfunction decomposition diagnostics in place of, and to compare with, the original approach using time-average $\theta$ and $S$ to calculate in-situ density; in the discussion below, we adopt the shorthand ``T-M'' (time-mean) for brevity, particularly in tables and figures. As an initial step, we compare the performance of $\overline{\rho}_{NEMO}$ with other methods for calculating the horizontal pressure gradient momentum trend.
\begin{table}[h!]
\centering
\begin{tabular}{ |p{3.75cm}||p{3.3cm}|p{3.3cm}|p{3.3cm}| }
\hline
\multicolumn{4}{|c|}{Quantifying improvement using $\overline{\rho}_{NEMO}$ as part of hpg momentum trend calculation } \\
\hline
& (a) NEMO output in-situ density $hpg_1(\overline{\rho}_{NEMO})$ & (b) T-M fields and NEMO code $hpg_2(\overline{\theta}, \overline{S}, ...)$ & (c) T-M fields with eos80 tools $hpg_3(\overline{\theta}, \overline{S}, ...)$ \\
\hline
Mean squared error $(\text{m}^{2}\,\text{s}^{-2})^2$ & $3.142 \times 10^{-31}$ & $1.700 \times 10^{-29}$ & $1.804 \times 10^{-29}$ \\
Mean absolute difference $(\text{m}^{2}\,\text{s}^{-2})$ & $2.040 \times 10^{-16}$ & $7.253 \times 10^{-16}$ & $7.687 \times 10^{-16}$ \\
\hline
\end{tabular}
\caption[Performance of different methods for calculating the horizontal pressure gradient trend (using the horizontal pressure gradient trend output from NEMO as truth) globally for all latitudes, longitudes and depths.]{Performance of different methods for calculating horizontal pressure gradient momentum trend, using horizontal pressure gradient momentum trend from NEMO as truth, globally for all latitudes, longitudes and depths. Methods evaluated are (a) $hpg_1(\overline{\rho}_{NEMO})$, using correctly time-average density from NEMO, (b) $hpg_2(\overline{\theta}, \overline{S}, ...)$, using time-mean input fields alongside the explicit NEMO formulation, and (c) $hpg_3(\overline{\theta}, \overline{S}, ...)$, using time-mean input fields alongside $eos80$ to calculate in-situ density. All cells are volume weighted before calculating the resulting mean squared error and the mean absolute difference for all depths, longitudes and latitudes.}
\label{Tab_TMhpg}
\end{table}
Table \ref{Tab_TMhpg} quantifies the performance of the three methods: (a) using NEMO output time-average in-situ density $\overline{\rho}_{NEMO}$, (b) using time-average $\theta$ and $S$ alongside replicated NEMO code to calculate horizontal pressure gradient trend, and (c) using the $eos80$ toolbox and time-average $\theta$ and $S$ to estimate the in-situ density and horizontal pressure gradient trend.
Performance in Table \ref{Tab_TMhpg} is quantified in terms of volume-weighted mean squared error and the mean absolute difference between each of the three methods and the true horizontal pressure gradient trend for the whole ocean. This assessment therefore takes account of all ocean cells, providing a global view of performance. The improvement found when using a correctly time-averaged in-situ density (a) rather than time-average input fields (b,c) to estimate the in-situ density is clear. In terms of mean absolute difference, use of correctly time-averaged NEMO density output (a) is around $3.5$ times better than use of time-mean input fields (b,c). In terms of mean squared error, use of correctly time-averaged NEMO density output is more than $50$ times better than use of time-mean input fields. The difference between methods (b, NEMO) and (c, eos80), which both use time-average input fields to estimate the in-situ density, is somewhat surprising given they should be identical; however, the difference is small.
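A minimal Python/NumPy sketch of the volume-weighted error metrics used here is given below; the exact weighting convention in the table is assumed, and the array names are hypothetical.
\begin{verbatim}
import numpy as np

def weighted_errors(estimate, truth, cell_volume):
    # Volume-weighted mean squared error and mean absolute difference over
    # all ocean cells; land cells are assumed to carry NaN and are ignored.
    w = cell_volume / np.nansum(cell_volume)
    diff = estimate - truth
    mse = np.nansum(w * diff**2)
    mad = np.nansum(w * np.abs(diff))
    return mse, mad
\end{verbatim}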
In-depth analysis of the resulting overturning streamfunction calculated using the $eos80$ toolbox and $\overline{\rho}_{NEMO}$ highlights the improvement when a correctly time-average density is adopted. The improvement is found to be latitude dependent (and note that results for a single latitude are shown in Figure \ref{Trsp_Dens}), emphasising the role of localised physical mechanisms at different latitudes as discussed previously.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_Dens_Trsp_Lat_29.png}}
\caption[Assessment of impact of using density calculated from time-average fields using $eos80$ versus time-average density output directly by NEMO calculated using instantaneous fields at $29^\circ$N.] {Assessment of impact of using density calculated from time-average fields using $eos80$ (orange) versus time-average density output directly by NEMO (green) calculated using instantaneous fields at $29^\circ$N, Left: sum of density differences for each depth between the western and eastern boundaries, as explained in the text. Centre: resulting volume-uncompensated thermal wind component of the overturning. Right: resulting volume-uncompensated interior overturning streamfunction (ignoring the additional cell contribution) compared with the expected interior overturning streamfunction calculated using the definition of the overturning streamfunction (Equation \ref{T}) with (light grey) and without (dashed black) additional cell contributions.}
\label{Trsp_Dens}
\end{figure*}
The zonal transect at any given depth may consist of multiple intervals of water, each with its own western and eastern boundary. Therefore, in Figure \ref{Trsp_Dens}, Panel (a) shows the sum of the corresponding density differences over all such intervals, for the complete ocean transect.
The difference between profiles for time-average input field calculations (orange) and profiles using time-average density fields directly output from NEMO (green) is small in Panel (a), but becomes noticeable in the upper $1000$m for both the geostrophic overturning streamfunction (i.e. thermal wind component, Panel (b)) and the interior overturning streamfunction (Figure \ref{Trsp_Dens}, Panel (c)).
The estimated interior overturning streamfunction in Panel (c) of Figure \ref{Trsp_Dens} includes contributions from depth-independent flow and Ekman components. We can therefore compare these interior overturning streamfunction estimates with the expected overturning streamfunction with (light grey) and without (dashed black) additional cell contributions. For the latitude $29^\circ$N considered, and in general, a clear improvement is found from incorporating time-average density into the decomposition diagnostic. For certain latitudes no improvement is found, highlighting the importance of additional contributions from small scale processes which we are unable to capture utilising only boundary information.
The overall performance of the interior overturning streamfunction (using $\overline{\rho}_{NEMO}$) from the decomposition, in estimating the expected interior overturning streamfunction, is given in Table \ref{Tab_TrspCmpDns} over all latitudes and depths in the Atlantic basin. Relative to the original decomposition diagnostic (which uses time-mean input fields and $eos80$), the performance of the $\overline{\rho}_{NEMO}$-based estimate shows improvement.
\begin{table}[h!]
\centering
\begin{tabular}{ |p{5cm}||p{4.4cm}|p{4.4cm}| }
\hline
\multicolumn{3}{|c|}{Quantifying improvement using $\overline{\rho}_{NEMO}$ as part of interior streamfunction calculation } \\
\hline
& (a) NEMO output density $\tilde{\Psi}_1(\overline{\rho}_{NEMO})$ & (b) T-M fields with eos80 tools $\tilde{\Psi}_2(\overline{\theta}, \overline{S}, ...)$\\
\hline
Mean squared error $(\text{Sv}^2)$ & $4.46 \times 10^{-3}$ &$7.87 \times 10^{-3}$\\
Mean absolute difference $(\text{Sv})$& $1.35 \times 10^{-2}$ & $1.90 \times 10^{-2}$\\
\hline
\end{tabular}
\caption[Performance of different methods for calculating interior overturning streamfunction, compared to the expected interior overturning streamfunction calculated using Equation \ref{T} and meridional velocities, over all latitudes and depths in the Atlantic basin.]{Performance of different methods for calculating interior overturning streamfunction, compared to the expected interior overturning streamfunction calculated using Equation \ref{T} and northward velocities, over all latitudes and depths in the Atlantic basin. Methods evaluated are (a) $\tilde{\Psi}_1(\overline{\rho}_{NEMO})$, using correctly time-average density from NEMO, and (b) $\tilde{\Psi}_2(\overline{\theta}, \overline{S}, ...)$ using time-average $\theta$ and $S$ fields alongside $eos80$ to calculate in-situ density. All cells are depth-weighted before calculating the resulting mean squared error and the mean absolute difference for all depths and latitudes.}
\label{Tab_TrspCmpDns}
\end{table}
These results demonstrate that including correctly time-average densities in the calculation of the overturning streamfunction decomposition diagnostic improves its performance in reproducing time-averaged characteristics of the overturning streamfunction. However, the unavailability of a direct time-average in-situ density $\overline{\rho}_{NEMO}$ output from the HadGEM3-GC3.1 model hinders further use of time-average densities.
\section{Variation of time-mean overturning with latitude}
\label{s_var_tm_ovr_lat}
We now consider the full latitudinal extent of the AMOC, examining the maximum (with respect to $z$) of the overturning streamfunction, for each latitude and year. The maximum overturning streamfunction may be defined either in terms of the expected $\Psi^c$ or the estimated $\tilde{\Psi}^c$, e.g. Figure \ref{F_TM_strm}. For each latitude-year combination, we also find the corresponding depth of the maximum streamfunction. We then calculate time-average maxima, illustrated in Figure \ref{p_Qtd_MxStrmLtt}, for the $1/4^\circ$ model (outlined in Section \ref{S3_ModDes}). That is, for example in the case of $\Psi^c$,
\begin{eqnarray}
\Psi_{\max}^c(y) &=& \underset{t}{\text{mean}} \max_{z} \{\Psi^c(z,t;y)\}, \\
d_{\max}^c(y) &=& \underset{t}{\text{mean}} \ \underset{z}{\arg\!\max} \{\Psi^c(z,t;y)\} ,
\label{e_Psi_max}
\end{eqnarray}
with similar definitions for $\tilde{\Psi}^c$-based values. Averages are taken over the entire length of each model control run. Contributions of boundary components shown in Figure \ref{p_Qtd_MxStrmLtt} are taken at the same depth $\tilde{d}_{\max}^c$ as those of the maximum estimated overturning $\tilde{\Psi}^c$ and then averaged over the entire model run. In this way the maximum estimated overturning streamfunction can therefore also be decomposed into its boundary components.
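The calculation of the time-mean maximum streamfunction, its depth, and the sampling of boundary components at that depth can be sketched as follows; array shapes are assumed, and the per-year depth of the maximum is used before time-averaging, which is one reading of the procedure described above.
\begin{verbatim}
import numpy as np

def max_streamfunction_stats(psi, depths):
    # psi: array of shape (nz, nt) for one latitude; depths: (nz,).
    # Returns the time-mean maximum streamfunction and the time-mean
    # depth at which the maximum occurs (cf. Equation e_Psi_max).
    k_max = np.argmax(psi, axis=0)                       # depth index of maximum, per year
    psi_max = np.mean(psi[k_max, np.arange(psi.shape[1])])
    d_max = np.mean(depths[k_max])
    return psi_max, d_max

def component_at_max(component, psi_tilde):
    # Sample a boundary component at the depth of the per-year maximum of
    # the estimated streamfunction, then time-average.
    k_max = np.argmax(psi_tilde, axis=0)
    return np.mean(component[k_max, np.arange(component.shape[1])])
\end{verbatim}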
Figure \ref{p_Qtd_MxStrmLtt}(a) illustrates the variation with latitude of contributions made by each boundary component, highlighting (i) the dominance of the thermal wind (W+E) contribution in the southern hemisphere, (ii) compensating variations between the components for latitudes from $7^\circ$N to $35^\circ$N, and large changes in boundary contributions with latitude, (iii) significant Ekman contribution in equatorial regions, and (iv) the reversal of density and bottom components at high northern latitudes and $24^\circ$N to $28^\circ$N.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_Cmp_MaxStrm_Qtr.png}}
\caption[Contribution of boundary components to time-mean maximum overturning streamfunction for $1/4^\circ$ model.]{Contributions to the time-average maximum overturning streamfunction for the $1/4^\circ$ model. Panel (a) shows time-average maximum overturning streamfunctions $\Psi_{\max}^c$ (red) and $\tilde{\Psi}_{\max}^c$ (purple) with latitude, for all years in model run. It also shows corresponding contributions of boundary components to $\tilde{\Psi}^c$, namely the sum of western and eastern boundary components (Thermal wind, orange), bottom component (green), Ekman component (dashed light blue) and additional cell contribution (dashed navy). Shaded regions indicate $\pm1$ temporal standard deviation of the maximum overturning streamfunction. Panel (b) shows the time-average depths $d_{\max}^c(y)$ and $\tilde{d}_{\max}^c$ of the maximum overturning streamfunction for $\Psi_{\max}^c$ (red) and $\tilde{\Psi}_{\max}^c$ (purple).}
\label{p_Qtd_MxStrmLtt}
\end{figure*}
Further, Figure \ref{p_Qtd_MxStrmLtt} indicates good agreement between $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ for the majority of the basin. $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ vary smoothly with latitude, with the latter showing a greater level of variability in equatorial regions. The relatively constant magnitude (15Sv) of both $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ at all latitudes up to $35^\circ$N can be attributed to conservation of volume between latitudes. Interestingly, the relative stability of both $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ is not replicated in the constituent boundary components, especially in the northern hemisphere subtropics. Between $10^\circ$N and $20^\circ$N we find the reduction in $\tilde{\Psi}_{th_{\max}}^c$ is compensated by a large $\tilde{\Psi}_{Ekm_{\max}}^c$ contribution, attributed to easterly trade winds in the region. Further northwards between $25^\circ$N and $30^\circ$N, the reversal of $\tilde{\Psi}_{th_{\max}}^c$ and $\tilde{\Psi}_{bot_{\max}}^c$ contributions to the maximum overturning streamfunction coincides with strong western boundary currents near the Florida Strait. For the same reason we find a large $\tilde{\Psi}_{AC_{\max}}^c$ contribution for this latitude range.
This same analysis is extended to both $1^\circ$ and $1/12^\circ$ models to assess the impact of spatial resolution on the latitudinal variation of $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$; results are shown in Figures \ref{p_RC_MxStrmLtt} and \ref{p_RC_BdryMxStrmLtt} below. In the $1^\circ$ model, we find that at low latitudes the Ekman component dominates, possibly due to shallow sub-tropical cells, leading to a strong surface contribution. Consequently, we find that the depth of maximum $d_{\max}^c$ and $\tilde{d}_{\max}^c$ for these latitudes lie in the upper few hundred metres; depths which are not our main focus. In order to acquire an appropriate depth for the maximum overturning, we constrain the calculation to only use values corresponding to depths of maxima greater than $500$m. This constraint is found to have little or no effect when applied to the $1/4^\circ$ and $1/12^\circ$ models.
Panel (a) of Figure \ref{p_RC_MxStrmLtt} shows $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ for the three model resolutions, and indicates a clear mean state difference between the three models throughout the basin, up to $35^\circ$N. Following the separation of the Gulf Stream, we notice a significant drop in the maximum overturning streamfunction in both higher resolution models, compared with a more gradual weakening for the $1^\circ$ model. Panel (b) shows the average depths $d_{\max}^c$ and $\tilde{d}_{\max}^c$ of $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ for each latitude within the Atlantic basin. Surprisingly, given that higher resolution models might be expected to yield more accurate and realistic results, we find the $1^\circ$ model exhibits the deepest overturning maximum, in contrast to shallower maxima for both higher resolution models.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MaxStrm_ResCmp_Tot.png}}
\caption[Spatial resolution comparison of time-mean maximum overturning streamfunction with latitude.]{Contributions to the time-average maximum overturning streamfunction for different model spatial resolutions. Panel (a) shows the time-mean maximum overturning streamfunction, $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$, as a function of latitude for $1^\circ$ (navy, dashed light blue), $1/4^\circ$ (dark green, dashed light green) and $1/12^\circ$ (red, dashed red) model resolutions. The time-average is taken over the length of the run in each case. Shaded regions indicate $\pm1$ temporal standard deviation for each quantity. Panel (b) shows the corresponding average depths $d_{\max}^c$ and $\tilde{d}_{\max}^c$ corresponding to $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ for each latitude of the Atlantic basin.}
\label{p_RC_MxStrmLtt}
\end{figure*}
\cite{Hirschi2020} discuss the impact of model resolution upon the overturning circulation, and show that eddy-resolving models tend to have a stronger time-average maximum streamfunction (see the $1/12^\circ$ results in Figure \ref{p_RC_MxStrmLtt}). They also comment on a significant weakening in depth-space overturning north of $30^\circ$N, also present in the figure here. They attribute the enhanced weakening at higher resolution to the pathways taken by the AMOC at higher latitudes; at higher resolution we observe a more realistic subpolar gyre and therefore a more realistic northward pathway. This pathway, which includes the North Atlantic Current, impacts the representation of the overturning strength, especially in depth coordinates. Due to gyre circulation, we find the northward and southward transport cancel when using depth coordinates, leading to an apparent weakening of the overturning. \cite{Hirschi2020} show further that the use of density-coordinate-based diagnostics would improve our ability to capture the overturning strength at high northern latitudes. This is especially true for high spatial resolution models, due to greater horizontal circulation.
Motivated by \cite{Hirschi2020}, to show the more realistic subpolar gyre within both higher resolution models, the time-average barotropic streamfunction,
\begin{equation}
\Psi_v(x,y) = \int_{W(y)}^{x}\int_{-H(x',y)}^{0}v_N(x',y,z) \ \mathrm{d} z \ \mathrm{d} x',
\label{e_psi_v}
\end{equation}
across all years for the North Atlantic is shown in Figure \ref{p_RC_BrtStrm}. For a given $y$, $\Psi_v$ is the cumulative zonal sum, taken eastward from the western boundary to the $x$ of interest, of the vertical sum of meridional velocities from the ocean floor to the surface. Further, a constant is added to the values of $\Psi_v$ for all longitudes and latitudes, so that $\Psi_v$ at the north-westernmost grid location is zero.
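A minimal Python/NumPy sketch of Equation \ref{e_psi_v} on the model grid is given below; the array names, the orientation of the latitude axis and the handling of land cells are illustrative assumptions.
\begin{verbatim}
import numpy as np

def barotropic_streamfunction(v, dx, dz):
    # Sketch of Equation (e_psi_v). v has shape (nz, ny, nx) with NaN over
    # land; dx (ny, nx) is zonal cell width, dz (nz, ny, nx) cell thickness.
    depth_integrated = np.nansum(v * dz, axis=0)           # (ny, nx), m^2 s^-1
    psi_v = np.nancumsum(depth_integrated * dx, axis=1)    # eastward cumulative sum
    # Offset so the north-westernmost point is zero, assuming latitude
    # increases with the first index (illustrative convention).
    psi_v -= psi_v[-1, 0]
    return psi_v / 1.0e6                                   # Sv
\end{verbatim}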
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/BrtStrm_RC.png}}
\caption[Spatial resolution comparison of time-mean barotropic streamfunction in the North Atlantic over all years of model runs.]{Time-average barotropic streamfunction $\Psi_v$ for the (a) $1^\circ$, (b) $1/4^\circ$ and $1/12^\circ$ models within the North Atlantic. The time-average is taken over the length of the run in each case.}
\label{p_RC_BrtStrm}
\end{figure*}
Figure \ref{p_RC_BrtStrm} shows that the subpolar and subtropical gyres within the higher resolution models are stronger, resulting in greater horizontal circulation, which at subpolar latitudes is responsible for much of the water mass transformation, reducing the role of depth-space overturning. This leads to the significant weakening of $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ at high northern latitudes in Figure \ref{p_RC_MxStrmLtt}.
\subsection{Boundary contributions to $\tilde{\Psi}_{\max}^c$}
Figure \ref{p_RC_BdryMxStrmLtt} illustrates the boundary contributions to the estimates for $\tilde{\Psi}_{\max}^c$ in Figure \ref{p_RC_MxStrmLtt} made by the density (western and eastern), bottom, Ekman and additional cell components for each model resolution. For brevity, we choose to refer to these quantities as $\Psi^c_{{th}_{\max}}$, $\Psi^c_{{bot}_{\max}}$, $\Psi^c_{{Ekm}_{\max}}$ and $\Psi^c_{{AC}_{\max}}$ respectively.
Panel (c) shows little variation in $\Psi^c_{{Ekm}_{\max}}$ between model resolutions; this is no surprise since common initial atmospheric conditions and atmospheric resolution are used for each model. In Panel (d), it is promising to see that $\Psi^c_{{AC}_{\max}}$ reduces with increasing model resolution. Large peaks between $20^\circ$N and $30^\circ$N in the coarser models are attributed to shallow, narrow channels between islands along the eastern coast of the U.S.; these narrow channels are found to be smaller than the computational grid size. For example, the large peak in the $1^\circ$ model (blue line) at $25^\circ$N is due to the narrow channel between Florida and the Bahamas. This channel is narrower than the $\approx 100$\,km grid box size corresponding to a $1^\circ$ grid cell, and hence the contribution of this region is attributed to $\Psi^c_{{AC}_{\max}}$ instead of $\Psi^c_{{th}_{\max}}$ and $\Psi^c_{{bot}_{\max}}$.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MaxStrm_ResCmp_Cmpt.png}}
\caption[Boundary contributions to the time-mean estimated maximum overturning streamfunction for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions.]{Time-mean maximum overturning streamfunction boundary contributions to $\tilde{\Psi}_{\max}^c$ for $1^\circ$ (navy), $1/4^\circ$ (green) and $1/12^\circ$ (red) model resolutions. Panel contents are: (a) $\Psi^c_{{th}_{\max}}$, (b) $\Psi^c_{{bot}_{\max}}$, (c) $\Psi^c_{{Ekm}_{\max}}$ and (d) $\Psi^c_{{AC}_{\max}}$. Time-average is taken over the length of the run in each case. Shaded regions indicate $\pm1$ temporal standard deviation of the annual-mean data for the corresponding component. We note that model run length differs with resolution.}
\label{p_RC_BdryMxStrmLtt}
\end{figure*}
Panel (a) of Figure \ref{p_RC_BdryMxStrmLtt} illustrates the combined contribution to $\tilde{\Psi}_{\max}^c$ of eastern and western boundary densities ($\Psi^c_{{th}_{\max}}$), throughout the Atlantic basin. Noticeable is the weaker contribution in the $1^\circ$ model from $10^\circ$S to $25^\circ$N, attributed to an increase in densities on the western boundary (or decrease in the east), and to coarser model bathymetry. The coarser resolution results in a greater proportion of the contribution being attributed to $\Psi^c_{{AC}_{\max}}$, due to the greater area attributed to additional cells. Similarly, in Panel (b) we find a weaker $\Psi^c_{{bot}_{\max}}$ contribution to $\tilde{\Psi}_{\max}^c$ for the $1^\circ$ model at latitudes $40^\circ-60^\circ$N, clear from inspection of bottom velocities illustrated in Figure \ref{p_RC_Bvel}. These latitudes correspond to the presence of a weaker deep western boundary current (DWBC) for that model resolution (Figure \ref{p_RC_Bvel}). The $1/4^\circ$ and $1/12^\circ$ models show comparable values for the strength of the DWBC, and hence similar $\Psi^c_{{bot}_{\max}}$ contributions to $\tilde{\Psi}_{\max}^c$. Regions of strong (positive and negative) $\Psi^c_{{bot}_{\max}}$ in Figure \ref{p_RC_BdryMxStrmLtt} relate closely to boundary currents such as the Gulf Stream ($25^\circ$ to $35^\circ$N) and the southward flows around the eastern shores of the Caribbean islands ($5^\circ$ to $15^\circ$N) and Labrador Sea ($45^\circ$ to $60^\circ$N).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Bvel_RC.png}}
\caption[Time-mean meridional bottom velocities of the Atlantic basin for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model resolutions.]{Time-mean meridional bottom velocities for the (a) $1^\circ$, (b) $1/4^\circ$, and (c) $1/12^\circ$ model resolutions. The time-average is taken over the length of the run in each case.}
\label{p_RC_Bvel}
\end{figure*}
In Figure \ref{p_RC_MxStrmLtt}(a), a large peak for $\Psi_{\max}^c$ and $\tilde{\Psi}_{\max}^c$ in the $1/12^\circ$ model is present at $32^\circ$N, caused by a strong contribution from $\Psi^c_{{bot}_{\max}}$ (Figure \ref{p_RC_BdryMxStrmLtt}(b)) there. This sudden increase in $\Psi^c_{{bot}_{\max}}$ is due to the changes in bathymetry around the island of Bermuda, which is resolved by the $1/12^\circ$ model but not the other resolutions. Hence bottom flows present around the island are different in the $1/12^\circ$ model compared with coarser resolution models (Figure \ref{p_RC_Bermuda}).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Bvel_Bath_Bermuda_Zm.png}}
\caption[Bathymetry and time-mean meridional bottom velocities for Bermuda region in the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models.]{Bathymetry and time-mean meridional bottom velocities for the Bermuda region ($70^\circ$ to $60^\circ$W, $30.5^\circ$ to $33.8^\circ$N), for (a,b) $1^\circ$, (c,d) $1/4^\circ$ and (e,f) $1/12^\circ$ model resolutions. The time-average is taken over the length of the run in each case.}
\label{p_RC_Bermuda}
\end{figure*}
Only in the $1/12^\circ$ model is Bermuda represented as an island (i.e. land at $z=0$). In both $1/4^\circ$ and $1^\circ$ models, Bermuda appears as a seamount which does not penetrate the surface. Panels (b,d,f) of Figure \ref{p_RC_Bermuda} illustrate a clear difference in the local time-average bottom velocities across model resolutions. The $1/12^\circ$ model exhibits strong northward velocities to the west of Bermuda, resulting in the strong northward contribution from $\Psi^c_{{bot}_{\max}}$ for this region (Figure \ref{p_RC_MxStrmLtt}(a) and \ref{p_RC_BdryMxStrmLtt}(b)).
\subsection{Variation in thermal wind component, $\Psi^c_{{th}_{\max}}$, with latitude}
\label{s_TW_Dcmp}
Appendix \ref{App_MaxWestEast} illustrates the difficulty of explaining variation in $\Psi^c_{{th}_{\max}}$ in terms of its constituent eastern and western boundary contributions. The close coupling of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ is to be expected, however, due to a number of considerations outlined here. First, both eastern and western densities are defined relative to a depth-dependent basin average density, introducing correlation between the values of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$. Next, $\Psi^c_{{th}_{\max}}$ is estimated using multiple pairs of eastern and western boundaries, whose contributions are summed at each latitude. However, due, e.g., to the presence of islands, the number of pairs of eastern and western boundaries is latitude-dependent. This leads potentially to larger variability in the magnitudes of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ with latitude, e.g. at Caribbean latitudes in Figure \ref{p_MaxStrm_WE}(a,b) of the Appendix. Finally, the maximum ocean depth varies with latitude. Therefore, $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ are estimated by integration over different depths as a function of latitude. This introduces further correlation between $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ with latitude. Implementation of a local reference density, normalisation by the number of boundary pairs and integration from $2429$m to the surface within the $1^\circ$ model does slightly improve our ability to decouple the boundary components (not shown), but the underlying issue of coupled western and eastern boundary components remains. A similar issue is encountered by \cite{Waldman2021}.
To improve our understanding of the boundary density components and to locate the boundary densities which are important to $\tilde{\Psi}^c_{\max}$, a further regional decomposition is applied to the $1/4^\circ$ model data. $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ contributions are split into 4 domains, namely: (a) Mid Atlantic Ridge ($\Psi^{c,MAR}_{{W}_{\max}}$, $\Psi^{c,MAR}_{{E}_{\max}}$), (b) Gulf of Mexico and Caribbean Sea ($\Psi^{c,GMC}_{{W}_{\max}}$, $\Psi^{c,GMC}_{{E}_{\max}}$), (c) Atlantic boundary ($\Psi^{c,AB}_{{W}_{\max}}$, $\Psi^{c,AB}_{{E}_{\max}}$) and (d) remainder unaccounted for by the other sub-components ($\Psi^{c,R}_{{W}_{\max}}$, $\Psi^{c,R}_{{E}_{\max}}$). Figure \ref{p_Bath_sp} shows how the regions are defined within the Atlantic basin.
For each latitude, the regions in (a)-(c) above are defined by inspection of the bathymetry. Within the southern hemisphere, the western Atlantic region extends from the longitude corresponding to the South American coastline (red) to the nearest longitude corresponding to the deepest bathymetry (magenta, between the South American coastline and the MAR) at that latitude. The MAR region is defined in a similar manner relative to the longitude of the shallowest MAR bathymetry (between the orange and yellow lines). Finally, the eastern Atlantic region extends to the longitude of the African coastline (blue) from the nearest longitude corresponding to the deepest bathymetry (green, between the MAR and African coastline). Regions in the northern hemisphere are defined similarly, apart from at latitudes corresponding to the Gulf of Mexico and Caribbean Sea (GMC). Here, the western Atlantic boundary is assumed to follow the Caribbean islands, from Trinidad and Tobago northwards along the eastern coast of the Dominican Republic and Cuba to Florida. A separate GMC region is then defined. The eastern boundary of the GMC region (red) is coincident with the western boundary of the Atlantic region where no islands are present, and traces the western boundary of the Caribbean islands otherwise. For each latitude, longitudes not included in any of the Atlantic, MAR and GMC regions are considered to belong to the residual category.
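A small Python sketch of how longitudes at a single latitude might be labelled with these sub-domains, given boundary longitudes identified from the bathymetry, is shown below; all numerical values and argument names are hypothetical placeholders, not the boundaries actually used.
\begin{verbatim}
import numpy as np

def assign_regions(lons, atl_west, atl_east, mar, gmc=None):
    # Label each longitude at one latitude with a sub-domain, given
    # (start, end) longitude pairs taken from the bathymetry-based
    # boundaries (cf. Figure p_Bath_sp); gmc may be None south of the
    # Caribbean. Longitudes outside all regions remain "residual".
    labels = np.full(lons.shape, "residual", dtype=object)
    for name, bounds in (("Atlantic (west)", atl_west), ("MAR", mar),
                         ("Atlantic (east)", atl_east), ("GMC", gmc)):
        if bounds is not None:
            lo, hi = bounds
            labels[(lons >= lo) & (lons <= hi)] = name
    return labels

# Example: longitudes from 100W to 20E at a southern-hemisphere latitude.
lons = np.arange(-100, 20)
labels = assign_regions(lons, atl_west=(-55, -38), atl_east=(-8, 12),
                        mar=(-25, -12), gmc=None)
\end{verbatim}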
\begin{figure*}[ht!]
\centerline{\includegraphics[width=14cm]{Fig_AMOC/plt_025C_aj_bathymetry_GMCdcmp1.png}}
\caption[Bathymetry of the Atlantic basin within $1/4^\circ$ model, including sub-domains for further regional decomposition.]{Bathymetry of the Atlantic basin within the $1/4^\circ$ model. Coloured lines indicate boundaries used for regional decomposition of thermal wind contributions to the overturning streamfunction. Going from left to right, red: start of western Atlantic region (and eastern boundary of GMC where this exists); magenta: end of western Atlantic region; orange: start of MAR region; yellow: end of MAR region; green: start of eastern Atlantic region; blue: end of eastern Atlantic region.}
\label{p_Bath_sp}
\end{figure*}
Figure \ref{p_MaxStrm_WE4_inv} shows the contribution made by the eastern and western boundaries of each domain. In each panel, the black line indicates $\Psi^c_{{th}_{\max}}$ and the blue line the sum of the western and eastern contributions for that particular sub-component (e.g. Panel (a) shows the sum $\Psi^{c,AB}_{{th}_{\max}}$ of western and eastern contributions $\Psi^{c,AB}_{{W}_{\max}}+\Psi^{c,AB}_{{E}_{\max}}$ from the Atlantic boundary, with analogous notation used for the other sub-components).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_MaxStrm_TWCmpt_Ind.png}}
\caption[Contribution made by regionally decomposed sub-components to $\Psi^{c}_{\max}$]{Further decomposition of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ by zonal sub-domains (see Figure \ref{p_Bath_sp}). (a) the contribution made by the Atlantic boundaries (where Gulf of Mexico and Caribbean Sea is neglected, $\Psi^{c,AB}_{{W}_{\max}}$, $\Psi^{c,AB}_{{E}_{\max}}$), (b) the contribution made by the Mid Atlantic Ridge ($\Psi^{c,MAR}_{{W}_{\max}}$, $\Psi^{c,MAR}_{{E}_{\max}}$), (c) the contribution of the Gulf of Mexico and Caribbean Sea ($\Psi^{c,GMC}_{{W}_{\max}}$, $\Psi^{c,GMC}_{{E}_{\max}}$) and (d) the remaining contribution not accounted for by the other components ($\Psi^{c,R}_{{W}_{\max}}$, $\Psi^{c,R}_{{E}_{\max}}$). Black line shows $\Psi^c_{{th}_{\max}}$ and blue is the sum of the eastern and western sub-components (e.g. (b) shows the sum of eastern and western contribution made by the MAR, $\Psi^{c,MAR}_{{th}_{\max}}$). For each panel, solid lines indicate eastern and dashed western contributions respectively. }
\label{p_MaxStrm_WE4_inv}
\end{figure*}
Panel (a) indicates that the majority of $\Psi^c_{{th}_{\max}}$ (black) in the southern hemisphere is accounted for by $\Psi^{c,AB}_{{th}_{\max}}$ (blue), as would be expected. Moving northwards, the change in sign of Coriolis parameter $f$ results in a sign change for the sub-components. The negative $\Psi^{c,AB}_{{th}_{\max}}$ (blue) at low northern latitudes is due to the dominant western boundary contribution (dashed green), somewhat surprisingly suggesting southward upper-ocean flow due to these boundaries. We find a large change in $\Psi^{c,AB}_{{W}_{\max}}$ near $27^\circ$N where western boundary densities change across the Gulf Stream as it separates from the boundary, leading to a change in sign of $\Psi^{c,AB}_{{W}_{\max}}$. Conversely, $\Psi^{c,AB}_{{E}_{\max}}$ (solid green) is relatively stable, showing a smooth decline northwards from the equator, possibly attributable to the presence of relatively flat along-boundary isopycnals on the eastern boundary (discussed further in Chapter \ref{TJ_Bdry}). This steady decline, in conjunction with the sudden increase in $\Psi^{c,AB}_{{W}_{\max}}$ near $27^\circ$N, results in a strengthening of $\Psi^{c,AB}_{{th}_{\max}}$ (blue) and $\Psi^{c}_{{th}_{\max}}$ (black). Panel (a) clearly indicates that the majority of the geostrophic contribution to the Atlantic's maximum overturning streamfunction is governed by $\Psi^{c,AB}_{{th}_{\max}}$, and that $\Psi^{c,AB}_{{W}_{\max}}$ shows greater latitude-dependence than the eastern boundary contribution. Atlantic boundary densities and the physical mechanisms leading to the observed density structure are investigated and discussed further in Chapter \ref{TJ_Bdry}.
For the MAR region, Panel (b) indicates a very weak contribution made by boundary density components $\Psi^{c,MAR}_{{W}_{\max}}$ (dashed purple) and $\Psi^{c,MAR}_{{E}_{\max}}$ (solid purple) to $\Psi^c_{{th}_{\max}}$ (black). The sum $\Psi^{c,MAR}_{{th}_{\max}}$ of boundary contributions is small except for the very highest northern latitudes. We find a slight increase in the magnitude of boundary sub-components near $39^\circ$N, likely to be attributable to the Azores, and a decrease at high northern latitudes attributed to boundary densities around eastern Iceland.
For the GMC, Panel (c) shows that the eastern boundary contribution $\Psi^{c,GMC}_{{E}_{\max}}$ (solid red) is the dominant contributor. There appears to be a gradual weakening of the magnitude of $\Psi^{c,GMC}_{{W}_{\max}}$ (dashed magenta) with latitude relative to $\Psi^{c,GMC}_{{E}_{\max}}$, resulting in a northward contribution to the geostrophic part of the maximum overturning streamfunction, $\Psi^{c,GMC}_{{th}_{\max}}$, for the GMC (blue).
Finally Panel (d) shows the maximum overturning streamfunction contributions unaccounted for by the other 3 sub-components. These contributions can be attributed to the presence of small islands and ridges within the Atlantic basin. The peaks in the southern hemisphere can be explained by the presence of the Malvinas islands ($30^\circ$S) and the juncture of the Walvis ridge with the African continent ($20^\circ$S). Similarly, the contributions between $15^\circ$N and $29^\circ$N are attributed to islands such as the Bahamas, Turks and Caicos and Canary islands, all of which fall outside the other sub-domains (Figure \ref{p_Bath_sp}).
\section{Summary}
In this chapter, the theory for decomposing the meridional overturning streamfunction into contributions related to variables on the ocean boundaries has been outlined, and its application to output of HadGEM3-GC3.1 models at different spatial resolutions was studied. The decomposition diagnostic relies on (a) boundary densities within the nearest full cells to coastal boundaries, (b) ocean surface wind stress, and (c) meridional velocity in the bottom cell of each fluid column.
Reconstructed estimates of the total overturning streamfunction obtained by summing these three components have been shown to perform generally well throughout the Atlantic basin, and are able to capture the time-mean overturning streamfunction well when all components are volume-compensated. Large contributions to the time-mean overturning streamfunction are made by boundary density and depth-independent terms within the interior of the ocean, and the Ekman component dominates the surface boundary layer. The time-mean estimate of the total overturning streamfunction calculated from model data at $1/4^\circ$ spatial resolution is able to capture $95\%$ of the spatial variation in the expected overturning streamfunction. Using data from higher-resolution models increases the spatial variance explained. Additional incomplete cells adjacent to bathymetry make a sizeable contribution to the time-mean overturning streamfunction and to its spatial variability. An estimate for the additional cell contribution is made using meridional velocities in these additional cells.
Discrepancies have been found between the estimated overturning streamfunction (from the decomposition) and the expected overturning streamfunction (calculated using meridional velocities) at certain latitudes within the Atlantic basin. Investigation of the various terms contributing to the GCM's momentum budget indicated that the theoretical capability of the diagnostic framework was greater than that achieved. This prompted an investigation of the influence of time-average input fields on the decomposition, and the contributions of localised processes (e.g. momentum advection). The non-linear dependence of in-situ density on potential temperature $\theta$ leads to errors if annually-averaged potential temperature data is used for the diagnostic calculations. Time-averaging of salinity and the inclusion of varying cell thickness (via the model's $e3t$ term) have also been shown to make a small contribution to the observed discrepancy, but these effects are insignificant in comparison to errors induced by time-averaging potential temperature. Calculations using densities averaged over every time-step have been shown to improve the resulting horizontal pressure gradient trend and overturning streamfunction estimate.
The time-mean maximum overturning streamfunction (in depth space) is found to be relatively constant with latitude, for latitudes south of $35^\circ$N, and thereafter reduces with increasing latitude, for all model resolutions considered. There is good agreement between the expected time-mean maximum $\Psi_{\max}^c$ and the decomposition estimate $\tilde{\Psi}_{\max}^c$ for all latitudes. The boundary contributions to $\tilde{\Psi}_{\max}^c$ show considerably larger variation with latitude, of the order of $20$Sv. Differences in boundary contributions at different model resolutions can be attributed to better characterisation of bathymetry and bottom currents in the higher resolution models. At all model resolutions, the sum of western and eastern density contributions dominates $\tilde{\Psi}_{\max}^c$ in the southern hemisphere; the northern hemisphere exhibits more complex behaviour.
An investigation into the relative importance of western and eastern boundary density contributions to the total thermal wind contribution ($\Psi^c_{{th}_{\max}}$) of the estimated maximum overturning streamfunction is complicated by the strong correlation between these eastern and western boundary components.
Further decomposition of the thermal wind contribution into sub-domains (-components) for the mid-Atlantic ridge (MAR), Gulf of Mexico and Caribbean Sea (GMC), Atlantic boundary and remaining contributions (Rest) reveals (a) a significant positive (or northward) contribution within the GMC due to eastern GMC densities, (b) a weak overall contribution made by the MAR, (c) a gradual weakening of the eastern Atlantic boundary density contribution with latitude, and large changes in the Atlantic western boundary density contribution near $27^\circ$N. We also find (d) a strong effect of the Bahamas and surrounding islands on density-driven contributions to the maximum overturning streamfunction at approximately $26^\circ$N.
This work motivates the next chapter, where we investigate the contribution of boundary components to temporal variability of the Atlantic overturning circulation.
\chapter{Variability of AMOC boundary components} \label{TJ_Var}
In recent decades, significant progress has been made in understanding the underlying physical mechanisms of the AMOC. Characterising the natural variability of the overturning is critical to identifying and quantifying the potential impacts of a changing climate. Data from cross-basin monitoring arrays, buoys, floats and high resolution models have suggested possible drivers of variability at short timescales and demonstrated the apparent lack of coherence in AMOC variability with latitude. In this chapter, we seek to improve our understanding of the spatial and temporal variability of the AMOC and the factors contributing to it. We aim to improve our understanding of the AMOC's variability at decadal timescales and longer. We consider the temporal variability of the maximum overturning streamfunction timeseries at RAPID ($26^\circ$N) and SAMBA ($34^\circ$S) latitudes, and the temporal variation of the contributing boundary components. We investigate the correlation between the thermal wind and bottom components, and the source of large oscillations found in their timeseries at SAMBA for the $1/4^\circ$ HadGEM-GC3.1 model. The basin-wide spatial variability of in-situ densities and bottom velocities, used in the overturning streamfunction decomposition outlined in Chapter \ref{TJ_TM}, is summarised for different timescales; this analysis leads to the implementation of a multiple linear regression model incorporating Correlation Adjusted coRrelation scores (MLR-CAR, \citealt{Zuber2010}) to quantify the contributions of boundary components to the total overturning streamfunction variability at different depths, latitudes and timescales. We also briefly consider the influence of model resolution upon inferences from MLR-CAR analysis.
\section{Overview: AMOC temporal and spatial variability}
The AMOC varies on timescales from days to millennia. The relatively short timeseries available from the RAPID AMOC observing programme has shed some light on short-term AMOC variability (Sections \ref{I_obsAMOC} and \ref{I_var_obs}). On seasonal timescales, observations and GCM output both show that wind stress on the ocean's surface dominates AMOC variability (e.g. \citealt{Hirschi2007}) via processes such as Ekman transport and coastal upwelling, resulting in large seasonal fluctuations in the strength of the overturning.
GCMs and observations agree in general (\citealt{Buckley2016}, Section \ref{I_var_mod}) that: (a) intra-annual timescales are dominated by wind stress via Ekman transport; (b) on longer inter-annual timescales, the thermal wind (or geostrophic) component dominates, implying the density structure of the basin is very important; (c) the AMOC is not coherent between the subtropical and subpolar gyres on inter-annual timescales. Interestingly, the sub-tropical gyre is dominated by inter-annual variability whereas the sub-polar gyre is dominated by variability with decadal periods (e.g. \citealt{Buckley2016}). Models have shown that at decadal and longer timescales there is some meridional coherence within the AMOC (e.g. \citealt{Delworth1993}). Further: (d) upper ocean western boundary density anomalies are important, playing a key role in AMOC variability. On decadal timescales it has been shown that the western boundary buoyancy anomalies are meridionally coherent, originating from the subpolar gyre or along the subtropical-subpolar gyre boundary (e.g. \citealt{Robson2012a}, \citealt{Buckley2012}). Interestingly, \cite{Frajka-Williams2018} find a lack of coherence when comparing fluctuations in transport at $16^\circ$N and $26^\circ$N between 2004 and 2017. With numerous years of data from several monitoring arrays throughout the Atlantic basin (RAPID, MOVE, OSNAP and SAMBA), they suggest the average strength and variability of the overturning streamfunction varies with latitude, with greater variability found in the South Atlantic (\citealt{Frajka-Williams2019}).
Numerous model investigations have shown that inter-annual to decadal variability is strongly linked to regions of deep water formation at high latitudes, particularly the convective mixing occurring in the Labrador Sea (e.g. \citealt{Kuhlbrodt2007}, \cite{Gelderloos2012}, Section \ref{I_var_mod}). However, no conclusive observational evidence for a link between Labrador Sea deep water formation and AMOC variability has been shown, and greater variability is in fact found at OSNAP East (\citealt{Lozier2019}). Convective mixing is strongly tied to the North Atlantic Oscillation resulting in fluctuations in convective mixing rates. Periods (e.g. in 1972) of total shut-down in convective mixing within the Labrador Sea have been observed (e.g. \citealt{Gelderloos2012}).
Salinity anomalies become important at centennial to millennial timescales within the HadCM3 model. Since salinity drives overall density fluctuations at high latitudes, this leads to variations in the AMOC (e.g. \citealt{Jackson2013}).
The first dynamical decomposition of the overturning circulation was performed by \cite{Lee1998} to investigate the time-mean and variability of the Indian overturning circulation. Recently, \cite{Waldman2021} decomposed the Atlantic overturning circulation into multiple combinations of components, including Ekman, barotropic, baroclinic, thermal, haline and western and eastern boundary contributions, to investigate centennial variability of the AMOC within the CNRM-CM6 model. The thermal wind component explains over 80\% of the low-frequency AMOC transport variance at all latitudes. They show that this variability is driven by southward propagating western boundary temperature anomalies at depths between 500m and 1500m, originating in the western subpolar gyre. Some variability can be attributed to the contributions made by salinity anomalies along the eastern boundary of the South Atlantic. They find it difficult to interpret variability of individual boundary contributions, due to the close correlation between boundary components (similar to Section \ref{s_TW_Dcmp}). This allows for density-compensating thermohaline variations and covarying densities at both boundaries, especially in the South Atlantic. The sources of mid-depth density variation found are attributed to deep convection in the Labrador Sea and dense water overflow in the Davis Strait.
Many studies have highlighted a possible slowing of the AMOC (e.g. \citealt{Rahmstorf2015}, \citealt{Caesar2018,Caesar2021}, Section \ref{I_AMOCrole}), emphasizing the need for improved understanding of the inherent natural variability shown by the AMOC through time and space.
\section{Evaluation of decomposition at RAPID $\&$ SAMBA}
\label{S3_RS}
Having understood the limitations of the overturning decomposition diagnostic in Chapter \ref{TJ_TM}, this section investigates its performance in comparison to published results for the RAPID and SAMBA arrays, located in the North and South Atlantic, respectively. The decomposition diagnostic is used in its final form (Equation \ref{e_tilde_TcF}) for output of HadGEM-GC3.1 simulations at $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolutions, where time-average input fields $\theta$ and $S$ are used alongside the $eos80$ seawater library (\citealt{Millero1980}) to calculate the thermal wind component of the overturning streamfunction. The depth-independent, Ekman and additional cell components are calculated as in Section \ref{S_App_DD}.
\subsection{RAPID array} \label{S3_RS_Rpd}
The RAPID array (introduced in Section \ref{I_oAMOC}) monitors the basin-wide meridional volume transport from Florida across to Tenerife at $26.5^\circ$N. Recent estimates of $17.0 \pm 3.3 $Sv are quoted by \cite{Frajka-Williams2019} for the maximum of the overturning streamfunction, at $\approx 1100$m depth.
The measurement methodology at RAPID uses a telephone cable to estimate the transport through the Florida Strait, and density moorings elsewhere along the boundaries of the open ocean. In contrast, the decomposition diagnostic relies on boundary densities and velocities for the whole section from Florida to the African coastline. Therefore, small differences in overturning streamfunction estimates between the two approaches are to be expected. Figure \ref{Rpd_ts} shows a timeseries of the maximum compensated overturning streamfunction at $26.5^\circ$N for $1/4^\circ$ model resolution, and contributing components estimated using the decomposition diagnostic, calculated using the methodology from Section \ref{s_var_tm_ovr_lat} and Equation \ref{e_Psi_max}. Specifically, the boundary component contributions are taken at the depth of the maximum of the compensated estimate for the overturning streamfunction $\tilde{\Psi}_{\max}^c$, hereafter, for brevity, known as the maximum estimated overturning streamfunction.
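In practice, these component timeseries are constructed by locating, for each year, the depth of the maximum of the volume-compensated estimated streamfunction and evaluating every contributing component at that depth. A minimal numpy sketch of this step is given below; the array names and shapes (\texttt{psi\_est}, \texttt{components}) are illustrative assumptions rather than the analysis code itself.
\begin{verbatim}
import numpy as np

def components_at_max(psi_est, components):
    """
    psi_est    : (n_time, n_depth) volume-compensated estimated streamfunction.
    components : dict of name -> (n_time, n_depth) boundary-component arrays.
    Returns the maximum-estimate timeseries and each component evaluated
    at the depth of that maximum, as described in the text.
    """
    k_max = np.argmax(psi_est, axis=1)          # depth index of maximum, per year
    t_idx = np.arange(psi_est.shape[0])
    out = {"max_estimate": psi_est[t_idx, k_max]}
    for name, field in components.items():
        out[name] = field[t_idx, k_max]         # component at the same depth
    return out
\end{verbatim}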
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/QTdeg_Dcmp26_5N_MaxStrm_ts.png}}
\caption[Timeseries of maximum overturning streamfunction at $26.5^\circ$N for the $1/4^\circ$ model resolution, and contributing components estimated using decomposition diagnostic.]{Timeseries of the maximum overturning streamfunction at $26.5^\circ$N for the $1/4^\circ$ model resolution, and contributing components estimated using the decomposition diagnostic. Shaded red region indicates the $\pm$ one standard deviation interval for the RAPID-observed streamfunction as quoted by \cite{Frajka-Williams2019}. The volume-compensated total expected streamfunction $\Psi^c_{\max}$ is shown in black. Other coloured lines indicate the volume-compensated estimated streamfunction ($\tilde{\Psi}_{\max}^c$, dashed grey), depth-independent flow contribution ($\Psi_{bot_{\max}}^c$, purple), additional cell contribution ($\Psi_{AC_{\max}}^c$, orange), Ekman transport ($\Psi_{Ekm_{\max}}^c$, yellow) and the thermal wind component ($\Psi_{W_{\max}}^c+\Psi_{E_{\max}}^c$, dark green).}
\label{Rpd_ts}
\end{figure*}
The estimated maximum overturning streamfunction ($\tilde{\Psi}_{\max}^c$, dashed grey, i.e. the cumulative meridional transport from the bottom up to the depth of the maximum) is approximately $1$ to $2$Sv larger than the expected maximum ($\Psi^c_{\max}$, black) overturning streamfunction calculated directly using meridional velocities (Equation \ref{T}). Possible explanations for the offset were discussed earlier in Chapter \ref{TJ_TM}, involving the quality of decomposition estimates for the thermal wind component $\Psi_{W_{\max}}^c+\Psi_{E_{\max}}^c$. For the majority of the timeseries, both the expected and estimated maximum overturning streamfunctions are within the one standard deviation uncertainty band for the observed maximum overturning streamfunction found by \cite{Frajka-Williams2019}. This suggests that the decomposition diagnostic is performing relatively well, and, by implication, that the $1/4^\circ$ HadGEM-GC3.1 model is performing well. Figure \ref{Rpd_ts} is further supported by Table \ref{Tab_SVRpd}, which gives the percentage temporal variation of $\Psi_{\max}^c$ explained by $\tilde{\Psi}_{\max}^c$ and $\tilde{\Psi}^c_{\max} - \Psi_{AC_{\max}}^c$ (which excludes additional cells).
For this analysis, increasing HadGEM3-GC3.1 model resolution to $1/12^\circ$ appears to improve the ability of the overturning diagnostic to explain the temporal variation of the expected maximum overturning streamfunction. Moreover, at coarser model resolutions, inclusion of the additional cell contribution $\Psi_{AC_{\max}}^c$ provides a clear improvement in diagnostic skill. The poor results found for the $1/4^\circ$ model when additional cells are ignored can be attributed to the model's ability to resolve the bathymetry of the western boundary around the Bahamas, and the flow through the Florida Strait.
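The percentage of temporal variance explained reported in Table \ref{Tab_SVRpd} is computed from the two timeseries evaluated at the depth of the maximum estimated streamfunction. A minimal sketch of one common definition (one minus the ratio of residual variance to the variance of the expected maximum) is given below; this is intended only as an illustration and may differ in detail from Equation \ref{e_PrcVarExp} in the Appendix.
\begin{verbatim}
import numpy as np

def pct_variance_explained(expected, estimate):
    """Percentage of the temporal variance of `expected` explained by
    `estimate`. Sketch only: assumes 1 - var(residual)/var(expected)."""
    resid = expected - estimate
    return 100.0 * (1.0 - np.var(resid) / np.var(expected))
\end{verbatim}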
\begin{table}[h!]
\centering
\begin{tabular}{ |P{2.7cm}||P{2.7cm}|P{2.7cm}|P{2.7cm}| }
\hline
\multicolumn{4}{|c|}{Temporal variance of $\Psi^c_{\max}$ explained at RAPID ($26.5^\circ$N)} \\
\hline
& $1^\circ$ & $1/4^\circ$ & $1/12^\circ$ \\
\hline
$\tilde{\Psi}^c_{\max}-\Psi_{AC_{\max}}^c$ & $75.9\%$ &$56.0\%$ & $95.8\%$\\
$\tilde{\Psi}^c_{\max}$ & $99.8\%$ & $97.9\%$ & $97.7\%$\\
\hline
\end{tabular}
\caption[Role of additional cell contribution to the temporal variation of the maximum expected overturning streamfunction at the RAPID array, $26.5^\circ$N.]{Temporal variation and additional cell contribution to the maximum expected overturning streamfunction at the RAPID array ($26.5^\circ$N), calculated at the depth of maximum estimated overturning streamfunction: temporal variation of $\Psi^c_{\max}$ explained by $\tilde{\Psi}^c_{\max}-\Psi_{AC_{\max}}^c$ and $\tilde{\Psi}^c_{\max}$ calculated using Equation \ref{e_PrcVarExp} in the Appendix.}
\label{Tab_SVRpd}
\end{table}
Whereas comparison of total overturning streamfunction estimates from RAPID observations and the decomposition diagnostic is possible, comparison of boundary contributions apart from the Ekman component $\Psi_{Ekm_{\max}}^c$ is not possible. At RAPID, the magnitude of the Ekman component has been observed to be of the order of $5$Sv (\citealt{Team2021}), whereas the decomposition diagnostic provides estimates of the order of $3$Sv, shown in Figure \ref{Rpd_ts}. Observed values (\citealt{Team2021}) are also provided for the Gulf Stream ($31$Sv) and upper-mid ocean ($-18$Sv) contributions. The upper mid-ocean transport per day, based on the RAPID mooring data, is the vertical integral of the transport per unit depth down to the deepest northward velocity (at approximately $1100$m).
Large additional cell $\Psi_{AC_{\max}}^c$ and depth-independent flow $\Psi_{bot_{\max}}^c$ contributions, due to strong currents through the Florida Strait, dominate the estimated maximum overturning streamfunction. In the real ocean these contributions to the overturning streamfunction at $26.5^\circ$N are mainly detected by telephone cable rather than mooring data. It is interesting to note a small negative $\Psi_{th_{\max}}^c$ contribution to the maximum estimated overturning streamfunction, suggesting a weak dependence of AMOC strength on geostrophic density gradients at RAPID.
The depth of the maximum overturning streamfunction becomes shallower in the $1/4^\circ$ model with time, as illustrated in Figure \ref{p_Rpd_ts_dpt}. The depth is initially $\approx 1050$m, slightly shallower than that reported for the RAPID array. By the end of the $1/4^\circ$ model run, however, the depth of the maximum has reduced to $\approx900$m. This is a shift of approximately 1 to 2 vertical cells, considering the gridded structure of the model. A shallower AMOC maximum streamfunction could indicate a weakening of the overturning circulation, with less dense return flows from high latitudes, or alternatively a less dense lower AMOC cell due to less dense AABW water, leading to a shallower upper AMOC cell. The expected maximum overturning streamfunction does show some signs of weakening through time, with weak trends found in the thermal wind and additional cell contributions. The reduction in thermal wind contribution to the maximum estimated overturning streamfunction would suggest either denser western boundary densities or lighter eastern boundary densities at depth, with the latter being most likely (see Figure \ref{p_LT_Bden} for trends in boundary densities).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/QTdeg_26_5N_MaxStrmDpt_ts.png}}
\caption[Depth of maximum overturning streamfunction for $26.5^\circ$N at $1/4^\circ$ model resolution.]{Depth of maximum overturning streamfunction for $26.5^\circ$N at $1/4^\circ$ model resolution, using the volume-compensated total expected overturning streamfunction $\Psi^c_{\max}$ (black) and the volume-compensated estimated overturning streamfunction ($\tilde{\Psi}_{\max}^c$, orange). Value from RAPID observations relative to a level of no motion at $4820$dbar in grey.}
\label{p_Rpd_ts_dpt}
\end{figure*}
\subsection{SAMBA} \label{S3_RS_Smb}
The SAMBA array (introduced in Section \ref{I_oAMOC}) is situated at approximately $34.5^\circ$S in the South Atlantic, to investigate the South Atlantic MOC, including features such as the eastern Agulhas leakage and the western Malvinas current. In contrast to RAPID, no telephone cable measurements are used; the monitoring array relies on density profiles, PIES (pressure-recording inverted echo sounders) and CPIES (PIES with an added current meter) providing baroclinic and barotropic estimates of the temporal variability of the overturning streamfunction. The use of PIES and CPIES at $1350$dbar renders the zero net flow assumption unnecessary; no volume compensation is applied at SAMBA. However, there is an issue with PIES and CPIES measurements due to sensor drift. Capturing the time-average barotropic (depth-independent) component is difficult and unreliable. As a work-around, a time-mean reference velocity is acquired from the OFES (Ocean For the Earth Simulator) model, which is also used to provide an estimate of the meridional volume transport in regions inshore of the $1350$dbar isobath (see \citealt{Meinen2018} for more information). The monitoring array estimates a maximum overturning streamfunction of the order of $14.6 \pm 5.4$Sv, shown by the shaded red region in Figure \ref{Smb_ts}. Similarly to RAPID, comparison of boundary components except for the Ekman component is not possible at SAMBA. Moreover, capturing the time-mean overturning streamfunction at SAMBA is difficult due to sensor drift. The focus of research at SAMBA tends therefore to be short-term variability, quantified in terms of transport anomalies as opposed to ``absolute'' values.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/QTdeg_Dcmp34_05S_MaxStrm_ts.png}}
\caption[Maximum overturning streamfunction timeseries for $34.05^\circ$S at $1/4^\circ$ model resolution, and contributing boundary components estimated using the decomposition diagnostic.]{Maximum overturning streamfunction timeseries for $34.05^\circ$S at $1/4^\circ$ model resolution, and contributing boundary components estimated using the decomposition diagnostic. Shaded red region indicates the $\pm$ one standard deviation interval for the SAMBA-observed maximum overturning streamfunction as quoted by \cite{Frajka-Williams2019}. The volume-compensated total expected maximum overturning streamfunction $\Psi^c_{\max}$ is shown in black. Other coloured lines indicate the volume-compensated estimated maximum streamfunction ($\tilde{\Psi}_{\max}^c$, dashed grey), depth-independent flow contribution ($\Psi_{bot_{\max}}^c$, purple), additional cell contribution ($\Psi_{AC_{\max}}^c$, orange), Ekman contribution ($\Psi_{Ekm_{\max}}^c$, yellow) and the thermal wind component ($\Psi_{W_{\max}}^c+\Psi_{E_{\max}}^c$, dark green).}
\label{Smb_ts}
\end{figure*}
Applying the decomposition diagnostic requires the existence of both western and eastern boundaries. Due to variations in coastline locations at different model resolutions, we find the eastern boundary near South Africa is absent in the $1/4^\circ$ and $1^\circ$ models at latitude $34.5^\circ$S; therefore exact replication of the monitoring array is not possible. The closest possible latitudes for which the eastern boundary is present ($34.05^\circ$S and $33.84^\circ$S for the $1/4^\circ$ and $1^\circ$ resolutions, respectively) are used, along with $34.53^\circ$S for the $1/12^\circ$ model.
Figure \ref{Smb_ts} indicates that the reconstructed estimate $\tilde{\Psi}_{\max}^c$ is comfortably within the one standard deviation limits for observations at SAMBA. The density component $\Psi_{th_{\max}}^c$ is considerably larger than any other contributor to $\tilde{\Psi}_{\max}^c$. There is a strong negative relationship between the density component $\Psi_{th_{\max}}^c$ of the maximum overturning streamfunction and the depth-independent flow $\Psi_{bot_{\max}}^c$ component, especially for years $100$ to $200$, indicating deviations from the average contributions expected, but having little to no effect upon the total estimated maximum overturning streamfunction. Some compensation between $\Psi_{bot_{\max}}^c$ and $\Psi_{th_{\max}}^c$ is to be expected due to the formulation of the decomposition diagnostic and the volume compensation applied (within the decomposition diagnostic) to ensure net zero flow. At this latitude, the contribution $\Psi_{AC_{\max}}^c$ of additional cells is smaller due to (a) the presence of fewer additional cells due to simpler bathymetry in comparison to the RAPID section (especially on the western boundary, no Florida Strait etc.), and (b) the presence of a weaker Malvinas current along the western boundary (compared with the Gulf Stream found near the RAPID array). The Ekman contribution $\Psi_{Ekm_{\max}}^c$ is stable around $1.5$Sv; this contrasts with estimates for $\Psi_{Ekm_{\max}}^c$ from \cite{Meinen2018} which show greater variation. This difference may be due to the higher resolution temporal data used in \cite{Meinen2018} compared with the annual mean data used here. A general weakening of both the maximum estimated and expected overturning streamfunction is observed in time, in conjunction with a shallowing of the maximum, especially in the first 150 years (not shown). It is not clear to which component this weakening might be attributed; Table \ref{Tab_CrSmb} in Appendix \ref{App_SmbRap} shows correlations between boundary components and the total estimate, suggesting that $\Psi_{th_{\max}}^c$ is a candidate. Interestingly, at both SAMBA and RAPID, we find an increase in maximum expected and estimated overturning streamfunction in the final 50 years of this 657-year run, with $\Psi_{th_{\max}}^c$ showing similar tendencies.
Similarly to the RAPID timeseries (Table \ref{Tab_SVRpd}), the total estimated maximum streamfunction $\tilde{\Psi}^c_{\max}$ for SAMBA captures the temporal variation of the expected maximum overturning streamfunction $\Psi^c_{\max}$ well (Appendix \ref{App_SmbRap}, Table \ref{Tab_SVSmb}). Generally, we find the percentage variance explained is higher than for the RAPID timeseries, except for particularly poor results found for the $1/4^\circ$ simulation.
\subsection{Further analysis at SAMBA} \label{S3_RS_SmbMore}
Closer investigation of Figure \ref{Smb_ts} shows high negative correlation between $\Psi^c_{th_{\max}}$ and $\Psi^c_{bot_{\max}}$ evaluated at the depth of maximum estimated overturning streamfunction $\tilde{\Psi}_{\max}^c$. For the $1/4^\circ$ model this relationship is particularly evident for the period $2060$ to $2240$, where large oscillations are present in both components. These oscillations are found to be present for a number of latitudes up to $30^\circ$S. Investigating correlations of all the components at the maximum streamfunction depth (Appendix \ref{App_SmbRap} Table \ref{Tab_CrSmb}) for all three resolutions shows that this relationship exists in both the $1/4^\circ$ and $1/12^\circ$ models. In this section, we investigate the relationship between $\Psi^c_{th_{\max}}$ and $\Psi^c_{bot_{\max}}$ for the $1/4^\circ$ model further.
\subsubsection{Maxima and minima of $\Psi^c_{bot_{\max}}$}
First, we isolate the years for which the $\Psi^c_{bot_{\max}}$ contribution (see Figure \ref{Smb_ts}) is a local temporal maximum or minimum. Then, we isolate the sea surface height (SSH), bottom densities and bottom velocities for each longitude of the SAMBA section corresponding to these years, and calculate average values for years of $\Psi^c_{bot_{\max}}$ maxima and minima. Figures \ref{p_MxMn_SSH} and \ref{p_MxMn_Bv} show the resulting composite SSH and bottom velocity properties with respect to longitude, for latitude $34.05^\circ$S.
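A sketch of this compositing step is given below, assuming an annual timeseries \texttt{psi\_bot} (the bottom-component contribution at SAMBA) and a longitude-time section such as SSH; the use of \texttt{scipy.signal.argrelextrema} to locate the local extrema, and the array names, are illustrative choices rather than the exact procedure used.
\begin{verbatim}
import numpy as np
from scipy.signal import argrelextrema

def composite_extrema(psi_bot, field, n_keep=5):
    """
    psi_bot : (n_time,) annual bottom-component contribution at SAMBA.
    field   : (n_time, n_lon) annual SSH / bottom velocity / bottom density.
    Returns longitude profiles averaged over the n_keep strongest local
    maxima and minima years of psi_bot, and anomalies from the run mean.
    """
    i_max = argrelextrema(psi_bot, np.greater)[0]         # years of local maxima
    i_min = argrelextrema(psi_bot, np.less)[0]            # years of local minima
    i_max = i_max[np.argsort(psi_bot[i_max])[-n_keep:]]   # strongest maxima
    i_min = i_min[np.argsort(psi_bot[i_min])[:n_keep]]    # deepest minima
    mean_all = field.mean(axis=0)                         # temporal mean over run
    comp_max = field[i_max].mean(axis=0)
    comp_min = field[i_min].mean(axis=0)
    return {"mean": mean_all,
            "max_years": comp_max, "max_anom": comp_max - mean_all,
            "min_years": comp_min, "min_anom": comp_min - mean_all}
\end{verbatim}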
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/SSH_Line_AvgMxMn_Anom_qtd.png}}
\caption[Average sea surface height over the $5$ years yielding local maximum and minimum of the $\Psi^c_{bot_{\max}}$ contribution to the maximum estimated streamfunction at SAMBA, with longitude.]{Sea surface height for years of maximum and minimum $\Psi^c_{bot_{\max}}$. Panel (a): average SSH over the $5$ years yielding local maximum (green) and minimum (blue) of the $\Psi^c_{bot_{\max}}$ contribution to the maximum overturning streamfunction, plotted against longitude along $34.05^\circ$S. The average of SSH over the entire simulations is also shown in orange. Panel (b) shows corresponding anomalies (i.e. differences from the temporal average over entire simulation) with longitude.}
\label{p_MxMn_SSH}
\end{figure*}
Panel (a) of Figures \ref{p_MxMn_SSH} and \ref{p_MxMn_Bv} shows the average SSH and bottom velocity (orange) over the model run, and the average SSH and bottom velocity corresponding to the 5 years with local maxima (green) and minima (blue) of the $\Psi^c_{bot_{\max}}$ contribution to the maximum estimated overturning streamfunction. Panel (b) shows corresponding anomalies relative to the average over the whole simulation (orange in Panel (a)).
Figure \ref{p_MxMn_SSH}(a) indicates a large change in SSH with longitude near $50^\circ$W, in the vicinity of the Brazil-Malvinas Confluence (BMC), where the colder northward Malvinas current (centred at $52^\circ$W) meets the warmer southward Brazil current (centred at $49^\circ$W). Figure \ref{p_MxMn_SSH}(b) emphasises the decrease in SSH anomaly for $\Psi^c_{bot_{\max}}$ maxima years near $51^\circ$W, associated with colder denser waters. Variation in the SSH anomaly for $\Psi^c_{bot_{\max}}$ minima years (blue) in the same region is also present, but is not of the same magnitude.
Figure \ref{p_MxMn_Bv}(a) indicates larger bottom velocities between $48^\circ$W and $54^\circ$W, again in the region of the BMC. Figure \ref{p_MxMn_Bv}(b) emphasises the increase in bottom velocity anomaly for $\Psi^c_{bot_{\max}}$ maxima years for these longitudes, and a corresponding decrease in bottom velocity anomaly for $\Psi^c_{bot_{\max}}$ minima years (blue) of approximately the same magnitude.
During years of strong $\Psi^c_{bot_{\max}}$ contribution, Figure \ref{p_MxMn_Bv} suggests a stronger northward Malvinas current (centred at $52^\circ$W) but a weaker southward Brazil current (centred at $49^\circ$W). In contrast, in years of weak $\Psi^c_{bot_{\max}}$, a weaker northward Malvinas current and stronger southward Brazil current are observed.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/BotVel_Line_AvgMxMn_Anom_qtd.png}}
\caption[Average bottom velocities over the $5$ years yielding local maximum and minimum of the $\Psi^c_{bot_{\max}}$ contribution to the maximum overturning streamfunction at SAMBA, plotted as a function of longitude at $34.05^\circ$S.]{Bottom velocities for years of maximum and minimum $\Psi^c_{bot_{\max}}$. Panel (a): average bottom velocities over the $5$ years yielding local maximum (green) and minimum (blue) of the $\Psi^c_{bot_{\max}}$ contribution to the maximum overturning streamfunction, plotted as a function of longitude at $34.05^\circ$S. The overall temporal average of bottom velocities is also shown in orange. Panel (b) shows corresponding anomalies (i.e. differences from the temporal average).}
\label{p_MxMn_Bv}
\end{figure*}
Figures (not shown) for bottom density, analogous to Figures \ref{p_MxMn_SSH} and \ref{p_MxMn_Bv}, reveal anomalously light densities near $53^\circ$W for $\Psi^c_{bot_{\max}}$ minima years, and anomalously dense waters near the same longitudes for $\Psi^c_{bot_{\max}}$ maxima years. The most significant features for each of SSH, bottom velocity and bottom density are found at longitudes corresponding to the Brazilian continental shelf, with little difference over the rest of the section.
Figure \ref{Smb_ts} indicates that in the time period 2060 to 2240, there are clear intervals corresponding to systematic weakening and strengthening of the bottom component $\Psi^c_{bot_{\max}}$ (purple) contributing to the maximum estimated overturning streamfunction. In the same period, the opposite temporal trends are observed in the thermal wind component $\Psi^c_{th_{\max}}$ (green). To further investigate the relationship between the contributions of $\Psi^c_{bot_{\max}}$ and $\Psi^c_{th_{\max}}$ to the maximum estimated overturning streamfunction at SAMBA, we consider three time intervals, $2065$ to $2090$, $2090$ to $2119$ and $2163$ to $2175$. For each of the three time intervals, we investigate the changes in SSH, bottom velocity and bottom density in terms of (a) longitude-time sections of anomalies, and (b) temporal trend in anomaly as a function of longitude.
\subsubsection{Analysis of time intervals $2065-2090$, $2090-2119$ and $2163-2175$}
Panels (a)-(c) of Figures \ref{p_Anl_SSH}, \ref{p_Anl_Bv}, and \ref{p_Anl_Bdens} show the anomalies in SSH, bottom velocity and bottom densities with respect to time and longitude for each of the three time periods $2065$ to $2090$, $2090$ to $2119$ and $2163$ to $2175$. For each quantity of interest, the anomaly is calculated by subtracting the temporal mean (over all years of model run) from the value of the quantity in a given year. In each figure, Panel (a) refers to a period where $\Psi^c_{bot_{\max}}$ increases from a minimum to a maximum, whereas both Panels (b) and (c) start at a maximum of $\Psi^c_{bot_{\max}}$ and end at a minimum.
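The anomaly sections and per-longitude trends in these figures can be computed in a few lines; the sketch below assumes an annual field of shape (time, longitude) and uses \texttt{scipy.stats.linregress} at each longitude, with the 95\% interval approximated as $1.96$ times the standard error of the slope. The names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

def anomalies_and_trend(field, years, t0, t1):
    """
    field : (n_time, n_lon) annual section (SSH, bottom velocity or density).
    years : (n_time,) model years; [t0, t1] is the interval of interest.
    Returns the anomaly section for the interval, and the per-longitude
    trend with an approximate 95% confidence half-width.
    """
    anom = field - field.mean(axis=0)            # anomaly from full-run mean
    sel = (years >= t0) & (years <= t1)
    yrs, section = years[sel], anom[sel]
    slope = np.empty(section.shape[1])
    ci95 = np.empty(section.shape[1])
    for j in range(section.shape[1]):            # trend at each longitude
        res = linregress(yrs, section[:, j])
        slope[j] = res.slope
        ci95[j] = 1.96 * res.stderr              # approximate 95% half-width
    return section, slope, ci95
\end{verbatim}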
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/SSH_Yrs_MxMn_Anom_qtd.png}}
\caption[Sea surface height anomalies (m) with time and longitude for three selected time intervals at SAMBA array for $1/4^\circ$ model.]{Panels (a)-(c) show sea surface height anomalies (m) with time and longitude for three selected time intervals (a) $2065-2090$, (b) $2090-2119$ and (c) $2163-2175$. Panel (d) shows the trend in SSH at each longitude for each of the three time intervals, $2065-2090$ (dashed blue), $2090-2119$ (green) and $2163-2175$ (red). Dashed lines in Panel (d) indicate a time interval corresponding to increasing $\Psi^c_{bot_{\max}}$, and solid lines time intervals of decreasing $\Psi^c_{bot_{\max}}$. Shaded regions indicate $95$\% confidence intervals for the estimated trend.}
\label{p_Anl_SSH}
\end{figure*}
In Figure \ref{p_Anl_SSH}(a)-(c), anomalies are present for all longitudes and years but the largest magnitude anomalies appear along the western side of the basin. Otherwise, there are regions of each section which suggest local persistent positive (or negative) anomalies with time and longitude, e.g. the region of positive anomaly centred near year 2168 and longitude $20^\circ$W in Figure \ref{p_Anl_SSH}(c). There appears to be westward propagation of SSH anomalies in this region, possibly attributed to Rossby waves. Panel (d) shows large negative trends in SSH near $51^\circ$W for time intervals $2090$ to $2119$ and $2163$ to $2175$ respectively. The uncertainty in the trend for time interval $2163$ to $2175$ (red) is relatively large, due to the relatively short length of the time interval.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/BotVel_Yrs_MxMn_Anom_qtd.png}}
\caption[Meridional bottom velocity anomalies (ms$^\text{-1}$) with time and longitude for three selected time intervals at SAMBA array for $1/4^\circ$ model.]{Panels (a)-(c) show meridional bottom velocity anomalies (ms$^\text{-1}$) with time and longitude for three selected time intervals (a) $2065-2090$, (b) $2090-2119$ and (c) $2163-2175$. Panel (d) shows the trend in bottom velocity at each longitude for each of the three time intervals, $2065-2090$ (dashed blue), $2090-2119$ (green) and $2163-2175$ (red). Dashed lines in Panel (d) indicate a time interval corresponding to increasing $\Psi^c_{bot_{\max}}$, and solid lines time intervals of decreasing $\Psi^c_{bot_{\max}}$. Shaded regions indicate $95$\% confidence intervals for the estimated trend.}
\label{p_Anl_Bv}
\end{figure*}
Figure \ref{p_Anl_Bv}(a)-(c) presents corresponding anomalies in bottom velocities. In contrast to SSH, bottom velocity anomalies are concentrated in two bands of longitudes, around 51$^\circ$W and 8$^\circ$E. Despite large anomalies in the eastern section, Panel (d) shows no clear trends here. In contrast, the western part of the section shows a significant trend in anomalies in a narrow band around 51$^\circ$W, positive for the time interval in Panel (a), and negative for the other two time intervals. For the latter time intervals, this suggests a weakening of bottom velocities in conjunction with the decreasing $\Psi^c_{bot_{\max}}$ contribution to the maximum estimated streamfunction. For the first time interval (in Panel (a)) we observe strengthening of bottom velocities with increasing $\Psi^c_{bot_{\max}}$.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/BotDens_Yrs_MxMn_Anom_qtd.png}}
\caption[Bottom density anomalies (kgm$^\text{-3}$) with time and longitude for three selected time intervals at SAMBA array for $1/4^\circ$ model.]{Panels (a)-(c) show bottom density anomalies (kgm$^\text{-3}$) with time and longitude for three selected time intervals (a) $2065-2090$, (b) $2090-2119$ and (c) $2163-2175$. Panel (d) shows the trend in bottom density at each longitude for each of the three time intervals, $2065-2090$ (dashed blue), $2090-2119$ (green) and $2163-2175$ (red). Dashed lines in Panel (d) indicate a time interval corresponding to increasing $\Psi^c_{bot_{\max}}$, and solid lines time intervals of decreasing $\Psi^c_{bot_{\max}}$. Shaded regions indicate $95$\% confidence intervals for the estimated trend.}
\label{p_Anl_Bdens}
\end{figure*}
Finally, Figure \ref{p_Anl_Bdens} shows anomalies in bottom densities. Anomalies with large magnitudes are concentrated at westernmost longitudes, in a band west of approximately 50$^\circ$W. It is interesting that this band is concentrated further onshore for the western coastline relative to bottom velocity and SSH anomalies. Panel (d) indicates a significant positive temporal trend in bottom density for the first time interval of increasing $\Psi^c_{bot_{\max}}$; i.e. bottom water is becoming denser. In contrast, Panel (d) also suggests generally significant negative trends for the other time intervals (of decreasing $\Psi^c_{bot_{\max}}$), corresponding to lighter bottom waters. The lightening of western boundary bottom densities through time (e.g. red colour in Panel (c)) corresponds to an increasing west-east density gradient and therefore an increase in the $\Psi^c_{th_{\max}}$ contribution.
Summarising, changes in the $\Psi^c_{bot_{\max}}$ contribution to the maximum overturning streamfunction occur predominantly in the western part of the section. Investigation of three time intervals of significant change in $\Psi^c_{bot_{\max}}$ suggests a weakening of $\Psi^c_{bot_{\max}}$ is coupled to variability of the Brazil-Malvinas Confluence (BMC). We find (e.g. Figure \ref{p_MxMn_Bv}(a)) during years of strong $\Psi^c_{bot_{\max}}$, a strong northward Malvinas current and a weaker southward Brazil current; in years of weaker $\Psi^c_{bot_{\max}}$, a weaker northward Malvinas current and stronger Brazil current is present. Prominent features in the longitude-time sections of each of SSH, bottom velocity and bottom density near the Brazilian continental shelf suggest denser waters are present during years of strong $\Psi^c_{bot_{\max}}$, possibly due to a stronger Malvinas current drawing colder ACC waters northward.
\subsubsection{Brazil-Malvinas Confluence (BMC)}
\label{S_BMC}
We hypothesise that a driving factor in the fluctuations found in the $\Psi^c_{bot_{\max}}$ contribution to the estimated maximum overturning streamfunction $\tilde{\Psi}^c_{\max}$ at SAMBA in this model is the BMC, and in particular changes in the relative strengths of the northward Malvinas and southward Brazil currents. In this section we investigate the location of the BMC with respect to latitude. Previous work (\citealt{Combes2014}) has shown that the location of the BMC is near $38^\circ$S, and that this location exhibits strong seasonal and inter-annual variability of some $1^\circ$ to $6^\circ$ latitude (\citealt{Goni2011}).
\cite{Goni2011} used satellite observations of SSH and SST to locate the latitude of the BMC: the BMC is identified as the latitude of maximum gradient of SSH or SST along the 1000m isobath. Both SSH and SST data give comparable results for the latitude of the BMC.
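A sketch of this calculation applied to model output is given below: for each annual-mean field, SSH (or SST) is sampled along the grid points closest to the chosen isobath and the latitude of the largest along-isobath gradient is returned as the BMC location. The helper assumes the isobath has already been extracted as a south-to-north ordered list of grid indices; all names are illustrative.
\begin{verbatim}
import numpy as np

def bmc_latitude_gradient(field, lats, isobath_ji):
    """
    field      : (n_lat, n_lon) annual-mean SSH or SST.
    lats       : (n_lat,) latitude of each grid row.
    isobath_ji : list of (j, i) grid indices tracing the ~1000 m isobath,
                 ordered from south to north along the western boundary.
    Returns the latitude of maximum along-isobath gradient of `field`,
    taken as the BMC location (following the satellite-based approach).
    """
    j_idx = np.array([j for j, _ in isobath_ji])
    i_idx = np.array([i for _, i in isobath_ji])
    values = field[j_idx, i_idx]                  # field sampled along the isobath
    lat_line = lats[j_idx]
    grad = np.abs(np.gradient(values, lat_line))  # gradient with respect to latitude
    return lat_line[np.argmax(grad)]
\end{verbatim}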
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Avg_SST_SSH.png}}
\caption[Time-average sea surface temperature and sea surface height above the geoid for the western South Atlantic in the $1/4^\circ$ model.]{Time-average sea surface temperature (SST, Panel (a)) and sea surface height (SSH, Panel (b)) above the geoid for the western South Atlantic. Time-average taken over entire $1/4^\circ$ model control run. The green line denotes the location of the $1047$m isobath, while dashed black lines show the average latitude of the BMC using the corresponding SST or SSH data. The average BMC latitude from SST data is $33.76^\circ$S and $33.74^\circ$S from SSH.}
\label{p_Avg_SS}
\end{figure*}
Panels (a) and (b) of Figure \ref{p_Avg_SS} show the time-average SST and SSH taken over the entire $1/4^\circ$ model control run, respectively. The green line denotes the location of the 1047m isobath, the closest isobath depth to 1000m within the model configuration. The dashed black lines indicate the time-average latitude of the BMC calculated using the methodology of \cite{Goni2011}, based on SST (Panel (a)) and SSH (Panel (b)). The latitude of the BMC is near $34^\circ$S when estimated from model SSH and SST, almost $4^\circ$ further north than the expected location based on observations (\citealt{Combes2014}). We also note that seasonality alone cannot be a driver, since we are examining annual average data. We find the BMC latitude to be near $38^\circ$S in the $1^\circ$ model.
This rather surprising estimate for the location of BMC in the $1/4^\circ$ model warrants further investigation. Figure \ref{p_Avg_SS} suggests that a combination of strong Malvinas current and weak Brazil current is present in the model from both time-average SST and SSH data. Therefore, the cold water of the Malvinas current penetrates further north, resulting in a more northerly BMC than expected within the $1/4^\circ$ model.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=12cm]{Fig_AMOC/Avg_PT2.png}}
\caption[Time-average potential temperature at 200m for $1/4^\circ$ model, and resulting BMC latitude.]{Time-average potential temperature ($\theta$) at 200m. The green line denotes the 1047m isobath and the white line shows the $10^\circ$C isotherm. Time-average taken over entire $1/4^\circ$ model control run. Using the approach of \cite{Garzoli1987}, the location of the BMC is given by the latitude of intersection of the isotherm and isobath. The average latitude of the BMC, denoted by the black dashed line, is $33.79^\circ$S.}
\label{p_Avg_PT200}
\end{figure*}
Prior to the satellite-based analysis of \cite{Goni2011}, \cite{Garzoli1987} determined the location of the BMC as the latitude where the $10^\circ$C isotherm at 200m intersects with the 1000m isobath. To validate our estimates for BMC location from SSH and SST data, we also calculate the BMC latitude using this approach. The 10$^\circ$C isotherm is shown in Figure \ref{p_Avg_PT200} as a white line on the potential temperature ($\theta$) plot at depth 200m. The 1047m isobath is shown as a green line. The time-average latitude of the BMC over the entire $1/4^\circ$ model run is then shown by the black dashed line.
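The isotherm-based definition admits a similarly simple sketch: walking northwards along the grid points of the 1047m isobath, the BMC is identified as the first latitude at which the 200m potential temperature exceeds $10^\circ$C. As before, the pre-extracted isobath index list and array names are assumptions made for illustration.
\begin{verbatim}
import numpy as np

def bmc_latitude_isotherm(theta200, lats, isobath_ji, t_crit=10.0):
    """
    theta200   : (n_lat, n_lon) potential temperature at 200 m.
    lats       : (n_lat,) latitudes of the grid rows.
    isobath_ji : (j, i) indices along the ~1000 m isobath, south to north.
    Returns the latitude where theta at 200 m first exceeds t_crit (deg C)
    moving northwards, i.e. where the 10C isotherm meets the isobath.
    """
    for j, i in isobath_ji:
        if theta200[j, i] >= t_crit:
            return lats[j]
    return np.nan        # isotherm never intersects the isobath in this field
\end{verbatim}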
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/grad_SS_sth_ts.png}}
\caption[Timeseries of Brazil-Malvinas confluence latitude calculated using SSH, SST and $10^\circ$C $\theta$ isotherm for the $1/4^\circ$ model.]{Timeseries of Brazil-Malvinas confluence latitude calculated using SSH (green), SST (red) and $10^\circ$C isotherm (orange) for the $1/4^\circ$ model. Coloured dashed lines represent the linear trend shown in each respective timeseries.}
\label{p_mgr_3}
\end{figure*}
Figure \ref{p_mgr_3} shows the smoothed timeseries of the BMC latitude throughout the $1/4^\circ$ model for all three BMC latitude calculations, using SSH (green), SST (red) and the $10^\circ$C isotherm (orange). The spin-up phase of the control run is included in the figure, between 1920 and 1950. A $\pm7$ year running mean is used to remove short-term variability. All three methods show excellent agreement for the location of the BMC, and moreover show similar temporal variability throughout, with slight differences in the extent of excursions. Further, all estimates show a southward trend in the location of the confluence in time, with corresponding linear fits shown by the coloured dashed lines.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/BMCvsBtt_ts.png}}
\caption[Comparison of $\Psi^c_{bot_{\max}}$ contribution to the maximum estimated overturning streamfunction and location of Brazil Malvinas confluence.]{Timeseries of the $\Psi^c_{bot_{\max}}$ contribution (purple) to the maximum estimated overturning streamfunction $\tilde{\Psi}_{\max}^c$ at SAMBA ($34.05^\circ$S) on the right hand y-axis, and the latitude of the Brazil-Malvinas confluence (orange) on the left hand axis. The $R^2$ value comparing the timeseries is 0.66.}
\label{p_Tgr_Btt}
\end{figure*}
Figure \ref{p_Tgr_Btt} compares timeseries of the $\Psi^c_{bot_{\max}}$ (depth-independent, bottom) contribution (purple) to the maximum estimated overturning streamfunction $\tilde{\Psi}_{\max}^c$ at SAMBA and the latitudinal fluctuations of the BMC (orange) using the \cite{Garzoli1987} methodology. During periods of a northward (southward) shifting BMC, the figure shows increased (decreased) $\Psi^c_{bot_{\max}}$. A similar comparison between $\Psi^c_{th_{\max}}$ at SAMBA and the latitudinal fluctuations of the BMC shows strong negative correlation (not shown). This supports the inference that fluctuations found in $\Psi^c_{bot_{\max}}$ and $\Psi^c_{th_{\max}}$ in Figure \ref{Smb_ts} due to changes in bottom velocities and densities are strongly associated with the corresponding fluctuations in the latitude of the Brazil-Malvinas confluence. Preliminary investigations of the relationship between temporal variation of BMC latitude with the meridional wind stress, zonal wind stress and curl of the wind stress did not indicate strong trends.
\section{Standard deviation of bottom properties}
We now turn our attention to the spatial and temporal variability of boundary properties relevant to the decomposition diagnostic, for the whole Atlantic basin.
\subsection{Decadal and long term standard deviation (sd.)} \label{Std:Thr}
The standard deviation of a variable is one measure of the variation of the variable about its mean. It is a useful way of quantifying whether, for example, the basin-wide mean overturning streamfunction over a $p$-year period (e.g. $p=10$ for decadal mean overturning streamfunction) at some location changes over the full period of data available (itself dependent on model resolution). For a sample $\{x_i\}_{i=1}^n$ of $n$ values of a random variable $X$, the sample standard deviation used here is defined by
\begin{equation}
sd\left[\left\{x_i\right\}_{i=1}^n\right] = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \overline{x}\right)^2},
\end{equation}
where $\overline{x}=(1/n)\sum_{i=1}^n x_i$ is the sample mean.
Using the $1/4^\circ$ model dataset as an example, we have access to annual data for the overturning streamfunction $\Psi^c(t_{\ell},y_j,z_k)$, and each of its contributing boundary components. In general, $\Psi^c$ is a function of time $t_{\ell}$ ($\ell = 1,...,n_t = 657$), latitude $y_j$ ($j=1,...,n_y = 491$) and depth $z_k$ ($k=1,...,n_z = 75$), but the focus here is on temporal characteristics, so we write $\Psi^c(t_{\ell})$ only, and perform separate analyses for each individual basin latitude and depth.
The $p$-year average of $\Psi^c$, written $A_p$, is estimated and its variability examined using the standard deviation $sd$. The $p$-year average of $\Psi^c$ estimated using years $h$ to $p-1+h$ is given by
\begin{equation}
A_{ph} = \frac{1}{p} \sum_{\ell=h}^{p-1+h}\Psi^c_{\ell}
\end{equation}
for $h=1,2,...,(n_t+1-p)$ where $\Psi^c_{\ell}=\Psi^c(t_{\ell})$. The standard deviation for $A_p$ over all $n_t$ years, for initial year $q$ of calculation, is found using
\begin{equation}
s_{pq} = sd\left[ A_{pq}, A_{p (q+p)}, A_{p (q+2p)}, ... , A_{p (q + \lfloor \frac{n_t+1-p}{p}\rfloor p)} \right]
\end{equation}
for $q =1,2,...,p$, where the $\lfloor\bullet\rfloor$ ``floor'' symbol indicates the largest integer less than or equal to its argument, here $\frac{n_t+1-p}{p}$.
Here, $s_{pq}$ represents the standard deviation of the $A_p$, assuming that the $p$-year average is defined with respect to starting year $q$. Since we are concerned that estimates of the variability of $A_p$ may be sensitive to the choice of starting year, for each latitude and depth we explicitly quantify the variation in $A_p$ independently of that choice. Specifically, we can use the average of the standard deviations $s_{pq}$ over $q$ to give
\begin{equation}
\overline{s}_p = \frac{1}{p} \sum_{q=1}^{p} s_{pq}
\label{e_sp_bar}
\end{equation}
which is an estimate of the standard deviation of $A_p$ over the $n_t$ years of data, not dependent on the starting year for the calculation. Further, we can use the standard deviation of $s_{pq}$ ($q=1,2,...,p$) given by
\begin{equation}
\sigma_{sp}=sd\left[ \left[ s_{pq} \right]_{q=1}^p \right]
\end{equation}
as a diagnostic to assess the extent to which the starting year affects the calculation of the standard deviation of $A_p$. When $\sigma_{sp}$ is small, we can be confident that the arbitrary choice of starting year does not impact the estimate for the standard deviation of $A_p$. When $\sigma_{sp}$ is large, $\overline{s}_p$ is still the best choice of estimate for the standard deviation of the $p$-year average of $\Psi^c$, $A_p$, but we are somewhat more careful with our interpretation.
As a heuristic to aid visual assessment of $\overline{s}_p$ and $\sigma_{sp}$ over latitude and depth, we calculate the ratio
\begin{equation}
\sigma_{sp}^* = \frac{\sigma_{sp}(y_j, z_k)}{\max_{j',k'}(\overline{s}_p(y_{j'},z_{k'}))}
\label{e_sigma_sp}
\end{equation}
of $\sigma_{sp}$ per latitude-depth combination with respect to the maximum value of $\overline{s}_p$ (over all latitudes and depths).
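A direct implementation of Equations \ref{e_sp_bar} and \ref{e_sigma_sp} for a single latitude-depth timeseries is sketched below: for each starting year $q$ the $p$-year block means are formed, their sample standard deviation is taken over non-overlapping blocks, and the resulting $s_{pq}$ values are then averaged (and their spread computed) over $q$. The treatment of incomplete blocks at the end of the record is a choice made for the sketch.
\begin{verbatim}
import numpy as np

def block_average_sd(psi, p):
    """
    psi : (n_t,) annual timeseries at one latitude and depth.
    p   : averaging period in years (e.g. 10 or 40).
    Returns (s_bar_p, sigma_sp): the q-averaged standard deviation of the
    p-year means, and the spread of that standard deviation over q.
    """
    n_t = psi.size
    s_pq = []
    for q in range(p):                      # starting year q = 1, ..., p
        n_blocks = (n_t - q) // p           # non-overlapping blocks from year q
        if n_blocks < 2:
            continue                        # need at least two blocks for a sd
        blocks = psi[q:q + n_blocks * p].reshape(n_blocks, p).mean(axis=1)
        s_pq.append(np.std(blocks, ddof=1)) # sample standard deviation of means
    s_pq = np.array(s_pq)
    return s_pq.mean(), np.std(s_pq, ddof=1)
\end{verbatim}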
\subsection{Linear trend in bottom densities}
\label{s_LR_D}
In-situ bottom densities at all model resolutions exhibit linear trends in time, illustrated in Figure \ref{p_LT_Bden}. These were estimated as the slope term from linear regression of bottom density, with time as the covariate over the full length of each model control run.
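The trend maps shown in Figure \ref{p_LT_Bden}, and the detrended residuals used below, correspond to a per-grid-point least-squares fit of bottom density against time. A compact numpy sketch follows; the array names are illustrative and land points (NaNs) are not handled explicitly here.
\begin{verbatim}
import numpy as np

def bottom_density_trend(rho_bot, years):
    """
    rho_bot : (n_time, n_lat, n_lon) annual-mean in-situ bottom density.
    years   : (n_time,) model years.
    Returns the linear trend at each grid point (kg m^-3 per year) and the
    detrended residuals used when recomputing the block standard deviations.
    """
    t = years - years.mean()                     # centre time for a stable fit
    flat = rho_bot.reshape(rho_bot.shape[0], -1) # (n_time, n_points)
    slope = (t @ (flat - flat.mean(axis=0))) / (t @ t)   # least-squares slope
    trend_map = slope.reshape(rho_bot.shape[1:])
    fit = trend_map[None, :, :] * t[:, None, None] \
          + rho_bot.mean(axis=0)[None, :, :]
    return trend_map, rho_bot - fit
\end{verbatim}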
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/LR.png}}
\caption[Temporal trend in in-situ bottom densities for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model datasets in the Atlantic basin.]{Temporal trend in in-situ bottom densities over the entire model control run for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model datasets in the Atlantic basin.}
\label{p_LT_Bden}
\end{figure*}
The trend in the $1^\circ$ model (Figure \ref{p_LT_Bden}(a)) is larger than that in both higher resolution models. For the majority of the basin we see a general positive trend in the $1^\circ$ model, indicating increasing densities in time, except along the Mid Atlantic Ridge (densities over shallower bathymetry getting lighter) and in the Labrador and Nordic Seas. In contrast, the $1/4^\circ$ model exhibits a negative trend throughout the basin, suggesting a lightening of densities everywhere with time. The highest resolution model (Figure \ref{p_LT_Bden}(c)) shows large regions of little to no trend. However, for the southern hemisphere there is a negative trend within the Brazil and Argentine basins, also present in the $1/4^\circ$ model. In the northern subpolar region of the $1/12^\circ$ model we find a positive trend within the Labrador Sea and south of Greenland, suggesting densities increasing in time here.
The trends in bottom densities for higher resolution models are likely attributable to model spin-up of the deep ocean. The shorter $1^\circ$ model run could be showing some signs of longer-term variability; however, the trends here are also likely to reflect model spin-up. The lightening of densities in the $1/4^\circ$ model Argentine-Brazil basin is further discussed in Section \ref{ACC_BdryD} and could be caused by a reduction in Weddell Sea deep water formation, as a result of large biases in Southern Ocean and ACC characteristics in the $1/4^\circ$ model.
We calculate $\overline{s}_p$, $\sigma_{sp}$, $\sigma_{sp}^*$ here having eliminated the linear trend in time, using the residuals of bottom densities found with the fitted linear regression model. However, removal of the trends shown in Figure \ref{p_LT_Bden} is found to have negligible influence on the estimates of $\overline{s}_p$, $\sigma_{sp}$, $\sigma_{sp}^*$. This is not surprising given the magnitudes of the trends reported in the figure, relative to the typical magnitude of in-situ density.
\subsection{Averaged standard deviations, \textbf{$\overline{s}_p$}}
In this section we investigate the averaged standard deviation $\overline{s}_p$ (Equation \ref{e_sp_bar}). Our primary interest is in longer timescale variability. Therefore, we concentrate our investigation of bottom properties on periods $p=10$ and $40$ years shown in Figures \ref{p_sb_p10} and \ref{p_sb_p40}. We also mention results for $p=1$.
The $1/4^\circ$ model (Figure \ref{p_sb_p10}) exhibits sizeable decadal variation ($\overline{s}_{10}$) for the interior, especially prominent within the Brazil and Argentine basins. All three model resolutions show the largest variation in decadal means ($\overline{s}_{10}$) along the western boundary, which is slightly smaller than that exhibited by the $\overline{s}_1$ case. Interestingly, on closer inspection, decadal variation near the western boundary seems to increase with increasing resolution; this is particularly evident near (a) $30^\circ$N on the western boundary, (b) $10^\circ$N on the eastern boundary, and (c) $20-30^\circ$S on the western boundary. All of these locations coincide with meandering western or eastern boundary currents, and the increase in decadal variation with higher resolution is likely due to the greater ability of higher resolution models to resolve these boundary currents. At high northern latitudes the $1^\circ$ model exhibits greater variability near the Greenland-Scotland ridge overflow regions, than both higher resolution models. This could be related to the weaker subpolar gyre and DWBC found in the $1^\circ$ model and changes in deep water properties and its southward transport.
\begin{figure}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Sbar_10_Bd.png}}
\caption[Averaged standard deviation $\overline{s}_{10}$ estimated using $p=10$ year observation periods of in-situ bottom density for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions within the Atlantic basin.]{Averaged standard deviation $\overline{s}_{10}$ estimated using $p=10$ year observation periods for in-situ bottom density. Panels (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ show $\overline{s}_{10}$ for each model resolution within the Atlantic basin.}
\label{p_sb_p10}
\end{figure}
For $p=1$ (not shown), estimates of $\overline{s}_1$ (average yearly standard deviation) show similar characteristics at each resolution. There are sizeable $\overline{s}_1$ values near the western boundary throughout the Atlantic basin, due to the presence of western boundary currents. The only major difference between $\overline{s}_1$ values for different model resolutions is found within the $1/4^\circ$ model. Here we find a larger $\overline{s}_1$ within the majority of the interior and a stronger signal in the Brazil-Argentine basin, very similar to that found within Figure \ref{p_sb_p10}(b) for $\overline{s}_{10}$, suggesting changes in bottom water densities in the basin. Figure \ref{p_sb_p10} shows greater variation between resolutions for $\overline{s}_{10}$ compared to $\overline{s}_1$.
For $p=40$ years, we cannot estimate $\overline{s}_{40}$ reliably for the $1^\circ$ model due to its short simulation length of 104 years. Figure \ref{p_sb_p40} for $p=40$ years at $1/4^\circ$ and $1/12^\circ$ model resolutions shows relatively similar structure to that shown in Figure \ref{p_sb_p10}. Comparing the $1/4^\circ$ and $1/12^\circ$ estimates for $\overline{s}_{40}$, there is agreement except for (a) the Cayman Trough near the Caribbean Sea, and (b) along the eastern boundary in the southern hemisphere. Generally, there is greater multi-decadal variability ($\overline{s}_{40}$) within the interior for the $1/4^\circ$ model, as mentioned previously. Differences for the Cayman Trough could be attributed to the ability of each model to resolve the trough itself.
\begin{figure}[ht!]
\centerline{\includegraphics[width=11cm]{Fig_AMOC/Sbar_40_Bd.png}}
\caption[Averaged standard deviation $\overline{s}_{40}$ estimated using $p=40$ year observation periods of in-situ bottom density for the $1/4^\circ$ and $1/12^\circ$ model resolutions within the Atlantic basin.]{Averaged standard deviation $\overline{s}_{40}$ estimated using $p=40$ year observation periods for in-situ bottom density. Panels (a) $1/4^\circ$ and (b) $1/12^\circ$ show $\overline{s}_{40}$ for each model resolution within the Atlantic basin.}
\label{p_sb_p40}
\end{figure}
The largest effect of increasing $p$ from 10 to 40 years is found in the $1/12^\circ$ model. Here, we see weaker $\overline{s}_{40}$ values near (a) $30^\circ$N on the western boundary, and (b) the northern section of the Gulf of Mexico. More generally, variability near the western boundary is somewhat reduced, suggesting weaker variability in bottom densities and western boundary currents in the Gulf of Mexico and Caribbean Sea at longer, multi-decadal timescales. Cayman Trough densities in the $1/12^\circ$ model show similar variability for both $p=$ 10 and 40 years, whereas shallower regions in the Gulf do not show variability at longer timescales. For the $1/4^\circ$ model, the lack of change in variability between the 10- and 40-year averages in the Argentine-Brazil basin, and over a large part of the basin, suggests long-term fluctuations that are unaffected by the removal of the model linear trend. Another possibility is a non-linear trend in bottom densities, with the largest changes at the start of the model run. Section \ref{ACC_BdryD} will discuss discrepancies in ACC transport through the Drake Passage, Antarctic boundary densities, and the magnitude of the Weddell Gyre, which could possibly lead to long-term variability or a shift in Atlantic bottom densities, via changes in Antarctic deep water formation.
\subsection{Bottom velocities}
The bottom component ($\Psi^c_{bot}$) of the overturning streamfunction is calculated from bottom velocities within the basin. We find relatively weak linear trends in the bottom velocities at each model resolution, illustrated in Figure \ref{p_LT_Bvel}. From Figure \ref{p_RC_Bvel}, the largest average bottom velocities are around $\pm 0.1$\,m\,s$^{-1}$, large compared to the magnitudes of the linear trends in Figure \ref{p_LT_Bvel}; therefore, there is little concern about the impact of model drift.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/LR_Bv.png}}
\caption[Linear temporal trend in meridional bottom velocities across entire length of model runs for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model datasets in the Atlantic basin.]{Linear temporal trend in meridional bottom velocities for the (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ model datasets in the Atlantic basin.}
\label{p_LT_Bvel}
\end{figure*}
Linear trends are strongest in the $1^\circ$ and $1/12^\circ$ models (Figure \ref{p_LT_Bvel}(a)(c)), especially on the western side of the basin, as expected since the strongest boundary currents occur here. Recalling that the $1/4^\circ$ timeseries is considerably longer than that of the other two model simulations, this may have some influence on the relative size of trends observed. The $1^\circ$ model exhibits larger trends in the northern subpolar regions, coinciding with the significantly weaker DWBC (Figure \ref{p_RC_Bvel}) and weaker subpolar gyre (Figure \ref{p_RC_BrtStrm}) found there. Interestingly, we note alternating positive and negative trends for the $1/12^\circ$ model where the DWBC flows southwards along the western boundary off the east coast of the U.S., suggesting a strong influence of bathymetry on bottom flows in these regions.
\begin{figure}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Sbar_10_Bv.png}}
\caption[Averaged standard deviation $\overline{s}_{10}$ estimated using $p=10$ year observation periods of meridional bottom velocity for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ model resolutions within the Atlantic basin.]{Averaged standard deviation $\overline{s}_{10}$ estimated using $p=10$ year observation periods for bottom velocity. Panels (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ show $\overline{s}_{10}$ for each model resolution within the Atlantic basin.}
\label{p_sbv_p10}
\end{figure}
Figure \ref{p_sbv_p10} shows the averaged standard deviation $\overline{s}_{p=10}$ for bottom velocities at each model resolution. Larger decadal variation is observed for both higher resolution models (Figure \ref{p_sbv_p10}(b)(c)), especially along the western boundary. This is likely due to greater variability within surface and deep western boundary currents, and improved resolution of the bottom currents along the shelf in higher resolution models. Output of all three model resolutions shows a relatively large $\overline{s}_{10}$ near the equator, even away from coastlines. This effect could possibly be attributed to tropical waves travelling along the equator. For the longer period variation ($\overline{s}_{p=40}$, not shown), we find the $1/4^\circ$ and $1/12^\circ$ models to again be very similar to each other and to the corresponding $\overline{s}_{p=10}$, but with smaller magnitudes and therefore less variation at longer timescales. We find a greater extent of large bottom velocity $\overline{s}_{10}$ (Figure \ref{p_sbv_p10}) in comparison to regions of large bottom density $\overline{s}_{10}$ (Figure \ref{p_sb_p10}). The normalised variation in standard deviation, \textbf{$\sigma_{sp}^*$}, for bottom densities and velocities is discussed in Appendix \ref{Var_BD_Nrm}.
\section{Multiple Linear Regression and Correlation Adjusted coRrelation}
In this section we assess the extent to which the variation of the expected compensated overturning streamfunction $\Psi^c$ can be explained in terms of the variation of boundary component contributions. That is, we seek to quantify the relative importance of boundary components in explaining $\Psi^c$ variations using statistical modelling. There is a large body of relevant statistics literature, including \cite{George2000} and \cite{Davison2003}. Here, we will make use of straightforward statistical techniques such as multiple linear regression (MLR), and so-called correlation-adjusted correlations (CAR).
To begin, Figure \ref{f_Corr} shows scatter plots from timeseries of detrended $\Psi^c$ (centred so that its time-mean is zero) and the main boundary components, for 4 latitude-depth combinations within the Atlantic basin. Panels (a), (e) and (i) show the relationship between $\Psi^c$ (x-axis) and the Atlantic western and eastern densities ($\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$), Ekman ($\Psi^c_{Ekm}$) and Atlantic bottom ($\Psi^{c,AB}_{bot}$, excluding bottom velocity contribution from GMC) components respectively, at latitude $11.18^\circ$S and depth $2102$m. $\Psi^{c,AB}_{bot}$ and $\Psi^{c,GMC}_{bot}$ refer to the depth-independent contribution made by Atlantic and Gulf of Mexico and Caribbean Sea basins, as shown in the decomposition of the basin into sub-domains in Figure \ref{p_Bath_sp}. The value of squared correlation between the components and the expected streamfunction is also shown in the top left of each panel. For this latitude-depth combination, we observe that Panel (a) exhibits the strongest correlation ($R^2=0.52$), suggesting the variability in annual mean $\Psi^c$ is more related to variability in densities along the eastern and western boundaries of the Atlantic section, than to $\Psi^c_{Ekm}$ or $\Psi^{c,AB}_{bot}$.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/QTdeg_CTrsp_Sct_Ex.png}}
\caption[Scatter plots of detrended timeseries (centred to zero time-mean) of West + East Atlantic densities, Ekman and bottom Atlantic (GMC not included) components against the expected compensated overturning streamfunction $\Psi^c$ for 4 latitude-depth combinations.]{Scatter plots of detrended and time-mean-removed timeseries of West + East Atlantic densities (Panels (a)(b)(c)(d)), Ekman (Panels (e)(f)(g)(h)) and Bottom Atlantic (Panels (i)(j)(k)(l)) components against the expected compensated overturning streamfunction $\Psi^c$ for 4 latitude-depth combinations. Locations for Panels are as follows: (a)(e)(i) are latitude $11.18^\circ$S and depth $2102$m, (b)(f)(j) are latitude $25.37^\circ$N and depth $5$m, (c)(g)(k) are latitude $35.1^\circ$N and depth $857$m and (d)(h)(l) are latitude $35.1^\circ$N and depth $4488$m. The squared correlation $R^2$ values are shown in the top left corner of each Panel.}
\label{f_Corr}
\end{figure*}
At latitude $35.1^\circ$N and depth $4488$m (Panels (d)(h)(l)) we find $\Psi^{c,AB}_{bot}$ (Panel (l)) shows the greatest correlation with expected streamfunction $\Psi^c$ ($R^2=0.75$). In a similar fashion, $\Psi^c_{Ekm}$ appears to be relatively influential at latitude $25.37^\circ$N and depth $5$m (Panel (f)), confirming the view that the Ekman contribution ($\Psi^c_{Ekm}$) is important near the surface, and bottom velocities ($\Psi^{c,AB}_{bot}$) are important at depth.
Another common approach to assessing the relationship between two variables is partial correlation. The method involves estimating the residuals of two multiple linear regressions on a set of so-called controlling variables, and then calculating the correlation between the regression residuals. In this sense, only that correlation between variables which cannot be explained by the controlling variables is accounted for in the partial correlation. Thus, for example, if we were interested in the contribution made by $\Psi^{c,AB}_{bot}$ to $\Psi^c$, firstly we would individually regress $\Psi^{c,AB}_{bot}$ and $\Psi^c$ onto all the remaining boundary components. Then we would calculate the correlation between the residuals from both these regression models as an estimate of the partial correlation between $\Psi^{c,AB}_{bot}$ and $\Psi^c$. The partial correlation lies between $-1$ and $1$, with a value near $\pm1$ implying a strong partial correlation. A weakness of both correlation analysis and partial correlation is the lack of physical interpretability: being able to quantify each contribution directly is important for understanding the physical role of the boundary components. Therefore, instead of partial correlations we use correlation-adjusted correlation (CAR) to assess the importance of boundary components in explaining variation of $\Psi^c$.
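For completeness, the partial-correlation recipe just described can be sketched in a few lines of Python. The data and variable names below are synthetic and purely illustrative.
\begin{verbatim}
# Illustrative sketch of the partial-correlation recipe described above, using
# synthetic data: regress the component of interest and Psi^c on the remaining
# (controlling) components, then correlate the two sets of residuals.
import numpy as np

def partial_correlation(y, x, controls):
    Z = np.column_stack([np.ones_like(y), controls])   # controls plus an intercept
    def resid(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return np.corrcoef(resid(y), resid(x))[0, 1]

rng = np.random.default_rng(1)
controls = rng.standard_normal((657, 3))               # e.g. Ekman and bottom terms
x = controls @ [0.5, -0.2, 0.1] + rng.standard_normal(657)   # hypothetical density term
y = 0.8 * x + controls @ [1.0, 0.3, -0.4] + 0.1 * rng.standard_normal(657)
print(partial_correlation(y, x, controls))             # correlation after controlling
\end{verbatim}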
\subsection{CAR methodology}
The following discussion draws on the presentation of CAR scores by \cite{Zuber2010} and linear regression modelling from \cite{Davison2003}. Consider fitting a linear regression to a sample $\{y_i\}_{i=1}^n$ of size $n$ for response $Y$ in terms of a sample $\{x_{ij}\}_{i=1,j=1}^{n,p}$ of $p$ explanatory variables $\{X_j\}_{j=1}^p$. For each observation, we fit
\begin{eqnarray}
y_i = \sum_{j=1}^{p} x_{ij} \beta_j + \epsilon_i \label{Rgr1}
\end{eqnarray}
where $\{\epsilon_i\}_{i=1}^n$ are Gaussian noise random variables with zero mean and some unknown variance $\sigma^2$, and $\{\beta_j\}_{j=1}^p$ is the set of regression coefficients we want to estimate. We can write Equation~\ref{Rgr1} in matrix terms as
\begin{eqnarray}
\mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \label{Rgr2}
\end{eqnarray}
where $\mathbf{y}$ is an $n \times 1$ column vector of response values, $\mathbf{\beta}$ is a $p \times 1$ column vector of regression parameters and $\mathbf{X}$ is an $n \times p$ matrix of values of the explanatory variables. The least squares and maximum likelihood estimate $\hat{\mathbf{\beta}}$ of $\mathbf{\beta}$ is then
\begin{eqnarray}
\hat{\mathbf{\beta}} = (\mathbf{X}' \mathbf{X})^{-1}\mathbf{X}'\mathbf{y}, \label{Rgr3}
\end{eqnarray}
and the fitted values of $Y$ are
\begin{eqnarray}
\hat{\mathbf{y}} = \mathbf{X} \hat{\mathbf{\beta}}. \label{Rgr4}
\end{eqnarray}
For the case of our AMOC decomposition, the response $Y$ is the total expected AMOC overturning streamfunction ($\Psi^c$), and the explanatory variables $\{X_j\}_{j=1}^{13}$ are: (a) Ekman, (b) additional cell, (c) Atlantic bottom, (d) GMC bottom, (e) compensation included for $\Psi^c$ (volume imbalance term), and the eastern and western density contributions of (f) Atlantic, (g) GMC, (h) MAR and (i) remainder components (their time-mean contributions are illustrated in Figure \ref{p_TM_CAR}). It can be shown that the variance of the estimated parameter vector is given by
\begin{eqnarray}
\mathbf{var}(\hat{\mathbf{\beta}}) = \sigma^2 (\mathbf{X}'\mathbf{X})^{-1}. \label{Rgr5}
\end{eqnarray}
The goodness of fit of the linear model is often quantified using the $R^2$ statistic, which is given by
\begin{eqnarray}
(\mathbf{y}'\mathbf{y}) (1-R^2) &=& (\mathbf{y}-\hat{\mathbf{y}})' (\mathbf{y}-\hat{\mathbf{y}}) \nonumber \\
&=& (\mathbf{y}-\mathbf{X} (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}' \mathbf{y})' (\mathbf{y}-\mathbf{X} (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}' \mathbf{y}) \nonumber \\
&=& \mathbf{y}' \mathbf{y} - \mathbf{y}' \mathbf{X} (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}' \mathbf{y}. \label{Rgr6}
\end{eqnarray}
Equation~\ref{Rgr5} suggests that, in order to decorrelate the estimated regression vector $\hat{\mathbf{\beta}}$, we need to pre-multiply it by a quantity proportional to the inverse square root of its variance matrix, i.e. by a quantity proportional to $(\mathbf{X}'\mathbf{X})^{1/2}$. This motivates the definition of a CAR vector given by
\begin{eqnarray}
\mathbf{c} = (\mathbf{y}' \mathbf{y})^{-1/2} (\mathbf{X}' \mathbf{X})^{-1/2}\mathbf{X}'\mathbf{y}. \label{Rgr7}
\end{eqnarray}
With this definition, Equation~\ref{Rgr6} becomes
\begin{eqnarray}
(\mathbf{y}'\mathbf{y}) (1-R^2) = \mathbf{y}' \mathbf{y} (1 - \mathbf{c}' \mathbf{c} ) \label{Rgr8}
\end{eqnarray}
or
\begin{eqnarray}
R^2 = \mathbf{c}' \mathbf{c}. \label{Rgr9}
\end{eqnarray}
That is, the squares of the elements $\{c_j\}_{j=1}^p$ of the CAR vector sum to $R^2$, and quantify the contribution of each explanatory variable to the quality of fit of the regression model. If we define the CAR scores $\{\phi_j\}_{j=1}^p$ by
\begin{eqnarray}
\phi_j = c_j^2,
\end{eqnarray}
then $\phi_j$ quantifies the value (in terms of $R^2$) of explanatory variable $X_j$ in the regression model. Similarly, we see that the part of the sum of squares $\mathbf{y}'\mathbf{y}$ described by explanatory variable $X_j$ is given by $(\mathbf{y}'\mathbf{y})\phi_j$. It is therefore also useful to define the scaled CAR scores using
\begin{eqnarray}
\phi_j^* = (\mathbf{y}'\mathbf{y}) c_j^2. \label{Rgr10}
\end{eqnarray}
Both CAR scores $\{\phi_j\}$ and scaled CAR scores $\{\phi_j^*\}$ provide useful measures of how different explanatory variables explain the response variable. By mean-centering the response and explanatory variables before CAR analysis, the CAR scores can be expressed in terms of the variance of the response.
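The calculation is straightforward to implement. The Python sketch below, with synthetic data standing in for $\Psi^c$ and the 13 boundary components at one latitude-depth pair, computes the CAR vector, the CAR scores and the scaled CAR scores as defined above, and checks numerically that the CAR scores sum to the $R^2$ of the ordinary least squares fit. It is an illustration only, not the analysis code used for this chapter.
\begin{verbatim}
# Illustrative sketch with synthetic data standing in for Psi^c (y) and the 13
# boundary components (columns of X) at one latitude-depth pair.
import numpy as np

rng = np.random.default_rng(2)
n, p = 657, 13
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
X -= X.mean(axis=0)                                  # mean-centre explanatory variables
y -= y.mean()                                        # mean-centre response

evals, evecs = np.linalg.eigh(X.T @ X)               # symmetric inverse square root of X'X
XtX_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
c = XtX_inv_sqrt @ X.T @ y / np.sqrt(y @ y)          # CAR vector c defined above
phi = c ** 2                                         # CAR scores
phi_star = (y @ y) * phi                             # scaled CAR scores

beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # ordinary least squares fit
r2 = 1.0 - np.sum((y - X @ beta) ** 2) / (y @ y)
print(np.isclose(phi.sum(), r2))                     # True: CAR scores sum to R^2
\end{verbatim}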
In summary, CAR scores supplement a standard regression analysis, providing a clearer description of how explanatory variables correlate with the response variable. In a standard regression analysis, the estimated regression coefficients are correlated. It is therefore possible to explain the variability in the response almost equally well using different choices of regression variables. This means that it is difficult to uniquely allocate variation of the response to a particular explanatory variable (and this is one motivation for considering partial correlations). The CAR vector is created by rotating the estimated regression variables such that they become uncorrelated and independent, but such that rotated regression variables explain the variation of the response exactly as well as the original regression variables. This means that the CAR vector (and scores) gives an ``importance'' measure for the contribution of each explanatory variable in explaining the response. CAR scores therefore also provide a means of ranking the importance of contributions from different explanatory variables. It is important to note that the quality of fit of the MLR-CAR model is identical to that of the MLR model. However, the MLR-CAR scores aid the physical interpretation of the role of the covariates. Further details are provided by \cite{Zuber2010}, with applications, extensions and discussion including \cite{Bocinsky2014}, \cite{Teisseyre2016}, \cite{Kessy2018}, \cite{Setiawan2018}, \cite{Welchowski2019}, \cite{Bommert2021}, \cite{Mimi2021}.
The time-mean contributions of the decomposition components to the overturning streamfunction for the $1/4^\circ$ model are shown in Figure \ref{p_TM_CAR}, where eastern and western density contributions are summed together into a single thermal wind term for each region for conciseness. The components contributing to $\Psi^c$ are: (a) compensation to $\Psi^c$ ($\Psi_{vol}$, volume imbalance); (b) additional cell contribution ($\Psi^c_{AC}$); (c) Ekman component ($\Psi^c_{Ekm}$); (d) bottom component for the Atlantic boundary ($\Psi^{c,AB}_{{bot}}$); (e) bottom component within the GMC ($\Psi^{c,GMC}_{{bot}}$); (f) eastern and western boundary density contribution of the Atlantic boundary ($\Psi^{c,AB}_{W}$; $\Psi^{c,AB}_{E}$); (g) eastern and western boundary density contribution of the MAR ($\Psi^{c,MAR}_{W}$, $\Psi^{c,MAR}_{E}$); (h) eastern and western boundary density contribution of the GMC ($\Psi^{c,GMC}_{W}$, $\Psi^{c,GMC}_{E}$) and (i) the remaining unaccounted for eastern and western boundary density contribution ($\Psi^{c,R}_{W}$, $\Psi^{c,R}_{E}$). To be clear, the eastern and western boundary density contributions for each region are accounted for as separate explanatory variables within the MLR-CAR score analysis, but for brevity are usually displayed as the sum of components.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/plt_TM_MLRcmp.png}}
\caption[Contribution to the time-mean overturning streamfunction for the $1/4^\circ$ model.]{Contributions to the time-mean overturning streamfunction made by (a) compensation to $\Psi^c$ ($\Psi_{vol}$, volume imbalance), (b) additional cell contribution ($\Psi^c_{AC}$), (c) Ekman component ($\Psi^c_{Ekm}$), (d) bottom component for the Atlantic boundary ($\Psi^{c,AB}_{{bot}}$), (e) bottom component within the GMC ($\Psi^{c,GMC}_{{bot}}$), (f) eastern and western density contribution of the Atlantic boundary ($\Psi^{c,AB}_{W}$, $\Psi^{c,AB}_{E}$), (g) eastern and western density contribution of MAR ($\Psi^{c,MAR}_{W}$, $\Psi^{c,MAR}_{E}$), (h) the remaining unaccounted for eastern and western density contribution ($\Psi^{c,R}_{W}$, $\Psi^{c,R}_{E}$) and (i) eastern and western density contribution of the GMC ($\Psi^{c,GMC}_{W}$, $\Psi^{c,GMC}_{E}$). The same components will be used as explanatory variables in the MLR-CAR analysis.}
\label{p_TM_CAR}
\end{figure*}
Figure \ref{p_TM_CAR} shows the dominant contribution made by Atlantic boundary density components (Panel (f)) to the time-mean overturning streamfunction $\Psi^c$ throughout the basin. It is interesting to note, however, that densities in the Gulf of Mexico and Caribbean Sea (Panel (i)) are responsible for a large proportion of the northward surface transport for latitudes near $20^\circ$N. As discussed in Chapter \ref{TJ_TM}, the Atlantic bottom (Panel (d)) and additional cell (Panel (b)) components make large contributions to the overturning streamfunction near $28^\circ$N, due to boundary currents. The large contribution in the same region from the ``remainder'' or ``Rest'' density term (Panel (h)) is attributed to intermediary islands, e.g. the Bahamas. Other regions of strong density contributions that do not come from the GMC, MAR or main Atlantic boundaries are at depth in the southern hemisphere, due to the Malvinas islands and Walvis ridge. Panel (g) shows that MAR densities make a large contribution to time-mean $\Psi^c$ at low latitudes in the southern hemisphere and, due to the Reykjanes ridge, at high northern latitudes.
Now we estimate the MLR-CAR scores of boundary component contributions to the temporal variation of $\Psi^c$ with respect to latitude and depth for the $1/4^\circ$ model. Specifically, for each latitude-depth combination we use compensated boundary component contributions which have been temporally linearly detrended, with time-mean removed. The analysis is performed for various timescales by first filtering the data using a 6th order Butterworth band-pass filter.
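As an illustration of the filtering step, the sketch below applies a 6th order Butterworth band-pass filter to a synthetic annual-mean series. The choice of cut-offs shown and the use of zero-phase (forward-backward) filtering via second-order sections are assumptions made for the example, not a record of the exact settings used in the analysis.
\begin{verbatim}
# Illustrative sketch of the band-pass filtering step (synthetic series; the
# cut-offs and zero-phase sosfiltfilt filtering are assumptions for the example).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_pass(series, short_period, long_period, order=6):
    """Retain variability with periods between short_period and long_period (years)."""
    fs = 1.0                                         # annual means: one sample per year
    sos = butter(order, [1.0 / long_period, 1.0 / short_period],
                 btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, series)

rng = np.random.default_rng(3)
psi = np.cumsum(rng.standard_normal(657)) * 0.1      # synthetic overturning series (Sv)
psi_10_30 = band_pass(psi - psi.mean(), 10.0, 30.0)  # keep 10-30 year variability
\end{verbatim}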
The CAR vector estimating the contribution of each boundary component is closely related to the $R^2$ value of the MLR model. Usually, $R^2<1$ since the regression model is not perfect. That is, there is variation in $\Psi^c$ unexplained by boundary components for each latitude-depth pair. It is useful to also visualise this unexplained variation, shown under the title ``Unexplained in MLR'' in the figures that follow. We find the total $R^2$ value of the MLR-CAR model is comfortably above an acceptable threshold (95$\%$) except for cells adjacent to the ocean's surface. Here, the overturning streamfunction is zero so shows no variation (by construction, to conserve volume), and therefore we cannot reasonably expect the MLR-CAR model to perform well.
\subsection{Results}
Figure \ref{f_v_CAR_T} shows the total variance of $\Psi^c$ for the different band-pass timescales under consideration. By construction (Equation \ref{Rgr8}), the total variance of $\Psi^c$ is equal to the sum of scaled CAR scores $\phi_i^*$ over components $i$, plus the unexplained variance for the chosen timescale band.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_TotT.png}}
\caption[Total variance of $\Psi^c$ for increasing timescales in the $1/4^\circ$ model.]{Total variance of $\Psi^c$ for (a) unfiltered annual-mean data, and for data that has been band-pass filtered (b) 1-50 years, (c) 1-10 years, (d) 10-30 years and (e) 30-50 years in the $1/4^\circ$ model. Estimation is not possible in shaded grey regions due to bathymetry and latitudes near the equator.}
\label{f_v_CAR_T}
\end{figure*}
We see that the total variance of $\Psi^c$ reduces systematically, with larger variation at shorter timescales (and high frequencies). The largest variance is found between $30^\circ$N and $45^\circ$N for all timescales. This supports the results in Section \ref{Std:Thr}, where large variation was found in bottom velocities and densities at short and long periods for this region, south of Nova Scotia. The greater variance can be attributed to the DWBC flowing over and around a number of seamounts (New England and Corner Rise), resulting in large variation in boundary properties.
The MLR-CAR analysis below investigates the contributions made by each boundary component to the total variance of $\Psi^c$. Results are presented as latitude-depth plots for the whole Atlantic basin in two formats, (a) showing scaled CAR scores (Equation~\ref{Rgr10}), the contribution of each component to the total variance of $\Psi^c$ (in $\text{Sv}^2$), and (b) the percentage contribution of the scaled CAR score of each component (such that the total percentage contribution per latitude-depth pair is 100\%). Unfiltered (annual mean) data is presented first, then results for a range of different band-pass time filter intervals are discussed.
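The relationship between the two formats amounts to a normalisation at each latitude-depth pair; the fragment below sketches it for a hypothetical array of scaled CAR scores indexed by component, latitude and depth.
\begin{verbatim}
# Sketch of the two display formats for a hypothetical array of scaled CAR
# scores with axes (component, latitude, depth): format (a) plots phi_star
# directly; format (b) normalises to a percentage at each latitude-depth pair.
import numpy as np

phi_star = np.abs(np.random.default_rng(4).standard_normal((13, 60, 75)))  # Sv^2, synthetic
total = phi_star.sum(axis=0, keepdims=True)        # local total explained variance
percentage = 100.0 * phi_star / total              # sums to 100% per latitude-depth pair
\end{verbatim}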
\subsubsection{Unfiltered data}
Figure \ref{f_v_CAR_1} shows the contribution of each component (using the scaled CAR score $\phi_i^*$ per component $i$) to the total variance exhibited by $\Psi^c$ (Figure \ref{f_v_CAR_T}(a)), for the full annual-mean dataset with no filter applied. In the figure, western and eastern boundary contributions are summed for convenience (thermal wind contribution) but will be considered individually later, and Panel (i) shows the unexplained variance from the MLR-CAR model. Further, the contribution from the $\Psi^c$ compensation term ($\Psi_{vol}$, Panel (a) Figure \ref{p_TM_CAR}) is not shown, since it was found to be negligible.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_1a.png}}
\caption[Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for unfiltered annual-mean data.]{Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for unfiltered annual-mean data. Panels show the contribution made by (a) $\Psi^{c,AB}_{{bot}}$, (b) $\Psi^{c,GMC}_{{bot}}$, (c) $\Psi^c_{Ekm}$, (d) $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, (e) $\Psi^{c,MAR}_{W}$ + $\Psi^{c,MAR}_{E}$, (f) $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$, (g) $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$, (h) $\Psi^c_{AC}$ and (i) variance unexplained by MLR-CAR. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_v_CAR_1}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARp_1a.png}}
\caption[Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for unfiltered annual-mean data.]{Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for unfiltered annual-mean data. Panel details are given in Figure \ref{f_v_CAR_1} caption.}
\label{f_p_CAR_1}
\end{figure*}
We find a large $\Psi^{c,AB}_{{bot}}$ variance contribution (Panel (a)) at mid-depths between $30^\circ$N and $45^\circ$N, in addition to small pockets at mid-depths in the subtropical and South Atlantic regions. We find the largest $\Psi^{c}_{{Ekm}}$ contributions (Panel (c)) within the upper 2500m in the northern hemisphere, particularly between $10^\circ$N and $20^\circ$N, and $35^\circ$N and $50^\circ$N, at the locations of surface easterlies and westerlies. The density contribution to variance is large especially from the Atlantic boundaries (Panel (d), $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$) and the GMC (Panel (g), $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$). The largest contributor to the variance within the southern hemisphere at mid-depth is $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, with little contribution from $\Psi^{c,AB}_{{bot}}$ near latitudes of the Malvinas islands and current. $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$ (Panel (f)) exhibits a relatively strong contribution at 3500m near $30^\circ$S, potentially attributable to the Malvinas islands or the Walvis ridge; analysis of the individual components (not shown) reveals that this contribution is due to $\Psi^{c,R}_{W}$ (eastern side of the Walvis ridge or eastern side of the Malvinas islands). Scaled CAR scores for the other components are small. Summarising, Figure \ref{f_v_CAR_1} provides general context regarding the contributions made by different boundary components on a basin-wide scale: large contributions from Atlantic boundary densities ($\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$), wind-stresses ($\Psi^{c}_{{Ekm}}$) due to westerly and trade winds, and the Atlantic bottom velocity term ($\Psi^{c,AB}_{{bot}}$) south of Newfoundland, attributable to the DWBC and presence of seamounts.
Figure \ref{f_p_CAR_1} shows the corresponding percentage local scaled CAR score for each boundary component. That is, for each latitude-depth pair, the figure shows the value of $100 \times \phi_i^*/\sum_j \phi_j^*$, where the indices $i$ and $j$ refer to the 13 components considered for this decomposition. Figure \ref{f_p_CAR_1} illustrates that the CAR score contributions are consistent with physical intuition. From Panel (c) we see that $\Psi^{c}_{{Ekm}}$ dominates near the ocean's surface, with some residual influence down to 3000m depth in the northern hemisphere. Panel (a) shows that $\Psi^{c,AB}_{{bot}}$ dominates the abyss, but with sizeable influence at depths as shallow as 2000m for latitudes between $20^\circ$N and $40^\circ$N. Panel (h) shows that additional cells provide important contributions to variance near bathymetry ($\Psi^c_{AC}$). From Panel (d) we see that $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$ variation is an important explanatory factor in the ocean's interior, particularly in the southern hemisphere. The influence of Greenland boundary densities is visible in Panel (f) ($\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$) at high northern latitudes, as is the contribution from boundary densities within the Gulf of Mexico and Caribbean Sea (in Panel (g), $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$).
Figures \ref{f_v_CAR_1} and \ref{f_p_CAR_1} suggest that $\Psi^c_{AC}$ (Panels (h)) has a large role in explaining the variance (of small absolute magnitude) near bathymetry, but in a basin-wide context this contribution is small. In contrast, the scaled CAR scores for the Atlantic bottom velocity term $\Psi^{c,AB}_{{bot}}$ between $30^\circ$N and $45^\circ$N is large, but the corresponding local percentage contribution is relatively small in shallower water, and locally dominant at greater depths. The scaled CAR scores for Ekman, Atlantic bottom velocity and Atlantic boundary density contributions between $30^\circ$N and $45^\circ$N are all large, reflecting the large variance of $\Psi^c$ here. $\Psi^{c}_{{Ekm}}$ appears to be dominant down to almost 3000m in Figure \ref{f_v_CAR_1} Panel (c) but is confined to shallow waters in Figure \ref{f_p_CAR_1}, suggesting greater local influence of the Atlantic boundary densities and Atlantic bottom velocity components.
\subsubsection{Band-pass filtering admitting cycles with periods between 1 and 50 years}
The total length of the $1/4^\circ$ model simulation analysed here is 657 years. In simple terms, estimating periodic behaviour in the overturning streamfunction requires a data length corresponding to at least 10 cycles. Therefore, in this analysis we can only expect to identify cycles with periods of at most approximately 50 years with any confidence. With that caveat in mind, initially we apply a band-pass filter to eliminate cyclic variation with periods greater than 50 years.
\begin{figure*}[ht]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_1_50a.png}}
\caption[Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 50 years removed.]{Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 50 years removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{{bot}}$, (b) $\Psi^{c,GMC}_{{bot}}$, (c) $\Psi^c_{Ekm}$, (d) $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, (e) $\Psi^{c,MAR}_{W}$ + $\Psi^{c,MAR}_{E}$, (f) $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$, (g) $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$, (h) $\Psi^c_{AC}$ and (i) variance unexplained by MLR-CAR. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_v_CAR_1_50}
\end{figure*}
With contributions to the total variance of $\Psi^c$ at longer timescales (greater than 50 years) removed, we find in Figure \ref{f_v_CAR_1_50} a reduction in $\phi_i^*$ for $\Psi^{c,AB}_{W}$ and $\Psi^{c,AB}_{E}$ (Panel (d); c.f. Figure~\ref{f_v_CAR_1}). The reduction is particularly evident in the northern hemisphere tropical region, and also south of $18^\circ$S, suggesting longer term variation or spin-up in these boundary densities. Other plots (not shown) suggest that the southern hemisphere weakening can be attributed to a weaker contribution by $\Psi^{c,AB}_{W}$, whereas in the northern hemisphere we find a reduction of $\Psi^{c,AB}_{E}$. The contribution of $\Psi^{c,AB}_{{bot}}$ (Panel (a)) shows a slight reduction at mid-depths near $30^\circ$S and $17^\circ$N. However, $\Psi^{c,AB}_{{bot}}$ near $35^\circ$N is still a large factor in $\Psi^c$ variation. Variation of the Atlantic bottom velocity component $\Psi^{c,AB}_{{bot}}$ in this region is likely due to the separation of the Gulf Stream from the U.S. coast and its interaction with the southward flowing DWBC. We also note the disappearance of the $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$ contribution (Panel (f)) at depths in the southern hemisphere near $30^\circ$S. The percentage contribution of components is very similar to that shown in Figure \ref{f_p_CAR_1} (for unfiltered data), with the contribution of $\Psi^{c}_{{Ekm}}$ penetrating deeper, and a stronger $\Psi^{c,AB}_{{bot}}$ at depths to compensate for the reduction in $\Psi^{c,AB}_{W}$ and $\Psi^{c,AB}_{E}$ in both hemispheres.
\subsubsection{Band-pass filtering admitting cycles with periods between 1 and 10 years}
Figure \ref{f_v_CAR_1_10} shows the scaled CAR scores $\phi_i^*$ of each component $i$ for band-pass filtered annual-mean overturning streamfunction data, such that all variation corresponding to periods greater than 10 years is removed, illustrating contributions to the variability of $\Psi^c$ on short sub-decadal timescales.
Figure \ref{f_v_CAR_1_10} exhibits similar characteristics to those in Figures \ref{f_v_CAR_1} and \ref{f_v_CAR_1_50}, with weakening of each of the major contributions, most notably for $\Psi^{c,AB}_{W}$ and $\Psi^{c,AB}_{E}$. At these short timescales we see weaker Atlantic boundary density contributions ($\Psi^{c,AB}_{W}$, $\Psi^{c,AB}_{E}$) in the northern hemisphere, but with the contribution between $10^\circ$S and $20^\circ$S remaining relatively strong. The GMC density contribution shows a reduction, but remains important for shallow waters in the GMC region. The similarity of results using 1-50 year and 1-10 year band-pass filters indicates that variation on short timescales dominates within the Atlantic basin, as previously noted (e.g. \citealt{Buckley2016}).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_1_10a.png}}
\caption[Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed.]{Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{{bot}}$, (b) $\Psi^{c,GMC}_{{bot}}$, (c) $\Psi^c_{Ekm}$, (d) $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, (e) $\Psi^{c,MAR}_{W}$ + $\Psi^{c,MAR}_{E}$, (f) $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$, (g) $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$, (h) $\Psi^c_{AC}$ and (i) variance unexplained by MLR-CAR. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_v_CAR_1_10}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARp_1_10a.png}}
\caption[Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed.]{Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed. Panel details are given in Figure \ref{f_v_CAR_1_10} caption.}
\label{f_p_CAR_1_10}
\end{figure*}
Figure \ref{f_p_CAR_1_10} shows the corresponding percentage $100 \times \phi_i^*/\sum_j \phi_j^*$ for band-pass filtered annual-mean overturning streamfunction data, with all variation corresponding to periods greater than 10 years removed. The characteristics of the figure are similar to those of the unfiltered data in Figure \ref{f_p_CAR_1} and of the 1-50 year band-pass filtered data (not shown). The most notable difference (also seen in the 1-50 year band-pass filtered data) is a weakened relative contribution from $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$ in the northern hemisphere (Panel (d)), compensated for by somewhat increased $\Psi^{c,AB}_{{bot}}$ and $\Psi^{c}_{{Ekm}}$ contributions (Panels (a) and (c)). In the southern hemisphere, we find a greater $\Psi^{c,AB}_{{bot}}$ contribution at depth, compensating for a weaker Atlantic boundary density component at depth, and at southerly latitudes.
Figure \ref{f_v_CAR_1_10d} illustrates the scaled CAR scores $\phi_i^*$ for individual components $i$ corresponding to eastern and western boundary densities separately, in order to compare the relative contributions of those boundaries for 1-10 year band-pass filtered data. The figure shows that western boundary density variation dominates, in the Atlantic ($\Psi^{c,AB}_{W}$, Panel (a)) and GMC ($\Psi^{c,GMC}_{W}$, Panel (g)) for these short timescales; eastern boundary density variations do not contribute much to overturning variability. Similar spatial features are seen for the percentage contribution of the density boundary components (not shown); we note a relatively large contribution (50\%) from eastern MAR densities ($\Psi^{c,MAR}_{W}$) at high northern latitudes near Greenland. Inter-annual variation in the Brazil current could be the cause of the large contribution made by western Atlantic boundary densities near $10^\circ$S and $20^\circ$S. In the same region we find a relatively large contribution by $\Psi^{c,AB}_{{bot}}$, suggesting variability in local boundary currents.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_1_10d.png}}
\caption[Scaled CAR score contributions of density driven boundary components and $\Psi^{c,AB}_{{bot}}$ to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed.]{Scaled CAR score contributions of density driven boundary components and $\Psi^{c,AB}_{{bot}}$ to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods greater than 10 years removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{W}$, (b) $\Psi^{c,AB}_{E}$, (c) $\Psi^{c,MAR}_{W}$, (d) $\Psi^{c,R}_{W}$, (e) $\Psi^{c,R}_{E}$, (f) $\Psi^{c,MAR}_{E}$, (g) $\Psi^{c,GMC}_{W}$, (h) $\Psi^{c,GMC}_{E}$ and (i) $\Psi^{c,AB}_{{bot}}$. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_v_CAR_1_10d}
\end{figure*}
\subsubsection{Band-pass filtering admitting cycles with periods between 10 and 30 years}
Figures \ref{f_v_CAR_10_30} and \ref{f_p_CAR_10_30} show scaled CAR scores for boundary components, and their corresponding percentage contributions, for band-pass filtered annual-mean data with all variation corresponding to periods less than 10 years, or greater than 30 years, eliminated. The figure therefore illustrates the influence of each boundary component on the overturning streamfunction on timescales between 10 and 30 years.
For these timescales, in comparison with the 1-10 year case in Figure \ref{f_v_CAR_1_10}, we see markedly reduced contributions to the total variance of $\Psi^c$ from every component, due to the reduced variance in $\Psi^c$ (Figure \ref{f_v_CAR_T}); the structural features of Figures \ref{f_v_CAR_1_10} and \ref{f_v_CAR_10_30} are otherwise generally similar. Surprisingly, the contribution from $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$ at mid to high northern latitudes remains relatively large, compared with reduced contributions in the subtropics and particularly in the southern hemisphere between $10^\circ$S and $15^\circ$S.
The reduced contributions to total variance (Figure \ref{f_v_CAR_10_30}) impact the resulting percentage contribution made by each component, shown in Figure \ref{f_p_CAR_10_30}. Relative to the analysis of 1-10 year band-pass filtered data in Figure~\ref{f_p_CAR_1_10}, we observe a weakening of the contributions from $\Psi^{c}_{{Ekm}}$ and $\Psi^{c,AB}_{{bot}}$ (Panels (c) and (a)). This is compensated by an increase of boundary density contributions from the Atlantic and GMC (Panels (d) and (g)), supporting the view that the geostrophic component is more important to decadal variability of the overturning ($\Psi^c$) than to its interannual variations. The Atlantic bottom component ($\Psi^{c,AB}_{{bot}}$) remains influential for latitudes between $30^\circ$N and $50^\circ$N, and between $25^\circ$S and $35^\circ$S, where strong boundary currents are present.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARv_10_30a.png}}
\caption[Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years removed.]{Scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{{bot}}$, (b) $\Psi^{c,GMC}_{{bot}}$, (c) $\Psi^c_{Ekm}$, (d) $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, (e) $\Psi^{c,MAR}_{W}$ + $\Psi^{c,MAR}_{E}$, (f) $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$, (g) $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$, (h) $\Psi^c_{AC}$ and (i) variance unexplained by MLR-CAR. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_v_CAR_10_30}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARp_10_30a.png}}
\caption[Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years removed.]{Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years removed. Panel details are given in Figure \ref{f_v_CAR_10_30} caption.}
\label{f_p_CAR_10_30}
\end{figure*}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARp_10_30d.png}}
\caption[Percentage scaled CAR score contributions of boundary density components and $\Psi^{c,AB}_{{bot}}$ to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years, removed.]{Percentage scaled CAR score contributions of boundary density components and $\Psi^{c,AB}_{{bot}}$ to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 10 years, or greater than 30 years, removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{W}$, (b) $\Psi^{c,AB}_{E}$, (c) $\Psi^{c,MAR}_{W}$, (d) $\Psi^{c,R}_{W}$, (e) $\Psi^{c,R}_{E}$, (f) $\Psi^{c,MAR}_{E}$, (g) $\Psi^{c,GMC}_{W}$, (h) $\Psi^{c,GMC}_{E}$ and (i) $\Psi^{c,AB}_{{bot}}$. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components.}
\label{f_p_CAR_10_30d}
\end{figure*}
There is evidence for increased importance of the MAR density terms in the southern hemisphere in Panel (e) of Figure \ref{f_p_CAR_10_30}, barely visible in the corresponding panel of Figure \ref{f_p_CAR_1_10}. On closer analysis of constituent boundary density terms in Figure \ref{f_p_CAR_10_30d}, the MAR signal is seen to stem from the contribution of densities on the western side of the ridge, $\Psi^{c,MAR}_{E}$ (Figure \ref{f_p_CAR_10_30d} Panel (f)).
The decomposition shown in Figure \ref{f_p_CAR_10_30d} indicates the dominant percentage contributions of western boundary densities to the variance of 10-30 year band-pass filtered data, particularly within the interior of the basin; the contribution of $\Psi^{c,AB}_{W}$ dominates within the majority of the basin, $\Psi^{c,GMC}_{W}$ near the surface and $\Psi^{c,R}_{W}$ at high northern latitudes along the coast of Greenland. Panel (b) shows evidence for an increased contribution made by $\Psi^{c,AB}_{E}$ in the southern hemisphere, although it is of small magnitude. In general, the eastern boundary densities play a small role at these timescales too.
\subsubsection{Band-pass filtering admitting cycles with periods between 30 and 50 years}
Figure \ref{f_p_CAR_30_50} shows percentage scaled CAR scores for the boundary components, for band-pass filtered annual-mean data, with all variation corresponding to periods less than 30 years or greater than 50 years eliminated. As previously mentioned, at increased timescales we find weaker total variance, and hence weaker boundary contributions to the total variance (not shown). We find that the $\Psi^{c,AB}_{{bot}}$ and $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$ scaled CAR scores at 30-50 year timescales (not shown) exhibit a similar structure to that found in the corresponding panels of Figure \ref{f_v_CAR_10_30} (for the 10-30 year band-pass filter).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/Qg_CARp_30_50a.png}}
\caption[Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 30 years, or greater than 50 years removed.]{Percentage scaled CAR score contributions of boundary components to the variance of $\Psi^c$ for band-pass filtered annual-mean compensated overturning streamfunction data, with variation corresponding to periods less than 30 years, or greater than 50 years removed. Panels show the contribution made by (a) $\Psi^{c,AB}_{{bot}}$, (b) $\Psi^{c,GMC}_{{bot}}$, (c) $\Psi^c_{Ekm}$, (d) $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$, (e) $\Psi^{c,MAR}_{W}$ + $\Psi^{c,MAR}_{E}$, (f) $\Psi^{c,R}_{W}$ + $\Psi^{c,R}_{E}$, (g) $\Psi^{c,GMC}_{W}$ + $\Psi^{c,GMC}_{E}$, (h) $\Psi^c_{AC}$ and (i) variance unexplained by MLR-CAR. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry, latitudes near the equator, and latitudes not relevant for GMC components}
\label{f_p_CAR_30_50}
\end{figure*}
Compared to Figure \ref{f_p_CAR_10_30}, we note an increase of boundary density contributions (Panels (d) and (g)) at the expense of $\Psi^{c}_{{Ekm}}$ (Panel (c)) and $\Psi^{c,AB}_{{bot}}$ (Panel (a)). Further analysis (not shown) indicates that the boundary density contributions are dominated by $\Psi^{c,AB}_{W}$, with a slight intensification of the $\Psi^{c,AB}_{E}$ contribution between $7^\circ$S and $20^\circ$S in the upper 2000m. The increased $\Psi^{c,AB}_{W}$ contribution at decadal and longer timescales near $45^\circ$N suggests that density variations in DWBC properties and in dense waters formed in the Labrador Sea dominate the local variation in the overturning streamfunction in this model simulation. Similarly, multi-decadal variability in the southern hemisphere is almost solely dependent on western boundary densities. We find that with increasing timescales, the relative role of $\Psi^{c,AB}_{W}$ + $\Psi^{c,AB}_{E}$ increases within the basin, suggesting that if longer timescales were available we would find some agreement with \cite{Waldman2021}, who saw that at centennial timescales AMOC overturning streamfunction variability in the CNRM-CM6 model is dominated by the thermal wind component (greater than 80\%), and the Ekman component is negligible.
We note an area of increased $\Psi^{c,AB}_{{bot}}$ contribution in the upper 1000m at $30^\circ$N. Interestingly we find that the contribution of southern hemisphere MAR densities at depth in Panel (e) is weaker for these timescales; in contrast, a stronger contribution for latitudes between $10^\circ$N and $20^\circ$N is observed. Further analysis (not shown) indicates that this is due to reduction in $\Psi^{c,MAR}_{E}$ (relative to that observed in the 10-30 year band-pass timescale) in the southern hemisphere, and increase in $\Psi^{c,MAR}_{W}$ at depth near $15^\circ$N at timescales between 30 and 50 years.
\subsection{Summary of MLR-CAR score analysis for 1$^\circ$ and 1/12$^\circ$ models}
The MLR-CAR score analysis was repeated for the $1^\circ$ and $1/12^\circ$ models, and inferences are summarised here. The analysis procedure is similar to that reported above with one exception. Density contributions for the $1^\circ$ and $1/12^\circ$ models are now assigned to western and eastern basin boundary terms only, in contrast to the regional separation (to Atlantic, MAR and GMC) applied to the $1/4^\circ$ model data. This gives a somewhat less detailed description of the boundary density contributions to the overturning variability. Further, we note that for the $1^\circ$ and $1/12^\circ$ simulations the timeseries are shorter than for the $1/4^\circ$ simulation. As a result, the uncertainty in our estimates increases, especially for the longer band-pass timescales considered.
For percentage scaled CAR scores, the impact of model resolution is minimal; in general, similar cross-basin features are present for component contributions at all resolutions. The relative importance of the Atlantic bottom component $\Psi^c_{bot}$ appears to increase with increased resolution, especially in the interval $30^\circ$N to $45^\circ$N, due to increased fidelity and strength of bottom velocities. $\Psi^c_E$, representing all eastern boundary density contributions, increases for the $1^\circ$ model at the 10-30 year timescale and for the $1/12^\circ$ model at the 30-50 year timescale, for depths around 1000-3000m between $7^\circ$S and $25^\circ$S. However, the relatively short lengths of the timeseries from the $1^\circ$ and $1/12^\circ$ models suggest that we should interpret these findings more cautiously than those for the $1/4^\circ$ model.
The total variance of the overturning streamfunction $\Psi^c$ is shown in Figure \ref{f_vRes_CAR}, for all model resolutions and three band-pass timescales (including unfiltered annual-mean data). For unfiltered data in particular, the total variance of $\Psi^c$ is considerably greater for $1/4^\circ$ and $1/12^\circ$ model resolutions. Interestingly, the region of high variance around $37^\circ$N seen in the $1/4^\circ$ and $1/12^\circ$ models is not present at $1^\circ$, regardless of the band-pass timescale.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_AMOC/ResComp_TV.png}}
\caption[Total variance of $\Psi^c$ for different model resolutions and band-pass filtered periods.]{Total variance of $\Psi^c$ for different model resolutions and band-pass filters. (a) unfiltered $1^\circ$, (b) unfiltered $1/4^\circ$, (c) unfiltered $1/12^\circ$, (d) band-pass filtered 1-10 years $1^\circ$, (e) band-pass filtered 1-10 years $1/4^\circ$, (f) band-pass filtered 1-10 years $1/12^\circ$, (g) band-pass filtered 10-30 years $1^\circ$, (h) band-pass filtered 10-30 years $1/4^\circ$ and (i) band-pass filtered 10-30 years $1/12^\circ$. Estimation is not possible in shaded grey regions corresponding to one or more of bathymetry and latitudes near the equator.}
\label{f_vRes_CAR}
\end{figure*}
In terms of scaled CAR score contribution to the total variance of $\Psi^c$, we find for short timescales (band-pass filtered 1-10 years, not shown) a weaker contribution from $\Psi^c_W$ + $\Psi^c_{E}$ for the $1^\circ$ model especially in the southern hemisphere, reflecting the lower total variance of $\Psi^c$ at this coarsest resolution (Figure \ref{f_vRes_CAR}).
Scaled CAR score boundary contributions of the $1/12^\circ$ model show similar features to those of the $1/4^\circ$ model. The influence of $\Psi^c_{bot}$ penetrates to even shallower water near $37^\circ$N. At longer timescales we find that contributions decrease in a similar fashion to the $1/4^\circ$ model, except for the contribution of $\Psi^c_{bot}$ in a small region between 500-4000m at $37^\circ$N, which is uncharacteristically strong for both the 10-30 year and 30-50 year band-pass timescales (reflected in Panels (f) and (i) of Figure \ref{f_vRes_CAR}). The unexplained variance is also relatively large within the upper 1000m at $37^\circ$N. Figure \ref{p_RC_Bvel} of Chapter \ref{TJ_TM} shows large positive and negative meridional velocities near $37^\circ$N for the $1/12^\circ$ model. These changes in velocities are likely to be due to the New England or Corner Rise seamounts, two chains of extinct volcanic seamounts off the coast of Massachusetts. These seamounts do not appear in the $1^\circ$ model bathymetric or bottom velocity data. They are partially resolved in the $1/4^\circ$ model, but not to the same extent as in the $1/12^\circ$ model. This suggests that the ability of the $1/12^\circ$ model to resolve smaller-scale features results in greater variability in this region.
\section{Summary}
In this chapter, we have investigated the spatial and temporal variability of the AMOC and factors contributing to it.
The decomposition diagnostic has been shown to give good quantification of the maximum overturning streamfunction at $26.5^\circ$N and $34.5^\circ$S, compared with observations taken from the RAPID and SAMBA arrays, and the expected maximum of the compensated overturning streamfunction $\Psi^c_{\max}$ calculated directly from meridional velocities. The maximum of the compensated estimated overturning streamfunction ($\tilde{\Psi}^c_{\max}$) explains a large proportion of the temporal variability in the maximum compensated expected overturning streamfunctions. We find high correlation between thermal wind component ($\Psi^c_{th_{\max}} = \Psi^c_{W_{\max}}+\Psi^c_{E_{\max}}$, sum of western and eastern boundary density components) and the depth-independent component ($\Psi^c_{bot_{\max}}$, bottom) at SAMBA, especially within high resolution models, clearly visible as large oscillations in the $1/4^\circ$ model.
We considered the role of SSH, bottom velocities and bottom densities for (a) years with maxima and minima in $\Psi^c_{bot_{\max}}$ and (b) specific time-intervals of increasing or decreasing $\Psi^c_{bot_{\max}}$. We find that north-south variation of the latitude of the Brazil-Malvinas confluence impacts western boundary bottom velocities and densities resulting in fluctuation in depth-independent (bottom) and thermal wind (density) contributions to the estimated maximum compensated overturning streamfunction at SAMBA.
Standard deviations of bottom densities at increasing timescales reveal large variations along the western boundary (regions of strong currents) within both higher resolution models, showing larger values of 10-year variability $\overline{s}_{10}$ and 40-year variability $\overline{s}_{40}$ in the Brazil-Argentine basins. Bottom velocities also show large 10-year variability $\overline{s}_{10}$ at high resolutions especially in regions of strong currents (such as the North Brazil, Gulf Stream, North Equatorial currents).
A multiple linear regression model incorporating Correlation Adjusted coRrelation scores (MLR-CAR, \citealt{Zuber2010}) was used to estimate the contributions of boundary components to the total variance of the expected compensated overturning streamfunction $\Psi^c$, using timeseries subject to different band-pass filters, to expose the influence of different timescales. At all timescales, large scaled CAR score contributions to the total variance of $\Psi^c$ are estimated for (a) the Ekman contribution, especially in the upper 2500m for the northern hemisphere, (b) the Atlantic boundary depth-independent (bottom velocity, $\Psi^{c,AB}_{{bot}}$) contribution at mid-depths between $30^\circ$N and $45^\circ$N, and (c) the western Atlantic boundary density component within the majority of the ocean's interior. When the influence of longer timescales are removed from the data (e.g. using a 1-10 year band-pass filter), western (mainly southern hemisphere) and eastern (northern hemisphere) Atlantic boundary density components show a reduced contribution to the total variance. The western Gulf of Mexico and Caribbean Sea boundary density contribution is particularly large at shallow depths, suggesting that surface variance of the total expected overturning streamfunction is dictated by densities in the Gulf of Mexico and Caribbean Sea at these latitudes.
MLR-CAR results based on band-pass filtered data, for different band-pass intervals, suggest that the largest contributions to the total variance of the expected overturning streamfunction occur at short timescales (annual, multi-year). Total variance of the expected compensated overturning streamfunction within the interior of the southern hemisphere is (western) boundary-density-dominated, with an important Atlantic boundary depth-independent contribution at depth, and the Ekman contribution dominant near the surface. Large variation is found due to the Brazil current, visible within the western Atlantic boundary density and Atlantic bottom velocity terms. In the northern hemisphere, contributions are evenly distributed between the Ekman, western Atlantic boundary density and Atlantic boundary depth-independent contributions, with the latter prevalent throughout the fluid column near $35^\circ$N, possibly due to the interaction of the DWBC and Gulf Stream. The contribution of additional cells is small on a basin-wide scale, but locally crucial in explaining the variance of the expected compensated overturning streamfunction near bathymetry. At longer timescales, the relative importance of the western Atlantic boundary densities increases in the southern hemisphere interior. Interesting localised contributions from eastern and western mid-Atlantic ridge boundary density contributions are observed near the equator, possibly connected to Antarctic bottom water moving northward through fracture zones. There is some evidence for important boundary density contributions along the Greenland coast, shown via the boundary density remainder component.
Estimates of the total variance of the expected overturning streamfunction are considerably larger for the $1/4^\circ$ and $1/12^\circ$ model resolutions than for the $1^\circ$ model. However, the impact of model resolution is found to be minimal for percentage scaled CAR score contributions of boundary components.
\chapter*{Acknowledgments}
\vspace{60pt}
\begin{center}
\textit{``Dyfal donc a dyr y garreg.''}\\
\hspace{150pt}Welsh proverb
\end{center}
\vspace{60pt}
Thanks to my supervisors Helen, David and Mike for their support at Oxford and the Met Office over the last four years. Thanks further to Mike, Dave and Pat for their guidance and welcome at the Met Office. Diolch i fy nheulu am bopeth, yn enwedig i mam, dad a Emma am eu help, cefnogaeth ac am ddyfalbarhau tan y diwedd.
\vspace{60pt}
\begin{center}
\textit{``Po\v{r}\'adn\'a kn\'i\v{z}ka nen\'i pro to, aby \v{c}ten\'a\v{r} l\'ip usnul, ale vysko\v{c}il z postele a rovnou v podvl\'ika\v{c}k\'ach b\v{e}\v{z}el panu spisovateli napl\'acat dr\v{z}ku.''}\\
\hspace{150pt}Bohumil Hrabal
\end{center}
\chapter{Appendix}
\label{TJ_App}
\section{Reliability of decomposition time-mean estimate for AMOC and the role of additional cells} \label{App_SpatVar}
Suppose we have observations $\{\Psi_i\}_{i=1}^n$ of the time-mean total overturning at a depth-latitude combination indexed by $i$, and observations $\{\Psi_i^*\}_{i=1}^n$ of the time-mean interior transport, which ignores contributions from partial and sidewall cells. Suppose we also have estimates $\{\hat{\Psi}_i\}_{i=1}^n$ of the time-mean total overturning from the AMOC decomposition model (also ignoring partial and sidewall cells). We would like to quantify and compare how well our estimate $\hat{\Psi}$ explains $\Psi$ and $\Psi^*$. We can do this by estimating the \textbf{mean} $m$ and \textbf{variance} $v$ of the difference between $\Psi$ or $\Psi^*$ and $\hat{\Psi}$, using the equations
\begin{eqnarray}
m(x) &=& \frac{1}{n} \sum_{i=1}^{n} x_i \\
v(x) &=& \frac{1}{n} \sum_{i=1}^{n} \left( x_i - m(x) \right) ^2
\end{eqnarray}
where $x$ refers to the difference between streamfunction estimates, and the summation is made over all latitudes and depths in the cross-section. We find that for the $1/4^\circ$ model
\begin{eqnarray}
\mu &=& m(\Psi-\hat{\Psi}) = -0.056 \text{ Sv} \nonumber \\
\sigma^2 &=& v(\Psi-\hat{\Psi}) = 1.018 \text{ Sv}^2 \nonumber \\
\mu^* &=& m(\Psi^*-\hat{\Psi}) = -0.161 \text{ Sv} \nonumber \\
\sigma^{*2} &=& v(\Psi^*-\hat{\Psi}) = 0.458 \text{ Sv}^2. \label{e_A_BiasVar}
\end{eqnarray}
Results show that the variance of the interior transport error $\Psi^*-\hat{\Psi}$ is less than half the variance of the full transport error $\Psi-\hat{\Psi}$, indicating that additional cells are responsible for increasing the uncertainty of our estimate for $\Psi$. However, the additional contributions also reduce the magnitude of the bias (or mean error) from $0.161$ Sv for $\Psi^*$ to $0.056$ Sv for $\Psi$.
Assuming the true value $\Psi$ follows a Gaussian distribution given $\hat{\Psi}$, we can write
\begin{eqnarray}
\Psi &=& \hat{\Psi} + \mu + \sigma \times \epsilon \nonumber \\
\Psi^* &=& \hat{\Psi} + \mu^* + \sigma^* \times \epsilon
\end{eqnarray}
where $\epsilon$ is a random number with a standard Gaussian distribution (with mean zero and variance one). Our data also shows that
\begin{eqnarray}
v(\Psi) &=& 24.427 \text{ Sv}^2 \nonumber \\
v(\Psi^*) &=& 21.466 \text{ Sv}^2 .
\end{eqnarray}
Comparing $v(\Psi)$ with $v(\Psi-\hat{\Psi})$ (or $v(\Psi^*)$ with $v(\Psi^*-\hat{\Psi})$), we see that our estimate explains most of the variability of time-mean transport. For example, the percentage of the variance of $\Psi$ explained by $\hat{\Psi}$ is $(1 - 1.018/24.427) \times 100 = 95.8 \%$. The percentage of the variance of $\Psi^*$ explained by $\hat{\Psi}$ is $(1 - 0.458/21.466 ) \times 100 = 97.9 \%$.
\subsection*{Percentage variance explained}
In more general terms, the percentage variance explained by estimate $Y$ of variable $X$ is calculated using
\begin{eqnarray}
1-\frac{\sum_{i=1}^n \left( (x_i - \overline{x}) - (y_i - \overline{y}) \right) ^2}{\sum_{i=1}^n (x_i - \overline{x})^2}
\label{e_PrcVarExp}
\end{eqnarray}
where $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$ are samples of values for $X$ and $Y$ of size $n$, and $\overline{x}$ and $\overline{y}$ are sample means, which might be observed over time or space or both.
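For concreteness, a minimal NumPy sketch of this calculation is given below; the function name and inputs are illustrative only. Applied to the gridded fields, it should reproduce the percentages quoted above (for instance $1 - 1.018/24.427 \approx 95.8\%$ for $\Psi$ given $\hat{\Psi}$).
\begin{verbatim}
import numpy as np

def percentage_variance_explained(x, y):
    # Percentage of the variance of X explained by the estimate Y,
    # following the anomaly-based definition above: both fields are
    # centred on their own sample means before differencing.
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    num = np.sum(((x - x.mean()) - (y - y.mean())) ** 2)
    den = np.sum((x - x.mean()) ** 2)
    return 100.0 * (1.0 - num / den)
\end{verbatim}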
\section{Effect of Coriolis acceleration and bottom drag on the estimates of the overturning streamfunction}
\label{App_Coriolis}
Analysis of the overturning streamfunctions in Chapter \ref{TJ_TM} shows a depth-dependent difference for numerous latitudes between the estimated $\tilde{\Psi}^c$ and expected $\Psi^c$ streamfunctions, which increases as the surface is approached. This suggests the possibility that small errors near the bottom of the fluid column are propagated upwards due to vertical integration in overturning streamfunction calculations, leading to large differences near the surface.
Possible sources of error could be the simplification of the Coriolis acceleration ($fv$) calculation near bathymetry, or lack of consideration of bottom friction and drag within the diagnostic framework. These influences can be accounted for as additional terms within the geostrophic balance equation (Equation \ref{HB}), satisfied by the NEMO model. We extend the geostrophic balance equation to include two correction terms $B$ and $C$. $B$ represents a bottom drag correction and $C$ a correction for the simplified treatment of Coriolis acceleration. Informally, the extended equation is
\begin{equation}
\frac{\partial P}{\partial x} = \rho_0 f(y) v_{4pt} + C + B
\label{e_HBext}
\end{equation}
where the Boussinesq approximation is applied ($\rho \rightarrow \rho_0$) and $v_{4pt}$ refers to the northward velocity calculated at the $u$-point using a 4-point average of the local northward velocities at the neighbouring $v$-points.
Although there is a contribution to the overturning streamfunction from bottom drag, we find that the coarse vertical grid structure, especially at depth, renders any inclusion of a bottom drag term within the geostrophic balance equation of negligible value. The contribution of the bottom drag only influences cells adjacent to bathymetry, which are generally additional cells. However, discrepancies between estimated $\tilde{\Psi}^c$ and expected $\Psi^c$ streamfunctions are also found for complete interior cells, which are not influenced by bottom drag. We therefore set $B\approx0$.
Currently, the NEMO model (\citealt{Madec2016}) uses an Energy and Enstrophy conserving scheme (EEN, Equation \ref{EEN_fv}) for the Coriolis term. The Coriolis acceleration term within the \textit{EEN} scheme is calculated using a 12-point average, taking the form of 4 sets of triads (3 points in an ``L'' shape) which together form a weighted average for $fv$ at each $u$-point, according to
\begin{equation}
\label{EEN_fv}
(q v^*)_{EEN} = \frac {2}{3} \overline{ ( \overline{q}^x v^* ) }^{xy}
+ \frac {2}{3} \left( { \overline{q}^y \overline{v}^{*xy} } \right)
- \frac {1}{3} \overline{ \left( { q \overline{v}^{*x} } \right) }^y , \\
\end{equation}
where $v^* = H_v v$ is the flux through a cell face and $H_v$ is the depth of the cell face. Further, $q = (f + \zeta)/H_{\zeta}$, where $\zeta$ is the vorticity and $H_{\zeta}$ is the cell depth at the vorticity point. The $12$ contributions to $(fv)_{EEN}$ in the \textit{EEN} scheme take the form $\frac{f}{H_f}H_v v$, where $H_f$ and $H_v$ are the depths of a grid cell at $f$ and $v$ points respectively. The resulting Coriolis acceleration is equal to the sum of weighted values of $v$ at the surrounding $4$ points, with unequal weightings. The Coriolis acceleration correction in Equation \ref{e_HBext} is
\begin{eqnarray}
C = (fv^*)_{EEN} - fv_{4pt}.
\end{eqnarray}
The greater number of calculation locations within the EEN scheme improves the resulting overturning streamfunction estimates, reducing errors caused by bathymetry in places where the simplified 4-point velocity method performs poorly. Improvement in estimated overturning streamfunction is found especially in regions of rough bathymetry; these improvements are, however, small in comparison to the overall differences $\Psi^c - \tilde{\Psi}^c$ in time-mean overturning streamfunction present.
\section{Difficulty in decoupling western and eastern contribution to the maximum overturning streamfunction}
\label{App_MaxWestEast}
In Figures \ref{p_Qtd_MxStrmLtt} and \ref{p_RC_BdryMxStrmLtt}a of Chapter \ref{TJ_TM} we find $\Psi^c_{{th}_{\max}}$ (i.e. the sum of the eastern and western boundary components) exhibits significant variability with latitude. Examination of the individual eastern and western boundary components is difficult due to the magnitude and close coupling found between the components (Figure \ref{p_MaxStrm_WE}). For brevity, we choose to refer to these quantities as $\Psi^c_{E_{\max}}$ and $\Psi^c_{W_{\max}}$, evaluated at depth $\tilde{d}_{\max}^c$ of the maximum estimated streamfunction.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/plt_Cmp_MaxStrm_tD_WE.png}}
\caption[Western and eastern boundary component contributions to the maximum estimated compensated overturning streamfunction with respect to latitude for the $1/12^\circ$ model.]{Western and eastern boundary component contributions to the maximum estimated overturning streamfunction with respect to latitude for the $1/12^\circ$ model. Panel (a) shows the actual values for the components, (b) the absolute values of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ components and (c) the absolute ratio of $\Psi^c_{{W}_{\max}}$ relative to $\Psi^c_{{E}_{\max}}$. Western boundary component shown in pink, eastern in blue and their sum in orange.}
\label{p_MaxStrm_WE}
\end{figure*}
Figure \ref{p_MaxStrm_WE} highlights the contribution of $\Psi^c_{{W}_{\max}}$, $\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{th}_{\max}}$ components to the maximum estimated overturning streamfunction ($\tilde{\Psi}^c_{\max}$) for each latitude for the $1/12^{\circ}$ model. Panel (a) shows the actual values for the components, (b) the absolute values of $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ components and (c) the absolute ratio of $\Psi^c_{{W}_{\max}}$ relative to $\Psi^c_{{E}_{\max}}$. We find different behaviour in each of three latitude intervals. First, throughout the southern hemisphere, $\Psi^c_{{W}_{\max}}$ is positive, and its magnitude is greater than the (negative-valued) $\Psi^c_{{E}_{\max}}$ (Figure \ref{p_MaxStrm_WE}(a,b)); this leads to a significant northward $\Psi^c_{{th}_{\max}}$ contribution. Next, at low latitudes in the northern hemisphere up to $40^\circ$N, sign reversal of $\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$ components occurs due to the change in Coriolis parameter relative to the southern hemisphere, and now the (positive-valued) $\Psi^c_{{E}_{\max}}$ is of greater magnitude; this again leads to a northward volume transport. Finally, at high northern latitudes, the magnitudes of both $\Psi^c_{{W}_{\max}}$ and $\Psi^c_{{E}_{\max}}$ reduce dramatically. We observe another sign-reversal at around $40^\circ$N, and also a relatively larger $\Psi^c_{{W}_{\max}}$ contribution. Between latitudes $50^\circ$N and $60^\circ$N, the magnitude of $\Psi^c_{{E}_{\max}}$ is considerably smaller than that of $\Psi^c_{{W}_{\max}}$.
We further investigate the characteristics of the red line in Figure \ref{p_RC_BdryMxStrmLtt}(a) for two latitude bands where a large change in $\Psi^c_{{th}_{\max}}$ is seen, corresponding to latitudes $7^\circ$N to $17^\circ$N and $24^\circ$N to $36^\circ$N. Within these latitude ranges we find large changes in the $\Psi^c_{{th}_{\max}}$ contribution to $\tilde{\Psi}^c_{\max}$. Figure \ref{p_MaxStrm_WE_inv} emphasises the difficulty in decoupling $\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$, and attributing a change in $\Psi^c_{{th}_{\max}}$ to a strengthening or weakening in either of $\Psi^c_{{E}_{\max}}$ or $\Psi^c_{{W}_{\max}}$. The figure shows actual and absolute $\Psi^c_{{E}_{\max}}$ or $\Psi^c_{{W}_{\max}}$ for the two intervals of interest.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/plt_Cmp_MaxStrm_tD_WE_inv.png}}
\caption[$\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$ contributions to $\tilde{\Psi}^c_{\max}$ with respect to latitude for the $1/12^\circ$ model.]{$\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$ contributions to $\tilde{\Psi}^c_{\max}$ with respect to latitude for the $1/12^\circ$ model. Panels (a) and (b) refer to latitudes $7^\circ$N to $17^\circ$N and Panels (c) and (d) refer to latitudes $24^\circ$N to $36^\circ$N. $\Psi^c_{{W}_{\max}}$ is given in pink, and $\Psi^c_{{E}_{\max}}$ in blue. Their sum ($\Psi^c_{{th}_{\max}}$) is given in orange. Panels (b) and (d) show the absolute values of boundary components in Panels (a) and (c).}
\label{p_MaxStrm_WE_inv}
\end{figure*}
From Figure \ref{p_MaxStrm_WE_inv}(b), we attribute the peak in $\Psi^c_{{th}_{\max}}$ at approximately $8^\circ$N to an apparent relative reduction in the magnitude of $\Psi^c_{{W}_{\max}}$, which is not as pronounced at higher latitudes, resulting in a subsequent reduction in the overall northward $\Psi^c_{{th}_{\max}}$ contribution as latitude increases here.
For the majority of the northern hemisphere (up to $40^\circ$N), $\Psi^c_{{E}_{\max}}$ has a greater magnitude. In the interval $24^\circ$N to $27^\circ$N, the magnitudes of both $\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$ reduce significantly. However, the magnitude of $\Psi^c_{{W}_{\max}}$ reduces more quickly at around $25^\circ$N, resulting in an enhanced $\Psi^c_{{th}_{\max}}$ there. In the interval between $25^\circ$N to $27^\circ$N, the magnitudes of $\Psi^c_{{E}_{\max}}$ and $\Psi^c_{{W}_{\max}}$ contributions reduce in tandem resulting in a weakening $\Psi^c_{{th}_{\max}}$.
Between $30^\circ$N and $35^\circ$N, $\Psi^c_{{E}_{\max}}$ is largest in magnitude. Around $31.5^\circ$N, a relatively strong $\Psi^c_{{W}_{\max}}$ leads to a weakening in $\Psi^c_{{th}_{\max}}$. North of $31.5^\circ$N, $\Psi^c_{{W}_{\max}}$ weakens to a small value, resulting in a relatively large overall northward transport. Near $36^\circ$N, $\Psi^c_{{E}_{\max}}$ becomes negative, but $\Psi^c_{{W}_{\max}}$ positive, again resulting in a small northward transport.
\section{Decomposition of SAMBA timeseries}
\label{App_SmbRap}
Table \ref{Tab_SVSmb} shows the temporal variation of $\Psi^c_{\max}$ explained by $\tilde{\Psi}^c_{\max} - \Psi_{AC_{\max}}^c$ and $\tilde{\Psi}^c_{\max}$ calculated using Equation \ref{e_PrcVarExp} for the SAMBA array timeseries, discussed in Chapter \ref{TJ_Var}. With $\Psi_{AC}^c$ included, percentage variance explained does not improve materially with model resolution. Without additional cells, the percentage variance explained is better than for RAPID timeseries in general, but particularly poor results are found for the $1/4^\circ$ resolution.
\begin{table}[h!]
\centering
\begin{tabular}{ |P{2.7cm}||P{2.7cm}|P{2.7cm}|P{2.7cm}| }
\hline
\multicolumn{4}{|c|}{Temporal variance of $\Psi^c_{\max}$ explained at SAMBA ($34.5^\circ$S)} \\
\hline
& $1^\circ$ & $1/4^\circ$ & $1/12^\circ$ \\
\hline
$\tilde{\Psi}^c_{\max} - \Psi_{AC_{\max}}^c$ &$97.8\%$ & $62.5\%$ & $88.1\%$\\
$\tilde{\Psi}^c_{\max}$ & $99.4\%$ & $91.0\%$ & $92.3\%$\\
\hline
\end{tabular}
\caption{The role of additional cell transport contributions at SAMBA array ($34.5^\circ$S), calculated at the depth of maximum estimated overturning streamfunction: temporal variation of $\Psi^c_{\max}$ explained by $\tilde{\Psi}^c_{\max} - \Psi_{AC_{\max}}^c$ and $\tilde{\Psi}^c_{\max}$ calculated using Equation \ref{e_PrcVarExp}.}
\label{Tab_SVSmb}
\end{table}
\subsection*{Correlation between components at SAMBA}
Investigating the correlation between all boundary components at the maximum estimated streamfunction depth (as used in Table \ref{Tab_SVSmb}) reveals a strong relationship between $\Psi^c_{th_{\max}}$ and $\Psi^c_{bot_{\max}}$, especially within the $1/4^\circ$ and $1/12^\circ$ models. Results are shown in Table \ref{Tab_CrSmb}.
\begin{table}[h!]
\centering
\begin{tabular}{ |P{3.3cm}||P{2.7cm}|P{2.7cm}|P{2.7cm}| }
\hline
\multicolumn{4}{|c|}{Correlation between components at SAMBA ($34.5^\circ$S)} \\
\hline
& $1^\circ$ & $1/4^\circ$ & $1/12^\circ$ \\
\hline
$\Psi^c_{\max}$ vs. $\tilde{\Psi}^c_{\max}$ &$1.00$ & $0.96$ & $0.96$\\
$\tilde{\Psi}^c_{\max}$ vs. $\Psi^c_{th_{\max}}$ & $0.58$ & $0.45$ & $0.62$\\
$\tilde{\Psi}^c_{\max}$ vs. $\Psi^c_{bot_{\max}}$ & $0.32$ & $0.01$ & $-0.15$\\
$\Psi^c_{th_{\max}}$ vs. $\Psi^c_{bot_{\max}}$ & $-0.46$ & $-0.86$ & $-0.84$\\
$\Psi^c_{th_{\max}}$ vs. $\Psi^c_{W_{\max}}$ & $0.17$ & $0.65$ & $0.39$\\
$\Psi^c_{th_{\max}}$ vs. $\Psi^c_{E_{\max}}$ & $0.68$ & $-0.03$ & $-0.16$\\
\hline
\end{tabular}
\caption[Correlation between components at SAMBA array.]{Correlation between components at SAMBA array ($34.5^\circ$S), calculated at the depth of the maximum estimated overturning streamfunction $\tilde{\Psi}^c_F$, for all model resolutions.}
\label{Tab_CrSmb}
\end{table}
Lagged correlations between $\Psi^c_{th_{\max}}$ and $\Psi^c_{bot_{\max}}$ are no larger than the values shown in Table \ref{Tab_CrSmb}.
\section{Normalised variation in standard deviation \textbf{$\sigma_{sp}^*$} of Atlantic bottom densities and velocities}
\label{Var_BD_Nrm}
The normalised variation $\sigma_{sp}^*$ in the standard deviation $\overline{s}_p$ is introduced in Section \ref{Std:Thr}. Using Equation \ref{e_sigma_sp}, we calculate $\sigma_{sp}^*$ for in-situ bottom densities for various periods $p$. $\sigma_{sp}^*$ for $p=10$ and $40$ years is shown in Figures \ref{p_SS_p10} and \ref{p_SS_p40} respectively. Figure \ref{p_SS_p10} indicates large variation between model resolutions. We find that the majority of the interior has a small $\sigma_{s10}^*$ value, for all resolutions, suggesting insensitivity to the starting year. In contrast, near the boundaries, especially in the west, we find large values of $\sigma_{s10}^*$, particularly in the $1^\circ$ and $1/12^\circ$ models, indicating that $\overline{s}_{10}$ in western boundary currents is sensitive to the initial starting year.
The $1^\circ$ model shows large values along the majority of the western boundary, including the shores of Greenland; the same cannot be said for the $1/4^\circ$ model. One could argue that a similar structure is found at $1/4^\circ$, but with much smaller magnitudes; regions of strong $\sigma_{s10}^*$ are isolated to (a) northern Brazil and (b) northern Gulf of Mexico. The $1/12^\circ$ model shows large values of $\sigma_{s10}^*$ near (a) Malvinas current, (b) northward Brazilian current, (c) northern Gulf of Mexico and (d) the region of the Gulf Stream. All of these are locations of boundary currents and therefore regions of high variability; the chosen start year will have greater impact on $\overline{s}_{10}$.
\begin{figure}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/SigmaStar_10_Bd.png}}
\caption[Normalised variation in standard deviation, $\sigma_{s10}^*$ for in-situ bottom density for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models.]{Normalised variation in standard deviation, $\sigma_{s10}^*$ for in-situ bottom density. Panel (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ show $\sigma_{s10}^*$ for each model resolution within the Atlantic basin, using observation periods of $p=10$ years.}
\label{p_SS_p10}
\end{figure}
Figure \ref{p_SS_p40} shows a clear increase in $\sigma_{s40}^*$ for both $1/4^\circ$ and $1/12^\circ$ model resolutions, compared with $\sigma_{s10}^*$. For the $1/4^\circ$ model, increasing $p$ from 10 to 40 years results in greater normalised variation in the standard deviation $\sigma_{sp}^*$ off the eastern coast of the United States. This characteristic is also reflected in the $1/12^\circ$ model, for which large values of $\sigma_{s40}^*$ occur along the majority of the western boundary. In contrast to $\sigma_{s10}^*$, we find $\sigma_{s40}^*$ has much larger values within the interior of the southern hemisphere basin for both resolutions, especially evident in the $1/12^\circ$ model (Figure \ref{p_SS_p40}(b)) at (a) south of the Walvis ridge, off the south-western flank of Africa, and (b) the Brazil and Argentine basins.
\begin{figure}[ht!]
\centerline{\includegraphics[width=12cm]{Fig_App/SigmaStar_40_Bd.png}}
\caption{Normalised variation in standard deviation $\sigma_{s40}^*$ for bottom densities. Panel (a) $1/4^\circ$ and (b) $1/12^\circ$ show $\sigma_{s40}^*$ for each model resolution within the Atlantic basin, using observation periods of $p=40$ years.}
\label{p_SS_p40}
\end{figure}
Figure \ref{p_SSbv_p10} shows the normalised variation $\sigma_{s10}^*$ in the standard deviation for bottom velocities, with $\sigma_{s10}^*$ larger for the coarser resolution models. The $1^\circ$ model in Panel (a) exhibits the largest values at high northern latitudes, potentially due to poorer representation of the subpolar gyre and DWBC. Large values of $\sigma_{s10}^*$ suggest sensitivity to the initial year used for the $\overline{s}_{10}$ calculation. Therefore both coarser models suggest larger discrepancies in bottom velocities in the first few years, whereas the $1/12^\circ$ model shows bottom velocities, especially in the DWBC, are relatively stable for this initial period. The sensitivity of the $1^\circ$ model supports the larger linear trend in the subpolar region, indicating significant changes in the DWBC and likely deep water formation in the Labrador and Nordic Seas during the first 10 years. For longer periods ($\sigma_{s40}^*$, not shown), we find a similar spatial structure for both higher resolution models, with the $1/12^\circ$ model showing increased magnitudes of $\sigma_{s40}^*$, suggesting greater sensitivity to the starting year from year 10 to 40.
\begin{figure}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/SigmaStar_10_Bv.png}}
\caption{Normalised variations in standard deviation $\sigma_{s10}^*$ for bottom velocities. Panel (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ for each model resolution within the Atlantic basin, using observation periods of $p=10$ years.}
\label{p_SSbv_p10}
\end{figure}
\section{Reconstruction of isopycnal structure using depth profiles from a limited number of locations}
\label{Bdry_MCMC_App}
There is evidence in Chapter \ref{TJ_Bdry} that the isopycnal structure on an eastern boundary of an ocean basin is less complex, as a function of latitude and depth, than on the western boundary. For example, on the eastern boundary of the Atlantic, we hypothesise that density profiles with depth over a large range of latitudes can be approximated by just a small number of depth profile measurements at well-chosen locations. Using just these density profile measurements, linear interpolation is then adequate to estimate density profiles at other latitudes of interest. This would yield a simple, useful low-order reconstruction of eastern boundary density in particular. In this section we seek to quantify the extent to which this is the case on both the eastern and western boundaries of the Atlantic.
Briefly, we estimate a piecewise linear model for time-mean density $\rho$ (over all years of data available) as a function of latitude $y$ and depth $z$, independently for each of the eastern and western boundaries discussed in Section \ref{Bdry_EW}. We aim to use the minimum number of linear pieces to reconstruct the density structure to some pre-specified quality. We expect that, if the eastern boundary exhibits a simpler isopycnal structure, the minimum number of linear pieces required to explain the eastern boundary will be lower than for the western boundary. The methodology is implemented using Bayesian inference, and is explained in more detail in Appendix \ref{App_MCMC}.
Results below consider reconstruction of time-average eastern and western latitude-depth profiles for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models. First we examine the influence of the choice of number of depth profiles on the quality of reconstruction for each model resolution. Next we identify the minimum number of depth profiles required to achieve a specified quality of reconstruction for each model resolution. Then we visualise the actual and reconstructed latitude-depth boundary densities.
Figure \ref{F_MCMC_nLL1} illustrates the quality of reconstruction as a function of the number of depth profiles $n_r$ (see Appendix \ref{App_MCMC}) for the eastern and western boundaries, for each model resolution. Reconstruction quality is quantified in terms of the posterior negative log likelihood (PNLL). PNLL is closely related to the least squares error between the actual and estimated latitude-depth density structure over all latitudes and depths; as PNLL decreases, the quality of reconstruction increases. We see that achieving a given value of PNLL requires a considerably larger $n_r$ on the western boundary than on the eastern boundary at all model resolutions. For example, given 5 depth profiles on the eastern boundary, achieving a comparable quality on the western boundary would require 13 profiles for the $1^\circ$ model.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=12cm]{Fig_App/plt_Nloglkhdm.png}}
\caption[Reconstruction quality of time-mean latitude-depth density structure for eastern and western boundaries quantified using posterior negative log likelihood, for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ models.]{Reconstruction quality of time-mean latitude-depth density structure for eastern (blue) and western (red) boundaries quantified using posterior negative log likelihood (PNLL), for (a) $1^\circ$, (b) $1/4^\circ$ and (c) $1/12^\circ$ models. Dashed line indicates the PNLL threshold for 5 eastern boundary profiles.}
\label{F_MCMC_nLL1}
\end{figure*}
Figure \ref{F_MCMC_BdryRecon1} provides an illustration of actual and reconstructed time-mean latitude-depth density structures, and their difference, for the eastern and western boundaries. The number $n_r$ of depth profiles is chosen so that the quality of reconstruction (over all depths) is the same for the eastern and western boundaries. We note that the number of depth profiles required to explain the eastern boundary is smaller than that for the western boundary. Figure \ref{F_MCMC_BdryRecon1}(e,f) shows that the eastern boundary reconstruction struggles at latitudes $35^\circ$N to $50^\circ$N near 2000m since, as discussed previously, the flat eastern boundary approximation breaks down here. The original densities (Panel (b)) show a region of denser water at 2000m and approximately $37^\circ$N, very close to the Mediterranean outflow. Other notable differences are found near the overflow regions at latitude $60^\circ$N.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/Bdens_recom_13nPW_5nPE_GC3_1C_ap466.png}}
\caption[Original and reconstructed time-mean latitude-depth density structures, and their difference, for the eastern and western boundaries using $n_r=$ 13 and 5 for the West and East respectively, corresponding to equal reconstruction performance in the $1^\circ$ model.]{Original and reconstructed time-mean latitude-depth density structures, and their difference, for the eastern and western boundaries using $n_r=$ 13 and 5 for the West and East respectively, corresponding to equal reconstruction performance in the $1^\circ$ model. Panels are: (a) Original West, (b) Original East, (c) Estimated West, (d) Estimated East, (e) Original-Estimate for West, (f) Original-Estimate for East. Dashed vertical lines indicated the location of depth profiles required for reconstruction.}
\label{F_MCMC_BdryRecon1}
\end{figure*}
The current analysis provides a means of automatically selecting locations for depth profile measurements of density along both eastern and western boundaries, in order to provide optimal reconstruction of the density structure (with latitude and depth) for the whole basin. Results indicate that the number $n_r$ of profiles required to reconstruct the eastern boundary density structure is lower than that required for the western boundary. The current methodology raises the possibility of further simplifying modelling of boundary density contributions to the decomposition diagnostic for the whole Atlantic basin.
\subsection{Outline of estimation procedure}
\label{App_MCMC}
Here we provide an outline of the statistical analysis performed to assess the complexity of latitude-depth contours for eastern and western boundary densities.
We assume measurements $\{\rho(y_j,z_k)\}_{j=1,k=1}^{n_y,n_z}$ are available for density on a grid of $n_y$ latitudes $\{y_j\}_{j=1}^{n_y}$ and $n_z$ depths $\{z_k\}_{k=1}^{n_z}$ for each of the eastern and western boundaries. The objective of the analysis is to find the smallest subset $\mathcal{R}=\{r_a\}_{a=1}^{n_r}$ of $|\mathcal{R}|=n_r$ latitudes, with $n_r \ll n_y$, which provides an adequate piecewise linear reconstruction of the full latitude-depth cross-sectional density.
Piecewise linear approximations to functions have been studied for many years (e.g. \citealt{Hamann1994}, \citealt{Muggeo2003}). Writing the piecewise linear estimate for density given $\mathcal{R}$ as $\hat{\rho}(y,z|\mathcal{R})$, we seek to minimise the loss function $L(\mathcal{R})$
\begin{eqnarray}
L(\mathcal{R}) = \sum_j \sum_k (\rho(y_j,z_k)-\hat{\rho}(y_j,z_k|\mathcal{R}))^2 \label{e_MCMC_1}
\end{eqnarray}
where $\hat{\rho}(y,z|\mathcal{R})$ is the piecewise linear reconstruction given by
\begin{eqnarray}
\hat{\rho}(y,z|\mathcal{R}) = \frac{\delta_U \, \rho(r_{a^*},z) + \delta_L \, \rho(r_{{a^*}+1},z)}{{r_{{a^*}+1}-r_{a^*}}} \label{e_MCMC_2}
\end{eqnarray}
where $a^*=\argmax{a}\{r_a : r_a<y\}$, $\delta_L=y-r_{a^*}$ and $\delta_U=r_{a^*+1}-y$. The optimal choice $ \mathcal{R}^*_{n_r}$ of $\mathcal{R}$ for any $n_r$ is then
\begin{eqnarray}
\mathcal{R}^*_{n_r} = \argmin{\mathcal{R} , |\mathcal{R}|=n_r} L(\mathcal{R}) . \label{e_MCMC_3}
\end{eqnarray}
Then we find the smallest value of $n_r$ such that the loss using the corresponding $\mathcal{R}^*_{n_r}$ is less than a given value $C$
\begin{eqnarray}
\mathcal{R}^* = \argmin{\mathcal{R}^*_{n_r}} L(\mathcal{R}^*_{n_r})<C \label{e_MCMC_4}
\end{eqnarray}
and the corresponding optimal size of $\mathcal{R}$ is $n_r^*=|\mathcal{R}^*|$.
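As an informal illustration, a minimal NumPy sketch of the reconstruction (Equation \ref{e_MCMC_2}) and loss (Equation \ref{e_MCMC_1}) is given below. It is a simplification: profile latitudes are snapped to the nearest grid latitude at or above them, bathymetry (missing values) is handled crudely via \texttt{nansum}, and the weighting and in-filling refinements discussed later are omitted.
\begin{verbatim}
import numpy as np

def reconstruct(rho, y, r_lats):
    # Piecewise linear reconstruction of a latitude-depth density field.
    # rho has shape (n_y, n_z); y are grid latitudes (increasing);
    # r_lats are the latitudes of the retained depth profiles.
    idx = np.searchsorted(y, np.sort(np.asarray(r_lats, dtype=float)))
    idx = np.unique(np.clip(idx, 0, len(y) - 1))
    rho_hat = np.empty_like(rho)
    for k in range(rho.shape[1]):   # interpolate depth level by depth level
        rho_hat[:, k] = np.interp(y, y[idx], rho[idx, k])
    return rho_hat

def loss(rho, y, r_lats):
    # Sum-of-squares reconstruction loss L(R), ignoring missing (NaN) cells.
    return np.nansum((rho - reconstruct(rho, y, r_lats)) ** 2)
\end{verbatim}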
We perform this analysis separately for the eastern and western boundaries. If we compare boundaries fairly, e.g. by adopting a common value for $C$ on both boundaries, we are therefore able to quantify whether one boundary exhibits a more complex density structure, simply by comparing the values of $|\mathcal{R}_E^*|$ and $|\mathcal{R}_W^*|$ for a given $C$.
\subsection{Bayesian inference}
There are many different approaches that could be used to find solutions to Equation \ref{e_MCMC_3}. Here, for each choice of $n_r$, we find the optimal $\mathcal{R}$ using a statistical technique for Bayesian inference called Markov chain Monte Carlo (MCMC; \citealt{Gamerman2006}, \citealt{Gelman2013}). MCMC inference is widely used in many scientific fields, including modelling of ocean density (e.g. \citealt{Economou2019}). For the current work, in precise terms, we seek to find the values of $r_1, r_2, ...r_{n_r}$ which provide the best piecewise linear representation of the latitude-depth density cross-section, by minimising $L(\mathcal{R})$. We proceed as follows.
\textbf{Prior specification:} First, we specify prior distributions $f(r_a)$ for each of the $r_a$, $a=1,2,...,n_r$. Here, we assume that each is uniformly distributed on the latitude domain of interest. The joint prior distribution of all the boundary locations is then $f(\mathcal{R})=\prod_{a=1}^{n_r} f(r_a)$, where each of the terms $f(r_a)$ is a constant defined on the latitude domain. Since the prior is a constant for a given value of $n_r$ regardless of the choices $r_1, r_2, ...r_{n_r}$, the prior plays no further role in the analysis.
\textbf{Likelihood:} Next, we specify a likelihood for any piecewise linear representation given the observed density data. Writing the observed data set as $\{\rho(y_j,z_k)\}_{j=1,k=1}^{n_y,n_z}=D$ for brevity, we assume that this likelihood is Gaussian, of the form
\begin{eqnarray}
f(D | \mathcal{R}) = \frac{1}{(2 \pi)^{1/2} \kappa} \exp\left[ - \frac{L(\mathcal{R})}{2 \kappa^2}\right] \label{e_MCMC_5}
\end{eqnarray}
where $L(\mathcal{R})$ is the loss criterion from Equation~\ref{e_MCMC_1} and $\kappa$ is the measurement uncertainty which we also specify before the analysis. The setting of $\kappa$ is discussed below, in particular to accommodate the effects of varying GCM cell dimensions with latitude and depth.
\textbf{Posterior estimation:} Finally, we estimate the posterior distribution $f(\mathcal{R}|D)$ of the parameters $\mathcal{R}$ by applying Bayes' theorem
\begin{eqnarray}
f(\mathcal{R}|D) = \frac{f(D | \mathcal{R}) f(\mathcal{R})}{f(D)} \label{e_MCMC_6}
\end{eqnarray}
where $f(D)$ is called the ``evidence'' and is in general an expensive integral to calculate. $f(\mathcal{R}|D)$ provides the joint posterior distribution of the optimal latitudes $\mathcal{R}$ at which density profile measurements should be made.
\textbf{Gibbs sampling:} Fortunately, using MCMC, we can estimate $f(\mathcal{R}|D)$ without calculating $f(D)$. The procedure we use is called Gibbs sampling, and works by iteratively sampling from each of the following full conditional distributions in turn
\begin{eqnarray}
&f(r_1 | D, r_2, r_3, ..., r_{n_r})& \label{e_MCMC_7} \\
&f(r_2 | D, r_1, r_3, ..., r_{n_r})& \nonumber \\
&...& \nonumber \\
&f(r_{n_r} | D, r_1, r_2, ..., r_{n_r-1})& . \nonumber
\end{eqnarray}
If a sufficient number of iterations is used, it can be shown that this sampling procedure yields a random sample from the posterior distribution $f(\mathcal{R}|D)$.
\textbf{Metropolis-Hastings sampling:}
Unfortunately, we cannot evaluate the full conditional distributions in Equation~\ref{e_MCMC_7} in closed form. Therefore, we use a Metropolis-Hastings (MH) sampling scheme to sample from each one of them for each iteration of the Gibbs sampler. For example, to sample from $f(r_1 | D, r_2, r_3, ..., r_{n_r})$, suppose we have already generated values $r_1^{i}, r_2^{i}, ..., r_{n_r}^{i}$ at the end of iteration $i$ of the Gibbs sampler, and we are about to start iteration $i+1$. Then we propose a new candidate value $r_1^{i*}$ according to
\begin{eqnarray}
r_1^{i*} = r_1^{i} + \epsilon \gamma \label{e_MCMC_8}
\end{eqnarray}
where $\epsilon$ is a random number from the standard Gaussian distribution, and $\gamma$ is a proposal standard deviation specified beforehand. This is called a Gaussian random walk proposal. We then accept the candidate $r_1^{i*}$ as $r_1^{i+1}$ with probability $\alpha$ given by
\begin{eqnarray}
\alpha=\min\left\{ 1, \frac{f(D|r_1^{i*}, r_2^{i}, r_3^{i}, ..., r_{n_r}^{i})}{f(D|r_1^{i},r_2^{i}, r_3^{i}, ..., r_{n_r}^{i})} \right\} . \label{e_MCMC_9}
\end{eqnarray}
If we reject the candidate value, we simply set $r_1^{i+1}=r_1^{i}$. After this accept/reject step for $r_1$, we proceed to perform the same step for $r_2$, $r_3$, and so on. Again, it can be shown that the MH scheme provides a valid sample from the correct full conditional. It is recommended that the value of $\gamma$ is set so that the acceptance rate of proposals is about $1/4$. It can be seen that the acceptance criterion in Equation \ref{e_MCMC_9} is based on the likelihood ratio of the candidate and current states. This is easily calculated using Equation~\ref{e_MCMC_5}, and provides for a computationally efficient scheme.
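A compact sketch of one such Metropolis-Hastings within Gibbs sweep, reusing the \texttt{loss} function from the earlier sketch and ignoring the uniform-prior domain constraint, might read as follows (function names are illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(r_lats, rho, y, kappa):
    # Gaussian log-likelihood of the data, up to an additive constant.
    return -loss(rho, y, r_lats) / (2.0 * kappa ** 2)

def gibbs_mh_sweep(r_lats, rho, y, kappa, gamma):
    # One Gibbs sweep: each profile latitude is updated in turn with a
    # Gaussian random-walk Metropolis-Hastings accept/reject step.
    r_lats = np.array(r_lats, dtype=float)
    for a in range(len(r_lats)):
        proposal = r_lats.copy()
        proposal[a] += gamma * rng.standard_normal()
        log_alpha = (log_likelihood(proposal, rho, y, kappa)
                     - log_likelihood(r_lats, rho, y, kappa))
        if np.log(rng.uniform()) < log_alpha:   # accept with probability alpha
            r_lats = proposal
    return r_lats
\end{verbatim}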
\textbf{Adaptive Metropolis sampling:} After some iterations, the values of $r_1^{i}, r_2^{i}, ..., r_{n_r}^{i}$ already generated typically begin to show correlation. For subsequent iterations, it is therefore advantageous to exploit this correlation structure using the approach of \cite{Roberts2009}. Now we replace the single location proposal in Equation \ref{e_MCMC_8} with a joint proposal for all $n_r$ locations using
\begin{eqnarray}
\un{r}^{i*} = \un{r}^i + (1-\beta) \mathcal{N}(\un{0}, {2.38^2 \un{\Sigma}_r}/{n_r}) + \beta \mathcal{N}(\un{0}, {0.1^2 \un{I}_{n_r}}/{n_r}) \label{e_MCMC_10}
\end{eqnarray}
where $\un{r}^i$ is the full set of locations at iteration $i$, and $\un{r}^{i*}$ is the corresponding candidate vector. $\un{\Sigma}_r$ is an estimate for the covariance structure of the $r$s from previous iterations, and $\un{I}_{n_r}$ is an $n_r \times n_r$ identity matrix. $\beta$ is a parameter which is usually set to 0.05, and $\mathcal{N}$ indicates a Gaussian random variable with given mean and variance.
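Reusing \texttt{rng} and the earlier sketches, the joint proposal (implemented literally as the weighted sum of two Gaussian draws written in Equation \ref{e_MCMC_10}) could be sketched as below; \texttt{history} is assumed to be an array of previously sampled location vectors, with at least two profiles and several stored samples so that the empirical covariance is well defined.
\begin{verbatim}
def adaptive_proposal(r_current, history, beta=0.05):
    # Joint Gaussian proposal: an empirical-covariance component plus a
    # small fixed-scale component, weighted by (1 - beta) and beta.
    n_r = len(r_current)
    sigma_r = np.cov(np.asarray(history).T)   # covariance of earlier samples
    wide = rng.multivariate_normal(np.zeros(n_r), (2.38 ** 2 / n_r) * sigma_r)
    narrow = rng.multivariate_normal(np.zeros(n_r), (0.1 ** 2 / n_r) * np.eye(n_r))
    return np.asarray(r_current, dtype=float) + (1.0 - beta) * wide + beta * narrow
\end{verbatim}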
Using the complete Metropolis-Hastings within Gibbs sampling scheme, we are able to find the joint distributions of optimal locations $\mathcal{R}$ for any choice of $n_r$. In particular, we can take the combination of locations which occurs most often in $f(\mathcal{R}|D)$ as the optimal choice of $\mathcal{R}$ for given $n_r$.
\subsection{Implementation refinements}
Here we describe specific refinements of the method above, needed to apply it reasonably for the eastern and western boundaries of the Atlantic.
\textbf{Latitude and depth scaling:} We seek reconstructions of the boundary density structures which are equally good at any location in depth-latitude space. Since the depth grid used to calculate densities consists of unevenly spaced depth points $z_k$, it is important to use an appropriate weighting scheme so that the model performs equally well at any choice of depth. Similarly, the latitude grid for the data is uniform: however, this does not correspond to a uniform distance scale. Weighting is also therefore necessary, so that the model performs equally well for any choice of latitude on the boundary.
Specifically, we prefer to estimate the piecewise linear reconstruction of the boundary density so that the quality of fit is the same per metre of depth, and per metre along the boundary. To achieve this, we modify Equation \ref{e_MCMC_1} to include two weight vectors $\alpha_{\text{Ltt}}$ and $\alpha_{\text{Dpt}}$ so that the modified equation becomes
\begin{eqnarray}
L(\mathcal{R}) = \sum_j \sum_k \left(\alpha_{\text{Ltt}}(y_j) \ \alpha_\text{Dpt}(z_k) \left(\rho(y_j,z_k)-\hat{\rho}(y_j,z_k|\mathcal{R})\right)\right)^2 \label{e_MCMC_11} .
\end{eqnarray}
Here, $\alpha_{\text{Ltt}}(y_j)$ is the number of kilometres per degree latitude at latitude $y_j$ and similarly $\alpha_\text{Dpt}(z_k)$ is the thickness in metres of the depth grid at $z_k$.
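A one-line modification of the earlier \texttt{loss} sketch then applies these weights; \texttt{w\_lat} and \texttt{w\_dep} are illustrative NumPy arrays of length $n_y$ and $n_z$, holding kilometres per degree latitude and cell thickness in metres respectively.
\begin{verbatim}
def weighted_loss(rho, y, r_lats, w_lat, w_dep):
    # Weighted sum-of-squares loss: residuals are scaled so that the fit
    # quality is comparable per metre along the boundary and per metre depth.
    resid = rho - reconstruct(rho, y, r_lats)
    return np.nansum((w_lat[:, None] * w_dep[None, :] * resid) ** 2)
\end{verbatim}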
\textbf{Accommodating bathymetry:} The bathymetry of the eastern and western boundaries varies with latitude. Therefore, the value of density at the deepest cells is often missing, because that depth does not actually correspond to ocean. It is possible, during the MCMC sampling procedure, that neighbouring locations for depth profiles $\mathcal{R}$ might be selected corresponding locally to relatively shallow bathymetry, with intervening latitudes having deeper bathymetry. In this situation, it is impossible to define the piecewise linear function at depths beyond those available at the two depth profiles in $\mathcal{R}$. When this occurs, we simply use the estimated value of density at the deepest point available, per latitude, to in-fill for deeper cells. This approximation is found to have minimal effect on reconstructions; in any event, the MCMC sampling scheme is able to adjust the locations of depth profiles $\mathcal{R}$ to accommodate the effect.
Further, the locations of neighbouring depth profiles in $\mathcal{R}$ are likely to correspond to different bathymetry depths. When only the deeper depth profile is able to provide a density value, this value of density (at the deeper profile) is assumed for intervening latitudes at that depth. Again, this approximation is found to have minimal effect on reconstructions; in any event, the MCMC sampling scheme is able to adjust the locations of depth profiles $\mathcal{R}$ to accommodate it.
\subsection{Possible extensions}
\textbf{Accommodating boundary profiles for different times:} The description above is easily extended to include optimal reconstruction of boundaries for multiple times. In this case, $L(\mathcal{R})$ in Equation \ref{e_MCMC_11} takes the form
\begin{eqnarray}
L(\mathcal{R}) = \sum_j \sum_k \sum_\ell \left(\alpha_{\text{Ltt}}(y_j) \ \alpha_\text{Dpt}(z_k) \left(\rho(y_j,z_k,t_\ell)-\hat{\rho}(y_j,z_k,t_\ell|\mathcal{R})\right)\right)^2 \label{e_MCMC_12}
\end{eqnarray}
where the measurements and reconstructions are now available over latitude, depth and time.
\textbf{Emphasising quality of overturning streamfunction estimates from reconstruction:} The quality of density reconstruction at large depths is of greater importance for estimation of overturning streamfunction. For this reason, it may be advantageous to change the depth weighting vector $\alpha_\text{Dpt}(z_k)$, by increasing weights corresponding to large depths, to improve reconstruction quality at depth. Inspection of the overturning streamfunction (Equation \ref{T}) suggests that a sensible alternative depth weighting $\alpha_\text{Dpt}^\Psi(z_k)$ would be the depth of the centre of the $T$-cell at $z_k$ multiplied by the thickness of the $z_k$ depth cell.
\section{Temporal variability of Atlantic along-boundary neutral densities}
\label{App_Bdry_Tmp}
In this section we investigate the variation of Atlantic boundary neutral densities (discussed in Section \ref{CntMpBnd}) over different timescales. We quantify the variation in terms of standard deviations of the form $\overline{s}_p$ defined in Equation \ref{e_sp_bar} for different time intervals $p$ in years; the calculation performed is the same as that reported in Chapter \ref{TJ_Var}. This analysis is performed using the full timeseries available for each HadGEM-GC3.1 model (as opposed to the first 100 years only), whereas the GK and GloSea5 datasets are not considered due to short dataset lengths.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/Std_BdryDens_neutD_GC3_1C_ap466_AtlExt_int.png}}
\caption[Average standard deviation $\overline{s}_{p}$ for boundary neutral density in the $1^\circ $-$LM$ model, for various timescales.]{Average standard deviation $\overline{s}_{p}$ for boundary neutral density in the $1^\circ $-$LM$ model. Periods considered are (a) p=1 and (b) p=10 years. White dashed lines indicate locations of the Equator, Drake Passage and southern tip of South Africa.}
\label{F_BD_nStd_1LM}
\end{figure*}
Figure \ref{F_BD_nStd_1LM} shows the average standard deviation of boundary densities for time intervals $p=1$ and $p=10$ years. For the upper 300m, $\overline{s}_p$ shows considerable temporal variability due to the influence of wind-stress on both timescales. The variability found reduces with increasing timescale, since Ekman processes tend to dominate on shorter timescales. Away from the surface layers, large values of $\overline{s}_{p}$ indicate considerable variability in boundary densities at depth, near $46\times 10^3$km or the Reykjanes ridge, possibly due to overflows. Considerable variability remains in the surface layers for the 10-year timescale from $47$-$50\times 10^3$km. Variability in this region is replicated in Figure \ref{F_BD_nStd_1LL} for the $1^\circ $-$LL$ model, for $p=$1, 10 and 40 years, with somewhat less pronounced features at depth. Large variability at longer timescales between $35$ and $50\times 10^3$km is attributed to intense air-sea interaction and deep water formation in the Labrador Sea. These results are consistent with those found in Figure \ref{p_sb_p10}, where greater variability at high latitudes and near overflow regions is found in in-situ bottom densities for the $1^\circ$-$LM$ model in comparison to both the $1/4^\circ$ and $1/12^\circ$ resolution models.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/Std_BdryDens_neutD_GC3_LL_ar766_AtlExt_int.png}}
\caption[Average standard deviation $\overline{s}_{p}$ for boundary neutral density in the $1^\circ $-$LL$ model, for various timescales.]{Average standard deviation $\overline{s}_{p}$ for boundary neutral density in the $1^\circ $-$LL$ model. Periods considered are (a) p=1, (b) p=10 and (c) p=40 years. White dashed lines indicate locations of the Equator, Drake Passage and southern tip of South Africa.}
\label{F_BD_nStd_1LL}
\end{figure*}
For both $1^\circ$ models and all timescales considered, temporal variability in neutral density, quantified by $\overline{s}_{p}$, outside the surface layer is largest in the North Atlantic region, specifically locations downstream of areas of deep water formation. Variability near the surface and at depths down to 1500m at locations to the east of the Reykjanes ridge can be attributed to variability in local mixed-layer depth.
Estimates of $\overline{s}_{p}$ for boundary density in the $1/4^\circ$ model at timescales $p=$1, 10 and 40 years exhibit similar surface features to those of the $1^\circ$ models. However, Figure \ref{F_BD_nStd_025} reveals less variability in the North Atlantic and interesting features at all timescales in equatorial regions at depths of approximately 1200m. The latter could be attributed to internal or boundary waves, or possibly depth-variability in the overturning streamfunction. Along the western Atlantic boundary, the $\overline{s}_{p}$ signal appears to increase northward and decrease southward along the boundary from the Equator. The increased variability throughout the fluid column around the Malvinas Islands might be associated with temporal variation of AABW.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_App/Std_BdryDens_neutD_GC3_025C_aj368_AtlExt_int.png}}
\caption[Average standard deviation, $\overline{s}_{p}$ for boundary neutral density in the $1/4^\circ$ model, for various timescales.]{Average standard deviation $\overline{s}_{p}$ for boundary neutral density in the $1/4^\circ$ model. Periods considered are (a) p=1, (b) p=10 and (c) p=40 years. White dashed lines indicate locations of the Equator, Drake Passage and southern tip of South Africa.}
\label{F_BD_nStd_025}
\end{figure*}
Comparison of $\overline{s}_{p}$ for eastern and western boundary sections across resolutions would suggest somewhat greater variability at depth (300-1000m) on the western boundary.
\section{Ocean-only and sensitivity experiments} \label{Sct_Inter_Oth_A}
Here we summarise brief investigations into the impact of model parameterisation, and atmospheric coupling, on the ACC transport at the Drake Passage discussed in Section \ref{Sct_Inter_Oth}.
\subsection{$1/4^\circ$ sensitivity experiments}
In a small number of preliminary sensitivity experiments for the $1/4^\circ$ model, the ACC transport is decomposed into its components, using the methodology outlined in Section \ref{Sct_ACC_Mth}. Model runs of approximately 20 years were considered, with different eddy parameterisation and diffusivity operators starting from EN4 temperature and salinity fields and 1950s fixed forcing. The first model incorporated a weak Gent-McWilliams (GM, \citealt{Gent1990}) parameterisation scheme, and the second a $3\times$ BiLaplacian (3xBiLap, \citealt{Madec2016}) viscosity scheme. A control run was also performed (without GM). Timeseries of expected $T_{ACC}$ and estimated $\tilde{T}_{ACC}$ from both the weak GM and 3xBiLap viscosity iterations were observed to be more constant with time when compared to the control run. Over the 20 years of simulation, $T_{ACC}$ from the 3xBiLap model stabilised after approximately 5 years to 145Sv, compared to stabilisation after a similar period to 135Sv for the weak GM model. In contrast, $T_{ACC}$ from the control run weakens approximately linearly from an initial 150Sv to approximately 105Sv over the period of the simulation, with no stabilisation observed. The reduced weakening found for $\tilde{T}_{ACC}$ was mainly attributed to a stable southern density contribution $T_{S}$ of approximately -10Sv to -20Sv for both weak GM and 3xBiLap runs, in contrast to a strengthening $T_{S}$ for the control run from approximately 0Sv to -90Sv over the period of the simulation. The weak-GM parameterisation scheme acts to dampen the resolved small-scale features such as eddies in the $1/4^\circ$ model. The GM scheme removes available potential energy from the resolved flow by flattening isopycnal slopes. Therefore the $1/4^\circ$ model appears to behave more similarly to the $1^\circ$ model, and hence has an improved ACC transport. The BiLaplacian scheme should also dampen the flow and reduce the variance of the velocity field, by reducing the overall kinetic energy and smoothing out the jets. The BiLaplacian viscosity scheme is more scale-selective in its smoothing compared to a simple increase in viscosity, therefore the scheme will have a tendency to smooth small grid-scale features preferentially.
\subsection{Ocean-only GCM}
Analysis of forced ocean-only model runs for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolutions was conducted to investigate the impact of atmospheric and sea-ice coupling on the ACC transport through the Drake Passage. For a run period of approximately 30 years, ocean-only models showed significantly less weakening of ${T}_{ACC}$ at higher resolutions compared with estimates using a coupled atmosphere. ${T}_{ACC}$ was found to stabilise at $120$Sv and $130$Sv for the $1/4^\circ$ and $1/12^\circ$ models, respectively. In contrast, ${T}_{ACC}$ for the $1^\circ$ model stabilised near $154$Sv. The difference between estimates for different model resolutions is primarily attributable to $T_{bot}$. Notably, the sum of density contributions ($T_S$, $T_N$ and $T_{\beta}$) for the three model resolutions was found to be very similar, reducing from an initial 120Sv to approximately 105Sv for all models over the period of simulation. We conclude that atmospheric and sea-ice coupling contributes to the weakening of the ACC transport in both higher resolution models.
This suggests atmosphere-ocean-sea-ice interactions in the model, near the Antarctic coast, act to cool and freshen surface waters, leading to a slumping rather than outcropping of isopycnals towards the coast, and a reverse flow along the coast. We find freshwater input from ice shelves (basal melt and iceberg calving) is approximately twice as great in the coupled model, compared to the forced ocean-only model. This would explain the freshening observed along the coast, and contribute to gyre spin-up. Local cold easterly wind stress could also play a part in enhancing the cooling of surface water along the Antarctic coastline within coupled models. On a larger scale, changes in winds could act to spin-up gyres.
\chapter{Hydrographic properties on ocean boundaries} \label{TJ_Bdry}
This chapter investigates along-boundary properties for the Atlantic basin and surrounding coastlines. We characterise the spatial and temporal structure of the boundary densities underpinning the decomposition diagnostic for the overturning streamfunction, applied to the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ HadGEM-GC3.1 model simulations in Chapters \ref{TJ_TM} and \ref{TJ_Var}. We assess the extent to which isopycnals are flat along the (sloping) eastern boundary, as is commonly assumed within reduced-gravity models.
We introduce a boundary mapping algorithm, motivated by the work of \cite{Hughes2018}, to map large sections of along-shore continental boundaries across multiple depths. We use the mapping algorithm to explore spatio-temporal variations in potential temperature, salinity, neutral and potential density for model (HadGEM-GC3.1), reanalysis (GloSea5) and climatological (Gouretski and Kolterman) datasets, centred on the Atlantic basin.
\section{Background}
For a meridional pressure gradient to be sustained on the eastern boundary, a velocity into the boundary is required. Since the boundary is impenetrable, coastally-trapped boundary waves propagate and distribute any eastern boundary anomaly evenly along the boundary, and therefore flat eastern boundary isopycnals are likely (Section \ref{I_adjAMOC}). Flat eastern boundary isopycnals imply that stratification of the water column does not vary with latitude. Hence, given that Rossby waves transmit eastern boundary information into the ocean's interior, density stratification is quasi-uniform over much of the domain. This raises the possibility of using properties from only a few stratification profiles to constrain the overturning at latitudes where no data is available, and quantifying eastern boundary contributions to the overturning streamfunction using just a small number of profiles.
The assumption of flat isopycnals along the eastern boundary has been widely used within idealised models (e.g. \citealt{Johnson2002}, \citealt{Cessi2010}, \citealt{Nikurashin2011}, \citealt{Marshall2017}, \citealt{Nieves2018}). However, previous work (e.g. \citealt{Johnson2002}, \citealt{Sun2016}) has shown that the assumption of flat isopycnals is only valid away from the surface, since near the surface Ekman processes dominate and geostrophy breaks down. Further, \cite{Cessi2013a} argue that instability processes along the eastern boundary act to erode the density structure. They highlight the importance of ageostrophic processes in locations where adiabatic laminar thermocline theories (\citealt{Luyten1983}, \citealt{Pedlosky1983}, \citealt{Gill1985}) predict a singularity as a result of opposing needs for (1) a meridional gradient in buoyancy at the surface and (2) the boundary condition of no normal flow. The relative flatness of eastern boundary isopycnals in basins to the north of the circumpolar Southern Ocean (e.g. Atlantic basin, \citealt{Gnanadesikan1999}, \citealt{Johnson2007}, \citealt{Samelson2009}, \citealt{Radko2011}, \citealt{Nikurashin2011}, \citealt{Shakespeare2012}, \citealt{Marshall2013}, \citealt{Sun2016}, \citealt{Marshall2017}) has been exploited within diagnostic models; the success of this approach may be due to the contrasting roles of the eastern and western boundaries in establishing the hydrographic structure and circulation found in ocean basins, as shown by \cite{Johnson2002}. See Section \ref{I_adjAMOC} for further information.
An important area of research in this thesis is the role that boundary characteristics play in describing the AMOC and its variability. In particular, this requires understanding and exploitation of the density structures on the eastern and western boundaries, which underpin the decomposition diagnostic.
There have been no previous attempts to map density, potential temperature or salinity properties along sloping boundaries of ocean basins, or to characterise their spatial variation. Our ability to map boundary properties by direct observation is limited due to the huge distances involved, and the expense of hydrographic studies. The present work to improve our understanding of the along-boundary properties is motivated in part by the work of \cite{Hughes2018} in mapping bottom pressures.
\section{Preliminary analysis of boundary densities}
\label{Bdry_exp} \label{Bdry_EW}
The objective of this section is to provide an initial visual inspection of eastern and western boundary potential densities for the Atlantic basin, and to motivate further investigation of along-boundary variation. We explore the time-mean boundary potential densities (referenced to the surface) for HadGEM-GC3.1 models at three different spatial resolutions, as used in the decomposition diagnostic. Findings are illustrated as latitude-depth contour plots along the sloping eastern and western boundaries.
In estimating a latitude-depth plot of boundary potential density, eastern and western boundaries are defined, at each latitude, as the easternmost and westernmost water boundaries available for the Atlantic basin at the required depth. This mimics the approach used in the development of the overturning streamfunction decomposition diagnostic in Section \ref{S_App_DD}, with intermediate boundaries ignored. Therefore, for each latitude-depth pair, there is a unique longitude corresponding to each of the eastern and western boundaries, and unique eastern and western boundary density values.
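As a concrete illustration of this boundary definition, the following Python sketch (illustrative only; the array names and shapes are assumptions, not part of the analysis code) extracts the westernmost and easternmost wet-cell densities for every depth-latitude pair from a gridded density field and its land-sea mask:
\begin{verbatim}
import numpy as np

def boundary_densities(density, ocean_mask):
    """Westernmost/easternmost boundary densities for each (depth, latitude).

    density    : array (nz, ny, nx) of density, NaN over land
    ocean_mask : boolean array (nz, ny, nx), True for wet cells
    Assumes longitude index 0 lies at the western edge of the basin.
    """
    nz, ny, nx = density.shape
    west = np.full((nz, ny), np.nan)
    east = np.full((nz, ny), np.nan)
    for k in range(nz):
        for j in range(ny):
            wet = np.where(ocean_mask[k, j, :])[0]
            if wet.size:                             # any wet cell at this depth/latitude
                west[k, j] = density[k, j, wet[0]]   # westernmost wet cell
                east[k, j] = density[k, j, wet[-1]]  # easternmost wet cell
    return west, east
\end{verbatim}
The boundary difference fields shown in the figures that follow are obtained directly from these two arrays.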
Figures \ref{F_sBdry_12_300}-\ref{F_sBdry_12_5900} show time-mean potential densities (referenced to the surface, and averaged over the full time period of the $1/12^\circ$ model output) for the western and eastern boundaries, and the west-east difference.
\subsection{General features of time-average boundary potential densities at $1/12^\circ$ resolution}
Figure \ref{F_sBdry_12_300} for the upper $300$m shows considerable spatial variation in boundary potential density, and suggests that the hypothesis of flat eastern boundary isopycnals is not valid in the mixed layer, as we might expect. However, Figure \ref{F_sBdry_12_300}(b) shows that for latitudes between $15^\circ$S and $10^\circ$N there is some evidence of approximately constant boundary density with latitude on the eastern boundary. We find surface densities near the equator to be lightest due to warming from the sun (surface heat fluxes). Along the western boundary we find that lighter surface densities extend further towards the poles due to western boundary currents (e.g. Gulf Stream and North Brazil current) redistributing warmer conditions poleward.
At high northern latitudes in Figure \ref{F_sBdry_12_300}(a,b), there is evidence for outcropping of isopycnals on both boundaries. We find some similar tendencies on the eastern boundary in the southern hemisphere. The apparent discontinuity in Figure \ref{F_sBdry_12_300}(a) at approximately $30^\circ$N is because the western boundary shifts here from the Gulf of Mexico to the Florida Straits, highlighting a weakness of the current approach to boundary definition, and motivating the improved approach introduced in Section \ref{CntMpBnd}.
Figure \ref{F_sBdry_12_300}(c) illustrates that, in the upper $100$m of the Atlantic, western boundary waters are generally less dense than the corresponding eastern boundary waters. The reverse trend at 300m for latitudes $30^\circ-50^\circ$N is attributed to lighter outflow waters from the Mediterranean Sea, also leading to a discontinuity in eastern boundary isopycnals at these latitudes.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryD_12C_aj_Dpt_301}}
\caption[Western and eastern Atlantic boundary potential densities for the upper $301$m in the $1/12^\circ$ model.]{Western (a) and eastern (b) Atlantic boundary potential densities (referenced to the surface) for the upper $301$m in the $1/12^\circ$ model. Panel (c) indicates the difference between western (a) and eastern (b) densities. Time-mean taken over the 176-year control run.}
\label{F_sBdry_12_300}
\end{figure*}
Figure \ref{F_sBdry_12_1100}, the corresponding latitude-depth density contour plot for the upper $1100$m, shows many of the same features as Figure \ref{F_sBdry_12_300}. Panel (a) suggests that western isopycnals vary slowly and approximately linearly (slowly rising northwards) with latitude south of $20^\circ$N, at depths outside the mixed layer. In contrast, Panel (b) shows eastern boundary isopycnal depths vary little with latitude, south of $35^\circ$N and outside the mixed layer. North of $35^\circ$N, the lack of continuity in densities along the eastern boundary suggests a discontinuous boundary or the action of ageostrophic processes, resulting in a breakdown of the flat isopycnal hypothesis. Variation in isopycnal depth near $35^\circ$N can be attributed to the mixing of warmer saltier Mediterranean waters with those of the Atlantic near the Strait of Gibraltar. Above $60^\circ$N on the eastern boundary, the fingerprint of denser waters formed at higher northern latitudes is clear at depths greater than 600m. On the western boundary, at high northern latitudes, denser water masses are introduced due to the southward flowing DWBC and Labrador Current.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryD_12C_aj_Dpt_1152}}
\caption[Time-average western and eastern Atlantic boundary potential densities for the upper $1152$m in the $1/12^\circ$ model.]{Western (a) and eastern (b) Atlantic boundary potential densities (referenced to the surface) for the upper $1152$m in the $1/12^\circ$ model. Panel (c) indicates the difference between the eastern (b) and western (a) densities. Time-mean taken over the 176-year control run.}
\label{F_sBdry_12_1100}
\end{figure*}
Figures \ref{F_sBdry_12_1100}(c) and \ref{F_sBdry_12_5900}(c) show the difference between eastern and western boundary densities, and are consistent with a resulting northward transport in both hemispheres, in the upper 2000m and outside the mixed layer. In the southern hemisphere, denser eastern waters and a negative Coriolis parameter lead to a general northward volume transport. This is inferred from the geostrophic relationship: denser eastern waters result in a negative pressure gradient across the basin which, combined with a negative Coriolis parameter, gives positive meridional (northward) velocities. In contrast, in the northern hemisphere, denser western boundary waters combine with a positive Coriolis parameter, again leading to northward transport. At approximately $62^\circ$N, we find a large region of denser eastern boundary water, likely to be southward flowing from the Norwegian Sea. Figures \ref{F_sBdry_12_5900}(a) and (b) show the presence of denser boundary waters at depth for the highest northern latitudes, attributed to denser waters pooling north of the Greenland-Scotland ridge, with Denmark Strait and Iceland-Scotland overflows resulting in regions of denser waters south of the ridge. At latitudes greater than $63^\circ$N, signs of dense waters are clear on both the eastern and western boundaries; Figures \ref{F_sBdry_12_5900}(a) and (b) show the NADW tongue formed at high latitudes flowing southwards, extending further on the western boundary. In the southern hemisphere of Panel (b), above 2500m, isopycnals exhibit an approximately linear structure. Note that the discontinuity on the western boundary in Figure \ref{F_sBdry_12_5900}(a) at around $10^\circ-30^\circ$N is again an artefact of the simplistic approach adopted here to identify boundaries, which can be eliminated by a more careful choice of continuous boundary based on depth contours, as explored in Section \ref{CntMpBnd}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryD_12C_aj_Dpt_5902}}
\caption[Western and eastern Atlantic boundary densities for the upper $5902$m in the $1/12^\circ$ model.]{Western (a) and eastern (b) Atlantic boundary potential densities (referenced to the surface) for the upper $5902$m in the $1/12^\circ$ model. Panel (c) indicates the difference between the eastern (b) and western (a) densities. Time-mean taken over the 176-year control run.}
\label{F_sBdry_12_5900}
\end{figure*}
\subsection*{Impact of model resolution on time-average boundary potential densities}
Plots equivalent to Figures \ref{F_sBdry_12_300}-\ref{F_sBdry_12_5900} at $1/4^\circ$ and $1^\circ$ spatial resolutions (not shown) reveal that the general isopycnal structure is very similar across model resolutions. We find that the isopycnal structure with latitude becomes increasingly smooth as model resolution decreases.
Discontinuities are again apparent at approximately $30^\circ$N on the western boundary and $35^\circ$N on the eastern boundary, but less obvious at $1^\circ$, and denser southward flowing waters at high northern latitudes can be seen. We note that the eastern boundary densities at $1^\circ$ spatial resolution do not show obvious signs of Mediterranean outflow. Eastern boundary isopycnals are again approximately flat south of $35^\circ$N, and western boundary isopycnals show an approximately linear isopycnal structure (slowly rising northwards) with latitude, with some discontinuities attributed to contributions from the western Gulf of Mexico for shallower depths, and eastern boundary of the Caribbean Islands at depth. When interest lies in the general time-average characteristics of eastern and western boundary densities with latitude and depth, there appears to be little to gain from increased model resolution. However, we expect that local spatial features are more adequately represented using a higher-resolution model.
\subsection*{Reconstruction of isopycnal structure using depth profiles from a limited number of locations}
\label{Bdry_MCMC}
There is evidence that the isopycnal structure on an eastern boundary of an ocean basin is less complex, as a function of latitude and depth, than on the western boundary. For example, on the eastern boundary of the Atlantic, we hypothesise that density profiles with depth over a large range of latitudes can be approximated by just a small number of depth-profile measurements at well-chosen latitudes. Using just these along-slope density profile measurements, linear interpolation is then adequate to estimate density profiles at other latitudes of interest, along both eastern and western boundaries. This would yield a simple, useful low-order reconstruction of eastern boundary density in particular. In Appendix \ref{Bdry_MCMC_App}, we quantify the extent to which this is the case on both eastern and western boundaries of the Atlantic. We develop a piecewise linear model for time-mean potential density, using the minimum number of linear pieces to reconstruct the along-boundary density structure.
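To make the reconstruction step concrete, the following minimal Python sketch interpolates boundary density linearly in latitude, at each depth level, from a small set of chosen depth profiles. The profile-selection step itself is described in Appendix \ref{Bdry_MCMC_App}; the function and variable names here are illustrative only.
\begin{verbatim}
import numpy as np

def reconstruct_density(lat_profiles, rho_profiles, lat_target):
    """Linearly interpolate boundary density profiles in latitude.

    lat_profiles : 1-D array of latitudes of available depth profiles (ascending)
    rho_profiles : array (n_profiles, nz) of density profiles at those latitudes
    lat_target   : 1-D array of latitudes at which profiles are required
    Returns array (len(lat_target), nz).
    """
    nz = rho_profiles.shape[1]
    out = np.empty((lat_target.size, nz))
    for k in range(nz):  # interpolate each depth level independently
        out[:, k] = np.interp(lat_target, lat_profiles, rho_profiles[:, k])
    return out
\end{verbatim}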
The analysis presented in Appendix \ref{Bdry_MCMC_App} provides a means of automatically selecting locations for depth profile measurements of density along both eastern and western boundaries, in order to provide optimal reconstruction of the density structure (with latitude and depth) for the whole basin. Results indicate that the number of profiles required to reconstruct the eastern boundary density structure is approximately half that required for the western boundary. The methodology raises the possibility of further simplifying the description of boundary density contributions to the AMOC decomposition diagnostic for the whole Atlantic basin, and could be particularly valuable for paleoceanographic applications (discussed in Section \ref{Smm_Dsc}).
\section{Densities along continuous boundaries}
\label{CntMpBnd}
\subsection{Motivating an improved boundary representation}
\label{Bdry_NM_mot}
The results in Section \ref{Bdry_EW} above support the hypothesis of flat Atlantic eastern boundary isopycnals beneath the mixed layer, south of $35^\circ$N. Moreover, western boundary isopycnals appear to vary approximately linearly with latitude (slowly rising northwards) throughout the basin, but particularly south of $35^\circ$N. However, the approach taken to identify boundaries so far is simplistic, and leads to discontinuities in boundary properties. Specifically, for each latitude-depth pair, a unique longitude is identified corresponding to each of the eastern and western boundaries. Discontinuities in latitude-depth plots appear when adjacent boundary locations are at similar latitudes but different longitudes, e.g. in the Gulf of Mexico, Florida Straits and the zonal boundary of the southern coast of west Africa.
We now develop a more sophisticated boundary mapping algorithm, able to map basin boundaries for the whole planet continuously at any depth of interest. We expect that this mapping algorithm will provide more reliable estimates of the structure of eastern and western boundary isopycnals for the Atlantic and other basins. The mapping procedure quantifies the continuous boundary in terms of a set of discrete locations. Multiple adjacent locations at the same latitude are possible, in contrast to the method used in Section \ref{Bdry_exp}.
The procedure motivated by \cite{Hughes2018} consists of two main stages. In the first (described in Section \ref{CntMpBnd.1}), the Atlantic boundary location is estimated for a given fixed depth. In the second (Section \ref{CntMpBnd.2}), the boundary location corresponding to any ``remote'' second depth is registered (or paired) with the original reference boundary. The final boundary set is constructed by registering (or pairing) the boundary locations at all depths with the reference boundary. We note that the procedure is performed using depth indices ($k$, denoting vertical levels within a model) in place of actual depth.
\subsection{Mapping the boundary at a given depth}
\label{CntMpBnd.1}
We seek a pathway in longitude and latitude which estimates a bathymetric contour around the Atlantic basin at a specified depth. At that depth, we create a binary mask on the model longitude-latitude grid, partitioning ``ocean'' from ``land''. The pathway is created by moving (through mask locations corresponding to ocean only) from a starting point to an end point along the coastline, always keeping the land (i.e. mask locations corresponding to land) to the left in the direction of travel.
Where necessary, we specify extra artificial land barriers to prevent pathways into regions not of interest in this study. For example, a barrier at latitude 76$^\circ$N across the Labrador and Nordic Seas avoids pathways entering the Arctic region. The steps of the algorithm are as follows.
\textbf{Identify pathway start and end points:} Longitude and latitude indices for initial start and initial end points in the ocean are specified, along with the ``depth'' $k$-level of interest. When the initial start and end points do not lie adjacent to land, they are advanced in one of four pre-specified cardinal directions until land is found (at the specified depth). The resulting points are adopted as the start and end points for the pathway.
\textbf{Iterate around coastline:} From the given starting point, we iterate (or progress) around the coastline (at the specified depth) keeping the land on the left hand side in the direction of travel. This is achieved by attempting iterations in strict sequence relative to the direction of travel, taking the first available option: (1) to the left, (2) forward, (3) to the right, (4) backward (\citealt{Pavlidis1982}, Moore-neighbour tracing algorithm). We note that the backward step is always possible (since it is already a point on the pathway); in practice, for the Atlantic basin mapping, the backward step is never required. Hence, with starting point near the coast of Alaska, the pathway evolves ``hugging'' the Alaskan coast southwards, down the American coast and around the tip of South America into the Atlantic basin. The pathway then follows the east coast of South America northwards through the Caribbean Sea and Gulf of Mexico up to the Labrador Sea. Here, an artificial barrier prevents entry into the Arctic Ocean, resulting in the pathway incorporating the coastline of Greenland, before continuing eastwards and southwards around the west of the UK, continental Europe and Africa. The pathway ends at the end point (off the coast of north-west India). The resulting pathway for $k=49$ in the $1/4^\circ$ model is illustrated in Figure \ref{F_mapBdry_k49}.
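The following Python sketch gives one possible realisation of this tracing step on a binary ocean mask. It is a simplification of the procedure described above: identification of start and end points and the insertion of artificial barriers are assumed to have been handled already, and grid indices are assumed to increase northwards ($j$) and eastwards ($i$). Names and the default heading are illustrative rather than taken from the analysis code.
\begin{verbatim}
def trace_boundary(ocean, start, end, heading=(-1, 0), max_steps=100000):
    """Trace a coastal pathway on a binary ocean mask, keeping land to the left.

    ocean   : 2-D boolean array (ny, nx), True for ocean at the chosen depth level
    start   : (j, i) index of a wet cell adjacent to land
    end     : (j, i) index of the wet cell where tracing stops
    heading : initial direction of travel as (dj, di); default is southwards
    Returns a list of (j, i) indices forming the pathway.
    """
    def left(d):  return ( d[1], -d[0])   # rotate 90 deg counter-clockwise
    def right(d): return (-d[1],  d[0])   # rotate 90 deg clockwise
    def back(d):  return (-d[0], -d[1])

    path, pos, d = [start], start, heading
    for _ in range(max_steps):
        if pos == end:
            break
        for cand in (left(d), d, right(d), back(d)):   # strict preference order
            j, i = pos[0] + cand[0], pos[1] + cand[1]
            if 0 <= j < ocean.shape[0] and 0 <= i < ocean.shape[1] and ocean[j, i]:
                pos, d = (j, i), cand
                path.append(pos)
                break
    return path
\end{verbatim}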
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/Path_1266_k49_025M_AtlExt_int.png}}
\caption[Boundary pathway at depth level $k=49$ (corresponding to 1266m), for the $1/4^\circ$ model, running from the coast of Alaska to India.]{Boundary pathway at depth level $k=49$ (corresponding to 1266m, in green), for the $1/4^\circ$ model, running from the coast of Alaska to India. The start and end points of the pathway are denoted by green and red squares, respectively.}
\label{F_mapBdry_k49}
\end{figure*}
\textbf{Estimate the contour by smoothing the pathway:} The pathway found consists of straight line segments between locations on the model longitude-latitude grid. We obtain a more continuous contour (at the specified depth) by smoothing the pathway. Smoothing is particularly important to ensure that estimates of distances along the contour are not inflated (due to the potential ``zig-zag'' nature of a pathway on the discrete longitude-latitude grid). Smoothing is performed using a $\pm n_{\text{HW}}$ point vector moving-average (with respect to angles of longitude and latitude) to yield the contour estimate. Great circle distances between points on the contour (i.e. the smoothed pathway) are then calculated, so that distances between arbitrary points on the contour can be determined. In the current work, we find that $n_{\text{HW}}=3$ provides reasonable estimates for contour distances.
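A minimal Python sketch of this smoothing and distance calculation is given below (illustrative only; it uses a simple haversine formula and, for brevity, ignores longitude wrapping at the dateline):
\begin{verbatim}
import numpy as np

def smooth_and_measure(lon, lat, n_hw=3):
    """Smooth a pathway with a +/- n_hw point moving average and compute
    cumulative great-circle distance (km) along the smoothed contour.

    lon, lat : 1-D arrays of pathway longitudes/latitudes in degrees
    """
    kernel = np.ones(2 * n_hw + 1) / (2 * n_hw + 1)
    lon_s = np.convolve(lon, kernel, mode="valid")
    lat_s = np.convolve(lat, kernel, mode="valid")

    # haversine great-circle distance between consecutive smoothed points
    r_earth = 6371.0  # km
    p1, p2 = np.radians([lat_s[:-1], lat_s[1:]])
    dlat = p2 - p1
    dlon = np.radians(lon_s[1:] - lon_s[:-1])
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    seg = 2 * r_earth * np.arcsin(np.sqrt(a))
    dist = np.concatenate(([0.0], np.cumsum(seg)))
    return lon_s, lat_s, dist
\end{verbatim}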
\subsection{Pairing any other depth contour to the reference contour}
\label{CntMpBnd.2}
To analyse density properties across multiple contours at different depths, it is advantageous to have a common measure of distance along all contours. To achieve this, we choose a ``reference'' contour, to which every other depth contour will be ``registered''; in the present analysis of the Atlantic basin, we choose the $2262$m ($k=56$) depth contour as reference. Here, we describe how the registration of other depth (``remote'') contours onto the reference contour is performed.
\textbf{Identify nodes on the reference contour}: We identify equally-spaced locations along the reference contour, and call these reference nodes (\citealt{Hughes2018}). In the example discussed below, an inter-node interval of 400km is used. For definiteness, we assume we find $N+1$ reference nodes $P_0, P_1, ..., P_N$.
\textbf{Identify remote nodes}: For each reference node, we find the closest point (in terms of great circle distance) on the remote depth contour, and call this the ``remote node''; on each remote contour, there will hence also be $N+1$ remote nodes labelled $Q_0, Q_1, ..., Q_N$.
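The node identification and pairing steps can be sketched in Python as follows (illustrative; contour coordinates and cumulative distances are assumed to come from a smoothing step such as the one sketched above):
\begin{verbatim}
import numpy as np

def find_nodes(dist_ref, spacing_km=400.0):
    """Indices of reference-contour points closest to multiples of spacing_km,
    given the cumulative along-contour distance dist_ref (km)."""
    targets = np.arange(0.0, dist_ref[-1], spacing_km)
    return np.array([np.argmin(np.abs(dist_ref - t)) for t in targets])

def nearest_remote_nodes(lon_ref, lat_ref, node_idx, lon_rmt, lat_rmt):
    """For each reference node, index of the closest remote-contour point.

    Uses chord distance on the unit sphere; the nearest point by chord distance
    is also the nearest by great-circle distance.
    """
    def to_xyz(lon, lat):
        lam, phi = np.radians(lon), np.radians(lat)
        return np.column_stack([np.cos(phi) * np.cos(lam),
                                np.cos(phi) * np.sin(lam),
                                np.sin(phi)])
    ref_xyz = to_xyz(lon_ref[node_idx], lat_ref[node_idx])
    rmt_xyz = to_xyz(lon_rmt, lat_rmt)
    d = np.linalg.norm(ref_xyz[:, None, :] - rmt_xyz[None, :, :], axis=-1)
    return d.argmin(axis=1)
\end{verbatim}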
\textbf{Define segment pairs on the reference and remote contours}: We next define a set of reference segments $\{A_i\}_{i=1}^N$ and remote segments $\{B_i\}_{i=1}^N$ having consecutive reference and remote nodes as start and end points, as follows:
\begin{eqnarray}
A_i=(P_{i-1},P_i] \text{ for } i=1,2,...,N, \\
B_i=(Q_{i-1},Q_i] \text{ for } i=1,2,...,N .
\end{eqnarray}
Thus, $A_i$ (and $B_i$) consists of the boundary points on the reference (and remote) contour between consecutive nodes $P_{i-1}$ and $P_i$ (and $Q_{i-1}$ and $Q_i$). We refer to $(A_i, B_i)$ as a segment pair.
We want to find a unique mapping between points on the reference and remote contours. However, the procedure above to identify segment pairs does not guarantee this. It is possible for a point on the remote contour to occur in more than one remote segment, and hence to be associated with more than one reference segment. This means that the remote point could be linked to more than one point on the reference contour, which would not produce a unique pairing. Therefore, we proceed iteratively to accept remote segments into the final boundary mapping, favouring segment pairs with the smallest separation, whilst avoiding non-unique pairings of remote and reference boundary points (specifically by eliminating overlapping remote segments).
\textbf{Calculate the horizontal distance between a segment pair}: The distance $D_i>0$ between segments $A_i$ and $B_i$ is calculated by first finding $n_{\text{Prt}}$ within-segment locations along each segment, which are approximately equally-spaced in terms of the number of points on the pathway (or ``boundary index'') for that segment. We then calculate the great circle distance between the corresponding $n_{\text{Prt}}$ points in each segment, and use the average distance as a measure $D_i$ of the distance between segments. In the current work, we find that $n_{\text{Prt}}=10$ is a suitable choice, but note that in general this choice should consider the internode distance on reference and remote contours.
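A minimal sketch of this segment-distance calculation, assuming each segment is supplied as numpy arrays of longitudes and latitudes between consecutive nodes, is given below (names are illustrative):
\begin{verbatim}
import numpy as np

def segment_distance(lon_a, lat_a, lon_b, lat_b, n_prt=10):
    """Mean great-circle distance (km) between n_prt points spaced evenly
    (by boundary index) along reference segment A and remote segment B."""
    r_earth = 6371.0  # km

    def sample(lon, lat):
        idx = np.linspace(0, len(lon) - 1, n_prt).round().astype(int)
        return np.radians(lon[idx]), np.radians(lat[idx])

    lam_a, phi_a = sample(lon_a, lat_a)
    lam_b, phi_b = sample(lon_b, lat_b)
    a = (np.sin((phi_b - phi_a) / 2) ** 2
         + np.cos(phi_a) * np.cos(phi_b) * np.sin((lam_b - lam_a) / 2) ** 2)
    return float(np.mean(2 * r_earth * np.arcsin(np.sqrt(a))))
\end{verbatim}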
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/Nodes_Link_kRef_56_kRmt_34_025M_AtlExt_int.png}}
\caption[Pairing of reference contour for depth-level $k=56$ (2262m) and remote contour at depth-level $k=34$ (271m).]{Pairing of reference contour (black dots) for depth-level $k=56$ (2262m) and remote contour at depth-level $k=34$ (271m). Accepted points on the remote contour are shown in green, and rejected points in red. Nodes spaced every 400km along the reference contour are shown as black circles. Corresponding nodes on the remote contour are shown as green circles (if linked) and red squares (if not linked). Linked reference-remote node pairs are joined by a blue line (e.g. visible around the Malvinas Islands). The pathway shown runs from the coast of Alaska to India for the $1/4^\circ$ model.}
\label{F_mapNod_k56k34}
\end{figure*}
\textbf{Create final boundary mapping by iteratively accepting/rejecting segment pairs}: We start (a) by finding the segment pair with smallest overall separation distance, and admit it to the final boundary mapping. Then (b) we consider the candidate segment pair with next smallest overall separation distance. We only admit the segment pair if its remote segment does not overlap with any of the remote segments already admitted to the final boundary mapping. If overlapping occurs, the candidate segment pair is rejected (since the candidate remote segment is effectively already used in the final boundary mapping, with a preferred, smaller separation distance). We continue (c) by iterating over all remaining segment pairs (ordered in terms of increasing separation distance), until all segment pairs have either been accepted or rejected. In the example discussed here, segment pairs with distance greater than 600km are not considered realistic pairings, and hence are rejected. Densities for points on the remote contour which are not mapped in the final boundary mapping are allocated NaN values.
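The acceptance/rejection step can be summarised by the following Python sketch, in which each remote segment is represented by the set of remote boundary-point indices it contains (names and data structures are illustrative):
\begin{verbatim}
def accept_segment_pairs(distances, remote_segments, max_dist_km=600.0):
    """Greedily accept segment pairs in order of increasing separation.

    distances       : list of separation distances D_i, one per segment pair
    remote_segments : list of sets of remote boundary-point indices, one per pair
    Returns the set of accepted segment-pair indices.
    """
    accepted, used_points = [], set()
    for i in sorted(range(len(distances)), key=lambda i: distances[i]):
        if distances[i] > max_dist_km:
            break                      # remaining pairs are even further apart
        if remote_segments[i] & used_points:
            continue                   # would map a remote point twice: reject
        accepted.append(i)
        used_points |= remote_segments[i]
    return set(accepted)
\end{verbatim}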
Once the mapping is complete, we have a consistent measure of distance along contours at all depths, allowing boundary characteristics at a given distance to be fairly compared over all depths. We can therefore visualise e.g. boundary densities in terms of depth-distance plots, analogous to the depth-latitude plots shown in Section \ref{Bdry_exp}, but avoiding the latter's discontinuities.
Figure \ref{F_mapNod_k56k34} shows the pairing of reference contour $k=56$ (2262m) and remote contour $k=34$ (271m). Rejected points (within rejected segments), shown as red dots, can be seen in the Gulf of Mexico and Caribbean Sea where the reference contour follows the outline of the Caribbean islands, whereas the shallower remote contour follows the Central American shoreline; hence the distance between segment pairs on reference and remote contours is large, and the segment pairings rejected. Similar behaviour is observed at high northern latitudes, e.g. on the Norwegian coast.
\subsection{Calculation of boundary densities}
\label{CntMpBnd.3}
We extract potential temperature $\theta$, salinity $S$, T-cell depth, longitude and latitude for each point along the boundary for all depth contours available. We then calculate potential and neutral densities using the Gibbs Seawater Oceanographic toolbox utilising the TEOS-10 algorithms (\citealt{McDougall2020}). Specifically, for calculation of potential density, the conservative temperature and absolute salinity fields are first calculated using the NEMO EOS-80 potential temperature and salinity fields, and then the potential density field is calculated in line with the TEOS-10 methodology.
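As an indication of the calculation involved, a minimal sketch using the Python implementation of the TEOS-10 Gibbs SeaWater toolbox (the \texttt{gsw} package) is given below. It computes surface-referenced potential density from EOS-80 practical salinity and potential temperature, broadly following the steps described above; the exact routines and settings used in this work may differ.
\begin{verbatim}
import numpy as np
import gsw   # Python implementation of the TEOS-10 Gibbs SeaWater toolbox

def potential_density(sp, pt, depth, lon, lat):
    """Surface-referenced potential density (kg m^-3) from EOS-80 fields.

    sp, pt : practical salinity and potential temperature along the boundary
    depth  : positive depths (m); lon, lat in degrees
    """
    p = gsw.p_from_z(-np.asarray(depth), lat)   # pressure (dbar) from depth
    sa = gsw.SA_from_SP(sp, p, lon, lat)        # absolute salinity
    ct = gsw.CT_from_pt(sa, pt)                 # conservative temperature
    return gsw.sigma0(sa, ct) + 1000.0          # potential density, 0 dbar reference
\end{verbatim}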
Investigations in Section \ref{TM_TA} demonstrated discrepancies in density calculations based on time-average input fields, especially in secondary calculations such as the overturning streamfunction. For our current purpose of calculating boundary densities, we must accept the accuracy of densities computed from time-average input fields. Nevertheless, we again highlight the absence of instantaneous outputs from the HadGEM-GC3.1 model, or equivalently of correctly time-averaged densities.
From Section \ref{CntMpBnd.2}, the along-contour distance $D$ is known for contours at all depths. Hence we can interpolate boundary densities (calculated independently for each depth contour) onto a common distance for visualisation.
\subsection{Datasets analysed}
\label{Bdry_Mod}
The along-boundary mapping algorithm outlined in Sections \ref{CntMpBnd.1}-\ref{CntMpBnd.3} is applied to HadGEM-GC3.1 model simulations (at three different oceanic and atmospheric spatial resolutions), GloSea5 reanalysis and \cite{GouretskiViktorandKoltermann2004} (GK) climatology.
For the HadGEM-GC3.1 model (Section \ref{S3_ModDes}, \citealt{Williams2018} and \citealt{Roberts2019}) at $1^\circ$ oceanic spatial resolution, we use two model simulations which have different atmospheric resolutions. The coarser N96 atmospheric resolution is denoted by $LL$ (low ocean resolution, low atmosphere resolution) and the ``medium'' N216 atmospheric resolution by $LM$ (low ocean resolution, medium atmosphere resolution). The $1^\circ$-$LL$ model run has a length of 399 years, and the $1^\circ$-$LM$ a length of 104 years. In addition, we use the $1/4^\circ$ HadGEM-GC3.1 model (Chapters \ref{TJ_TM} and \ref{TJ_Var}), with run length of 657 years. All HadGEM-GC3.1 simulations considered are control runs with fixed 1950s forcing, preceded by a 30-year spin-up initialised from the EN4 dataset. We extract annual-mean properties for visualisation.
The GloSea5 reanalysis dataset (\citealt{MacLachlan2015}) is used for seasonal forecasting at N216 atmospheric and $1/4^\circ$ oceanic spatial resolutions, similar therefore to the HadGEM-GC3.1 $1/4^\circ$ model. The vertical resolution is also the same as that of the HadGEM-GC3.1 models. The oceanic and sea-ice components of the model are based on NEMO's three-dimensional variational ocean data assimilation, which itself is based on the multi-institution NEMOVAR project (\citealt{Mogensen2009}, \citealt{Mogensen2012}). We use monthly data available for 23 years from 1995 to 2018.
The \cite{GouretskiViktorandKoltermann2004} (GK) climatology dataset is produced at $1/2^\circ$ horizontal resolution for 44 vertical levels, with vertical spacings ranging from 10m at the surface to 250m at depth. The climatology is isopycnally averaged in order to minimise bias due to formation of artificial water masses. It incorporates 1,059,535 hydrographic profiles from the World Ocean Circulation Experiment (WOCE), German Oceanographic Data Centre, Alfred-Wegener-Institute, Arctic and Antarctic Research Institute and the French Oceanographic Data Centre.
\section{Characteristics of the time-average boundary densities in an extended Atlantic basin}
We first discuss the general features of the Atlantic along-boundary neutral densities for the $1/4^\circ$ HadGEM-GC3.1 model. We then proceed to investigate the characteristics of boundary isopycnals in the $1^\circ$-$LL$ and $1^\circ$-$LM$ HadGEM-GC3.1 models, GloSea5 reanalysis and GK climatology. To reduce potential issues with model drift, only the first 100 years of output for each model are considered in the time-average. GloSea5 time-averaging is performed for all 23 years of data available. Figure \ref{F_mapRef_k56} shows the reference contour at depth level $k=56$ (2262m), together with the common along-boundary distance scale used to reference boundary contours at all depth levels $k$ (blue numbers, $\times 10^3$km).
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryRef_GC3_025C_aj368_kRef_56_AtlExt_int.png}}
\caption[Reference contour at $k=56$ (2262m) for the extended Atlantic basin with nodes, distance markers and cumulative distance.]{Reference contour (black dots) at $k=56$ (2262m) for the extended Atlantic basin with nodes (red circles), distance markers (pink star, every $5 \times 10^3$km) and cumulative distance (blue numbers, $\times 10^3$km). Remote contours at all depths are referenced to this reference contour.}
\label{F_mapRef_k56}
\end{figure*}
\subsection{General features of boundary isopycnals in the $1/4^\circ$ HadGEM-GC3.1 model}
Figure \ref{F_BD_n025M} shows along-boundary neutral densities for the extended Atlantic region, corresponding to the pathway shown in Figure \ref{F_mapRef_k56}. Panel (a) concentrates on the along-boundary isopycnal structure for the upper 400m, whereas Panel (b) uses a different colour scale to emphasise isopycnal structure on the boundary at depth. Grey regions in this figure, and subsequent figures in this chapter, indicate depth-distance combinations rejected during boundary mapping. This occurs when remote and reference contour segment pairs do not meet the criteria outlined in Section \ref{CntMpBnd.2}.
Figure \ref{F_BD_n025M}(a) shows large variability in isopycnal structure along the boundary near the surface. Typically, within the mixed-layer where wind-stress variability dominates, the hypothesis of flat eastern boundary isopycnals breaks down. However, eastern boundary regions corresponding to distances between $1$-$19\times 10^3$km (Pacific) and $55$-$65\times 10^3$km (Atlantic) are seen to have relatively flat isopycnals even within the upper 400m. At greater depths, Panel (b) also suggests the presence of flat isopycnals along eastern boundaries.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_neutD_GC3_025C_aj368_AtlExt_int.png}}
\caption[Time-average neutral densities for the first 100 years of the $1/4^\circ$ model for the upper 400m and for all depths.]{Time-average neutral densities for the first 100 years of the $1/4^\circ$ model for (a) the upper 400m and (b) all depths. Dashed white lines indicate locations of Equator, Drake Passage and southern tip of South Africa. }
\label{F_BD_n025M}
\end{figure*}
With a starting point offshore of Alaska, the first $19\times 10^3$km follows the eastern boundary of the Pacific. Along-boundary isopycnals are remarkably flat here below 400m. This uniform structure is also reflected by the corresponding isotherms shown in Figure \ref{F_BD_dps025M}(b). Locations near the Equator at distance $12\times 10^3$km are rejected by the boundary mapping algorithm in the region of the Galapagos Islands, due to excessive distances between remote and reference contours. At $20\times 10^3$km, a breakdown in flat isopycnals occurs at the Drake Passage. The following $3\times 10^3$km exhibits highly-variable isopycnal structure, persisting around the Malvinas islands until we return to the Atlantic's western boundary near southern Brazil. Here, the steep gradients in isopycnals at depth can be attributed to colder denser Antarctic Bottom Waters (AABW) moving northward from the Weddell Sea. Similarly, denser surface waters can be attributed to the Malvinas current.
Northwards from southern Brazil ($24\times 10^3$km), for $10\times 10^3$km towards the Caribbean islands, gently upward-sloping isopycnals are observed at depths from 350m down to 2000m. A number of shallower points are rejected in the Gulf of Mexico. Further north, isopycnals continue to slope upwards, and we note a large shallowing of deeper isopycnals (depths 500-1500m) at around $36\times 10^3$km, likely attributed to the separation of the Gulf Stream from the U.S. coast. At $39.5\times 10^3$km, near Nova Scotia and Newfoundland, another steep tilt in deep isopycnals (400-800m) is witnessed, associated with cold southward-bound Labrador Current and DWBC waters shown by the time-average meridional bottom velocities in Figure \ref{p_RC_Bvel}.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/Bdry_rhoTS_neutD_GC3_025C_aj368_AtlExt_int.png}}
\caption[Depth-distance plots for neutral density, potential temperature and salinity, time-average for the first 100 years of $1/4^\circ$ model.]{Depth-distance plots for (a) neutral density, (b) potential temperature and (c) salinity, time-average for the first 100 years of $1/4^\circ$ model. Dashed white lines indicate locations of the Equator, Drake Passage and southern tip of South Africa. }
\label{F_BD_dps025M}
\end{figure*}
The densest boundary surface waters are observed near Newfoundland ($40\times 10^3$km). This location corresponds to the end of upward-sloping isopycnals along the sloping western boundary; entering the Labrador Sea, more variable isopycnals occur due to strong air-sea interaction and deep convection. Once more, shallower remote contours at $43\times 10^3$km are rejected due to the large distances between reference and remote contours entering into Hudson Bay and the Arctic Ocean. In this region, we constrain boundary contours from entering Baffin Bay, forcing them to return along the west Greenland coast. Isopycnals along eastern Greenland ($44$-$45\times 10^3$km) and along the western side of the Reykjanes ridge are relatively smooth, with a gentle downward slope. Gradients in surface isopycnals (upper 200m) are smaller than those of deeper isopycnals, resulting in a large pool of waters (300-900m) with density of approximately 27.7kgm$^{\text{-3}}$ at a distance of $45\times 10^3$km, possibly attributed to Iceland-Scotland Overflow Waters flowing around the Reykjanes ridge.
Near Iceland the contour path heads southward and then northward along the eastern side of the Reykjanes ridge, crossing from the western to the eastern side of the Atlantic basin. Local bathymetry leads to rejection of points (eastern side of Reykjanes ridge) shown in Figure \ref{F_BD_n025M} prior to $50\times 10^3$km. In this region boundary paths for different depths have separated, leading to shallower contours being rejected. Variable surface isopycnals occur on the west coast of Ireland (distance $50\times10^3$km) due to ageostrophic processes, before an interval of flat isopycnals along the eastern Atlantic boundary, corresponding to the western coast of France, Portugal and north-east Africa.
A perturbation in the isopycnal structure is found near the Mediterranean outflow at $53\times 10^3$km in Figures \ref{F_BD_dps025M}(b,c); the Strait of Gibraltar appears as a thin vertical column of rejected points in shallow waters. At this location, there is a clear ``step'' in isopycnal depths (at intermediate depths, Figure \ref{F_BD_n025M}). Above 750m, isopycnal levels drop suddenly indicating the presence of lighter waters, whereas at depths between 900m and 1400m isopycnals become shallower, indicating the presence of denser (warmer, saltier) Mediterranean Overflow Waters (Figure \ref{F_BD_dps025M}(b,c)).
The eastern Atlantic boundary ($51$-$66\times 10^3$km) is similar to its Pacific counterpart, with flat boundary isopycnals. Crossing the Equator ($61\times 10^3$km), a region of light water is found near the surface, caused by warming and evaporation from surface buoyancy fluxes. Flat eastern boundary isopycnals persist on the sloping boundary for the majority of the East-African coastline (until $66\times 10^3$km). As the boundary path passes the Cape of Good Hope into the Indian Ocean, isopycnals slump due to the presence of warmer and saltier surface waters from the Madagascar and Agulhas currents.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_potD_GC3_025C_aj368_AtlExt_int.png}}
\caption[Time-average boundary potential densities for the first 100 years of the $1/4^\circ$ model for the upper 400m and all depths.]{Time-average boundary potential densities (referenced to the surface) for the first 100 years of the $1/4^\circ$ model for (a) the upper 400m and (b) all depths. Dashed white lines indicate the locations of the Equator, Drake Passage and southern tip of South Africa.}
\label{F_BD_p025M}
\end{figure*}
At approximately $68\times 10^3$km, a region of rejected points down to 2000m is caused by a sudden change in the reference contour pathway off the coast of Durban, South Africa. Surprisingly, boundary isopycnals along the east coast of Africa (western Indian Ocean) are relatively flat, passing through the Mozambique channel and northward past the Horn of Africa and the Gulf of Aden ($74\times 10^3$km). This flatness of isopycnals along the western coast of the Indian Ocean is interesting, and is addressed further in Section \ref{Bdry_WMec}.
To complement the along-boundary investigation of neutral densities above, we briefly consider potential density, referenced to the surface. Figure \ref{F_BD_p025M} shows the potential density equivalent of Figure \ref{F_BD_n025M}. Here again, there is evidence for flat isopycnals on eastern boundaries. However, isopycnals are generally less stable in comparison to the neutral density isopycnals, especially at depth and in the northern Atlantic ($30$-$50\times 10^3$km). For example, near the Mediterranean outflow ($53\times 10^3$km), Figure \ref{F_BD_p025M}(b) shows dense water up to 1500m, leading to a steep gradient in deep isopycnals.
\subsection{Boundary densities from model at different resolutions, reanalysis and climatology}
Having characterised the boundary density structure for the extended Atlantic section in the $1/4^\circ$ model, we now explore the equivalent sections for various other datasets for comparison.
Figure \ref{F_BD_Rest} shows along-boundary neutral densities for (a) low ($LL$, N96) and (b) medium ($LM$, N216) atmospheric resolutions of the $1^\circ$ model over the first 100 years. Panels (c) and (d) show the isopycnal structure for the GloSea5 reanalysis and GK climatology respectively. We note that, for the $1^\circ$ models and GK climatology, the pathway taken by the reference contour deviates near Madagascar ($65\times 10^3$km). In the $1/4^\circ$ model and GloSea5 datasets it follows the African coastline through the Mozambique channel, whereas in the coarser models and climatology it follows the east coast of Madagascar instead, resulting in a slightly longer section. Elsewhere the pathways taken by the reference contours are almost identical.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_neutD_GC3_LL_ar766_AtlExt_int_s.png}\\
\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_neutD_GC3_1C_ap466_AtlExt_int_s.png}\\
\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_neutD_GloSea5_AtlExt_int_s.png}\\
\includegraphics[width=\textwidth]{Fig_Bdry/BdryDensGrad_neutD_GKclim_AtlExt_int_s.png}
\caption[Neutral densities along sloping Atlantic boundary for the (a) $1^\circ$-$LL$ model, (b) $1^\circ$-$LM$ model, (c) GloSea5 reanalysis and (d) GK climatology for all depths.]{Neutral densities along sloping Atlantic boundary for the (a) $1^\circ$-$LL$ model, (b) $1^\circ$-$LM$ model, (c) GloSea5 reanalysis and (d) GK climatology for all depths. Model outputs are averaged over the first 100 years of each run, and GloSea5 over its 23 years of data. Dashed white lines indicate locations of the Equator, Drake Passage and southern tip of South Africa.}
\label{F_BD_Rest}
\end{figure*}
All panels of Figure \ref{F_BD_Rest}, with common colour scale, show similar along-boundary isopycnal structures. Two main differences are observed between the $1^\circ$ models (Panels (a,b)). First, the coarser resolution atmospheric model ($1^\circ$-$LL$, (a)) shows lighter surface density conditions throughout the basin. Secondly, the $1^\circ$-$LL$ also exhibits less dense waters near the surface in the North Atlantic (centred at $38\times 10^3$km at 1000m depth). For the $1/4^\circ$ and $1^\circ$-$LM$ models, and to a certain extent the GK climatology, deep isopycnals (depth 500-1500m near $36$-$45\times 10^3$km) slope upwards with increasing distance along the western Atlantic boundary; the $1^\circ$-$LL$ model and GloSea5 output show flatter isopycnals in this region. The latter suggest less dense waters near Newfoundland ($40\times 10^3$km), possibly attributed to changes in deep water formation and the resulting Labrador Current and DWBC.
Isopycnal structure for the GloSea5 dataset shown in Figure \ref{F_BD_Rest}(c) is most similar to the $1^\circ$-$LL$ model. In general however, at great depths, GloSea5 shows signs of denser AABW (following the Drake Passage at around $24\times 10^3$km), penetrating further up the slope than for any other dataset considered; the $1^\circ$-$LL$ model exhibits the lightest deep waters (2000m and deeper). However, generally all datasets considered show remarkably similar isopycnal structures.
An investigation of the influence of seasonality on along-boundary isopycnal structure was conducted using monthly GloSea5 data, averaged into seasons of three consecutive months starting in December (i.e. DJF, MAM, JJA and SON). Results (not shown) indicated little to no seasonal effects, except near the east coast of Greenland where the 27.6kgm$^{\text{-3}}$ isopycnal (located near 500m depth) was 100-200m shallower in atmospheric summer months; this variation is attributed to changes in the East Greenland Coastal Current and mixed layer depths.
Appendix \ref{App_Bdry_Tmp} investigates the variation of neutral boundary densities over different timescales. For the $1^\circ$ models, temporal variability outside the surface layer is largest in the North Atlantic, specifically at locations downstream of areas of deep water formation. Greater variability at depth in the $1^\circ$-$LM$ model than in the $1^\circ$-$LL$ model can likely be attributed to greater variability in mixed-layer depth due to the higher atmospheric resolution. The $1/4^\circ$ model shows interesting variability at 1200m depth, along both Atlantic boundaries, for all timescales considered. The along-boundary extent of these features might be attributed to internal or boundary waves. Remnants of this increased variability are propagated into the Pacific and Indian Oceans. In general, little difference in variability is found when comparing eastern and western boundary sections.
\subsection{Mechanisms setting isopycnal slopes on western boundaries}
\label{Bdry_WMec}
The previous section establishes general consensus regarding along-boundary isopycnal structure across models, reanalysis and climatology for the Atlantic and surrounding boundaries. The presence of flat eastern boundary isopycnals is demonstrated, which can be explained by the action of boundary waves produced in response to anomalies in density (present due to e.g. propagation around the basin, or wind- or local-buoyancy-forcing). The condition of no flow through the boundary prevents a pressure gradient forming; therefore, any anomaly is quickly spread along the boundary by coastally-trapped waves, leading to flat isopycnals. However, on western boundaries, isopycnal structures along the Atlantic and Indian basins show contrasting behaviours, considered further here.
On the western boundary, flows are not geostrophic due to friction and non-linearities; therefore, flat isopycnals are not to be expected there. However, in the presence of flat Atlantic eastern boundary isopycnals, a Coriolis parameter increasing in magnitude with distance from the equator, and assuming the thermal wind relationship dominates the MOC, western boundary isopycnals must slope upward towards the north to sustain an overturning circulation. Using the thermal wind relation (Equation \ref{TW}), an increasing Coriolis parameter with latitude requires that the zonal density gradient ($\partial \rho$/$\partial x$) increases in magnitude ($\rho_W>\rho_E$) with latitude in the northern hemisphere to maintain the thermal wind shear ($\partial v$/$\partial z$). Therefore, upward-sloping western boundary isopycnals are necessary to increase the east-west density gradient, and maintain the overturning circulation. However, from this argument, we cannot infer a causal relationship between the isopycnal structure and the overturning circulation.
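For reference, the thermal wind relation invoked here can be written (in a standard Boussinesq form, consistent with, though not necessarily identical in notation to, Equation \ref{TW}) as
\[
f\,\frac{\partial v}{\partial z} \;=\; -\,\frac{g}{\rho_0}\,\frac{\partial \rho}{\partial x},
\]
so that, with flat eastern boundary isopycnals, an increase in $|f|$ with latitude must be compensated by an increase in the magnitude of $\partial \rho / \partial x$ in order to maintain the same vertical shear $\partial v / \partial z$.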
The sloping western boundary isopycnals observed are consistent with the results outlined by \cite{MacCready1994}, who discuss the frictional decay of abyssal boundary currents (e.g. the DWBC), and present a theory for the observed longevity of these boundary currents along a sloping bathymetry. The majority of the current's energy is stored as potential energy via the upturn of isopycnals (normal to the boundary) on its slope. However, the rate of decay of the energy of the abyssal boundary current is dictated by the current's kinetic energy, which is considerably smaller than its potential energy. They develop a simple ($1\frac{1}{2}$-layer) mathematical model with a diffusion equation governing the DWBC vertical thickness, to predict the change in cross-sectional shape of the abyssal current during ``spin-down'' along the boundary. They find that, as the current spins down, kinetic energy is dissipated and replenished from a large pool of available potential energy, leading to the current's upper tip exhibiting down-slope movement, whilst the bottom of the current becomes thicker and wider. The resulting along-slope transport of the abyssal current is found to remain stable, even as its energy decreases. For a typical DWBC case, such an abyssal current may travel thousands of kilometres before being drained due to Ekman pumping. The Atlantic DWBC is known to flow southward from the Nordic Seas along the western side of the Atlantic down to the Southern Ocean, where it becomes a part of the ACC. Motivated by the findings of \cite{MacCready1994}, it is possible that the DWBC slowly migrates down-slope as it flows southward; if the density signature is maintained, this would result in a gradual deepening of isopycnals (southwards) along the boundary, and hence generate a gradient in along-boundary isopycnals.
Along the Indian Ocean's western boundary, in marked contrast to its Atlantic counterpart, flat isopycnals are present within a basin exhibiting a predominantly wind-driven overturning circulation (\citealt{Lee1998}), for which neither large East-West density gradients nor sloping western boundary isopycnals are necessary. Further, we note that the Indian Basin exhibits no DWBC; for a DWBC to be formed, a source of surface dense water is required at high latitudes in the basin (such as the Nordic Seas in the case of the Atlantic); however, no such source exists in the Indian Ocean (\citealt{Tamsitt2019}). The densest Indian Deep Water is formed diffusively within the interior from diapycnal mixing of abyssal waters (\citealt{Talley2013}). It is interesting that flat western boundary isopycnals coincide with the absence of a DWBC within the Indian Basin, in line with \cite{MacCready1994}.
A further point of interest is the presence of deep eastern boundary currents (DEBC) within the southern hemisphere of the Pacific, Atlantic and Indian Oceans. Their presence at depths between 2000m and 4000m would likely influence the along-boundary density structure. The dynamics of these DEBCs are not very well understood (\citealt{Tamsitt2019}), and current GCMs do not represent them well (\citealt{Yang2020a}). They are smaller than their western counterparts, but still play an important role in the overturning circulation (\citealt{Wunsch1983}, \citealt{Arhan2003}, \citealt{Tamsitt2017}). \cite{Yang2020a} proposes mechanisms for the dynamics of the Pacific DEBC using GCMs and idealised models. They show that DEBCs do not behave like typical boundary currents, or their stronger DWBC counterparts. They propose a framework to explain the Pacific DEBC as a manifestation of topographic stretching plus a dynamical mode, that decays away from the eastern boundary. This occurs only when temperature, diffusion, viscosity and stratification effects are considered together. However, their linearised framework does not work well for the Atlantic DEBC, due to its non-linear dynamics.
\section{Summary}
In this chapter, we have investigated along-boundary properties of an extended Atlantic basin. Latitude-depth contour plots for eastern and western boundary densities for the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ HadGEM-GC3.1 models reveal that density variation on the eastern boundary is less complex, with intervals of flat isopycnals outside the surface layer, particularly for latitudes south of $35^\circ$N. In contrast, western boundary isopycnals slope approximately linearly with latitude, above 2500m and south of $35^\circ$N. The regularity of density variation with latitude and depth is disrupted by features such as the Mediterranean Outflow (on the eastern boundary), and Gulf Stream separation (on the western boundary). Lighter surface densities are found on the western boundary due to strong boundary currents there. For depths between 200m and 1500m, eastern boundary densities are heavier than their western counterparts in the southern hemisphere, whereas the reverse is true in the northern hemisphere. These findings do not appear to be dependent on model spatial resolution.
We have introduced a boundary mapping algorithm which determines the location of sloping continental boundaries. The algorithm allows us to map and project the ocean's sloping boundaries in a two-dimensional (along-boundary distance, and depth) coordinate system. We have used the mapping algorithm to explore spatio-temporal variations in potential temperature, salinity, neutral and potential densities for (HadGEM-GC3.1) model simulations, (GloSea5) reanalysis and (Gouretski and Koltermann) climatology, centred on the Atlantic basin. The general consensus across these datasets is that the assumption of flat isopycnals along the eastern boundary is valid. Quantifying whether the isopycnal slope at a location is zero or not is a challenging problem. Slope estimates are uncertain because, locally along the boundary, we have only limited data with which to estimate them. Therefore, estimates of isopycnal gradients are either very noisy, or require a lot of smoothing using non-local data, casting doubt on the estimate of the local isopycnal gradient.
Along the eastern Pacific boundary, isopycnals are flat. At the Drake Passage and near the Malvinas islands, northward flow of colder denser Malvinas current and deep AABW result in a shallowing of isopycnals. Along the western Atlantic boundary, isopycnals slope gently upwards along the boundary in a northward direction. Discrepancies between boundary densities in different models are greatest at depth (approximately 1500m) in the North Atlantic, and may be related to specifics of deep water formation at high latitude within each model.
To the north of the Mediterranean outflow, the lack of a continuous boundary between the British Isles and continental Europe, and ageostrophic eastern boundary flows, result in non-flat isopycnals. For the eastern Atlantic boundary at latitudes south of the Mediterranean outflow, flat isopycnals occur. These are disturbed temporarily near the Cape of Good Hope, where isopycnals deepen due to the input of warm Indian Ocean waters. However, beyond South Africa on the western boundary of the Indian Ocean, isopycnals remain flat, in contrast to their behaviour on the western Atlantic boundary.
One explanation for the observed linear slope in Atlantic western boundary isopycnals is the presence of an overturning circulation in the basin, together with flat eastern boundary isopycnals and a Coriolis parameter increasing in magnitude with distance from the equator. Thermal wind balance then indicates that western boundary isopycnals must slope upwards to the north to maintain the overturning circulation. These results are consistent with the work of \cite{MacCready1994}, in which a southward-propagating DWBC gradually migrates down the western continental slope (normal to the boundary) with along-boundary distance, becoming thicker and wider at depth, leading to a slope in isopycnals.
We hypothesise that flat isopycnals on the western boundary of the Indian Ocean are explained by the mainly wind-driven overturning and weak geostrophic component there (\citealt{Lee1998}), making sloping (western boundary) isopycnals unnecessary. Further, the absence of a clear DWBC in the Indian basin, together with the presence of flat western boundary isopycnals there, is again consistent with the view set out by \cite{MacCready1994}. The observed flat isopycnal structure along the eastern Atlantic and Pacific boundaries is attributed to the no-normal-flow condition at the boundary, resulting in any anomalies on the eastern boundary being propagated quickly and distributed evenly along the boundary by coastally-trapped waves.
\chapter{Introduction} \label{TJ_Int}
This chapter provides a brief description of the Earth's ocean overturning circulation, its key features, driving mechanisms and spatio-temporal variability. It also outlines the theory of the dynamics of large-scale flow. It discusses the global importance of the Meridional Overturning Circulation (MOC), and summarises current literature on the Atlantic MOC (AMOC) and the Antarctic Circumpolar Current (ACC). To finish, thesis aims and structure are outlined.
\section{Overview}
\label{I_ovr}
The Earth's oceans cover more than two thirds of its surface. These reservoirs of liquid water help determine the Earth's climate, storing and transporting heat, carbon, salt and other nutrients, and maintaining a moist atmosphere compatible with life. Physical processes in the ocean and atmosphere occur on a wide range of spatial and temporal scales (from $10^{\text{-2}}$m to $10^{\text{6}}$m, and 1s to $10^8$s), including phase transitions between liquid water, ice and vapour. The oceans' capacity to store energy far exceeds that of the atmosphere (e.g. \citealt{Williams2011}).
A highly complex system of fluid circulation in the Earth's atmosphere and oceans maintains the pattern of heat flux resulting from the need to balance the budget between solar irradiation of the Earth's surface, which is mainly in the tropics, and radiative cooling at all latitudes. In general, heat is redistributed from the tropics to the poles. Overall heat transport in the Pacific and Indian Oceans is poleward, as might be expected; however, in the Atlantic Ocean, the strong overturning circulation there results in a northward overall ocean heat flux, even in the southern hemisphere (\citealt{Trenberth:2017}). This has profound effects on multiple features of global climate, including shifts in the Inter-Tropical Convergence Zone (ITCZ), Sahel and Indian Monsoons, Atlantic hurricanes, the El Niño Southern Oscillation (ENSO), and European, North American and Asian climates (\citealt{Zhang2019}). Characterising the MOC is therefore crucial to our understanding of global fluid circulation, wider climate effects and the resulting societal and economic impacts. The MOCs in each ocean basin are connected through the Antarctic Circumpolar Current, due to the zonally continuous Southern Ocean, which isolates Antarctica.
Anthropogenic release of carbon dioxide and other greenhouse gases into the atmosphere is currently destabilising the climate system and its overturning circulations. Approximately 93\% of the energy accumulated from anthropogenic global warming has been absorbed by the oceans, and widespread surface warming has been observed in recent decades (e.g. IPCC 2021, \citealt{Anderson2016}). As much as two-thirds of ocean heat uptake (OHU) since 1955 has occurred in the ocean's upper 700m (\citealt{Levitus2012}). The overturning circulation is critical to transferring heat into the deeper ocean. To increase our confidence in climate projections, we require a detailed understanding of the processes and mechanisms active in the climate system, including those responsible for ocean uptake of heat and carbon.
Our understanding of the global overturning is relatively poor, due to the sparsity of direct observations and the complexity of the Navier-Stokes equations defining its behaviour (\citealt{ROBINSON1959}). In particular, interpreting output from general circulation models (GCMs) can be problematic, and numerical solutions do not always provide intuitive understanding of the physical mechanisms at play. This has motivated the development of many simpler diagnostic models, to describe the characteristics of the MOC, underpinned by elementary physical relationships which govern the large-scale circulation. Near coastal boundaries, near the surface and near the bottom, these simple models break down. Nevertheless, diagnostic tools provide useful means of simplifying complex four-dimensional behaviour in space and time in terms of e.g. two-dimensional cross-sectional summaries by depth and latitude, longitude or along-boundary distance.
\subsection*{Meridional overturning circulation}
\label{I_moc}
\begin{figure}[h!]
\centerline{\includegraphics[width=13cm]{Fig_Intro//MOCs_schem.png}}
\caption[Schematic of the global overturning circulation and the processes in play (reproduced from \citealt{Kuhlbrodt2007}).]{Schematic of the global overturning circulation and the processes in play (reproduced from \citealt{Kuhlbrodt2007}). }
\label{MOC_schem}
\end{figure}
Figure \ref{MOC_schem} depicts a simplified global overturning circulation consisting of surface currents (red arrows) and deep currents (blue arrows). Overturning circulations are present in each ocean basin, and their characteristics are determined in part by the locations of boundary land masses and by bathymetry. In the Atlantic, we find warm, salty northward currents (e.g. the Gulf Stream) flowing towards high northern latitudes where large buoyancy loss to the atmosphere results in densification and downwelling (e.g. \citealt{Williams2011}); a colder, denser southward return flow is found at great depths (Deep Western Boundary Current, DWBC). To complete the cycle, the southward return flow is eventually upwelled via diffusion, diapycnal mixing and strong Ekman upwelling driven by Southern Ocean winds (e.g. \citealt{Toggweiler1995}). Overturning circulations for all ocean basins are connected via the ACC. \cite{Buckley2016} provide a review of observations, inferences and mechanisms for the AMOC. Due to the deficit of freshwater supply to the Atlantic relative to the Pacific, Atlantic salinity is higher and its sea water denser; hence North Atlantic Deep Water (NADW) formation can be observed at high northern latitudes. This is not the case in the North Pacific (\citealt{Warren1983}) or Indian Ocean (\citealt{Tamsitt2017}). Only at a few locations at high latitudes (e.g. Nordic Seas, Weddell Sea) do the surface waters become dense enough to sink to the deep ocean (yellow discs, Figure \ref{MOC_schem}): these regions of deep water formation are few and far between, but their role in the climate system is pivotal.
\section{Driving mechanisms of the AMOC and ACC}
\label{I_drv}
In broad terms, there are two main contributors to a sustained overturning circulation: \textbf{\textit{surface buoyancy forcing}} and \textbf{\textit{mechanical forcing}}. In the past, surface buoyancy was believed to be the sole driver of the overturning, which was hence described as the ``thermohaline circulation''. Our current understanding is different: we now know the overturning requires an input of mechanical energy to sustain a deep circulation (\citealt{Sandstrom1916}, \citealt{Munk1998}, \citealt{Yuan2005}). Today, the term ``thermohaline circulation'' refers not only to flows resulting from density variations, but also to those resulting from mechanically-driven mixing and surface water-mass transformations within the ocean.
The density of sea water is influenced by precipitation, evaporation, atmospheric heating and cooling, and ice melting (e.g. \citealt{Marshall2008}). Density increases with depth: a water parcel of lighter density within a denser water mass therefore tends to move upwards to the level of a water mass with similar density. Sea water of the same density can vary in temperature and salinity (\citealt{Emery2001}) due to their compensatory effects.
Due to the unequal warming of the Earth by the Sun, we find warmer, less dense waters at low latitudes and colder, denser waters at high latitudes. At low latitudes, diffusion and vertical mixing due to winds distribute the heat downwards slowly. In contrast, at high latitudes, cooling of surface waters is dominated by convection (\citealt{North2014}), a much faster process resulting in denser, colder waters being formed (via deep convection e.g. NADW), ultimately leading to downwelling. Formation of deep water masses occurs in narrow regions (e.g. Labrador, Nordic, Irminger, Weddell and Ross Seas) due to intense air-sea forcing (e.g. \citealt{Haine2008}). In addition, proximity to ice can drive deep water formation: salt rejection when ice is formed increases seawater density. In North Atlantic marginal seas, regions of deep convection share common features: weak mean interior flows, doming isopycnals and cyclonic boundary currents. Air-sea interaction results in heat loss in the interior of the basin, which is balanced by lateral eddy fluxes from the cyclonic boundary current. This results in denser waters along the boundary and a reduction in vertical shear, followed by downwelling near the boundary in order to maintain geostrophic balance. Therefore, lateral eddy fluxes connect the buoyancy loss and deep convection in the interior of the basin to the vertical transport near the boundary (\citealt{Johnson2019}). NADW comprises multiple water masses, but is typically described by two distinct layers: Lower NADW, consisting of colder GIN (Greenland, Iceland and Norway) Seas water, and warmer Upper NADW, formed in the Labrador Sea.
In the early 1900s, \cite{Sandstrom1916}, using only the laws of thermodynamics, came to the conclusion that a closed, steady, buoyancy-forced circulation within the ocean could only be sustained if the heating source was located deeper than the cooling source. Therefore, an ocean interacting with the atmosphere at the surface should only exhibit a very shallow overturning. Further work by \cite{Munk1998}, investigating the power ($\approx 2.1$TW) required to support mixing in the abyssal layers and hence return deep water to the surface, revealed the crucial role of wind and tides. The contributions of surface buoyancy forcing and geothermal heating through the sea floor were found to be negligible. \cite{Munk1998} state that an ocean, driven only by surface buoyancy forcing, would become a ``stagnant pool of cold salty water with equilibrium maintained by near-surface mixing with very weak convectively driven surface-intensified circulation'' within a few thousand years. We now appreciate (e.g. \citealt{Gnanadesikan1999}) that a strong overturning circulation requires deep stratification. Stratification is deepened by sources of mechanical energy including Southern Ocean winds and tidally-driven diapycnal mixing: in the absence of winds or tidal forcing, a shallow stratification and a weak overturning would be observed.
Wind stress on the ocean's surface results in fluid transport. In approximate terms, we can think of the ocean's near-surface mixed layer itself as a sequence of horizontal layers, each layer exerting a frictional force on its neighbours. Due to the Earth's rotation and the resulting Coriolis effect, the direction of fluid flow is deflected increasingly to the right with depth in the northern hemisphere, and to the left in the southern hemisphere, forming an Ekman spiral (\citealt{Ekman1905}). Net transport due to wind stress in the surface Ekman layer is orthogonal to the direction of the wind stress. Due to equatorward coastal winds, we find coastal upwelling along eastern boundaries, bringing nutrient-rich cold water to the surface.
Ekman transport is particularly important for the Southern Ocean (\citealt{Gill1968}, \cite{Toggweiler1995} and Figure \ref{SO_schem}). Strong westerly winds around Antarctica cause northward surface Ekman transport, outcropping isopycnals and large-scale upwelling of NADW (formed in the high-latitude North Atlantic) before the continental shelf (\citealt{Doos1994}, \citealt{Toggweiler1995}). Due to the absence of a meridional boundary in the upper ocean at Drake Passage latitudes, a zonal pressure gradient is only formed below sill depths (\citealt{R.Rintoul2001}), maintaining a meridional geostrophic flow which can balance the northward Ekman transport in the Southern Ocean. The resulting upwelling from depth occurs mainly along isopycnals that outcrop due to the strong westerly winds (e.g. \citealt{Toggweiler1995}, \citealt{Marshall2012}). Sloping isopycnals result in an eastward ACC. The upwelling of NADW here is crucial to sustaining the AMOC (e.g. \citealt{Visbeck2007}). \cite{Munk1951a} showed that, in order to balance the input of zonal momentum from wind stresses over the Southern Ocean, a sink of momentum into the solid Earth must be present, in the form of bottom form stress (i.e. action of pressure against bathymetric barriers on the sea floor). In shallower waters, where bathymetric barriers are not present, the role of transient eddies in creating southward isopycnal flux is crucial in balancing the northward Ekman transport. To maintain the heat balance of the Southern Ocean, a poleward transfer of heat must be present (\citealt{deSzoeke1981}); \cite{Bryden1979} showed the important role of mesoscale eddies in this respect.
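As an illustrative, order-of-magnitude calculation (the numbers used here are representative values chosen purely for illustration, not observational estimates), the northward Ekman transport per unit zonal distance is $\tau_x/(\rho_0 |f|)$, for zonal wind stress $\tau_x$, reference density $\rho_0$ and Coriolis parameter $f$. Taking $\tau_x \approx 0.15\,\text{N}\,\text{m}^{-2}$, $\rho_0 \approx 1025\,\text{kg}\,\text{m}^{-3}$ and $|f| \approx 1.2\times10^{-4}\,\text{s}^{-1}$ at Drake Passage latitudes, integrated around a circumpolar path of length $L \approx 2\times10^{7}$m, gives a net northward Ekman transport of order
\[
\frac{\tau_x L}{\rho_0 |f|} \approx \frac{0.15 \times 2\times10^{7}}{1025 \times 1.2\times10^{-4}}\,\text{m}^{3}\,\text{s}^{-1} \approx 2.4\times10^{7}\,\text{m}^{3}\,\text{s}^{-1},
\]
that is, of order $25$ Sverdrups ($1\,\text{Sv} = 10^{6}\,\text{m}^{3}\,\text{s}^{-1}$), comparable in magnitude to the rates of deep water formation it helps to balance.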
\begin{figure*}[ht]
\centerline{\includegraphics[width=13cm]{Fig_Intro/SouthernOcean.jpg}}
\caption{ Schematic of the processes at play in the Southern Ocean and along the coastline of Antarctica (reproduced from \citealt{NationalResearchCounci2011}).}
\label{SO_schem}
\end{figure*}
\cite{Munk1998} emphasise the role of tides in returning the NADW to the surface. The gravitational pull of the Sun and moon results in stratified water being moved up and down sloping topography, producing waves on density interfaces in the ocean's interior. If a small water parcel is displaced from its equilibrium position, it will experience a returning force downwards due to gravity or upwards due to buoyancy. If the water parcel overshoots its original equilibrium position, the disturbance forms an internal gravity wave. The breaking of these internal waves causes dissipation of energy in the surrounding water parcels and thus diapycnal mixing within the interior of the ocean (as illustrated in Figure \ref{MOC_Mch}, \citealt{Marshall2012}, \citealt{Talley:2013}). Winds can also produce internal waves.
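The strength of this restoring tendency is commonly quantified (a standard definition, stated here for reference) by the buoyancy frequency $N$, where
\[
N^2 = -\frac{g}{\rho_0}\frac{\partial \rho}{\partial z}
\]
for a reference density $\rho_0$; stably stratified water has $N^2 > 0$, and internal gravity waves in a rotating, stratified ocean oscillate at frequencies between the local inertial frequency $|f|$ and $N$.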
\begin{figure}[h!]
\centerline{\includegraphics[width=13cm]{Fig_Intro/AMOC_processes.png}}
\caption{Schematic illustrating the key processes which determine the strength, structure, and variability of the AMOC (reproduced from \citealt{Johnson2019}). }
\label{MOC_Mch}
\end{figure}
\cite{Polzin1997} showed that within the Brazilian basin, enhanced diapycnal mixing is observed over rough bathymetry. Recently, considerable doubt has been placed on the methodology set out by \cite{Munk:1966} and \cite{Munk1998}: there is evidence that waters are found to sink rather than rise in regions where mixing rate intensifies near the ocean floor (\citealt{Gargett:1984}, \citealt{Polzin1997}, \citealt{Simmons:2004}, \citealt{Kunze:2012}). \cite{Ferrari2016} argue instead that abyssal waters rise to the surface in ``narrow turbulent boundary layers'' which are found along abyssal ridges and continental margins. This upwelling is so intense that it overwhelms the diapycnal sinking found in the interior, resulting in net upward flow. They highlight the need for observations of these turbulent bottom boundary layers to characterise flow and mixing. Further, they question how well GCMs capture interior diapycnal downwelling, and upwelling near boundaries. In summary, therefore, the role of Southern Ocean wind and diapycnal mixing is vital in returning deep water to the surface (e.g. \citealt{Badin2013}). In addition, eddies, bottom friction and turbulence influence the dynamics of the large-scale overturning circulation.
Characteristics of the overturning circulation are related to numerous local oceanic phenomena, some of which are now outlined.
\textbf{Gyres} in subtropical and subpolar regions of ocean basins are caused by wind-stress curl. In the northern hemisphere we find easterlies (or trade winds) at equatorial latitudes, westerlies at mid latitudes and easterlies at high latitudes. In conjunction with the Coriolis force, this wind forcing pattern leads to gyres whose structure was explained by \cite{Sverdrup1947}. For example, for the subtropical gyre found in the North Atlantic, easterlies near the equator induce a northward Ekman transport, while westerlies at mid latitudes induce a southward Ekman transport, leading to Ekman convergence at intermediate latitudes. Water columns beneath the Ekman layer are squashed, and conservation of potential vorticity requires interior flow within the subtropical gyre to be equatorward. A balancing northward flow is achieved via narrow frictional or inertial western boundary currents (\citealt{Stommel1948}, e.g. the Gulf Stream in the North Atlantic).
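The interior gyre circulation described above can be summarised compactly by the Sverdrup relation. Stated here schematically rather than derived, the depth-integrated meridional transport per unit longitude, $V$, is set by the curl of the surface wind stress:
\[
\beta V = \frac{1}{\rho_0}\left(\frac{\partial \tau^{(y)}}{\partial x} - \frac{\partial \tau^{(x)}}{\partial y}\right),
\]
where $\tau^{(x)}$ and $\tau^{(y)}$ are the zonal and meridional components of the wind stress, $\beta$ is the meridional gradient of the Coriolis parameter and $\rho_0$ a reference density. Negative wind-stress curl over the subtropical North Atlantic therefore implies equatorward interior transport, with the compensating poleward return flow confined to the western boundary current.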
\textbf{Southward flow of NADW:} Greater hydrostatic pressure at high latitudes (e.g. \citealt{North2014}) results in a deep equatorward flow of polar waters. Due to the Earth's rotation, these deep currents flow along the western boundary of the basin (i.e. as the DWBC). \cite{Stommel1959, Stommel1959a} investigate western intensification of the abyssal ocean and find, for low Rossby number, that large-scale dynamics in the ocean's interior are governed by geostrophic balance (see Equation \ref{GB}). The interior of the abyssal ocean is found to flow poleward, implying the presence of the DWBC. However, analysis of float trajectories in the subpolar North Atlantic (e.g. \citealt{Lavender2000}, \citealt{Bower:2009, Bower2011}) shows a large proportion of floats are deflected into the interior of the basin, rather than following the DWBC. \cite{Bower:2009,Bower2011} show that almost 70\% of floats are deflected eastwards from the western boundary, following the path of the deep North Atlantic Current. Less than 10\% of floats were found to continue southwards with the DWBC. These interior pathways highlight the large exchange between the DWBC and the basin interior, especially on the boundary between subtropical and subpolar gyres. This can be attributed to the presence of coherent eddies, eddy-driven mean recirculation (\citealt{Lozier1997}), and mixing and stirring in the deep layers (\citealt{Lozier1997}, \citealt{Gary2011}, \citealt{Bower2019}).
South of $43^\circ$N, the majority of southward AMOC flow is confined to the DWBC. This persists southwards through the tropics and into the South Atlantic (\citealt{Gary2011}). Some recirculation of shallower waters is found where the DWBC flows under the Gulf Stream, and due to eddies in the tropics. Near the Victoria-Trinidade Ridge, we find two possible DWBC pathways: the main flow continues southwards along the South American coastline, with a secondary weaker flow proceeding into the interior of the South Atlantic, through the Mid Atlantic Ridge, and into the Cape Basin. We find the DWBC is eventually upwelled due to strong winds in the Southern Ocean (\citealt{Marshall2012}) and diapycnal mixing. This upwelling occurs within all basins, suggesting the AMOC and corresponding MOCs in other basins are connected via the Southern Ocean on centennial to millennial timescales (\citealt{Buckley2016}). \cite{Bower2019} provide a review of AMOC pathways.
\textbf{Antarctic Bottom Water} (AABW) is formed within the Weddell and Ross Seas (Figure~\ref{MOC_schem}) along Antarctica's continental shelf. The upwelling of NADW before the continental shelf results in buoyancy gain at the surface from the atmosphere, but within the Weddell and Ross Seas, ice-formation allows for shelf waters of sufficient salinity to sink to the greatest depths of the ocean. The AABW flows northwards in the abyss of all ocean basins, including into the North Atlantic, forming a second lower overturning cell. Diapycnal mixing between the AABW and NADW is important for the buoyancy budget of this abyssal cell.
\textbf{Ocean eddies} are formed mainly due to bathymetric obstacles (stationary eddies) or baroclinic instability (transient eddies, e.g. \citealt{Marshall2008}). Transient eddies are time-dependent, small-scale structures. Eddies are ubiquitous around the global ocean and are particularly evident in major ocean currents, including the Gulf Stream, the Agulhas Current, the Kuroshio Current, and the ACC. \cite{deSzoeke1981} showed the important role eddies play in the Southern Ocean, transporting heat poleward, and hence balancing the northward heat flux resulting from Ekman transport. Southern Ocean eddies are critical in opposing isopycnal steepening induced by Ekman upwelling around Antarctica.
\section{Dynamics of large-scale flow}
\label{I_dynL}
MOCs are large-scale oceanic flows, the behaviour of which is ultimately governed by the Navier-Stokes equations (e.g. \citealt{Williams2011}, \citealt{Vallis2019}). GCMs provide ever-improving approximate numerical solutions to the Navier-Stokes equations, but obtaining these numerical solutions is computationally demanding, and interpreting the huge quantities of data they produce is challenging. On an ocean basin scale, geostrophy and hydrostatic balance provide useful approximations to the Navier-Stokes equations appropriate for synoptic-scale and larger flows. Geostrophy and hydrostatic balance are obtained from the Navier-Stokes equations by retaining only the dominant Coriolis, gravity and pressure gradient terms. The Rossby number $R_o = U/(fL)$, for synoptic length $L$, speed $U$ and Coriolis parameter $f$, is $\ll 1$ for large-scale ocean flows, indicating the dominance of Coriolis forces.
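For representative basin-scale values (chosen here purely for illustration), $U \approx 0.1\,\text{m}\,\text{s}^{-1}$, $L \approx 10^{6}$m and $f \approx 10^{-4}\,\text{s}^{-1}$ give
\[
R_o = \frac{U}{fL} \approx \frac{0.1}{10^{-4} \times 10^{6}} = 10^{-3},
\]
confirming that the Coriolis and pressure gradient forces dominate the momentum balance away from boundaries, the equator, and the surface and bottom Ekman layers.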
Adopting a local Cartesian frame of reference with horizontal axes $x$ (eastwards) and $y$ (northwards), and $z$ axis vertically upwards from the ocean surface (at $z=0$), \textbf{\textit{hydrostatic balance}} (Equation \ref{HB}) describes a force balance between the acceleration due to gravity $g$, and the upward pressure gradient force per unit mass $\frac{1}{\rho}\frac{\partial P}{\partial z}$, for pressure $P$ and density $\rho$:
\begin{equation}
\frac{\partial P}{\partial z} = - \rho g.\label{HB}
\end{equation}
The local horizontal components of the Navier-Stokes equations simplify to a pair of equations known as \textbf{\textit{geostrophic balance}}. Horizontal pressure gradient forces are balanced by the Coriolis force on perpendicular horizontal geostrophic flows, where the components of flow velocity in the local Cartesian frame are ($u$, $v$, $w$):
\begin{equation}
\frac{1}{\rho} \frac{\partial P}{\partial x} = fv,
\label{GB}
\end{equation}
\begin{equation}
\frac{1}{\rho} \frac{\partial P}{\partial y} = - fu.
\end{equation}
The \textbf{\textit{thermal wind relation}} is obtained by substituting the hydrostatic balance (Equation \ref{HB}) into the vertical derivative of the geostrophic relation (Equation \ref{GB}). The meridional thermal wind relationship (Equation \ref{TW}) relates zonal density gradients to the vertical gradient of the northward velocity,
\begin{equation}
\frac{\partial v}{\partial z} = - \frac{g}{\rho_0 f} \frac{\partial \rho}{\partial x},
\label{TW}
\end{equation}
where $\rho$ is the in-situ density of sea water, and $\rho_0$ is a constant reference value, having assumed Boussinesq flow.
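For completeness, the substitution proceeds as follows (a standard manipulation, written out under the Boussinesq approximation in which $\rho$ is replaced by $\rho_0$ in the momentum balance): differentiating Equation \ref{GB} with respect to $z$, exchanging the order of differentiation and using hydrostatic balance (Equation \ref{HB}) gives
\[
f\frac{\partial v}{\partial z} = \frac{1}{\rho_0}\frac{\partial}{\partial x}\left(\frac{\partial P}{\partial z}\right) = -\frac{g}{\rho_0}\frac{\partial \rho}{\partial x},
\]
and dividing by $f$ recovers Equation \ref{TW}. The analogous manipulation for the zonal flow relates $\partial u/\partial z$ to meridional density gradients.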
The use of geostrophy throughout the basin to calculate the net meridional flow might seem simplistic, especially when the zonal integral must pass through a western boundary current where non-linear and ageostrophic terms may dominate. However, even in these boundary current settings, it is generally accepted that geostrophy dominates the cross-stream balance where the flow is parallel to the coastline, except in the surface Ekman layer. Issues may arise when currents are no longer along-shore, but these effects should be minor so long as they result from narrow currents (\citealt{Bingham2008}, \citealt{Bell2011}).
\section{The Atlantic Meridional Overturning Circulation (AMOC)}
\label{I_oAMOC}
\subsection{Quantifying the AMOC}
\label{I_qtfAMOC}
The AMOC is defined in terms of the Atlantic meridional overturning streamfunction or zonally-integrated northward volume transport:
\begin{equation}
\Psi(z;y) \equiv \int_{W(y)}^{E(y)} \int_{-H(x,y)}^{z} v(x,y,z') dz'dx,
\label{T}
\end{equation}
for meridional velocity $v(x,y,z)$, ocean depth $H(x,y)$, and western and eastern boundaries $W(y)$ and $E(y)$. The AMOC overturning streamfunction is therefore a function of ``latitude'' or meridional coordinate $y$ and depth $z$ (see Section \ref{S_App_DD} for more grid details), with units m$^{3}\text{s}^{\text{-1}}$, where $10^6\text{m}^{\text{3}}\text{s}^{\text{-1}}$ is defined as equal to a Sverdrup (Sv).
The strength of the AMOC for a given $y$ is commonly defined as the maximum of the overturning streamfunction over depth $z$
\begin{equation}
\Psi_{\max}(y) = \max_{z} \Psi(z;y).
\label{E_MOCStr}
\end{equation}
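In practice, the streamfunction and its maximum are estimated from gridded velocity fields by discretising Equations \ref{T} and \ref{E_MOCStr}. The short Python sketch below illustrates one possible implementation; the function names, array layout and the assumption of a regular grid with the deepest level stored first are illustrative choices, not a description of any particular model's diagnostics.
\begin{verbatim}
import numpy as np

def overturning_streamfunction(v, dx, dz):
    """Discrete estimate of Psi(z; y) from meridional velocity v.

    v  : array of shape (nz, ny, nx) in m/s, zero (or NaN) over land,
         with index 0 taken to be the deepest model level.
    dx : zonal grid spacing in m (scalar, or array of shape (nx,)).
    dz : layer thicknesses in m (array of shape (nz,)).
    Returns Psi in Sverdrups, shape (nz, ny).
    """
    zonal_integral = np.nansum(v * dx, axis=-1)             # integrate over x
    psi = np.cumsum(zonal_integral * dz[:, None], axis=0)   # integrate from -H up to z
    return psi / 1.0e6                                      # m^3/s -> Sv

def amoc_strength(psi):
    """Maximum of the streamfunction over depth, at each latitude."""
    return np.nanmax(psi, axis=0)
\end{verbatim}
Land masks, partial cells and curvilinear model grids complicate the calculation in practice, but its structure, a zonal integral followed by a cumulative vertical integral from the sea floor, is unchanged.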
\subsection{Observing the AMOC}
\label{I_obsAMOC}
One of the main issues facing physical oceanography generally, and a major constraint on our ability to better understand oceanic systems, is the sparsity of ocean observations in space and time. Long observational datasets are limited, and confined to popular shipping routes or coastal regions. Surface properties have been routinely recorded via satellite since the 1990s. Yet only from the early 2000s has continuous monitoring of the sub-surface limbs of the overturning been performed, following hydrographic studies revealing an apparent weakening AMOC (e.g. \citealt{Bryden2005}).
\cite{Marotzke1999} showed using the adjoint of the Massachusetts Institute of Technology general circulation model (MITgcm) that the dynamical sensitivity of heat transport variability is generally greatest to density perturbations near meridional boundaries of the basin, consistent with the idea that the net thermal wind integrated across a basin is controlled by boundary density variations. This suggests the possibility of ``boundary monitoring'' the AMOC. \cite{Hirschi2003} and \cite{Baehr2004} demonstrated the practical feasibility of using an array of sparse moorings to monitor the AMOC within two eddy-permitting numerical ocean models.
\subsubsection*{RAPID (2004-date)}
Studies such as those referenced above led to the deployment of the RAPID/MOCHA/WBTS (hereafter RAPID) mooring array in $2004$, at a latitude of 26.5$^\circ$N, to monitor zonally-integrated volume and heat transports. The main components of the estimated overturning streamfunction are: (a) geostrophic interior transport, (b) Ekman transport due to wind stresses, and (c) transport through the Florida Strait (\citealt{Cunningham2007}, \citealt{Kanzow2007}, \citealt{McCarthy2015, McCarthy2020}).
RAPID consists of a sequence of moorings concentrated on the Atlantic's eastern and western boundaries and on each side of the Mid Atlantic Ridge (MAR). Estimates of the geostrophic interior flow are made using thermal wind (Equation \ref{TW}) relative to a reference level ($z = -4740$m), based on work by \cite{Lee1998} on decomposing Indian Ocean overturning to investigate its variability. Estimates for the Florida Strait transport, the surface Ekman transport and the geostrophic interior transport are summed to estimate the total overturning (e.g. \citealt{Cunningham2007}). At 26.5$^\circ$N, the contribution of the overturning through the Florida Strait is estimated using a measurement of the voltage across a disused telephone cable, exploiting the fact that seawater is a conductor moving in the Earth's magnetic field, inducing a voltage across the cable. Usually it is assumed that the overturning streamfunction conserves volume; to achieve this, the overturning streamfunction estimate is adjusted by a uniform velocity field in the opposite direction across the section. AMOC strength (Equation \ref{E_MOCStr}) for the period 2004 to 2017 is estimated to be approximately $17$Sv (\citealt{Frajka-Williams2019}).
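Schematically (this summarises the decomposition described above, rather than reproducing the operational RAPID calculation), the estimated streamfunction at 26.5$^\circ$N can be written as the sum of four contributions,
\[
\Psi_{26.5^\circ \mathrm{N}}(z,t) \approx \Psi_{\mathrm{FS}}(z,t) + \Psi_{\mathrm{Ek}}(z,t) + \Psi_{\mathrm{int}}(z,t) + \Psi_{\mathrm{comp}}(z,t),
\]
where $\Psi_{\mathrm{FS}}$ is the cable-derived Florida Strait transport, $\Psi_{\mathrm{Ek}}$ the wind-derived Ekman transport, $\Psi_{\mathrm{int}}$ the interior transport obtained from thermal wind relative to the reference level, and $\Psi_{\mathrm{comp}}$ the contribution of the uniform compensating velocity that enforces zero net meridional transport across the section.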
\begin{figure*}[ht!]
\centerline{\includegraphics[width=13cm]{Fig_Intro/RAPID.jpg}}
\caption[ Schematic of the MOC observing system across 26.5$^\circ$N in the North Atlantic (reproduced from \citealt{Hirschi2003}).]{ Schematic of the MOC observing system across 26.5$^\circ$N in the North Atlantic. The MOC is estimated from the zonal wind stress and from vertical density profiles taken at different longitudes across the basin. \cite{Hirschi2003} assumed the transport through the Florida Strait is known ($v_F$). Knowing the wind stress allows the calculation of an Ekman velocity $v_{ek}$. East of the Florida Strait a depth-dependent velocity field $v_g$ is estimated using the thermal wind relation between adjacent density profiles. The meridional transport associated with $v_g$, $v_F$ and $v_{ek}$ is compensated by constant velocity corrections $v_c$ (for $v_g$) and $v_b$ (for $v_F$ and $v_{ek}$), in order to ensure zero net meridional transport contribution. Reproduced from \cite{Hirschi2003}.}
\label{Rapid}
\end{figure*}
Figure \ref{Rapid} shows the components contributing to the overturning at RAPID. Observations suggest that the major components of the circulation are the Gulf Stream, Ekman transport, southward flow of NADW and subtropical gyre, and northward flow of Antarctic Intermediate Water. Estimates for some of these quantities are provided by the AMOC timeseries (from www.rapid.ac.uk). The Gulf Stream contribution is found to be approximately 31Sv, Ekman transport around 4Sv and the upper mid-ocean contribution is around -18Sv.
\subsubsection*{SAMBA (2009-date)}
More recently, two additional cross-basin mooring arrays have been deployed in the South Atlantic (SAMBA) and the subpolar North Atlantic (OSNAP).
The SAMBA array is situated at approximately $34.5^\circ$S, to investigate the South Atlantic MOC, including features such as the Agulhas leakage and the Malvinas current. In contrast to RAPID, no telephone cable measurements are used; the monitoring relies on density profiles, PIES (pressure-inverted echo sounders) and CPIES (``sea'' PIES) providing baroclinic and barotropic estimates of the temporal variability of the overturning streamfunction. The use of PIES and CPIES at $1350$ dbar renders the zero net flow assumption obsolete, and hence volume compensation is not required. However, sensor drift is an issue for both PIES and CPIES measurements. Hence, capturing the time-average barotropic (depth-independent) component is difficult and estimates are unreliable. As a work-around, a time-mean reference velocity is acquired from the OFES (Ocean For the Earth Simulator) model, providing an estimate of the meridional volume transport in regions inshore of the $1,350$ dbar isobath (\citealt{Meinen2018}). A number of sensitivity studies at SAMBA have explored the temporal variability of each of the SAMBA measurement components (discussed in Section \ref{I_var}) due to a known ``leveling'' issue (\citealt{Watts1990}, \citealt{Donohue2010}) associated with the bottom pressure recorders. The monitoring array reports a maximum overturning streamfunction of the order of $15$Sv (\citealt{Frajka-Williams2019}).
\subsubsection*{OSNAP (2014-date)}
The OSNAP observing array was set up using similar principles to RAPID, to observe and quantify the subpolar North Atlantic overturning circulation. Further, it is used to explore overflow pathways, subtropical and subpolar connectivity and relate AMOC variability to deep water mass variability. It is composed of two legs, OSNAP West across the mouth of the Labrador Sea and OSNAP East across the Irminger and Iceland basins. Each leg consists of multiple moorings along the continental boundaries and Reykjanes ridge, carrying instruments including CTDs (conductivity, temperature and depth), current meters, thermistors, acoustic Doppler current profilers and moored profilers. Away from mooring arrays, where required, geostrophic velocities are estimated from temperature and salinity fields constructed from Argo profiles, gliders etc. (\citealt{Lozier2017} and \citealt{Li2017}).
Using density coordinates to better reflect the overturning (since much of the water mass transformation occurs in the horizontal circulation of the subpolar gyre), \cite{Lozier2019} show the importance of the regions north of the OSNAP East section to the overturning. Here, northward-flowing warm and salty Atlantic waters from the subtropics are replaced with colder, fresher southward-flowing waters moving along the western boundaries of the Iceland and Irminger basins (\citealt{Frajka-Williams2019}). The monitoring array observed a maximum overturning streamfunction of the order of $16.6$Sv (\citealt{Li2021}) over the period 2014 to 2018.
For time-mean overturning, OSNAP monitoring (\citealt{Lozier2019}, \citealt{Li2021}) has revealed that: (a) the Labrador Sea (OSNAP West) makes a minimal contribution to the subpolar overturning circulation defined in density space (2.6Sv), (b) westerly wind-induced Ekman transport contributions are small (-1.7Sv), and (c) the OSNAP East contribution to the overturning, at around 16.8Sv, is around 7 times larger than its western counterpart.
The sum of MOC estimates across OSNAP West and East cannot be used to estimate the total MOC, due to cancellations of northward and southward transports within the same density class; southward flow around eastern Greenland cancels with some of the northward flow along western Greenland. OSNAP East plays a dominant role for meridional heat transport, and OSNAP West dynamics are found to be important for freshwater transport, which could have a direct effect on salinity anomalies downstream. Specifically, OSNAP West is shown to provide half of the freshwater transport measured across the OSNAP array.
\subsubsection{Other monitoring}
RAPID, SAMBA and OSNAP arrays are part of a larger set of observational campaigns to investigate and quantify the ocean's overturning circulations (see \citealt{McCarthy2020} for overview of technologies and methods). The deployment of around 4000 Argo floats at any time has improved the spatial coverage of ocean hydrographic data (\citealt{Riser2016}). Each Argo float drifts around the world's oceans at around 1km depth, submerging approximately every 10 days to a depth of around 2km, before returning to the surface, to transmit a profile of temperature and salinity with depth via satellite. Argo floats leave the deep seas un-sampled. Newer floats have been tested which submerge to around $5$km, but these have issues in shallower waters.
Bottom pressure recorders and current meters can be used to obtain further information regarding the contribution of the depth-independent flow (or external-mode) component (e.g. \citealt{Donohue2016}). This is especially useful along the Atlantic's sloping western boundary where strong boundary currents are observed. The challenge using data from bottom pressure recorders is sensor drift, leading to inaccuracies and difficulty in estimating transport associated with the depth-independent flow component (\citealt{Watts1990}, \citealt{Hughes2013}).
\subsection{AMOC variability}
\label{I_var}
\subsubsection{Observational assessment of AMOC variability}
\label{I_var_obs}
The AMOC varies on timescales from days to millennia. The relatively short continuous timeseries available for the RAPID, SAMBA and OSNAP observing programmes have shed some light on short-term AMOC variability. For the subtropical Atlantic, RAPID has shown that wind stress on the ocean's surface dominates the seasonal variability of the AMOC (e.g. \citealt{Hirschi2007}, \citealt{Srokosz2015}) via processes such as Ekman transport, wind-stress curl impact on boundary densities, coastal upwelling and Sverdrup balance. Large seasonal fluctuations (4Sv to 34.9Sv) in the strength of the overturning circulation have been observed. Wind-stress curl on the eastern side of the basin is found to be particularly important on seasonal timescales (e.g. \citealt{Kanzow2010}, \citealt{Chidichimo:2010}).
On interannual timescales, a weakening of the AMOC from 2004 to 2012 is attributed to a strengthening southward flow in the upper mid ocean due to a stronger recirculation of the subtropical gyre (\citealt{Smeed2014}, \citealt{Srokosz2015}). \cite{Smeed2018} show that the AMOC at RAPID has been in a reduced state since 2008 relative to 2004-2008, concurrent with a northward shift and broadening of the Gulf Stream, and altered patterns in ocean heat content and sea surface temperature (SST). In conjunction with changes in air-sea fluxes above the western boundary, these suggest that the AMOC is a major factor in decadal scale North Atlantic variability. At OSNAP, \cite{Li2021} (and \citealt{Lozier2019}) reveal that overturning variability is dominated by the overturning in the eastern subpolar gyre, measured by OSNAP East, on monthly to interannual timescales. They find that 82\% of AMOC variance is explained by OSNAP East between 2014-18.
Low-frequency variability at RAPID is attributed to the geostrophic (boundary density) term. \cite{Elipot2014} find the variance of the western boundary density term to be significantly larger than that of its eastern counterpart, although theoretical arguments suggest eastern boundary density contribution should be largest (\citealt{Johnson2002, Johnson:2002b}). Boundary buoyancy anomalies may be fundamental to understanding mechanisms of long-timescale AMOC variability.
\cite{Meinen2018} discuss the variability found at SAMBA for the South Atlantic. High frequency (short-timescale) variability dominates, with largest contributions from the geostrophic relative velocity term (boundary density contribution) with both Ekman and geostrophic reference velocity (volume compensation term, equivalent to depth-independent term) playing secondary roles. Interestingly, whereas variability at RAPID is dominated by western boundary density variations, overturning variation at SAMBA is equally dependent on both boundaries. This reflects the important role of inter-ocean exchanges between the South Atlantic and both Indian and Pacific Oceans. Seasonal variation is dominated by semi-annual density variation near the eastern boundary. Interannual variations are equally dependent on density and pressure (depth-independent, external mode) terms. In years where density variation is key, the eastern boundary is found to dominate AMOC variability, in contrast to what is found at RAPID (\citealt{McCarthy2015}). For years of large pressure or depth-independent contributions, eastern and western pressure contributions are similar.
\subsubsection{Model-based assessment of AMOC variability}
\label{I_var_mod}
Prior to the observational data discussed in Sections \ref{I_obsAMOC} and \ref{I_var_obs} becoming available, estimates of AMOC strength and variability were mostly model-based, with the exception of limited AMOC estimates from hydrographic sections, giving snapshots in time (e.g. \citealt{Hall1982}). Model-based estimates provide a complete description of the AMOC in space and time. However, they are highly dependent on model-specific factors, including spatial resolution, and representations of small-scale processes such as overflows, convection, ocean eddies and mixing. Hence, accurate quantification of even the strength of the time-mean AMOC is problematic.
On short timescales there is general consensus that the Ekman component dominates AMOC model-based variability (e.g. \citealt{Hakkinen1999}, \citealt{Dong:2002}, \citealt{Xu2014}). \cite{Zhao:2014} and \cite{Pillar2016} also show that wind forcing dominates variability at $26.5^\circ$N.
At interannual and decadal timescales, the geostrophic component dominates variability (e.g. \citealt{Hirschi2007}, \citealt{Cabanes2008}). \cite{Bingham:2007} find that the AMOC is not coherent between subpolar and subtropical gyres; interannual variability dominates the subtropical gyre whereas decadal variability dominates the subpolar gyre (e.g. \citealt{Wunsch2013}). \cite{Buckley2016} note a lack of consensus regarding mechanisms for low frequency AMOC variability, and the concern that AMOC variability in model output is dependent on uncertain model and parameter specification. Further they observe that characterising AMOC variability on inter-annual to decadal timescales remains challenging, since observations to validate model-based estimates are lacking.
At decadal timescales, models show meridionally coherent modes of AMOC and Atlantic ocean heat transport (e.g. \citealt{Delworth1993}, \citealt{Delworth2000}, \citealt{Knight:2005}, \citealt{Danabasoglu2012}). \cite{Zhang:2010} show that meridionally coherent AMOC anomalies emerge from subpolar regions, as a result of time-variable buoyancy forcing (\citealt{Biastoch2008}, \citealt{Robson2012a}, \citealt{Yeager2014}).
There is no consensus from models regarding what sets the timescales dominating low frequency AMOC variability. However, numerous studies using idealised models (\citealt{Marshall2001a}, \citealt{Lee2010}) and GCMs (\citealt{Hirschi2007}, \citealt{Zanna2011,Zanna2012}, \citealt{Buckley2012}) suggest the dominant timescale of AMOC variability is determined by the time it takes for a baroclinic Rossby wave to propagate across the basin. Other studies examine the role of advective processes such as spin-up and spin-down of gyre circulation (\citealt{Delworth:1997}, \citealt{Dong2005}) and build-up of high- or low-density water in regions of deep water formation (\citealt{Dong2005}, \citealt{Msadek:2009}).
Many model studies support the idea of decadal AMOC variability being tied to regions of deep convection or deep water formation. \cite{Zhang:2010} and similar studies (e.g. \citealt{Delworth1993}, \citealt{Dong2005}, \citealt{Danabasoglu2008}, \citealt{Kwon:2012}, \citealt{Danabasoglu2012}, \citealt{Jackson2013}, \citealt{Roberts2013}) use multiple lagged correlation, to suggest that anomalies in regions of deep water formation lead AMOC anomalies. Other studies show that buoyancy forcing over regions of deep water formation dictates decadal AMOC variability (e.g. \citealt{Yeager:2012} for the Labrador Sea). \cite{Zhang:2005} and others suggest that large changes in the AMOC can be caused by disruption to deep water formation, or overflow waters (\citealt{Zhang:2011}). Hosing experiments suggest that excess freshwater input at high northern latitudes promotes AMOC weakening or collapse (\citealt{Zhang:2005}, \citealt{Zhang:2007}, \citealt{Jackson2015}).
\cite{Biastoch2008a} show a possible role of the Agulhas leakage in decadal AMOC variability, with anomalies in thermocline depth propagated across the South Atlantic via Rossby waves, and then northward along the western boundary via Kelvin waves.
Changes in Southern Ocean westerlies, which return NADW to the surface, could impact the ``pulling'' part of the AMOC (\citealt{Visbeck2007}). However, the local overturning response to changes in Southern Ocean wind-forcing appears to be small, due to eddy compensation (e.g. at equilibrium \citealt{Farneti2010a}, \citealt{Farneti2011}, \citealt{Gent2011}, \citealt{Gent2016}). The response time of the AMOC in the North Atlantic to changes in Southern Ocean winds is of the order of multiple decades or centuries, suggesting Southern Ocean winds are more likely to influence centennial rather than decadal AMOC variability (\citealt{Spence2009}, \citealt{Spooner2013}). \cite{Delworth2012} show that, in the CM2.1 model, AMOC centennial variability is tied to propagation of salinity anomalies from the Southern Ocean, northward through the basin.
Studies such as \cite{Lozier2010} and \cite{Lozier2012} dispute the importance of deep convection for decadal AMOC variability. \cite{Lozier2019} and \cite{Li2021} show that recent OSNAP observations disagree with inferences from modelling studies (e.g. \citealt{Buckley2012}, \citealt{Buckley2016}, \citealt{Thornalley2018}), which suggest that multi-annual to decadal AMOC variability is attributed to the export and propagation of density anomalies from the Labrador Sea, due to deep convection. \cite{Menary2020} find that density anomalies, advected from the eastern subpolar North Atlantic, dominate the density variability in the western boundary of the Labrador Sea. Therefore, density anomalies found in the Labrador Sea are likely to carry a signature of upstream forcing anomalies. Furthermore, other recent studies have suggested that the linkage between Labrador Sea convection and downstream AMOC variability can be explained by shared variability in response to the North Atlantic Oscillation and other atmospheric forcings (e.g. \citealt{Zou:2019,Zou2020}) rather than by the equatorward propagation of AMOC anomalies from the Labrador Sea.
It is widely accepted that western boundary buoyancy anomalies are central to understanding AMOC variability on decadal timescales. However, there is no consensus about driving mechanisms, as highlighted by the recent OSNAP observations. From earlier studies using idealised models (\citealt{Zanna2011}, \citealt{Buckley2012}) and GCMs (\citealt{Danabasoglu2008}, \citealt{Zhang:2008}, \citealt{Tulloch2012}), \cite{Zanna2012} suggests that the boundary between the subpolar and subtropical gyres can be thought of as the ``pacemaker'' for decadal AMOC variability, referred to by \cite{Buckley2016} as the ``transition zone''.
\cite{Buckley2016} review the processes which could form buoyancy anomalies in this region, including (a) local atmospheric forcing (e.g. \citealt{Frankignoul:1977}, \citealt{Buckley2014, Buckley2015}), (b) advection of buoyancy anomalies by mean currents (e.g. \citealt{Tulloch2012}, \citealt{Kwon:2012}), (c) buoyancy signals communicated via baroclinic Rossby waves (e.g. \citealt{Hirschi2007}, \citealt{Cabanes2008}, \citealt{Frankcombe2009}, \citealt{Zanna2011, Zanna2012}, \citealt{Buckley2012}), (d) gyre wobbles, and shifts in Gulf Stream path (e.g. \citealt{Marshall2001}, \citealt{Zhang:2008}), (e) changes in deep convection and water mass transformation (e.g. \citealt{Curry1998}, \citealt{Pena-Molino2011}, \citealt{VanSebille2011}) and (f) the role of salinity (\citealt{Delworth1993}, \citealt{Holliday2003}, \citealt{Dong2005}, \citealt{Tulloch2012}).
\subsection{Meridional propagation of western boundary buoyancy anomalies} \label{I_adjAMOC}
For meridionally-coherent AMOC anomalies to form, buoyancy anomalies must be propagated southwards along the western boundary. Two main mechanisms have been proposed, namely fast meridional propagation via boundary waves (e.g. \citealt{Kawase1987}, \citealt{Johnson:2002b}, \citealt{Schloesser2012}, \citealt{Marshall2013}) and slow advection by the DWBC or interior pathways (\citealt{Curry1998}, \citealt{Koltermann1999}, \citealt{VanSebille2011}, \citealt{Pena-Molino2011}).
Observational studies by \cite{Curry1998}, \cite{Koltermann1999}, \cite{VanSebille2011} and \cite{Pena-Molino2011} at locations downstream of the Labrador Sea (48$^\circ$N, 36$^\circ$N and 24$^\circ$N) generally find slow propagation of buoyancy anomalies, suggesting the advective pathway. However, the velocity inferred from the time lag corresponding to maximum anomaly correlation between locations suggests a slow propagation of the order of centimetres per second, slower than the speed of the DWBC. The lower velocity can be attributed to eddy-driven recirculation gyres generated by instabilities of the Gulf Stream and North Atlantic Current (\citealt{Lozier1997}, \citealt{Gary2011}).
Within a simple reduced-gravity model such as the one documented by \cite{Johnson2002}, a layer thickness anomaly produced at high latitudes due to e.g. thermohaline forcing, creates meridional velocities in the ocean's surface layers which are in geostrophic balance with a density, temperature or pressure anomaly. These anomalies are propagated southward along the western boundary via boundary waves. This process is fundamental to how the ocean adjusts to changes in its boundary conditions (\citealt{Wajsowicz1986}, \citealt{Kawase1987}). The amplitude of these waves reduces whilst nearing the equator in accordance with geostrophic balance (due to the diminishing Coriolis effect). The equator acts as a pseudo-barrier, due to the Coriolis force being zero there, forcing boundary waves to propagate along the equator and poleward along the eastern boundary, with waves distributing the anomaly evenly along that boundary. On the eastern boundary, a meridional pressure gradient can only exist if there is a geostrophic velocity into the boundary. In accordance with no normal flow through the boundary, there can therefore be no along-boundary gradient in pressure or density. This results in flat isopycnals along eastern boundaries, implying the stratification of the water column does not vary with latitude. Given that Rossby waves transmit eastern boundary information into the interior, the stratification in these simple models is quasi-uniform over much of the domain. This suggests that considerable information concerning the global overturning can be extracted from only a few stratification profiles, especially along the eastern boundary. The assumption of flat isopycnals along the eastern boundary has been widely used within diagnostic models (e.g. \citealt{Cessi2010}, \citealt{Nikurashin2011}, \citealt{Marshall2017}, \citealt{Nieves2018}). However, no attempt has been made to map the isopycnal structure along the boundary within GCMs or observationally.
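To make the eastern boundary argument explicit, consider (as a minimal illustration in the spirit of the reduced-gravity model discussed above) a single active upper layer of thickness $h$ and reduced gravity $g'$ overlying a quiescent abyss. The upper-layer geostrophic balance is
\[
fv = g'\frac{\partial h}{\partial x}, \qquad fu = -g'\frac{\partial h}{\partial y},
\]
so on a meridional eastern boundary the no normal flow condition $u=0$ requires $\partial h/\partial y = 0$ there: the layer thickness, and hence the depth of the interface (isopycnal), must be uniform along the boundary. Any anomaly reaching the eastern boundary is therefore spread evenly along it, consistent with the flat eastern-boundary isopycnals described above.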
Originally, Kelvin waves were thought to propagate the buoyancy anomaly. However, work by \cite{Marshall2013} showed that this is true only for short wave periods; for longer periods, the anomaly is propagated by long and short Rossby waves, both zonally and along the western and eastern boundaries respectively.
\subsection{Wider role of the AMOC}
\label{I_AMOCrole}
\subsubsection{Atlantic Multidecadal Variability}
\label{I_role_AMV}
On seasonal to decadal timescales, AMOC variations have been shown to influence North Atlantic SST (e.g. \citealt{Duchez2016}) and sea levels on the eastern U.S. coast (\citealt{Little2017}). Decadal variability in North Atlantic SST is quantified using the Atlantic Multidecadal Variability (AMV, or the Atlantic Multidecadal Oscillation). Studies suggest lagged correlations between low-frequency AMOC and SST (via AMV) in models (\citealt{Enfield:2001}, \citealt{Zhang2019}); however, the causal role of AMOC in creating these SST patterns is unclear. These SST changes directly influence temperatures in Northern America and Europe, as well as Atlantic hurricane activity and rainfall across Northern Africa and India. Models further suggest that the AMOC and AMV are coherent over multidecadal timescales (e.g. \citealt{Zhang:2007}). However, the AMOC's influence is not fully represented within models (\citealt{Zhang2019}), which fail to capture the observed AMV pattern under realistic external forcing (\citealt{Buckley2016}).
\cite{Kim2020} show ocean dynamical changes are key to driving the AMV, but there is debate about the impact of the AMOC on AMV. Some argue the AMV is a result of internal AMOC variability (e.g. \citealt{Enfield:2001}), whereas others argue the AMV is a result of the SST response to changes in radiative forcing (e.g. \citealt{Booth2012}, \citealt{Bellomo:2018}) or atmospheric forcing (e.g. \citealt{Clement2015,Clement2016}, \citealt{Cane:2017}); these mechanisms have however been disputed by other authors (e.g. \citealt{Zhang2017}, \citealt{Zhang:2013}, \citealt{Zhang:2016}, \citealt{Yan:2017}, \citealt{Yan2018a}). Other studies have proposed the AMOC-AMV relationship is a coupled system with a two-way feedback (e.g. \citealt{Fraser2021}). The high number of SST-based studies is at least in part the result of the availability of relatively long records, in comparison to other observables.
\subsubsection{AMOC influence on mean climate}
\label{I_role_clm}
Numerous studies have considered the relationship between the AMOC and Atlantic climate (e.g. \citealt{Cassou:2018}). The AMOC accounts for approximately 90\% of northward ocean heat transport within the Atlantic (\citealt{Johns:2011}), estimated to be approximately 1.3PW at RAPID. This has a substantial effect on winter temperatures in north-western Europe. Warm western boundary currents prevent excessive sea ice formation in the sub-polar North Atlantic, releasing heat into the atmosphere. The result is relatively warm conditions over the greater North Atlantic region compared to similar latitudes of the North Pacific (\citealt{Palter2015}).
From spring 2009 to spring 2010, the AMOC exhibited an approximate 30\% decline, partially due to a strong negative North Atlantic Oscillation impacting the wind field. This reduction resulted in a northward heat transport of only 0.9PW at RAPID, leading to an abrupt reduction of North Atlantic SST and a noticeably colder winter for the UK (e.g. \citealt{Srokosz2015}).
Since the AMOC transports heat across the equator, the northern hemisphere is slightly warmer than its southern counterpart. This uneven spatial distribution of heat results in the Inter Tropical Convergence Zone (ITCZ) being located to the north of the equator (e.g. \citealt{Marshall2014a}). The location of the ITCZ affects tropical rainfall and characteristic wet and dry seasons. A potential weakening in the AMOC could result in more Arctic sea ice and a colder Arctic, an equator-ward shift of the ITCZ, and weakened Asian and Indian monsoons; conversely a stronger AMOC could lead to less sea ice, warmer Arctic and a northward shift of the ITCZ (e.g. \citealt{Vellinga2002}).
In general, the AMOC influences the rate and location at which heat is taken up and released by the ocean (e.g. \citealt{Kostov2014}), the vertical distribution of heat, and the rise in sea level due to thermal expansion. The timescale of the ocean (and climate) response to increases in atmospheric $\text{CO}_2$ depends upon the rate of ocean heat uptake and the efficiency with which the ocean transports carbon and heat into the deep ocean. With increasing greenhouse gas emissions (\citealt{HANSEN1985}, \citealt{Takahashi2009}, \citealt{Perez2013}), the criticality of the AMOC's role in sequestering heat and carbon in the deep ocean increases.
\subsubsection{AMOC weakening}
As outlined in Section \ref{I_obsAMOC}, deployment of new monitoring technologies will improve the coverage of current and future observational data for the Atlantic Ocean in space and time. Together with better data-constrained models, this will improve our understanding of the AMOC. However, at the current time, our knowledge of the inherent longer-term natural variability of the AMOC, and the processes governing this variation, is rather limited. The need to consider and quantify the effects of climate change complicate things further.
There is general consensus between models that the AMOC will weaken in future (see Figure \ref{AMOC_slow}). \cite{Bryden2005} claimed evidence for a weakening of $30\%$ in the AMOC, based on data from 5 hydrographic sections measured over a period of 47 years. However, their study was strongly dependent on one AMOC estimate from 1957, and moreover the observed trend is thought to be biased due to aliasing of seasonal and higher-frequency variability (\citealt{Kanzow2010}).
The concern about a weakening overturning, and its climate and societal consequences, was one of the main motivations to establish the RAPID array. Some fear a complete shut-down in circulation is possible, but this is highly debated, and serves to emphasise the importance of improved understanding of the processes governing the ocean's response to different forcing mechanisms at all timescales.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Intro/AR6_IPCC.jpg}}
\caption[AMOC changes at $26^\circ$N as simulated by 27 models (reproduced from IPCC 2021 - Figure 6.8).]{AMOC changes at $26^\circ$N as simulated by 27 models. The dotted line shows the observation-based estimate at $26^\circ$N (\citealt{McCarthy2015}) and the thick beige/blue/red lines the multi-model ensemble mean. Values of AMOC maximum at $26^\circ$N (in units $10^6 \text{m}^3\text{s}^{\text{-1}}$) are shown in historical simulations (in most cases 1850–2005) followed for 2006–2100 by a) Representative Concentration Pathway (RCP)2.6 simulations and b) RCP8.5 simulations. (The value following ``RCP'' indicates a specific emissions scenario, e.g. RCP8.5 results in a net top-of-atmosphere radiative imbalance of 8.5 Wm$^{\text{-2}}$ by the year 2100.) In c) and d), the timeseries show the AMOC strength relative to the value during 2006–2015, a period over which observations are available. c) shows historical followed by RCP2.6 simulations and d) shows historical followed by RCP8.5 simulations. The 66\% and 100\% ranges of all-available CMIP5 simulations are shown in beige for historical, blue for the RCP2.6 scenario and red for the RCP8.5 scenario. Reproduced from IPCC (2021) - Figure 6.8.}
\label{AMOC_slow}
\end{figure*}
\cite{Rahmstorf2015}, \cite{Caesar2018} and \cite{Caesar2021} utilise proxies such as salinity, SST-based indices, sortable silt and $\delta^{\text{18}}$O to infer that the AMOC has weakened over the past century, and is currently at its weakest point in the last millennium. However, \cite{Worthington2021} reconstruct the AMOC timeseries for the past 30 years using an empirical model based on RAPID and hydrographic cruise data and find no signs of AMOC weakening. Similarly, \cite{Fraser2021} reconstruct the AMOC for the past 120 years using the Bernoulli inverse applied to the EN4.2.1 observational dataset to suggest no significant weakening trend.
Changes of the AMOC have been linked to paleo-climate shifts (\citealt{Broecker2003}), suggesting that abrupt changes in climate might have been caused by the AMOC switching on and off. Idealised models suggest the AMOC might also be bistable (e.g. \citealt{STOMMEL1961}), but this has yet to be shown with a state-of-the-art eddy-resolving GCM (\citealt{Mecking2016}, \citealt{Weijer2019a}). Whether or not the AMOC is in a bistable regime is difficult to say; possible deficiencies in the models such as inaccurate freshwater budgets, key to a bistable AMOC, have been highlighted (\citealt{Buckley2016}, \citealt{Weijer2019a}). \cite{Weijer2019a} provide a review of AMOC stability.
With an ever warming climate, and polar amplification, increased ice melting will lead to less dense surface waters at high northern latitudes and a reduction of deep convection in sinking regions. The lighter NADW formed may become too light to outcrop in the Southern Ocean (\citealt{Wolfe2010}, \citealt{Wolfe2014}) further weakening the overturning.
Freshwater fluxes at high latitudes had been thought to be key in understanding the slowing of the AMOC, but recent research has shown changes in air-sea heat fluxes and ocean temperature are primarily the cause (\citealt{Gregory2005}, \citealt{Weaver2007}, \citealt{Marshall2015}). Hosing experiments within GCMs have shown a weakening of the AMOC, and if the period of hosing is sufficient, a possible shutdown (\citealt{Jackson2018}); but no shutdown has been observed under typical global warming scenarios. \cite{Jackson2015} introduce large amounts of freshwater at high latitudes to simulate the impact of a collapsed AMOC within the HadGEM3 GCM. Notable impacts on the North Atlantic include stronger storm tracks, widespread cooling, greater sea-ice coverage and large changes in precipitation. For Europe, they find less summer precipitation, more precipitation as snow and less vegetation and crop productivity.
In summary, it is difficult to estimate trends (such as possible recent slowing) in AMOC from short periods of observation, since it is not possible to distinguish between the effects of climate change and those of inherent natural variability at different timescales. Model-based assessment, using models whose credibility is established by comparison with available observations, is also crucial.
\section{The Antarctic Circumpolar Current (ACC)}
\label{I_SO} \label{I_ACC}
\subsection{The Southern Ocean}
Southern Ocean dynamics play a critical role in maintaining the AMOC. High northern latitude deep water formation is largely balanced by the Ekman upwelling caused by Southern Ocean westerly winds (\citealt{Toggweiler1995}, \citealt{Kuhlbrodt2007}). The unblocked Drake Passage latitudes allow for inter-basin exchange of waters, nutrients and biogeochemical products (e.g. \citealt{Moon:2018}). \cite{Gnanadesikan1999} highlights the Southern Ocean's critical role in determining the structure of the global pycnocline, and consequent impacts on the ocean's ability to store carbon at depth. The upwelling of NADW in the Southern Ocean, and subsequent formation of Antarctic Intermediate Water and AABW, make it an ideal region for uptake of carbon into the ocean interior. The Southern Ocean is responsible for almost 40\% of the global ocean's anthropogenic carbon uptake (\citealt{Gruber2019}), and 75\% of the uptake of excess heat accumulated in the Earth system (\citealt{Frolicher2015}) since 1861.
The major large-scale features of Southern Ocean dynamics are the eastward ACC, and the Weddell and Ross gyres. Along the Antarctic boundary we find a reversal of wind stresses (easterlies) and a westward flowing Antarctic Slope Current (ASC). \cite{Thompson2018} review the properties of the Antarctic shelf to determine the dynamical processes that influence the local variability of the ASC, and cross-slope transport. They classify the Antarctic shelf into three categories: (a) fresh shelf, (b) dense shelf and (c) warm shelf (see Figure 2-4 of \citealt{Thompson2018}).
The ``warm'' shelf is characterised by a shallow layer (upper 200m) of cold surface waters, lying directly above warm waters. A warm shelf occurs typically along the West Antarctic Peninsula, where there are weak easterly winds, and no ASC (in fact, a reverse current is present). Typically, these are sites where Antarctic ice shelves are thinning rapidly (\citealt{Pritchard2012}). ``Fresh'' shelves are characterised by cold shelf waters, penetrating deeper than those on a warm shelf (over the upper 600m), adjacent to large ice shelves. East Antarctica, the western Amundsen Sea and eastern Ross Sea are regions of strong coastal easterly winds, giving rise to Ekman transport, leading to downwelling, and a barrier between cold and fresh shelf waters, and warm and salty offshore waters. In these regions, isopycnals perpendicular to the coast slump near the coast, forming a strong ASC. ``Dense'' shelves typically correspond to regions of deep water formation, such as the Weddell and Ross Seas and near the Adelie coast, and exhibit a sideways V-shape frontal structure of dense water. Typically, near the surface (the upper side of the V) waters are cold and fresh; the lower down-slope side consists of denser saline waters flowing down the continental shelf and slope. These regions are crucial to forming the densest water masses such as the AABW.
The ACC is the strongest oceanic current, flowing without restriction around Antarctica. Zonally unblocked latitudes are crucial for its formation and strength. An equator-to-pole temperature gradient and strong westerly winds act to steepen isopycnal gradients, leading to an eastward-flowing current. The majority of the ACC flow is concentrated into 4 fronts, namely the (a) subtropical front, (b) subantarctic front, (c) polar front, and (d) southern ACC front. The sheer size of the Southern Ocean, and its harsh conditions, make the ACC difficult to observe. The easiest monitoring location is at the Drake Passage, the point of narrowest constriction between South America and Antarctica. The driving mechanisms for the ACC, leading to tilted isopycnals with northward Ekman transport compensated by a deep geostrophic return flow and mesoscale eddies, have been discussed in Section \ref{I_drv}.
\subsection{Observing the ACC}
\label{I_obsACC}
The ACC was studied as long ago as the HMS Discovery expedition of $1929-1930$. Early estimates of the ACC transport varied greatly, in part due to the use of different reference depths for geostrophic calculations, and lack of knowledge of near-bottom current characteristics. \cite{Bryden1977} used a pilot array of 15 moorings in the Drake Passage to show that a level of no-motion near the sea-floor would not be appropriate, due to bottom currents of the order of 1-2 cm s$^{\text{-1}}$. This work informed the International Southern Ocean Studies (ISOS) programme in the late 1970s and early 1980s (\citealt{Whitworth1985}). ISOS utilised bottom-pressure gauges, dynamic-height moorings, current-meter moorings and hydrography (\citealt{Whitworth1982}, \citealt{Whitworth1983}, \citealt{Whitworth1985}), leading to significantly improved understanding of the Southern Ocean, and estimates of ACC transport of $134\text{Sv}$ with standard deviation $11.2$Sv. \cite{Cunningham2003} revisited the ISOS data, methodology and assumptions, estimating a mean ACC transport of $134$Sv with uncertainty of between $15$Sv and $27$Sv. They cite baroclinic variability as an important contributor to net ACC variability.
\cite{Meredith2011} reviewed estimates of the baroclinic transport through the Drake Passage using $15$ hydrographic cruises along the $SR1b$ section. They showed, for the period $1993-2009$ (including data from the World Ocean Circulation Experiment), that the baroclinic transport through the Drake Passage is $136.7\pm6.9\text{Sv}$, and remained relatively constant throughout the period of observation. \cite{Firing2011} and \cite{Renault2011} provide other reviews. \cite{Koenig2014} estimate a 20-year timeseries of total ACC transport using the DRAKE mooring array, and satellite data. Moored current meter data from 2006 to 2009, and satellite altimetry data from 1993 to 2012 are combined to create a look-up table of vertical velocity profiles to estimate transport. This method relies on a dependence of the vertical velocity structure on surface velocity and latitude. They find the full-depth transport to be $141\pm 2.7$Sv over the 20-year period, with a baroclinic component of 136Sv.
Using data collected from the $cDrake$ experiment, a combination of CPIES and bottom current and pressure meters at a location slightly west of $SR1b$, \cite{Chidichimo2014} show that the resultant average baroclinic transport through the passage is $127.7\pm8.1\text{Sv}$ between $2007-2011$. They again emphasise the relative stability of the year-on-year time-mean transport. A further study by \cite{Donohue2016}, based on the same $cDrake$ experiment, quotes an additional depth-independent or barotropic transport component of $45.6$Sv, calculated using bottom current recorders. The updated estimate of $173.3\pm10.7$Sv for the total baroclinic plus barotropic transport through the Drake Passage is approximately $30\%$ larger than the benchmark of $130$ to $140$Sv against which ACC transport in climate models is typically compared.
\subsection{ACC variability}
Meridional density gradients across the Southern Ocean drive the ACC. Therefore, any process which acts to steepen or flatten isopycnals can influence ACC transport. For example, all of freshening, surface heating, diapycnal mixing and changes in NADW properties influence the strength of the ACC. Many studies of ACC temporal variability have concentrated on the Drake Passage. The observational studies discussed in Section \ref{I_obsACC} suggest a large uncertainty in the estimate of total transport there. This is particularly true for the barotropic or depth-independent component of the ACC transport.
On short timescales, the barotropic component is found to be the dominant contributor to variability (\citealt{Whitworth1983}, \citealt{Hughes1999}). On sub-seasonal timescales, \cite{Hughes1999} find that ACC transport variability can be related to sea level near the Antarctic coast. They show, within a numerical model of the Southern Ocean, that ACC transport is closely tied to the wind field there. Stronger westerly winds lead to a stronger northward Ekman transport, and therefore a drop in sea level near Antarctica. These changes in wind field correlate with the Southern Annular Mode (SAM) index (e.g. \citealt{Meredith2004}), and influence ACC transport variability within models over a range of timescales (\citealt{Hughes2014}). The SAM is the dominant mode of atmospheric variability in the southern hemisphere, and therefore it is not surprising that it influences ACC transport variability. Others suggest possible secondary influences of the Madden-Julian Oscillation and the El Ni\~no--Southern Oscillation (e.g. \citealt{Matthews2004}, \citealt{Sallee2008}).
\cite{Thompson2002} observe an increasing SAM index since the 1970s in conjunction with stronger westerly winds. However, no observational evidence of a long-term trend in ACC transport is found to corroborate this (\citealt{Cunningham2003}, \citealt{Meredith2011}, \citealt{Koenig2014}, \citealt{Donohue2016}). \cite{Straub1993} and later \cite{Hallberg2001} hypothesise that this lack of dependence is due to eddy saturation. This implies that stronger wind-forcing increases mesoscale eddy energy, instead of changing the time-mean strength of the current, and hence the ACC is insensitive to changes in wind stress. The eddy saturation hypothesis is supported by model results (e.g. \citealt{Gnanadesikan2006}, \citealt{Munday2013}, \citealt{Abernathey2014}) and in line with surface observations by \cite{Hogg2015}. However, no direct ACC transport observations have yet verified this hypothesis.
The ACC is sparsely observed, and any potential monitoring programme would require high-frequency temporal sampling, to prevent aliasing (\citealt{Meredith2005}). \cite{Hughes2014} highlight the difficulty of resolving vertical structure which varies spatially and temporally. Once again, model-based studies would appear extremely useful in characterising ACC transport structure, and its longer-timescale variability, provided we have sufficient confidence in the model: good agreement between model output and observation (where available) is an essential first step.
\section{Thesis aims}
\label{I_aims}
This thesis seeks to explore the extent to which MOCs can be constrained by relatively easy-to-measure boundary information, and hence whether boundary information provides a basis for useful reconstruction of MOCs. The thesis aims to:
\begin{itemize}
\item Develop and evaluate a conceptual model of basin-wide AMOC using only boundary information. This diagnostic model represents the complex three-dimensional large-scale fluid structure in two dimensions, allowing easier interpretation of spatio-temporal patterns.
\item Quantify contributions of the various boundary components in the decomposition to the time-mean AMOC overturning streamfunction, and its variability on annual and longer timescales, as a function of latitude and depth.
\item Investigate biases in UK Met Office HadGEM-GC3.1 models for the ACC at the Drake Passage, using a similar decomposition diagnostic tailored to zonal flows.
\item Identify and physically interpret regions of flat and sloping along-boundary isopycnals in oceanic basins including the Atlantic and Antarctic.
\end{itemize}
\section{Thesis Structure}
\label{I_strct}
The layout of the thesis is as follows. Chapter \ref{TJ_TM} introduces a methodology to decompose the basin-wide AMOC overturning streamfunction into contributions from boundary densities, surface wind stresses and bottom velocities. The decomposition is applied to HadGEM-GC3.1 GCM output, to investigate the time-mean contributions of boundary components. Chapter \ref{TJ_Var} investigates the contributions of each boundary component to the variability of the AMOC overturning streamfunction. It includes analysis at RAPID and SAMBA locations, and quantification of the importance of the contributions from each boundary component across a range of timescales. Chapter \ref{TJ_Bdry} maps and characterises boundary density, potential temperature and salinity along sloping boundaries of the Atlantic basin and neighbouring coastlines, using a novel boundary mapping algorithm. Regions of along-boundary flat or linearly-sloping isopycnals are investigated, and possible underlying physical mechanisms discussed. Chapter \ref{TJ_ACC} extends the decomposition method of Chapter \ref{TJ_TM} to the ACC at the Drake Passage and elsewhere, to investigate biases within HadGEM-GC3.1 models at different spatial resolutions. Density structure is also investigated along the sloping boundaries of Antarctica. Chapter \ref{TJ_Smm} provides a summary of results, conclusions and opportunities for further work.
\chapter{Summary and conclusions} \label{TJ_Smm}
The Meridional Overturning Circulation (MOC) is a system of surface and deep ocean currents, eddy-driven circulations and regions of water mass transformation. The Atlantic MOC (AMOC) exchanges fluid (and hence heat, salt, nutrients, carbon, etc.) vertically and horizontally throughout the Atlantic basin, strongly influencing the climate of the northern hemisphere. Climate models suggest that changes to the AMOC are indicators and drivers of climate shifts of considerable societal concern. The Antarctic Circumpolar Current (ACC) is a dominant feature of the MOC, connecting all of the Earth's major ocean basins. Long-term variation of the ACC affects the rate of heat and carbon exchange between atmosphere and ocean in the Southern Ocean, with further implications for climate change.
The AMOC and ACC generated by general circulation models (GCMs) are often difficult to interpret, involving many interacting complex physical processes. Simple theoretical diagnostic models are therefore useful in explaining observed behaviour in terms of elementary mechanisms including thermal wind and geostrophy, and the characteristics of fundamental quantities such as density on the ocean boundary.
This thesis has considered ocean boundary properties and their impact on the AMOC and the ACC. The aim of the thesis has been to determine the extent to which large-scale ocean circulation features such as the AMOC and ACC can be reconstructed and understood in terms of properties measured on ocean boundaries. Specific goals are four-fold: (a) Development and application of a basin-wide AMOC decomposition diagnostic explaining the total meridional overturning streamfunction in space and time in terms of its constituent boundary components. (b) Quantification of basin-wide spatial and temporal structure and variability of boundary components, for the Atlantic basin. (c) Exploration of ACC transport and contributing components within a hierarchy of HadGEM-GC3.1 models using a decomposition diagnostic modified for zonal flow. (d) Characterisation of density, potential temperature and salinity along the sloping continental boundaries around the Atlantic and Antarctic continent, using a boundary mapping algorithm.
This chapter summarises the findings of the thesis in a broader oceanographic context, and outlines opportunities for further study. Due to the increasing prevalence of models with $1/4^\circ$ spatial resolution within the latest GCMs, we draw conclusions for the HadGEM-GC3.1 $1/4^\circ$ model in particular from across the thesis.
\section{Thesis summary}
Chapter \ref{TJ_TM} develops the MOC decomposition diagnostic. In the Atlantic, this diagnostic allows us to decompose the AMOC overturning streamfunction for a given latitude into five constituent boundary components: western and eastern boundary densities, ocean surface wind stress, meridional velocity in bottom cells, and meridional velocities in additional side-wall and partial cells. The decomposition diagnostic is evaluated using the HadGEM-GC3.1 model at $1^\circ$, $1/4^\circ$ and $1/12^\circ$ spatial resolutions. Estimates of the total overturning streamfunction in terms of the five components generally perform well throughout the Atlantic basin, and are able to capture the time-mean AMOC overturning streamfunction when all components are volume-compensated to form individual streamfunctions which close at the surface. Remarkably, boundary information is largely sufficient to reconstruct the time-mean overturning streamfunction and its temporal variability, without any information from the ocean's interior.
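As a schematic illustration (the notation here is deliberately simplified; Chapter \ref{TJ_TM} gives the exact discretised form used), the interior geostrophic contribution follows from zonally integrating the thermal wind relation between the western and eastern boundaries, so that only boundary densities appear:
\[
f\,\frac{\partial v}{\partial z} = -\frac{g}{\rho_0}\,\frac{\partial \rho}{\partial x}
\qquad\Longrightarrow\qquad
\int_{x_W}^{x_E}\frac{\partial v}{\partial z}\,\mathrm{d}x
= -\frac{g}{\rho_0 f}\Bigl[\rho_E(z)-\rho_W(z)\Bigr].
\]
Integrating this shear vertically, and adding the surface Ekman transport (proportional to $-\tau^x/\rho_0 f$) together with the depth-independent (bottom) and additional-cell flows, yields the reconstructed overturning streamfunction.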
Building on the existing literature, we show that western and eastern boundary density components generally dominate the basin-wide time-mean overturning streamfunction within the ocean's interior (c.f. \citealt{Buckley2016}, \citealt{McCarthy2020}). The Ekman (wind stress) term dominates near the surface (\citealt{Ekman1905}, \citealt{Baehr2004}). For latitudes $25-30^\circ$N, boundary currents through the Florida Straits result in a strong bottom (depth-independent) flow contribution (\citealt{Sime2006}). Contributions from additional incomplete cells adjacent to the bathymetry also make a sizeable contribution in regions of strong boundary currents. As model spatial resolution increases, the size and importance of contributions from additional cells reduces.
The theoretical capacity of the decomposition diagnostic framework is greater than that achieved in practice using time-average quantities which do not preserve non-linear dependencies, such as that between in-situ density and potential temperature (because of Jensen's inequality, \citealt{Jensen1906}). Calculations using instantaneous densities, averaged over time-steps, improve the reconstructed horizontal pressure gradient momentum trend, and hence the overturning streamfunction estimate. We conclude that decomposition of the overturning using time-average quantities should be undertaken with caution, and that models such as NEMO might usefully output correctly time-averaged densities in the future.
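A minimal numerical sketch of the issue is given below, using a hypothetical quadratic equation of state rather than the one employed by NEMO; the point is simply that a non-linear $\rho(T)$ evaluated at the time-averaged temperature differs from the time average of the instantaneous densities.
\begin{verbatim}
# Illustration (hypothetical equation of state) of why rho(mean(T))
# differs from mean(rho(T)) when rho depends non-linearly on T.
import numpy as np

rng = np.random.default_rng(1)
T = 10.0 + 2.0 * rng.standard_normal(1000)   # instantaneous temperatures

def rho(T):
    # illustrative quadratic equation of state, not the NEMO EOS
    return 1027.0 - 0.15 * (T - 10.0) - 0.005 * (T - 10.0) ** 2

rho_of_mean_T = rho(T.mean())       # density of time-averaged temperature
mean_of_rho = rho(T).mean()         # correctly time-averaged density
print(rho_of_mean_T - mean_of_rho)  # non-zero, by Jensen's inequality
\end{verbatim}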
In agreement with \cite{Hirschi2020}, we find that AMOC strength, quantified in terms of the maximum (with respect to depth) of the overturning streamfunction, increases with model spatial resolution for all latitudes south of $35^\circ$N. We also find that AMOC strength south of $35^\circ$N is approximately constant with latitude. The five constituent boundary components are also approximately independent of latitude in the southern hemisphere, but show large variability with latitude in the northern hemisphere. Boundary density terms no longer dominate for the latitude range $15-30^\circ$N. Further regional decomposition of density terms reveals a number of interesting features, including a large contribution to the maximum estimated overturning streamfunction from eastern boundary density in the Gulf of Mexico and Caribbean Sea, as well as the eastern boundary of the Bahamas and neighbouring islands.
Chapter \ref{TJ_Var} examines the spatial and temporal variability of the AMOC and the mechanisms by which boundary components contribute to it. Compared with data from RAPID and SAMBA arrays (\citealt{Frajka-Williams2019}), the decomposition diagnostic gives good estimates for the maximum overturning streamfunction. The overturning reconstructed from boundary properties captures the temporal variability of the full overturning well.
For the $1/4^\circ$ model, statistical analysis (using MLR-CAR, \citealt{Zuber2010}, combining elements of linear regression and correlation analysis) is used to estimate the contributions of boundary components to the total variance of the expected overturning streamfunction, for different timescales set using band-pass filtering. The largest variation is found at short timescales. At all timescales, we observe large contributions from the Ekman component in the upper 2500m for the northern hemisphere, attributable to variable winds. Contributions from the depth-independent component at mid-depths between $30^\circ$N and $45^\circ$N are large, possibly due to seamounts and the interaction of the Gulf Stream and DWBC. The influence of the western Atlantic boundary density dominates variation at mid-depths. Variation in the upper layers at Gulf of Mexico and Caribbean Sea latitudes is dominated by western boundary densities in those marginal basins. The Ekman contribution to the total variance of the overturning streamfunction dominates near the surface for all latitudes and timescales considered (e.g. \citealt{Srokosz2015}, \citealt{Buckley2016}). Due to the discretised model grid, the contribution of additional cells is always important locally near bathymetry. On shorter timescales (i.e. at higher frequency) in the southern hemisphere, western densities provide important contributions at mid-depths, with an influential Atlantic boundary depth-independent contribution at depth, attributed to variation of the Brazil current. In the northern hemisphere, mid-depth contributions are evenly distributed between the Ekman, western Atlantic boundary density and Atlantic boundary depth-independent contributions, with the latter prevalent throughout the fluid column near $35^\circ$N, possibly due to the separation of the Gulf Stream and its interaction with the DWBC. At longer timescales, we find, in the southern hemisphere, that western Atlantic boundary densities assume increasing importance. The Atlantic boundary depth-independent contribution remains influential at depth. At high northern latitudes, Greenland boundary densities make larger contributions to the total variance.
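For concreteness, the sketch below illustrates the kind of timescale-dependent attribution described above, applying a Butterworth band-pass filter to synthetic monthly series and regressing the filtered overturning onto filtered components. The thesis itself uses the MLR-CAR estimator of \cite{Zuber2010} rather than ordinary least squares, so the example is purely illustrative; all series and parameters are placeholders.
\begin{verbatim}
# Band-pass filter a monthly overturning anomaly series, then regress it
# onto two (placeholder) boundary components within the same band.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, dt_years, t_short, t_long, order=4):
    # retain variability with periods between t_short and t_long (years)
    nyq = 0.5 / dt_years
    b, a = butter(order, [1.0 / t_long / nyq, 1.0 / t_short / nyq],
                  btype="bandpass")
    return filtfilt(b, a, x)

n = 12 * 200                                  # 200 years of monthly data
t = np.arange(n) / 12.0
rng = np.random.default_rng(0)
west = np.sin(2 * np.pi * t / 8.0)            # placeholder component series
ekman = 0.3 * rng.standard_normal(n)
psi = west + ekman + 0.3 * rng.standard_normal(n)

psi_f = bandpass(psi, 1.0 / 12.0, 4.0, 20.0)  # 4-20 year band
X = np.column_stack([bandpass(west, 1.0 / 12.0, 4.0, 20.0),
                     bandpass(ekman, 1.0 / 12.0, 4.0, 20.0)])
coef, *_ = np.linalg.lstsq(X, psi_f, rcond=None)
print(coef)                                   # in-band regression weights
\end{verbatim}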
Corresponding analysis for the $1^\circ$ and $1/12^\circ$ models reveals similar contributions of boundary components. However, with increasing model resolution, the total variance of the overturning increases. This is particularly evident near $37^\circ$N for the $1/12^\circ$ model due to better resolution of the New England and Corner Rise seamount chains.
In Chapter \ref{TJ_Bdry}, along-boundary properties for the Atlantic and surrounding basins are considered in detail. Using a rudimentary analysis in latitude-depth space for $1^\circ$, $1/4^\circ$ and $1/12^\circ$ HadGEM-GC3.1 model output, flat eastern boundary isopycnals are observed to the south of $35^\circ$N at all model resolutions, as commonly assumed within reduced-gravity models, supporting the idea that fast boundary wave propagation removes density anomalies (\citealt{Johnson2002}).
Motivated by \cite{Hughes2018}, a boundary tracking algorithm is developed to pinpoint the location of sloping continental boundaries across multiple depths. The algorithm produces novel descriptions of neutral and potential densities, potential temperature and salinity, centred on the Atlantic basin from HadGEM-GC3.1 model, GloSea5 reanalysis and \cite{GouretskiViktorandKoltermann2004} climatology. There is general agreement across all sources for the presence of flat isopycnals along the eastern Pacific boundary, along the eastern Atlantic boundary south of the Mediterranean, and along the western boundary of the Indian Ocean. Isopycnals whose depth varies linearly with along-boundary distance are only found along the western boundary of the Atlantic basin up to the Labrador Sea; we find some intervals of steeper isopycnal gradients with along-boundary distance e.g. in the vicinity of Gulf Stream separation near Cape Hatteras. Discrepancies between the boundary density structure in different models are greatest at depth in the North Atlantic, due potentially to differences in model descriptions of deep water formation at high latitudes.
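At its core, the mapping algorithm identifies, at each depth level, the wet cells in contact with land, and then orders them into a continuous along-boundary pathway. A highly reduced sketch of the first step is given below (the ordering logic, and the handling of islands and straits, is omitted):
\begin{verbatim}
# Flag wet cells adjacent to land in any of the four grid directions,
# for a single depth level of a (toy) land/ocean mask.
import numpy as np

def boundary_cells(wet):
    # wet: 2-D boolean array (True = ocean) at one depth level
    land = ~wet
    touches_land = np.zeros_like(wet)        # boolean, initially all False
    touches_land[1:, :] |= land[:-1, :]
    touches_land[:-1, :] |= land[1:, :]
    touches_land[:, 1:] |= land[:, :-1]
    touches_land[:, :-1] |= land[:, 1:]
    return wet & touches_land

wet = np.ones((10, 10), dtype=bool)          # toy domain: ocean ...
wet[3:7, 3:7] = False                        # ... with a square island
print(np.argwhere(boundary_cells(wet)))      # indices of boundary cells
\end{verbatim}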
Given that eastern boundary isopycnals are flat, that the thermal wind relationship dominates the MOC, and given that the Coriolis parameter $f$ varies with latitude, isopycnals on the western boundary of the Atlantic must slope to maintain an overturning circulation. The linear slope of isopycnals on the western boundary is consistent with down-slope migration of southward-propagating DWBC proposed by \cite{MacCready1994}. The absence of a similar mechanism in the Indian Ocean is a possible cause for flat isopycnals on the western boundary there.
Chapter \ref{TJ_ACC} shows that estimates for the magnitude of ACC transport at the Drake Passage in HadGEM-GC3.1 models at different resolutions do not agree, and all underestimate the $173$Sv observed by \cite{Donohue2016}, which includes a depth-independent contribution of 45Sv. ACC strengths of 150Sv, 60Sv and 120Sv are found in the $1^\circ$, $1/4^\circ$ and $1/12^\circ$ resolution simulations at steady state. Further, in the $1/4^\circ$ model run, the ACC transport declines rapidly in the first 40 years. These issues reflect a wider range of deficiencies in the HadGEM-GC3.1 description of Southern Ocean dynamics. The UK Met Office is interested in understanding and interpreting these deficiencies. As part of this effort, the decomposition diagnostic for the overturning streamfunction, developed in Chapter \ref{TJ_TM} is modified to decompose the zonal cumulative ACC transport at the Drake Passage, reported in Chapter \ref{TJ_ACC}. The total ACC transport is described in terms of boundary properties and an additional $\beta$ term arising from a meridionally-varying Coriolis parameter. Decomposition-based estimates for the total ACC transport perform well for the Drake Passage within the $1^\circ$ and $1/4^\circ$ models. Discrepancies at $1/12^\circ$ resolution are attributed to inaccurate density information within localised trenches, only resolved at this resolution; an improved estimate is obtained using a simplified boundary description at the Drake Passage.
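Schematically (again suppressing the discretisation and compensation details of Chapter \ref{TJ_ACC}), the zonal geostrophic transport across a meridional section between $y_S$ and $y_N$ can be written in terms of boundary pressures plus a term proportional to $\beta=\mathrm{d}f/\mathrm{d}y$:
\[
fu=-\frac{1}{\rho_0}\frac{\partial p}{\partial y}
\qquad\Longrightarrow\qquad
\int_{y_S}^{y_N} u\,\mathrm{d}y
= -\frac{1}{\rho_0}\left[\frac{p}{f}\right]_{y_S}^{y_N}
-\frac{1}{\rho_0}\int_{y_S}^{y_N}\frac{\beta}{f^{2}}\,p\,\mathrm{d}y .
\]
For a narrow section such as the Drake Passage the final ($\beta$) term is small, and the transport is controlled by pressures, and hence densities and bottom pressures, on the northern and southern boundaries.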
As the $1/4^\circ$ model spins up, lighter densities and freshening develop near the southern boundary, causing isopycnals to slump towards Antarctica rather than outcrop, strengthening the local along-boundary near-surface ASC return flow. The southern density contribution to the ACC transport increases by 70Sv within the first 40 years, explaining the initial weakening of ACC transport observed. There is an increase in SSH in time, from an initial negative value, around the entire Antarctic coastline, potentially due to large-scale coastal sea-ice melt or changes in Weddell Gyre leakage at this resolution (\citealt{Meijers2016}).
Density characteristics along the sloping boundary of Antarctica vary with choice of model resolution, reanalysis or climatology dataset, making precise characterisation of along-boundary density structure problematic. Nevertheless, along-boundary neutral densities from the HadGEM-GC3.1 $1^\circ$-$LM$ model (at medium atmospheric resolution) agree with those from the GloSea5 reanalysis dataset. Given that the $1^\circ$-$LM$ model also reproduces a realistic ACC transport, we might infer that these estimates are therefore more reliable than others. Two distinct regions of deep water formation are present in the Weddell and Ross Seas, with dense surface water throughout the year. Regions of deep water formation in the eastern Ross Sea and on the Adelie coast show seasonal variation, with dense water throughout the fluid column from June to December.
Mapping of Antarctic boundary densities for $1^\circ$-$LM$ and GloSea5 indicates flat isopycnals along the western Antarctic Peninsula, and sloping isopycnals along East Antarctica. The latter could be explained by a westward Antarctic Slope Current migrating down the boundary, preserving kinetic energy at the expense of its potential energy (\citealt{MacCready1994}). Along the western Antarctic Peninsula, no Antarctic Slope Current is present.
\subsection*{Summary of $1/4^\circ$ HadGEM-GC3.1 model issues}
The HadGEM-GC3.1 $1/4^\circ$ model with medium atmospheric resolution (N216) is part of the next generation of UK Met Office GCMs (HighResMIP, \citealt{Haarsma2015}). The spatial resolution typically used for current CMIP5 and CMIP6 models is $1^\circ$, with $1/4^\circ$ resolution becoming more popular. It is interesting therefore to summarise the findings of this D.Phil. thesis regarding the characteristics of the $1/4^\circ$ model, in studying the AMOC, ACC and continental along-boundary properties.
The $1^\circ$-$LM$ model provides estimates of both AMOC and ACC strengths which agree relatively well with observations. Moreover, the model provides representations of boundary densities which compare well with GloSea5 reanalysis for the Atlantic and Antarctic coastlines. With increasing model spatial resolution at $1/4^\circ$, we might expect similarly good or even better correspondence.
In the Atlantic, the time-mean latitude-depth structure and magnitude of the overturning streamfunction for the $1/4^\circ$ model shows good agreement with time-mean observations at RAPID and SAMBA arrays. Along-boundary potential temperature, salinity and density for the Atlantic and surrounding boundaries are in good agreement with estimates from models at other spatial resolutions, reanalysis and climatology. However a weaker AABW signal is observed near the Drake Passage, especially later in the model run of 657 years.
Fluctuations in the depth-independent and thermal wind (boundary density) components for the $1/4^\circ$ model at SAMBA reveal that these components are dependent on the Brazil-Malvinas confluence (BMC). These fluctuations appear since the BMC is located almost $4^\circ$ further north than expected (\citealt{Goni2011}), suggesting either a weaker Brazil or stronger Malvinas current within the model.
However, ACC transport through the Drake Passage shows large weakening throughout the $1/4^\circ$ model run, stabilising at 60Sv, only one third of the expected value reported by \cite{Donohue2016}. Similar results are found for other meridional sections across the ACC. The weakened ACC is due to a freshening of the southern boundary and a slumping of isopycnals, resulting in a stronger reverse flow (ASC) along the boundary. Preliminary analysis of local sea surface height and barotropic streamfunction supports fresher shelf waters (less dense) and considerably larger Weddell and Ross Sea gyres, respectively.
A possible mechanism for the strengthened Malvinas current is the northward shift of the main ACC pathway, caused by a northward shift of the wind-stress curl and westerly winds. These changes in wind-stress patterns could result in stronger spin-up of gyres, and weaken or even reverse the shelf or slope current through the Drake Passage.
Further, along-boundary investigations for the sloping Antarctic coastline reveal considerably lighter densities compared to the GloSea5 reanalysis data and model output at $1^\circ$. Fresher surface water suggests excessive freshwater flux. As a result, little or no surface dense water formation is observed in the Weddell and Ross Seas. This model simulation also lacks along-boundary down-slope flow of dense water. Together, this suggests a lack of realistic deep water formation, and AABW supply is impaired. This effect is also suggested by the Atlantic along-boundary isopycnal structure in Chapter \ref{TJ_Bdry}: AABW signal to the East of the Drake Passage is reduced compared to other models, reanalysis and climatology datasets. A lightening with time of bottom densities in the Brazil-Argentine basin, coupled to increased decadal and multi-decadal variability of bottom densities there, suggests upstream impairment of deep water formation near the Antarctic coast.
Drawing points from the previous paragraphs together, it is interesting that good agreement is found between observations and model-based estimates of AMOC strength for the $1/4^\circ$ model. Moreover, good agreement is also found for the $1^\circ$-$LM$ and $1/12^\circ$ models. These results might be expected for GCMs calibrated to AMOC strength. However, agreement between observations and model-based estimates of ACC transport is poor at $1/4^\circ$ in particular. The model also provides a poor representation of sloping Antarctic boundary densities, a significant freshening of the Antarctic shelf, slumping rather than outcropping of isopycnals and a stronger ASC as a result. Somewhat surprisingly, this suggests that adequate representation of these Southern Ocean circulation characteristics is not necessary for reasonable representation of Atlantic circulation in the $1/4^\circ$ model, and perhaps more generally.
\section{Discussion}
\label{Smm_Dsc}
Results in Chapters \ref{TJ_TM} and \ref{TJ_Var} suggest that the AMOC decomposition diagnostic provides a useful means to quantify the Atlantic overturning streamfunction and its variability, using only information gathered from boundaries. From Chapter \ref{TJ_Bdry} we find that the structure of western and eastern boundary densities can be represented simply in terms of flat or linearly sloping isopycnals with along-boundary distance. Together, these results suggest an improved method for monitoring basin-wide AMOC, using a small set of density profiles at carefully-chosen locations on the western and eastern boundaries.
Observations at these locations over an extended period would allow the characterisation of basin-wide AMOC variation on longer timescales. Analysis of HadGEM-GC3.1 model output here suggests that, on longer timescales, in accordance with the literature, the geostrophic component becomes ever more important in explaining AMOC variability. Further, on decadal and longer timescales, the MLR-CAR model shows that western boundary densities dominate the variability, but that there is also evidence for an increasing contribution from the eastern boundary.
Other investigations using various forms of AMOC decomposition (e.g. \citealt{Waldman2021}) for long timescales are based on time-average GCM outputs such as potential temperature and salinity. Chapter \ref{TJ_TM} highlights deficiencies in this type of analysis which occur when non-linear relationships between properties, such as density and potential temperature, are not appropriately accounted for. This issue could be resolved in a relatively straightforward manner, were correctly time-averaged densities available as primary outputs from GCMs such as HadGEM-GC3.1. Chapter \ref{TJ_Var} shows that total AMOC variability reduces with increasing timescale, therefore becoming harder to diagnose. As a result, the impact of errors incurred due to incorrect calculation of time-average densities might increase with increasing timescale.
The variation of neutral density with depth and latitude along both eastern and western boundaries might be exploited for paleoclimate studies. Here, along-boundary observations are sparse, typically of density (or surrogate properties) for a limited range of depths along the sloping boundary at a given latitude, as opposed to full depth density-profiles. Nevertheless, such data at multiple locations along the boundary may be sufficient to construct a simple piecewise-linear model for boundary density structure, and provide a basis for AMOC reconstruction (\citealt{Lynch-Stieglitz2008}). The impact of bias and uncertainty associated with paleo-density reconstruction (e.g. using $\delta ^{\text{18}}$O) could be managed within a carefully-formulated statistical model; these uncertainties could possibly be larger than the variability of the estimated AMOC (e.g. \citealt{Hirschi2006}, \citealt{Lynch-Stieglitz2008}).
The decomposition diagnostic is based on the simple principles of geostrophy and thermal wind. However these simple physical relationships do not hold everywhere in the ocean. For example, ageostrophic processes are important in the wind-driven Ekman layer and boundary currents on both sides of the basin. Further, features such as eddies, gyres and internal waves all play a role in ocean circulation, but do not adhere to the simplified dynamics assumed within our decomposition. Nevertheless, it is clear that the decomposition diagnostic is able to capture the overturning strength and variability, without fully accounting for these effects.
\cite{Cessi2013a} highlight the role of ageostrophic flow along the Atlantic eastern boundary; they argue that instabilities act to erode the density structure here. Ageostrophic processes are likely to be key to the breakdown of flat eastern boundary isopycnals, especially north of $35^\circ$N in the Atlantic; eastern boundary eddies act to disturb the flat isopycnal structure.
In addition, further consideration of processes such as along-equator winds and their impact on eastern and western boundary densities in the equatorial regions is needed. Density anomalies at the equator are propagated to high latitudes by fast boundary waves, emphasising the importance of equatorial processes, and any processes forming density anomalies along the boundary.
Mapping along-boundary densities of the Atlantic and surrounding basins has demonstrated the inter-basin continuity of boundary densities. Given that boundary densities are influential for the overturning circulation, this connectivity between basins emphasises that localised changes in boundary properties may have large-scale or global impact.
In the Southern Ocean, the ACC decomposition diagnostic provides a means to quantify the impact of southern boundary freshening on the volume transport. The Coriolis $\beta$ term, incorporated within the diagnostic to ensure complete representation of the zonal transport, contributes minimally to ACC transport for narrow meridional sections, such as the Drake Passage. Combined with a weak Ekman transport contribution there, this suggests that useful monitoring at the Drake Passage could be achieved using only bottom velocity and boundary density measurements. However, the approach would not be appropriate for wider meridional sections, where the Coriolis $\beta$ term is likely to be more influential.
The current work suggests that ASC flows along the southern boundary have a significant influence on zonal transport through the Drake Passage for some GCM resolutions. As a result, the net zonal transport there depends on both ACC and ASC transport components. Therefore, using the net zonal transport as the main means to quantify the ACC is inappropriate.
Given the zonally-constant ACC strength found in GCM output, we might expect along-boundary Antarctic densities to show little spatial variability. The large variability actually found suggests that ageostrophic and other compensatory effects must be at play to maintain a constant ACC strength. This serves to highlight our lack of understanding regarding the three dimensional structure of the ACC, a topic considered further in Section \ref{Frt_Smm}.
\section{Further investigations} \label{Frt_Smm}
This section outlines areas of potential future research related to the topics addressed in the thesis.
\subsection{Continuity of ACC transport}
In Section \ref{ACC_BdryD} we investigate neutral boundary densities around the Antarctic continent for various model resolutions, reanalysis and climatology.
Intuitively, we might expect that large variations in neutral density observed along the Antarctic coastline would be accompanied by similar variations in the ACC transport. Therefore, the fact that the ACC strength remains relatively constant around the continent (Section \ref{Sct_LngSct}) seems counter-intuitive, and requires further explanation.
Figure \ref{F_BD_T_1LM} shows (a) neutral boundary densities and (b) resulting geostrophic velocity into the boundary for the $1^\circ$-$LM$ model. Panel (c) is the equivalent southern boundary cumulative transport contribution to the ACC ($T_S$, from the ACC decomposition diagnostic) calculated from boundary density data (shown in Panel (a)), gathered using the boundary mapping algorithm.
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\textwidth]{Fig_Smm/BdryVelTrsp_neutD_GC3_1C_ap466_Ant_int.png}}
\caption[Along-boundary Antarctic neutral densities for the $1^\circ$-$LM$ model, geostrophic velocity into the boundary and southern boundary cumulative transport contribution to the ACC transport, using diagnostic calculation. Time-average is taken over the first 100 years of model run.]{Panel (a): time-average along-boundary Antarctic neutral densities for the $1^\circ$-$LM$ model. Panel (b) : geostrophic velocity ($v$) into the boundary, calculated from neutral density using a smoothed density gradient. Panel (c) : southern boundary cumulative transport contribution to the ACC, using diagnostic calculation (Chapter \ref{TJ_ACC}). Time-average is taken over the first 100 years of model run. Dashed white lines indicate locations of longitudes $90^\circ$E, $180^\circ$, $90^\circ$W, $0^\circ$ and that of Elephant Island.}
\label{F_BD_T_1LM}
\end{figure*}
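The geostrophic velocity shown in panel (b) of Figure \ref{F_BD_T_1LM} is obtained from the along-boundary density gradient via thermal wind, integrated upwards from an assumed level of no motion at depth. A minimal sketch of such a calculation is given below; constants and signs are indicative only, and the smoothing applied to the density gradient in practice is omitted.
\begin{verbatim}
# Thermal-wind estimate of the velocity normal to the boundary from an
# along-boundary density section rho[k, j] (k = depth level, 0 = surface).
import numpy as np

g, rho0, f = 9.81, 1027.0, -1.3e-4   # f: indicative high-southern-latitude value

def velocity_into_boundary(rho, ds, dz):
    drho_ds = np.gradient(rho, ds, axis=1)   # along-boundary density gradient
    dvdz = -(g / (rho0 * f)) * drho_ds       # thermal-wind shear
    # integrate upwards, assuming zero velocity at the deepest level
    return np.cumsum(dvdz[::-1, :] * dz, axis=0)[::-1, :]

rho = 1027.8 - 1.0e-3 * np.random.default_rng(0).random((50, 200))
print(velocity_into_boundary(rho, ds=1.0e4, dz=100.0).shape)
\end{verbatim}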
Cumulative transport contributions from southern boundary densities are relatively consistent around Antarctica, with three notable exceptions in the Weddell and Ross Seas, and near the Adelie coast, where dense surface water is found. These regions exhibit a significant increase in southern contribution to the ACC transport. ACC continuity therefore demands compensating changes in the other boundary components.
The $1/4^\circ$ model showed a large weakening in ACC transport, and isolated regions of deep water formation. The corresponding southern boundary cumulative transport contribution (not shown) is found to be markedly different to the $1^\circ$ model (Figure \ref{F_BD_T_1LM}(c)): large contributions are still found near the Weddell and Ross Seas, due to denser surface waters. However, there is greater along-boundary variability of the southern boundary contribution to ACC transport, and generally a larger magnitude of contribution outside regions of deep water formation, due to lighter along-boundary densities (see Figure \ref{F_BD_Ant_Rest}(b)).
A zonally-constant ACC coupled to a variable southern component necessitates variability in other components. Alternatively, ageostrophic properties may contribute in regions of deep water formation, so that the geostrophic principles of the transport decomposition fail. Other possibilities include (a) a greater influence of intermediate boundaries in meridional sections near regions of deep water formation, (b) changes on the northern flank of the ACC to compensate those on the southern boundary, and (c) a greater influence of the $\beta$ term. A considerable obstacle to any investigation would be the difficulty in identifying the northern boundary for the majority of the ACC. Use of an oceanic boundary would be feasible, but compromises the decomposition methodology.
\subsection{Impact of model resolution on bottom velocities}
The ACC transport is commonly used as an indicator of model performance. The depth-independent component is a key contributor to the total ACC transport. \cite{Donohue2016}, using bottom current meters through the Drake Passage, estimate the depth-independent component to be of the order of $45$Sv, 30Sv stronger than that simulated within the $1/4^\circ$ and $1/12^\circ$ models (see Figure \ref{F_Dcmp_bot}). Only the $1^\circ$ model performed well in this respect, a somewhat counter-intuitive finding since higher resolution models should resolve bottom flows better.
Further, ACC decompositions at the Drake Passage and elsewhere (Chapter \ref{TJ_ACC}) indicate that the depth-independent component is maximised at the Drake Passage and Madagascar - Antarctica sections, suggesting stronger bottom flows or currents in these regions. Examination of ACC bottom currents through Drake Passage and other meridional sections, for models at different resolution, and a comparison with e.g. cDrake observations, could highlight issues associated with resolving bottom flows in higher-resolution models. An increase in depth-independent component of 30Sv for the $1/12^\circ$ model would bring it into line with the corresponding value for the $1^\circ$ model; as a result, the total ACC transport from the $1/12^\circ$ and $1^\circ$ models would also agree.
\subsection{Location of the Brazil-Malvinas Confluence (BMC)}
Results in Chapter \ref{TJ_Var} indicate that the location of the BMC within the HadGEM-GC3.1 $1/4^\circ$ model is surprisingly $4^\circ$ further north than expected. This might be explained by a stronger Malvinas current driven by a northward shift of the main ACC pathway through the Drake Passage. This itself would require a northward shift of the wind-stress curl and westerly winds. Atmospheric fields could be examined for evidence to support this. Further, a possible relationship between the location of the BMC and the magnitude of the northern density component of the ACC decomposition diagnostic might be sought.
\subsection{Application and further development of the along-boundary mapping algorithm}
The analysis in Chapter \ref{TJ_Bdry} and Section \ref{ACC_BdryD} demonstrates the utility of the along-boundary mapping algorithm, and the boundary density profiles generated using it. Applying the mapping algorithm more widely (e.g. to CMIP6 models) would offer the opportunity to compare and contrast along-boundary isopycnal structures, and help generate consensus. The sparsity of observational data for the Antarctic coastline makes it difficult to calibrate models, and therefore more likely that boundary characteristics are determined by model-specific dynamics in the Southern Ocean. Examining a broader range of models would provide greater understanding of along-boundary isopycnal structure, and test the influence of the Antarctic Slope Current on the density structure estimated.
The boundary pathway used for the Atlantic and surrounding basins could be extended to accommodate the western Pacific and eastern Indian Oceans, creating a single connected boundary to the north of the Southern Ocean. This analysis would require extra assumptions to accommodate non-continuous boundaries and throughflows between oceans in south-east Asia; preliminary analysis suggests that artificial boundaries in this region might be useful for some purposes.
Without observations or model data, the western Pacific along-boundary isopycnal structure is difficult to predict, given the weak Pacific MOC. Logic dictates that isopycnal structure should be relatively similar to that of the East Pacific, with possibly slightly sloping isopycnals to the north, to maintain the PMOC. Initial estimates of along-boundary densities support this thinking.
Relatively flat isopycnals are predicted for the eastern Indian Ocean as well. Here, the overturning circulation consists of two shallow cells either side of the Equator (e.g. \citealt{Lee1998}). \cite{Lee1998} find that the Ekman component of their decomposition dominates these two shallow overturning cells. Below 500m, they find a weak vertical shear (or geostrophic) contribution (of 2Sv), showing a positive overturning streamfunction throughout the basin. We speculate, based on simple geostrophic calculations, that eastern boundary isopycnals should show signs of a weak upward slope to the north to maintain the positive cell of the overturning streamfunction throughout the basin (similar to the western boundary in the Pacific). Nevertheless, we caution that there are many physical mechanisms at play in this highly seasonal basin, and the weakness of the vertical shear component found by \cite{Lee1998} suggests that along-boundary density structure might not be strongly-coupled to the overturning circulation here.
\subsection{Reconstruction of along-boundary structure using a limited number of depth profiles}
In Appendix \ref{Bdry_MCMC_App} a piecewise linear model (outlined in Appendix \ref{App_MCMC}) is used to describe the relative complexity of time-mean latitude-depth eastern and western Atlantic boundary densities. The analysis emphasises the greater uniformity of the eastern boundary, and the possibility of estimating along-boundary densities, and overturning streamfunction, with a limited number of density profiles. This provides a quantitative basis to optimise the locations of future density profile observations, even over vast distances across continents, based on GCM output.
The piecewise linear model can be applied routinely to provide a low-dimensional summary of along-boundary isopycnal structure for the Atlantic and Antarctic. In the thesis work, the piecewise linear model is applied to time-mean data. An important but relatively straightforward enhancement would be to extend the algorithm to accommodate time-varying data. Further, if the intention is to simplify the overturning streamfunction calculation, depth-weighting could be introduced to emphasise the important role of densities at depth (Appendix \ref{App_MCMC}).
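As an illustration of the idea (with breakpoints fixed by hand, whereas the model of Appendix \ref{App_MCMC} infers their number and location), a continuous piecewise-linear description of along-boundary density at one depth can be fitted by least squares using hinge basis functions:
\begin{verbatim}
# Fit a continuous piecewise-linear function of along-boundary distance s
# to a (toy) density profile, with prescribed breakpoints.
import numpy as np

def piecewise_linear_fit(s, rho, breakpoints):
    X = np.column_stack([np.ones_like(s), s] +
                        [np.maximum(s - b, 0.0) for b in breakpoints])
    coef, *_ = np.linalg.lstsq(X, rho, rcond=None)
    return X @ coef, coef

s = np.linspace(0.0, 2.0e7, 400)                  # along-boundary distance (m)
rho = 27.6 + 1.0e-8 * np.maximum(s - 1.0e7, 0.0)  # flat, then linearly sloping
fit, coef = piecewise_linear_fit(s, rho, [5.0e6, 1.0e7, 1.5e7])
print(np.abs(fit - rho).max())                    # near zero for this toy profile
\end{verbatim}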
\subsection{Similarities between thermal wind and depth-independent components}
Atlantic thermal wind (Chapter \ref{TJ_TM}, Figure \ref{F_TM_strm}(d)) and depth-independent components of the overturning circulation (Figure \ref{F_TM_strm}(c)) show a high degree of negative correlation. Preliminary work (using the UK MetOffice GC2 dataset) suggests it might be feasible to predict the uncompensated depth-independent component using a regression model with uncompensated western boundary, eastern boundary and Ekman components as predictors. If successful, this statistical model would further reduce the complexity of the decomposition diagnostic, making reconstruction of the total overturning streamfunction possible in terms of boundary densities and surface wind-stress only.
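A toy version of the proposed statistical model, with placeholder series standing in for the uncompensated components at a given latitude, is sketched below:
\begin{verbatim}
# Regress a (placeholder) depth-independent component onto the western
# boundary, eastern boundary and Ekman components.
import numpy as np

rng = np.random.default_rng(2)
west, east, ekman = rng.standard_normal((3, 500))
bottom = -0.8 * west + 0.2 * ekman + 0.1 * rng.standard_normal(500)

X = np.column_stack([west, east, ekman, np.ones(500)])
coef, *_ = np.linalg.lstsq(X, bottom, rcond=None)
print(coef)       # fitted weights: west, east, Ekman, intercept
\end{verbatim}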
\section{Final Word}
The meridional overturning circulation is a fundamental component of the Earth's climate system. This thesis shows that the large-scale four-dimensional overturning circulation can be well-represented by information available only on the ocean's boundaries. New mappings of boundary densities across multiple ocean basins reveal simple structures and large-scale continuity, including regions of flat and linearly-sloping isopycnals extending over thousands of kilometers. The physical mechanisms underlying these simple boundary structures ensure that the effects of local density anomalies are propagated along boundaries, demonstrating the inter-connected nature of the global overturning, and its sensitivity to sources of remote boundary anomalies.
There is now a wealth of evidence \cite{sn, wmap, sdss} indicating that the universe is currently in a period of accelerated expansion. One of the biggest challenges in cosmology today is understanding the origin of this late time acceleration.
One possibility is that $70\%$ of the energy content of the universe is made up of an as yet unknown form of energy, so-called dark energy. The most popular dark energy candidate is the vacuum energy, which takes the form of a small and positive cosmological constant. In order to explain the
current acceleration, the value of the cosmological constant must contribute a vacuum energy density of the order $\rho_\ensuremath{\Lambda} \sim 10^{-12} ({\tt eV})^4$. This is $10^{120}$ times smaller than what we might expect, given our current understanding of particle physics. Given that particle physics is doing such a miserable job of explaining the accelerated expansion, it is important to look for alternative explanations.
A popular alternative is to interpret this acceleration as a sign that our understanding of gravity is breaking down, and that a large distance modification of Einstein's General Relativity is required. Despite numerous attempts, it is fair to say that an established proposal has yet to emerge that is consistent on both a fundamental and a phenomenological level. Arguably the most successful attempts have been inspired by the braneworld paradigm (for a review see \cite{kkreview}).
In particular, the Dvali-Gabadadze-Porrati (DGP) model~\cite{dgp} was discovered to have two cosmological branches, one of which gave rise to cosmic acceleration even when no matter was present on the brane \cite{dgpsa}. This branch became known as the self-accelerating branch, for obvious reasons, but was later discovered to be haunted by ghost instabilities around the vacuum de Sitter brane \cite{Luty, Nicolis,
arethereghosts, moreonghosts, unexorcised, bubbles, dgpspec, review, newperspective, saperts, modelforSA} (for a review see \cite{SAghosts}). In this context a ghost is a field whose kinetic term has the ``wrong'' sign. This pathology leads to a choice: either the ghost state has negative norm and unitarity is violated, or the ghost can have arbitrarily negative energy. A ghost in the perturbation spectrum ({\it specter}oscopy!) indicates a catastrophic instability of the background, and therefore an unacceptably sick perturbative theory. In DGP, the other cosmological branch (the ``normal'' branch) is ghost-free but cannot be an alternative to $\Lambda$CDM since it still needs the introduction of the cosmological constant $\Lambda$ to explain the acceleration. Nevertheless it still has plenty of interesting
phenomenological features \cite{bwDE, arthur, phantoms}.
More recently, Charmousis, Gregory and Padilla (CGP)\cite{cgp} presented a generalisation of the DGP model in which they allowed for bulk curvature and introduced some asymmetry across the brane \cite{battye, asymm1, asymm2, asymm_roy, asymm_kk, asymm_ggs}. This asymmetry could, in principle, apply to the bulk cosmological constant or even the bulk Planck scales, giving rise to a rich variety of cosmologies. The authors focussed on those solutions that possessed asymptotically Minkowski branes, despite the presence of self-accelerating solutions that they (correctly) assumed to be haunted by ghosts. A subset of these solutions was shown to contain vacuum branes that were perturbatively stable, free from the ghoulish instabilities that terrorized the self-accelerating DGP brane. The cosmological evolution of this subset was then analysed, and in some cases yielded extremely interesting results. Two limiting models in particular (the ``decoupled'' limit and the ``conformal'' limit) were found to exhibit power law acceleration but only when matter is present on
the brane. They dubbed this `stealth acceleration'.
The cosmology is reminiscent of the Cardassian cosmology proposed by Freese and Lewis~\cite{cardassian}. Here the standard Friedmann equation is modified so that $\rho \to \rho +c\rho^n$, where $n<2/3$, and one also finds that cosmic acceleration is driven by the presence of ordinary matter. The Cardassian model is an interesting empirical model, but did not have a concrete theoretical basis. The stealth model provides that by realising an effective Cardassian cosmology (with $n \approx 0.5$) within the braneworld paradigm.
In this paper we will consider vacuum de Sitter branes within the CGP set-up. This will include self-accelerating solutions, as well as the stealth models with some additional vacuum energy on the brane. We will study the spectrum of linearised perturbations about these solutions, closely following the corresponding analysis in the DGP model \cite{arethereghosts, moreonghosts, dgpspec}. For an infinite volume bulk, we will find, without exception, that the vacuum is unstable because of the presence of ghosts. Just as for the self-accelerating branch of DGP, a ghost will manifest itself either through the radion mode, or through the helicity 0 mode of the lightest graviton. In some cases a ghost will also appear in the spin 1 sector.
The ``decoupled'' version of the stealth model is now of particular interest. We will find a class of de Sitter solutions that approach the ``decoupled'' model as the Hubble scale $H \to 0$. As the limit is approached the ghost becomes more and more weakly coupled, until eventually it decouples completely. We will infer some conclusions regarding the stability of the stealth models when matter is present. For small $H$ it seems that we can carry our analysis of de Sitter branes over to the general Friedmann-Robertson-Walker case and conclude that the decoupled stealth model develops an instability, albeit a very mild one, softened by the weakness of the ghost coupling. For larger $H$ the instability for de Sitter branes would be more severe, but it is not clear whether or not we can transfer this conclusion to the general FRW case.
The rest of this paper is organised as follows: in section \ref{sec:setup} we describe the CGP model in detail, our generalisation, and the background solutions. In section \ref{sec:perts} we analyse the spectrum of linearised perturbations and derive conditions for the presence of a helicity 0 ghost in the spin 2 sector. We study the coupling to matter in section \ref{sec:mat} and calculate the effective action in section \ref{sec:eff}. The effective action helps to reveal any further ghosts, including the radion ghost, which seems to take it in turns with the helicity 0 mode to haunt the background. We end with some concluding remarks in section \ref{sec:disc}.
\section{The CGP model: set-up and background solutions} \lab{sec:setup}
The CGP model is an asymmetric generalisation of its celebrated cousin, the DGP model. In both models, our Universe is taken to be a 3-brane, $\Sigma$, embedded in between two five dimensional spacetimes, $\mathcal{M}_i$, where $i=L, R$. In the original DGP scenario, we impose $\mathbb{Z}_2$ symmetry across the brane, identifying ${\mathcal{M}}_L$ with ${\mathcal{M}}_R$ and having vanishing vacuum energy in the bulk. In the CGP model, however, we relax both of these assumptions. The key new ingredient is the introduction of asymmetry. Each spacetime ${\mathcal{M}}_i$ generically has a five
dimensional Planck scale given by $M_i$, and a negative (or zero)
cosmological constant given by $\ensuremath{\Lambda}_i=-6k_i^2$. However, since we are no longer assuming $\mathbb{Z}_2$ symmetry across the brane, we can have $M_L \neq M_R$ and $\ensuremath{\Lambda}_L \neq \ensuremath{\Lambda}_R$. Allowing for $\ensuremath{\Lambda}_L \neq
\ensuremath{\Lambda}_R$ is familiar enough in domain wall scenarios \cite{DW}. The Planck scale asymmetry is less familiar, but could arise
in a number of ways. Suppose, for example, that this scenario is
derived from a fundamental higher dimensional theory. This theory
could contain a dilaton field that is stabilised in different
fundamental vacua on either side of $\Sigma$. From the point of view
of a $5D$ effective description, the $5D$ Planck scales would then
differ accordingly. Indeed naive expectations from string theory point
towards this asymmetric scenario as opposed to a symmetric one.
Different effective Planck scales can also appear on either side of a
domain wall that is bound to a five-dimensional braneworld~\cite{nested}.
In keeping with the braneworld paradigm, all matter and standard model interactions are confined to the brane, although gravity can propagate into the fifth dimension. As in the DGP scenario, we include some intrinsic curvature induced on the brane. This term is rather natural and can be induced by matter loop
corrections~\cite{loops}, finite width effects~\cite{width} or even
classically from higher dimensional modifications of General
relativity~\cite{z}. We will also include some vacuum energy on the brane in the form of some brane tension, $\sigma$. At this point we introduce an important new development. In the original CGP paper, the brane tension was fine-tuned against the bulk cosmological constants in order to admit a Minkowski vacuum solution. This choice corresponds to having vanishing effective cosmological constant on the brane and was the analogue of the Randall-Sundrum fine-tuning. In this paper we will introduce some additional tension so that the vacuum brane is de Sitter. Such detuning of brane tensions helped conjure up the ghost in the DGP model, and we will ultimately find that the same is true here.
This set-up is described by the following action,
\begin{equation}
\lab{act}
S=\sum_{i=L,R} M_i^3\int_{\mathcal{M}_i}
\sqrt{-g}(R-2\Lambda_i)+2M_i^3\int_{\partial\mathcal{M}_i}\sqrt{-\gamma} K^{(i)} +\int_\Sigma\sqrt{-\gamma}(M_{4}^2
\mathcal{R}-\sigma+\mathcal{L}_{ \tt {matter}}),
\end{equation}
where $g_{ab}$ is the bulk metric with corresponding Ricci tensor,
$R$. The metric induced on the brane is given by
$\gamma_{ab}=g_{ab}-n_an_b$
where $n^a$ is the unit normal to $\partial\mathcal{M}_i$ in
$\mathcal{M}_i$ pointing {\it out} of $\mathcal{M}_i$. Of course,
continuity of the metric at the brane requires that $\gamma_{ab}$ is
the same, whether it is calculated from the left, or from the right of
the brane. In contrast, the extrinsic curvature of the brane can jump
from right to left.
In $\mathcal{M}_i$, it is defined as
\begin{equation} K^{(i)}_{ab}=\gamma^c_a
\gamma^d_b \nabla_{(c} n_{d)}, \lab{extrinsic}
\end{equation}
with its trace appearing in the Gibbons-Hawking boundary term in (\ref{act}). In the brane part of the action we have included the brane tension, $\sigma$, and the induced intrinsic
curvature term, $\mathcal{R}$, weighted by a $4D$ mass scale,
$M_4$. $ \mathcal{L}_{ \tt {matter}}$ includes
any additional matter excitations.
The equations of motion in the bulk region, $\mathcal{M}_i$, are just the
Einstein equations, with the appropriate cosmological constant, $\ensuremath{\Lambda}_i$.
\begin{equation}
E_{ab}= R_{ab}-\frac{1}{2} R g_{ab}+\Lambda_i g_{ab}=0.
\lab{bulkeom}
\end{equation}
The equations of motion on the brane are described by the Israel
junction conditions, and can be obtained by varying the action
(\ref{act}), with respect to the brane metric, $\gamma_{ab}$. This gives\footnote{The angled
brackets denote an averaged quantity at the brane. More precisely, for
some quantity $Q_i$ defined on the brane in $\partial \mathcal{M}_i$, we
define the average
$\langle Q \rangle= \frac{Q_L+Q_R}{2}$.
Later on we will
also make use of the difference, $\Delta Q =Q_L-Q_R$.}
\begin{equation} \Theta_{ab}=2\left \langle M^3 (K_{ab}-K
\gamma_{ab}) \right \rangle+M_{4}^2\left( \mathcal{R}_{ab}-\frac{1}{2}
\mathcal{R}\gamma_{ab}\right)+\frac{\sigma}{2}\gamma_{ab}=\frac{1}{2} T_{ab},
\lab{braneeom}
\end{equation}
where $T_{ab}=-\frac{2}{\sqrt{-\ensuremath{\gamma}}}\frac{\partial
\sqrt{-\ensuremath{\gamma}}\mathcal{L}_\tt{matter}}{\partial \ensuremath{\gamma}^{ab}}$.
Note that the Israel equations here do not use the familiar
``difference'', because we have defined the unit normal as pointing
out of $\mathcal{M}_i$ on each side. We adopt this (slightly) unconventional approach since it is more convenient in the asymmetric scenario where the brane is best thought of as the common boundary $\Sigma=\partial{\mathcal{M}}_L=\partial {\mathcal{M}}_R$.
We will now derive the vacuum solutions to the equations of motion
(\ref{bulkeom}) and (\ref{braneeom}). This corresponds to the case
where there are no matter excitations, and so, $T_{ab}=0$. In each
region of the bulk, we introduce coordinates $x^a=(x^\mu, y)$, with
the brane located at $y=0$. We are interested in de Sitter brane solutions of the form
\begin{equation}
ds^2=\bar{g}_{ab} dx^adx^b =dy^2+N(y)^2 \bar \gamma_{\mu\nu}dx^\mu dx^\nu. \lab{background}
\end{equation}
where $\bar \gamma_{\mu\nu}$ is the four dimensional de Sitter metric with curvature $H$. Inserting this into the bulk equations of motion (\ref{bulkeom}) gives
\begin{equation}
\left(\frac{N'}{N}\right)^2=\frac{H^2}{N^2}+k^2, \qquad \frac{N''}{N}=k^2, \lab{odes4a}
\end{equation}
where "prime" denotes differentiation with respect to $y$, and we have dropped the index $i$ for brevity. One can easily show that
\begin{equation}
\lab{eq:aads}
N(y) = \frac{H}{k}\sinh{k\,(y_h+\theta y)}, \quad y_h \equiv \frac{1}{k}\sinh^{-1} {k/H},
\end{equation}
where $\theta=\pm 1$. Each region of the bulk corresponds to $0< y< y_\tt{max}$ where
\begin{equation}
y_\tt{max}=\begin{cases} \infty & \textrm{for $\theta=1$}, \\
y_h & \textrm{for $\theta=-1$} \lab{ymax}.
\end{cases}
\end{equation}
If we transformed to global coordinates in the bulk, $\theta=1$ would correspond to retaining the asymptotic region (large radius), whereas $\theta=-1$ would correspond to retaining the central region (small radius). For $k \neq 0$, this means that when $\theta=1$ we keep the adS
boundary (growing warp factor) whereas when $\theta=-1$ we keep
the adS horizon (decaying warp factor). Since we are interested in a modification of gravitational physics in the infra-red, we will assume that the bulk volume is infinite, and retain the asymptotic region on at least one side of the bulk. In other words, we do not consider the case $\theta_L=\theta_R=-1$.
The boundary conditions at the brane (\ref{braneeom}) yield
\begin{equation}
6 \langle M^3 N'(0) \rangle+\frac{\sigma}{2}-3H^2 M_4^2=0, \lab{bc4a}
\end{equation}
so that the curvature $H$ is given by the real roots of
\begin{equation}
\sigma=6M_4^2{H}^2-12\left\langle M^3 \theta
\sqrt{{H}^2+k^2}\right\rangle.
\end{equation}
In \cite{cgp}, the brane tension was fine tuned to a critical value, $\sigma_c=-6\langle M^3k\rangle$, so that the effective cosmological constant on the brane vanished. We now introduce some additional tension $\epsilon>0$ so that $\sigma=\sigma_c+\epsilon$. This introduces some positive curvature given by the roots of $\epsilon=F(H^2)$ where, as in \cite{cgp}, we have
\begin{equation}
F(H^2)=6M_4^2{H}^2-12\left\langle M^3 \theta
\left(\sqrt{{H}^2+k^2}-k\right )\right\rangle \lab{F(H^2)}.
\end{equation}
As in DGP, we have two classes of solution. There are those that vanish as $\epsilon \to 0$, so that we recover the Minkowski brane studied in \cite{cgp}, and there are those that approach a finite positive value, so that we have a de Sitter brane, even in the absence of an effective cosmological constant. The former are the analogue of the normal branch in DGP, whereas the latter are the analogue of the self-accelerating branch. Of course, the class of solution depends on the form of the function $F(H^2)$, discussed in some detail in section 4 of \cite{cgp}. For example, the following represent necessary and sufficient conditions for the existence of a normal branch solution:
\begin{eqnarray}
&& M_4^2>\langle M^3 \theta /k \rangle,\\
& \textrm{or~}& M_4^2=\langle M^3 \theta /k\rangle, \langle M^3 \theta /k^3\rangle>0,\\
& \textrm{or~}& M_4^2=\langle M^3 \theta /k\rangle, \langle M^3 \theta /k^3\rangle=0, \langle M^3 \theta /k^5\rangle <0.
\end{eqnarray}
Although we will study both classes of solution, we will be particularly interested in the normal branch since these will include small fluctuations about the finely tuned ``stealth'' scenarios discussed in \cite{cgp}.
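As a simple illustration of these statements, consider the $\mathbb{Z}_2$-symmetric DGP limit, $M_L=M_R=M$, $k_L=k_R=0$ and $\theta_L=\theta_R=+1$, for which (\ref{F(H^2)}) reduces to $F(H^2)=6M_4^2H^2-12M^3H$. The equation $\epsilon=F(H^2)$ then has a single positive root,
\begin{equation}
H=\frac{M^3+\sqrt{M^6+M_4^2\epsilon/6}}{M_4^2},
\end{equation}
which tends to the familiar self-accelerating value $H=2M^3/M_4^2$ of DGP as $\epsilon \to 0$. There is no normal branch solution in this limit, consistent with the conditions above, since $\langle M^3\theta/k\rangle$ diverges as $k_i \to 0$.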
\section{Vacuum fluctuations} \lab{sec:perts}
We shall now consider metric perturbations in the vacuum so that $g_{ab}=\bar g_{ab}+\delta g_{ab}$ and $T_\ensuremath{{\mu\nu}}=0$. In the unperturbed spacetime, given by (\ref{background}) and (\ref{bc4a}), the gauge was fixed in both $\mathcal{M}_L$
and $\mathcal{M}_R$ so that the brane was at $y=0$. However, a general
perturbation of the system must also allow the brane position to
flutter. In $\mathcal{M}_i$, the brane will be located at
\begin{equation}
y=\zeta_i(x^\mu).
\end{equation}
It is convenient to work in a Gaussian Normal (GN) gauge, so that in ${\mathcal{M}}_i$ we have
\begin{equation}
\delta g_{yy}=\delta g_{\mu y}=0, \qquad \delta g_\ensuremath{{\mu\nu}}= h_{i \: \ensuremath{{\mu\nu}}}(x, y).
\end{equation}
In most of this discussion, we will drop the index $i$, although it should be understood that it is really there. Now, it is well known (see, for example, \cite{rs2}) that in the absence of any bulk matter, we may take $h_\ensuremath{{\mu\nu}}$ to be transverse-tracefree, $D^\mu h_\ensuremath{{\mu\nu}}=h^\mu_\mu=0$. This is known as Randall-Sundrum gauge. It follows that the bulk equations of motion, $\delta E_{ab}=0$, give
\begin{equation}
\left[\ensuremath{\partial}_y^2+\frac{1}{N^2} (D^2-4H^2) -4k^2 \right]h_\ensuremath{{\mu\nu}}(x, y)=0, \lab{bulkheqn}
\end{equation}
where $D_\mu$ is the covariant derivative on the $4D$ de Sitter slicings, and indices are raised/lowered using the $4D$ metric $\bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}$. To impose the boundary conditions at the brane, we need to apply a GN to GN gauge transformation that shifts the brane position back to $y=0$. The most general such transformation is given by
\begin{equation}
y \to y-\zeta(x), \qquad x^\mu \to x^\mu-\xi^\mu(x)+D^\mu \zeta\int^y_0 \frac{dz}{N^2(z)},
\end{equation}
so that
\begin{equation}
h_\ensuremath{{\mu\nu}} \to \bar h_\ensuremath{{\mu\nu}}=h_\ensuremath{{\mu\nu}}+h_\ensuremath{{\mu\nu}}^{(\zeta)}+2N^2 D_{(\mu} \xi_{\nu)}. \lab{barh}
\end{equation}
We call this new gauge ``brane-GN'' gauge. Although the brane position is fixed in this gauge, the original position $\zeta(x)$ still enters the dynamics through a bookkeeping term
\begin{equation}
h_\ensuremath{{\mu\nu}}^{(\zeta)}=-2\left(N^2\int^y_0 \frac{dz}{N^2}\right)D_\mu D_\nu \zeta+2NN' \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}} \zeta.
\end{equation}
The metric perturbation in the new gauge is no longer transverse-tracefree, although it is now straightforward to apply continuity of the metric at the brane
\begin{equation}
\Delta \bar h_\ensuremath{{\mu\nu}} (x, 0)=0, \lab{cont}
\end{equation}
and the vacuum Israel equations (\ref{braneeom})
\begin{equation}
\delta \Theta_\ensuremath{{\mu\nu}}=-\left\langle M^3 \left(\frac{\bar h_\ensuremath{{\mu\nu}}-\bar h \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}}{N^2}\right)' \Big |_{y=0}\right\rangle +M_4^2 X_\ensuremath{{\mu\nu}}(\bar h)=0, \lab{isrbarh}
\end{equation}
where
\begin{eqnarray}
X_{\mu \nu}(\bar h) &=& \delta G_{\mu \nu} (\bar h)+ 3 H^2 \bar h_{\mu \nu} \nonumber \\
&=& -\frac{1}{2} (D^2-2H^2) \bar h_\ensuremath{{\mu\nu}}+D_{(\mu} D^\ensuremath{\alpha} \bar h_{\nu)\ensuremath{\alpha}}-\frac{1}{2} D_\mu D_\nu \bar h \nonumber\\
&& \qquad \qquad \qquad -\frac{1}{2} \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}\left[D^\ensuremath{\alpha} D^\beta \bar h_{\ensuremath{\alpha}\beta}- (D^2+H^2 )\bar h \right].
\end{eqnarray}
If we substitute the expression (\ref{barh}) into equation (\ref{isrbarh}) we find
\begin{eqnarray}
\left\langle M^3 \left(\frac{h_\ensuremath{{\mu\nu}}}{N^2}\right)' \Big |_{y=0}+\frac{M_4^2}{2} (D^2-2H^2) h_\ensuremath{{\mu\nu}}(x, 0)\right\rangle =\nonumber \\
2(D_\mu D_\nu-(D^2+3H^2) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})\left\langle (M^3-M_4^2N'(0))\zeta\right \rangle.
\lab{isrh}
\end{eqnarray}
Note that this expression is independent of $\xi^\mu(x)$, as expected, since this just corresponds to diffeomorphism invariance along the brane. It is convenient to decompose $h_\ensuremath{{\mu\nu}}$ in terms of the irreducible representations of the $4D$ de Sitter diffeomorphism group
\begin{equation}
h_\ensuremath{{\mu\nu}}=h^{(2)}_\ensuremath{{\mu\nu}}+h^{(1)}_\ensuremath{{\mu\nu}}+h^{(0)}_\ensuremath{{\mu\nu}},
\end{equation}
where $h^{(n)}_\ensuremath{{\mu\nu}}$ corresponds to the spin $n$ contribution. We can treat these modes independently of one another provided they have different masses\footnote{In $4D$ de Sitter, a transverse-tracefree tensor of mass $m$ satisfies $(D^2-2H^2)q^{(m)}_\ensuremath{{\mu\nu}}=m^2 q^{(m)}_\ensuremath{{\mu\nu}}$~\cite{ds}}. Let us now assume that this is indeed the case and analyse each spin separately. It will also be convenient to decompose the field $\xi_\mu(x)$ into its spin 1 and spin 0 components $\xi_\mu=\xi_\mu^{(1)}+\xi_\mu^{(0)}$. The field $\zeta(x)$ is just spin 0.
\subsection{Spin 2 modes} \lab{sec:spin2}
We begin by analysing the spin 2 modes. Since neither $\zeta$ nor $\xi_\mu$ have a spin 2 contribution, we can set them zero here, and can further decompose the spin 2 piece of the metric by separating variables
\begin{equation}
\lab{eq:modes}
h_{\mu \nu}^{(2)}(x,y) = \int_m u_m(y) \chi^{(m)}_{\mu \nu}(x),
\end{equation}
where $\chi^{(m)}_\ensuremath{{\mu\nu}}$ is a $4D$ tensor field of mass $m$ satisfying $\left( D^2 - 2 H^2 \right)\chi^{(m)}_{\mu \nu}(x)=m^2 \chi^{(m)}_{\mu \nu}(x)$, and $\int_m$ denotes a generalised sum, summing over discrete modes and integrating over continuum modes. The bulk equations of motion (\ref{bulkheqn}) now give
\begin{equation}
\lab{eq:bulkeom}
u''_m(y) +\left(\frac{m^2 - 2H^2}{N^2} -4k^2 \right) u_m(y) = 0.
\end{equation}
This is easily solved in terms of the associated Legendre functions:
\begin{equation}
\lab{eq:solads}
u_m(y) =C_1 \left(\frac{k}{H}\right)^2 {\cal P}_{-1/2 \pm \nu}^{\pm2} \left(\coth{k (y_h+\theta y)}\right) + C_2 \left(\frac{k}{H}\right)^2 {\cal Q}_{-1/2 \pm \nu}^{\pm2} \left(\coth{k (y_h+\theta y)}\right),
\end{equation}
where $\nu = \sqrt{9/4 - m^2/H^2}$. ${\cal P}^m_\nu(z)$ and ${\cal Q}^m_\nu(z)$ are the associated Legendre functions of the first and second kind, respectively. Of course, the expression (\ref{eq:solads}) is only well defined for $m^2\leq \frac{9H^2}{4}$. We could, in principle, analytically continue our solution to $m^2> \frac{9H^2}{4}$, although this will not be necessary since our ultimate goal is to establish the existence of an helicity-0 ghost which is found in spin 2 modes of mass $0<m^2<2H^2$~\cite{spin2ghost}. Normalisability requires that~\cite{asymm_kk}
\begin{equation}
\int_0^{y_\tt{max}} dy\, \frac{u_m^2}{N^2} < \infty,
\end{equation}
so that for $\theta=1$ we only keep the part proportional to ${\cal P}^{-2}_{-1/2+\nu}(z)$, whereas for $\theta=-1$ we only keep the part proportional to ${\cal Q}^{2}_{-1/2+\nu}(z)$. Since we may, without loss of generality, assume that $u_m(0)=1$, the normalisable modes are given by
\begin{equation}
\lab{eq:modeads}
u_m(y) = \begin{cases} \frac{{\cal P}_{-1/2 + \nu}^{-2} \left(\coth{k\,(y_h+y)}\right)}{{\cal P}_{-1/2 + \nu}^{-2} \left(\coth{k\,y_h}\right)} & \tt{for $\theta=+1$}, \\
\frac{{\cal Q}_{-1/2 + \nu}^{2} \left(\coth{k\,(y_h-y)}\right)}{{\cal Q}_{-1/2 + \nu}^{2} \left(\coth{k\,y_h}\right)} &\tt{for $\theta=-1$}. \end{cases}
\end{equation}
It will be instructive to take a closer look at two special cases. For massless modes, this expression simplifies to give
\begin{equation}
\lab{eq:0mode}
u_0(y) = \begin{cases} e^{-2ky}\left(\frac{2+\coth{k(y_h+ y)}}{2+\coth{k\,y_h}}\right) =\frac{N^2 \int_y^{y_\tt{max}} dz/N^4}{\int_0^{y_\tt{max}} dz/N^4}& \tt{for $\theta=+1$}, \\
N^2(y) &\tt{for $\theta=-1$}. \end{cases}
\end{equation}
whereas for "partially massless" modes of mass $m^2=2H^2$ we have
\begin{equation} \lab{eq:usolssa}
u_{\sqrt{2}H}(y)=\begin{cases} e^{-2ky} & \textrm{for $\theta=+1$},\\
\frac{NN'}{N'(0)} & \textrm{for $\theta=-1$}.
\end{cases}
\end{equation}
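It is straightforward to check that these expressions solve (\ref{eq:bulkeom}). At $m^2=2H^2$ the mode equation reduces to $u''=4k^2u$, which is obviously satisfied by $e^{-2ky}$, and also by $NN'$ since, by (\ref{odes4a}), $(NN')'=H^2+2k^2N^2$ and hence $(NN')''=4k^2NN'$. Similarly, for $m=0$ and $\theta=-1$ one has $u=N^2$, and $u''=2\left[(N')^2+NN''\right]=2H^2+4k^2N^2=\left(2H^2/N^2+4k^2\right)N^2$, as required; the $\theta=+1$ massless wavefunction can be checked in the same way using its integral representation.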
Of course, neither the massless modes nor the partially massless modes get excited in general. This is determined by the boundary conditions at the brane. The spin 2 part of the continuity equation (\ref{cont}) now implies that $\Delta \chi^{(m)}_\ensuremath{{\mu\nu}}(x)=0$ for each $m$, so that the spin 2 part of the Israel equations (\ref{isrh}) yields the following quantization condition
\begin{equation}
f(m^2)=
\left\langle M^3 \left(\frac{u_m}{N^2}\right)' \Big |_{y=0} \right\rangle+\frac{M_4^2}{2} m^2=0. \lab{fm^2}
\end{equation}
Let us consider the lightest mode. For a finite volume bulk ($\theta_L=\theta_R=-1$), it is well known that this mode is massless so that gravity looks four dimensional out to arbitrarily large distances. We do not consider this case here, and assume, without further loss of generality, that $\theta_R=+1$. The lightest mode is now guaranteed to be massive. If the mass lies in the forbidden region $0<m^2<2H^2$, then
this mode contains an helicity-0 ghost \cite{spin2ghost}. We can now check if such a mode exists, by application of Bolzano's theorem:
\begin{equation}
\lab{eq:ghost}
f(0) f(2H^2) < 0,
\end{equation}
since $f(m^2)$ is continuous over the forbidden region. Although not {\it necessary} for the existence of a ghost, this condition is certainly {\it sufficient}. For an infinite bulk ($(\theta_L, \theta_R) \neq (-1, -1)$), it is easy enough to see that
\begin{equation}
f(0)=-\frac{1}{2} \left \langle M^3(1+\theta)\left[\int_0^{y_\tt{max}} \frac{dz}{N^4} \right]^{-1} \right\rangle <0.
\end{equation}
This means we have an helicity-0 ghost whenever
\begin{equation}
f(2H^2)=\left \langle \frac{ M^3}{2}\left(\frac{(1-\theta)H^2}{\sqrt{H^2+ k^2}}-2(1+\theta)( k+\sqrt{H^2+ k^2})\right)\right\rangle+ M_4^2H^2>0.
\end{equation}
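These expressions follow directly from the explicit wavefunctions (\ref{eq:0mode}) and (\ref{eq:usolssa}): using $N(0)=1$ and $N'(0)=\theta\sqrt{H^2+k^2}$, the brane derivatives are
\begin{equation}
\left(\frac{u_0}{N^2}\right)' \Big |_{y=0}=-\frac{(1+\theta)}{2}\left[\int_0^{y_\tt{max}} \frac{dz}{N^4} \right]^{-1}, \qquad
\left(\frac{u_{\sqrt{2}H}}{N^2}\right)' \Big |_{y=0}=\frac{(1-\theta)}{2}\frac{H^2}{\sqrt{H^2+ k^2}}-(1+\theta)\left(k+\sqrt{H^2+ k^2}\right),
\end{equation}
which, inserted into (\ref{fm^2}), reproduce $f(0)$ and $f(2H^2)$ above.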
\subsection{Spin 1 modes}
We now turn our attention to the spin 1 modes, neglecting all contributions from spin 2 and spin 0. Recall that $\xi_\mu$ contains a spin 1 piece $\xi_\mu^{(1)}(x)$, which is simply a divergence-free vector that can be chosen in order to guarantee continuity at the brane. The spin 1 part of the metric takes the form
\begin{equation}
h_\ensuremath{{\mu\nu}}^{(1)}=D_\mu A_\nu+D_\nu A_\mu,
\end{equation}
where $A_\mu(x, y)$ is another divergence free vector. Since $h_\ensuremath{{\mu\nu}}^{(1)}$ is transverse-tracefree, one can easily verify that $A_\mu$ behaves like a tachyonic vector in $dS_4$, satisfying
\begin{equation} (D^2+3H^2)A_\mu=0.
\end{equation}
This tachyonic instability is a mild one, associated with the repulsive nature of inflating domain walls~\cite{ipser}. The metric contribution now resembles a massless spin 2 mode, $(D^2-2H^2)h_\ensuremath{{\mu\nu}}^{(1)}=0$, and is therefore guaranteed not to mix with any of the genuine spin 2 modes discussed in the previous section. Furthermore, it follows that the profile in the bulk is given by the normalisable massless wavefunction (\ref{eq:0mode})
\begin{equation}
A_\mu(x, y)=u_0(y)a_\mu(x).
\end{equation}
The spin 1 part of the continuity equation (\ref{cont}), $\Delta (a_\mu+\xi_\mu^{(1)})=0$ , is trivially satisfied by choosing $\xi_\mu^{(1)}(x)=-a_\mu(x)$ on both sides of the brane. The Israel equations (\ref{isrh}) are independent of $\xi_\mu^{(1)}$, and require that
\begin{equation}
\left\langle M^3\left(\frac{u_0}{N^2}\right)' \Big |_{y=0}a_\mu(x) \right\rangle=0. \lab{Abc}
\end{equation}
If we assume, without any great justification, that $\Delta a_\mu=0$, it follows from (\ref{Abc}) that $f(0)a_\mu(x)=0$, and so $a_\mu(x)=0$. However, in a generalised asymmetric scenario there is no reason to assume that the spin 1 mode is symmetric. More generally we can show that
\begin{equation}
f(0)\langle a_\mu \rangle=\frac{1}{8} \Delta \left( M^3(1+\theta)\left[\int_0^{y_\tt{max}} \frac{dz}{N^4} \right]^{-1} \right)\Delta a_\mu,
\end{equation}
which indicates that one spin 1 degree of freedom can, in principle, remain.
\subsection{Spin 0 modes}
We conclude this section with a study of the spin 0 modes, neglecting all contributions from higher spin. The brane bending piece $\zeta$ now plays a role, along with the spin 0 component of $\xi_\mu$, which takes the form $\xi_\mu^{(0)}=D_\mu \psi$, where $\psi(x)$ will be chosen in order to guarantee continuity at the brane. The spin 0 part of the metric perturbation can be written in terms of a pair of scalars, $\Phi(x,y)$ and $h^{(0)}(x, y)$, like so
\begin{equation}
h_\ensuremath{{\mu\nu}}^{(0)}=\left[D_\mu D_\nu-\frac{1}{4} D^2 \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}\right] \Phi+\frac{1}{4} h^{(0)} \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}.
\end{equation}
From the transverse-tracefree property of $h^{(0)}_\ensuremath{{\mu\nu}}$, it follows immediately that $h^{(0)}=0$ and
\begin{equation}
(D^2+4H^2) \Phi=0.
\end{equation}
Again, we have a mild tachyonic instability associated with inflating domain walls. The metric contribution now resembles a ``partially massless'' spin 2 mode, $(D^2-2H^2)h^{(0)}_\ensuremath{{\mu\nu}}=2H^2 h^{(0)}_\ensuremath{{\mu\nu}}$, which could, in principle, mix with one of the genuine spin 2 modes discussed in section~\ref{sec:spin2}. We will discuss this in more detail later on. Assuming for the moment that there is no issue with mixing, we conclude that the scalar's profile in the bulk is given by the partially massless wavefunction
\begin{equation}
\Phi(x, y)=u_{\sqrt{2}H}(y) \phi(x).
\end{equation}
The spin 0 part of the continuity equation is split into a pure gauge part, and a conformally de Sitter part. Requiring continuity of both parts separately implies that
\begin{equation}
\Delta ( \phi+2\psi)=0, \qquad \Delta ( H^2 \phi+2N'(0)\zeta)=0. \lab{contphi}
\end{equation}
The first condition can be trivially satisfied if we choose $\psi(x)=-\phi(x)/2$. The Israel equations (\ref{isrh}) are independent of $\psi$, and require that
\begin{equation}
\left\langle\left( M^3\left(\frac{u_{\sqrt{2}H}}{N^2}\right)' \Big |_{y=0} +M_4^2 H^2\right) \phi(x)\right\rangle=2\langle (M^3-M^2_4 N'(0))\zeta \rangle. \lab{isrphi}
\end{equation}
It follows from (\ref{contphi}) and (\ref{isrphi}) that
\begin{equation}
\Delta \phi=-\frac{2}{H^2} \Delta\left [ \theta \zeta \sqrt{H^2+ k^2}\right ], \qquad \langle \phi \rangle=\alpha \left \langle \theta \zeta \sqrt{H^2+ k^2} \right\rangle +\beta \Delta \left[\theta \zeta \sqrt{H^2+ k^2} \right],
\end{equation}
where
\begin{eqnarray}
\alpha &=&\frac{2}{f(2H^2)}\left[ \left\langle \frac{ M^3 \theta}{\sqrt{H^2+ k^2}} \right \rangle- M^2_4 \right], \\
\beta &=& -\frac{1}{4H^2f(2H^2)}\Delta\left[ M^3 (1+\theta) \left( \frac{(k+\sqrt{H^2+k^2})^2}{\sqrt{H^2+ k^2}} \right)\right].
\end{eqnarray}
Here we see that the fluctuation in the brane position sources the bulk mode $\phi(x)$. We therefore associate it with the radion. Again, there is no reason to assume $\Delta \phi=0$, so that in general the boundary conditions leave us with up to two spin 0 degrees of freedom. Note that both $\alpha$ and $\beta$ diverge as $f(2H^2) \to 0$. This singular limit corresponds to the case where there exists a genuine spin 2 mode with mass $m^2=2H^2$. The divergence in $\alpha$ and $\beta$ reflects the fact that the lightest spin 2 mode is no longer orthogonal to the spin 0 contribution, and cannot be treated independently. The two modes mix and a more careful analysis is required. Finally, we also note that we can write $\alpha=-F'(H^2)/3f(2H^2)$, where $F(H^2)$ is given by equation (\ref{F(H^2)}).
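For later reference we also note that differentiating (\ref{F(H^2)}) gives
\begin{equation}
F'(H^2)=6M_4^2-6\left\langle \frac{M^3 \theta}{\sqrt{H^2+ k^2}}\right\rangle,
\end{equation}
from which the relation $\alpha=-F'(H^2)/3f(2H^2)$ quoted above follows immediately.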
\section{Coupling to matter} \lab{sec:mat}
When we introduce some additional energy-momentum, $T_\ensuremath{{\mu\nu}}$, on the brane, the homogeneous solution discussed in the previous section picks up an additional contribution that describes the responses of fields to the source on the brane,
\begin{equation}
h_\ensuremath{{\mu\nu}}(x, y) \to h_\ensuremath{{\mu\nu}}(x, y)+ \pi_\ensuremath{{\mu\nu}}(x, y), \qquad \zeta_i(x) \to \zeta_i(x)+\pi_i(x),
\end{equation}
where $\pi_\ensuremath{{\mu\nu}}$ is transverse-tracefree. In analogy with the theory of ordinary differential equations, it is useful to think of the homogeneous pieces, $h_\ensuremath{{\mu\nu}}(x, y)$ and $\zeta_i(x)$, as the ``complementary functions'' and the inhomogeneous pieces, $\pi_\ensuremath{{\mu\nu}}(x, y)$ and $\pi_i(x)$, as the ``particular integrals''. The ``particular integrals'' must be solutions to the following
\begin{equation}
\left[\ensuremath{\partial}_y^2+\frac{1}{N^2} (D^2-4H^2) -4k^2 \right]\pi_\ensuremath{{\mu\nu}}(x, y) =0, \lab{pibulk}
\end{equation}
\begin{equation}
\Delta \left[ \pi_\ensuremath{{\mu\nu}}(x, 0)+2N'(0) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}} \pi(x) \right]=0, \lab{picont}
\end{equation}
\begin{gather}
\left\langle M^3 \left(\frac{\pi_\ensuremath{{\mu\nu}}}{N^2}\right)' \Big |_{y=0}+\frac{M_4^2}{2} (D^2-2H^2) \pi_\ensuremath{{\mu\nu}}(x, 0)\right\rangle =\nonumber \\ \hspace{3cm}2(D_\mu D_\nu-(D^2+3H^2) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})\left\langle (M^3-M_4^2N'(0))\pi(x)\right \rangle
+ \frac{1}{2} T_\ensuremath{{\mu\nu}}. \lab{piisr}
\end{gather}
Tracing the two boundary conditions, and carrying out a little algebra, gives
\begin{equation}
\Delta \left [N'(0) \pi(x)\right]=0, \qquad (D^2+4H^2)\left\langle N'(0) \pi(x)\right \rangle =-\frac{T}{2F'(H^2)}, \lab{mattercoupling}
\end{equation}
where $F(H^2)$ is given by equation (\ref{F(H^2)}). This completely specifies the $\pi_i(x)$, since any homogeneous brane bending is already accounted for in the $\zeta_i(x)$. We now turn our attention to the $\pi_\ensuremath{{\mu\nu}}$. The traceless part of (\ref{picont}) demonstrates that $\Delta \pi_\ensuremath{{\mu\nu}}(x, 0)=0$, whereas the Israel equation (\ref{piisr}) may be rewritten like so
\begin{equation}
\left\langle M^3 \left(\frac{\pi_\ensuremath{{\mu\nu}}}{N^2}\right)' \Big |_{y=0}+\frac{M_4^2}{2} (D^2-2H^2) \pi_\ensuremath{{\mu\nu}}(x, 0)\right\rangle =\frac{1}{2} \tau_\ensuremath{{\mu\nu}}(x),
\end{equation}
where $\tau_\ensuremath{{\mu\nu}}$ is a gauge invariant brane stress energy perturbation defined as~\cite{dgpspec}
\begin{eqnarray}
\tau_\ensuremath{{\mu\nu}}(x)&=&T_\ensuremath{{\mu\nu}}-\frac{2}{3}F'(H^2)(D_\mu D_\nu-(D^2+3H^2) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})\left\langle N'(0) \pi(x)\right \rangle \\
&=& T_\ensuremath{{\mu\nu}}+\frac{1}{3}(D_\mu D_\nu-(D^2+3H^2) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})\left(\frac{T}{D^2+4H^2}\right).
\end{eqnarray}
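For completeness, we spell out the ``little algebra'' behind (\ref{mattercoupling}). Since $\pi_{\mu\nu}$ is transverse-tracefree, the trace of (\ref{picont}) gives $\Delta\left[N'(0)\pi(x)\right]=0$, while the trace of (\ref{piisr}) gives
\begin{equation}
6(D^2+4H^2)\left\langle (M^3-M_4^2N'(0))\pi(x)\right \rangle=\frac{T}{2}.
\end{equation}
Using the continuity of $N'(0)\pi$ across the brane, together with $N'(0)=\theta\sqrt{H^2+k^2}$ and $F'(H^2)=6M_4^2-6\langle M^3\theta/\sqrt{H^2+k^2}\rangle$, one finds $\langle (M^3-M_4^2N'(0))\pi\rangle=-\tfrac{1}{6}F'(H^2)\langle N'(0)\pi\rangle$, and (\ref{mattercoupling}) follows.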
It turns out that in ${\cal M}_i$,
\begin{equation}
\pi^{(i)} _\ensuremath{{\mu\nu}}(x, y)=\int d^4 x' \sqrt{-\bar \ensuremath{\gamma}} ~ G^{(i)}_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, y; x', 0) \tau_{\alpha \beta}(x'),
\end{equation}
where $G^{(i)}_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, y; x', 0)$ is the relevant Green's function, satisfying
\begin{equation}
\left[\ensuremath{\partial}_y^2+\frac{1}{N^2} (D^2-4H^2) -4k^2 \right]G^{(i)}_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, y; x', 0) =0, \lab{Gbulk}
\end{equation}
\begin{equation}
\Delta \left[ G _\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, 0; x', 0) \right]=0. \lab{Gcont}
\end{equation}
\begin{gather}
\left\langle M^3 \left(\frac{G_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, y; x', 0) }{N^2}\right)' \Big |_{y=0}+\frac{M_4^2}{2} (D^2-2H^2) G_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, 0; x', 0) \right\rangle =\frac{\delta^{(4)}(x-x')}{\sqrt{-\bar \ensuremath{\gamma}}}. \lab{Gisr}
\end{gather}
The Green's function can be expressed in terms of the wavefunctions $u_m(y)$ discussed in section (\ref{sec:spin2}). Defining the {\it normalised} wavefunctions $ \hat u^{(i)}_m(y)={\cal N}_m u^{(i)}_m(y) $, where the ${ \cal N}_m$ are chosen so that
\begin{equation}
\left \langle M_4^2 \hat u_m(0)\hat u_n(0)+ 2M^3 \int_0^{y_\tt{max}} dy ~\frac{\hat u_m \hat u_n}{N^2} \right\rangle=\begin{cases} \delta_{mn} & \tt{for discrete modes}, \\
\delta(m-n) & \tt{for continuum modes}, \end{cases}
\end{equation}
we have
\begin{equation}
G^{(i)}_\ensuremath{{\mu\nu}}{}^{\ensuremath{\alpha} \beta}(x, y; x', 0)= -\int_p \chi_\ensuremath{{\mu\nu}}^{(p)}(x) \chi^{*(p)\alpha \beta}(x') \int_m \frac{\hat u^{(i)}_m(y) \hat u^{(i)}_m(0)}{p^2-m^2}.
\end{equation}
Note that $(D^2-2H^2) \chi_\ensuremath{{\mu\nu}}^{(p)}=p^2 \chi_\ensuremath{{\mu\nu}}^{(p)}$, and $\chi^{* \alpha \beta}$ satisfies
$$\int_p \chi_\ensuremath{{\mu\nu}}^{(p)}(x) \chi^{*(p)\alpha \beta}(x') =\delta^\ensuremath{\alpha}_\mu \delta^\beta_\nu \delta^{(4)}(x-x')/\sqrt{-\bar \ensuremath{\gamma}}.$$
For more details on this construction, at least for DGP gravity, see section 3.3 of~\cite{dgpspec}.
\section{The effective action} \lab{sec:eff}
We now compute the effective $4D$ action of normalisable vacuum perturbations. This will enable us to identify any ghosts: pathological modes with negative kinetic terms. Of course, we already know that whenever $f(2H^2)>0$ a ghost haunts the helicity-0 sector of the lightest spin 2 mode. Our effective action calculation will reveal a generic spin-0 ``radion'' ghost in the opposite regime, i.e.\ when $f(2H^2)<0$.
We begin our calculation in bulk Randall-Sundrum gauge, so that the brane is positioned at $y=\zeta(x)$ and the metric perturbation is given by
\begin{equation}
h_\ensuremath{{\mu\nu}}(x, y)=\int_m u_m(y)\chi^{(m)}_\ensuremath{{\mu\nu}}(x)+u_0(y)h_\ensuremath{{\mu\nu}}^{(a)}(x)+u_{\sqrt{2}H}(y)h_\ensuremath{{\mu\nu}}^{(\phi)}(x),
\end{equation}
where
\begin{equation}
h_\ensuremath{{\mu\nu}}^{(a)}(x) = D_\mu a_\nu +D_\nu a_\mu, \qquad h_\ensuremath{{\mu\nu}}^{(\phi)}(x) = (D_\mu D_\nu +H^2 \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}) \phi.
\end{equation}
In computing the action, it is important to leave the $4D$ fields off-shell. In other words, we do not assume $(D^2-2H^2)\chi^{(m)}_\ensuremath{{\mu\nu}}=m^2\chi_\ensuremath{{\mu\nu}}^{(m)}$, $(D^2+3H^2)a_\mu=0$, or $(D^2+4H^2)\phi=0$. These equations should follow from variation of the action at the end of the calculation.
Randall-Sundrum gauge is the correct gauge choice far from the brane, since it contains no pure gauge modes with a non-normalisable profile in the bulk. However, in order to compute the effective action, it is convenient to be in brane-GN gauge close to the brane so that it lies at $y=0$ and the $4D$ coordinates match on either side. This can be achieved whilst maintaining Randall-Sundrum gauge far from the brane, but only at a price: we are no longer everywhere Gaussian-Normal. We can transform to this ``fixed wall'' gauge from everywhere Randall-Sundrum gauge by the following gauge transformation
\begin{equation}
y \to y-\eta^y(x, y), \qquad x^\mu \to x^\mu-\eta^\mu(x, y),
\end{equation}
where
\begin{equation}
\eta^y(x, y)=\begin{cases}
\zeta(x) & \textrm{for $y \ll y_*$}, \\
0 & \textrm{for $y \gg y_*$},
\end{cases} \qquad
\eta^\mu(x, y)=\begin{cases}
\xi^\mu(x)-D^\mu \zeta(x) \int_0^y \frac{dz}{N^2(z)} & \textrm{for $y \ll y_*$}, \\
0 & \textrm{for $y \gg y_*$},
\end{cases}
\end{equation}
where $0<y_*<y_\tt{max}$ is some appropriately chosen finite distance. It follows that
\begin{equation}
\delta g_{ab} \to \delta g_{ab}+2\nabla_{(a} \eta_{b)} \lab{dg},
\end{equation}
where $\nabla$ is the covariant derivative for $\bar g_{ab}$. This new gauge interpolates between Randall-Sundrum gauge deep inside the bulk and brane-GN gauge near the brane. As a result, the metric perturbation along the brane is the same as in brane-GN gauge, with
\begin{equation}
\delta \ensuremath{\gamma}_\ensuremath{{\mu\nu}}=h_\ensuremath{{\mu\nu}}(x, 0)+2N'(0)\bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}\zeta +2D_{(\mu} \xi_{\nu)}. \lab{dga}
\end{equation}
We now perturb the action to quadratic order
\begin{equation}
\delta S=\left \langle M^3 \int_{\mathcal{M}} d^5 x \sqrt{-\bar g} \delta g^{ab} \delta E_{ab}\right \rangle +\frac{1}{2} \int_\Sigma d^4x \sqrt{-\bar \ensuremath{\gamma}} \delta \ensuremath{\gamma}^\ensuremath{{\mu\nu}} \delta \Theta_\ensuremath{{\mu\nu}},
\end{equation}
where $\delta E_{ab}$ and $\delta \Theta_\ensuremath{{\mu\nu}}$ are the linearised bulk equation of motion (\ref{bulkeom}) and vacuum Israel equation (\ref{braneeom}), respectively. Using (\ref{dg}), (\ref{dga}) and the Bianchi identity $\nabla^a\delta E_{ab}=0$, we find that
\begin{equation}
\delta S=\int d^4 x \sqrt{-\bar \ensuremath{\gamma}} \delta \mathcal{L},
\end{equation}
where
\begin{gather}
\delta {\cal L}=\left\langle -M^3\left[ \int_0^{y_\tt{max}} dy \;h^\ensuremath{{\mu\nu}}(x, y) \delta E_\ensuremath{{\mu\nu}}(h)\right]+2M^3 \eta^a(x, 0) \delta E_{ay}(h)\Big |_{y=0} \right\rangle\nonumber\\
\qquad\qquad-\frac{1}{2} \left\langle h^\ensuremath{{\mu\nu}} (x, 0)+2N'(0)\bar \ensuremath{\gamma}^\ensuremath{{\mu\nu}}\zeta +2D^{(\mu} \xi^{\nu)}\right\rangle\delta \Theta_\ensuremath{{\mu\nu}}.
\end{gather}
We cannot assume $D^\mu h_\ensuremath{{\mu\nu}}^{(a)} =D^\mu h_\ensuremath{{\mu\nu}}^{(\phi)} =h^{(\phi)}=0$, since these imply the on-shell equations of motion for $a_\mu$ and $\phi$. We therefore need the following expressions for $\delta E_{ab}$ and $\delta \Theta_\ensuremath{{\mu\nu}}$ for a generic GN perturbation.
\begin{eqnarray}
\delta E_\ensuremath{{\mu\nu}}(h) &=& \frac{1}{N^2} X_\ensuremath{{\mu\nu}}(h)-\frac{1}{2} \left[ \ensuremath{\partial}_y^2-2\left(\frac{H^2}{N^2}+2k^2\right)\right]\left(h_\ensuremath{{\mu\nu}}-h\bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}} \right), \\
\delta E_{\mu y} (h)&=& \frac{1}{2} \ensuremath{\partial}_y \left[\frac{D^\nu(h_\ensuremath{{\mu\nu}}-h \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})}{N^2}\right], \\
\delta E_{yy}(h)&=&\frac{3 N'}{2N}\ensuremath{\partial}_y\left[\frac{h}{N^2}\right]-\frac{1}{2N^4}(D^{\mu} D^\nu-(D^2+3H^2)\bar \ensuremath{\gamma}^\ensuremath{{\mu\nu}})h_\ensuremath{{\mu\nu}}, \\
\delta \Theta_\ensuremath{{\mu\nu}} &=& -\left\langle \left[M^3 \ensuremath{\partial}_y \left(\frac{ h_\ensuremath{{\mu\nu}}- h \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}}}{N^2}\right)-M_4^2 X_\ensuremath{{\mu\nu}}( h)\right]\Big |_{y=0} \right\rangle \nonumber \\
&& \qquad +2(D_\mu D_\nu-(D^2+3H^2) \bar \ensuremath{\gamma}_\ensuremath{{\mu\nu}})\left\langle (M^3-M_4^2N'(0))\zeta\right \rangle.
\end{eqnarray}
Making use of equations (\ref{eq:bulkeom}), (\ref{fm^2}), (\ref{Abc}), (\ref{contphi}), (\ref{isrphi}), as well as the orthogonality condition
\begin{equation}
\left \langle
2 M^3\int_0^{y_\tt{max}} dy \frac{u_m(y)u_n(y)}{N^2(y)}+M_4^2u_m(0)u_n(0)
\right \rangle
=0, \qquad m \neq n,
\end{equation}
we arrive at the following $4D$ effective Lagrangian
\begin{equation}
\delta {\cal L}=\delta {\cal L}_2+\delta {\cal L}_1+\delta {\cal L}_0,
\end{equation}
where the spin 2, spin 1 and spin 0 contributions are respectively given by
\begin{eqnarray}
\delta {\cal L}_2&=&\frac{1}{2} \int_m \left[\int_0^{y_\tt{max}} dy \frac{u_m(y)^2}{N^2(y)}+M_4^2\right]\chi^{(m) \ensuremath{{\mu\nu}}} (D^2-2H^2-m^2)\chi^{(m)}_\ensuremath{{\mu\nu}}, \\
\delta {\cal L}_1 &=& \frac{1}{4}\left\langle \frac{1}{M^3(u_0/N^2)'|_{y=0}} \right\rangle^{-1}
\Delta a^\mu (D^2+3H^2) \Delta a_\mu, \\
\delta {\cal L}_0 &=& \frac{9 f(2H^2)}{F'(H^2)}\langle\gamma \rangle \left[\langle \phi\rangle+\frac{\Delta \ensuremath{\gamma}\Delta \phi}{4\langle \ensuremath{\gamma} \rangle}\right](D^2+4H^2) \left[\langle \phi\rangle+\frac{\Delta \ensuremath{\gamma}\Delta \phi}{4\langle \ensuremath{\gamma} \rangle}\right] \nonumber \\
&& \qquad +\frac{3H^2}{8} \left\langle \frac{1}{\ensuremath{\gamma}}\right \rangle^{-1}\Delta \phi (D^2+4H^2) \Delta \phi,
\end{eqnarray}
and
\begin{eqnarray}
\gamma &=& M^3 \left(\frac{1+\theta}{2}\right)\frac{ (k+\sqrt{H^2+k^2})^2}{\sqrt{H^2+k^2}}>0, \\
(u_0/N^2)'|_{y=0} &=& - \left(\frac{1+\theta}{2}\right)\left(\int_0^{y_\tt{max}} \frac{dz}{N(z)^4} \right)^{-1} <0.
\end{eqnarray}
As we stated earlier, we do not consider a finite volume bulk ($\theta_L=\theta_R=-1$). Let us now analyse the alternatives. We see immediately that there is a spin 1 ghost whenever $\theta_L=\theta_R=+1$. There are two spin 0 modes, roughly corresponding to the average radion and the difference. The latter is never a ghost, whereas the kinetic term for the average radion is determined by the sign of $f(2H^2)/F'(H^2)$. Recall that for a well behaved cosmology we require that $F'(H^2) \geq 0$~\cite{cgp}. For finite $F'(H^2)>0$, it follows that we have a radion ghost whenever $f(2H^2)<0$. In section \ref{sec:spin2}, we found that the lightest spin 2 mode contains an helicity-0 ghost in precisely the opposite regime, i.e.\ when $f(2H^2)>0$. This is exactly the sort of behaviour found on the self-accelerating branch of DGP: a well behaved spin 2 sector corresponds to a pathological radion, and vice versa~\cite{arethereghosts, moreonghosts, dgpspec, review}.
When $\theta_L=-1,~\theta_R=+1$, the spin 1 mode and the radion difference decouple completely. In contrast, the average radion typically remains in the spectrum, and we can draw similar conclusions regarding its stability as discussed in the previous paragraph for $\theta_L=\theta_R=+1$. However, there are a few exceptional cases. $F'(H^2)=0$ and $F'(H^2) \to \infty$ ultimately correspond to the ``stealth'' scenarios identified in~\cite{cgp}, where the brane is Minkowski as opposed to de Sitter. The former is the conformal or strong coupling limit whereas the latter is the decoupling limit. The ghost is absent in both cases. Naively, the case $f(2H^2)=0$ would also appear to be ghost free, since the kinetic term for the radion vanishes. Actually, this conclusion is incorrect. $f(2H^2)=0$ corresponds to the case where the radion mixes with the spin 2 mode, rendering our analysis invalid. The mixing occurs because the lightest spin 2 mode has the same mass as the spin 0 mode ($m_{ \tt {light}}^2=2H^2$). The two modes cease to be orthogonal and a more careful analysis is required. This was done for the self-accelerating branch of the DGP model, where the ghost was shown to remain even when $f(2H^2)=0$~\cite{moreonghosts, dgpspec}. It is natural to expect the same behaviour here.
\section{Discussion} \lab{sec:disc}
In this paper we have considered the stability of de Sitter branes in the CGP model: an asymmetric generalisation of the DGP model. These vacua include the analogue of the normal branch in DGP, as well as the self-accelerating branch. Whenever the background bulk has infinite volume, we have found, without exception, that linear perturbations about these vacua contain ghosts. As for the self-accelerating branch of DGP, there is always a ghost in either the spin 2 or spin 0 sector. If the spin 2 sector is well behaved, there is a spin 0 ghost corresponding to the average radion. If the spin 0 sector is well behaved, the helicity-0 part of the lightest spin 2 mode is a ghost. A more careful analysis is required in the crossover region, when the two offending modes mix with one another. However, our experience from the self-accelerating branch of DGP would imply that the ghost remains even in this limit~\cite{moreonghosts, dgpspec}. In the most pathological scenarios, there is yet another ghost corresponding to the antisymmetric spin 1 mode.
It is interesting to note that the only way to avoid ghosts in this model is to consider Minkowski branes. This was studied in detail in~\cite{cgp}, where certain interesting vacua were found to be ghost free. These vacua corresponded to the ``stealth'' models, and had the curious property of giving rise to power law acceleration in the presence of matter, before asymptoting to Minkowski space at late times. Indeed, the stealth model realises the Cardassian cosmology of Freese and Lewis~\cite{cardassian}, as well as offering a possible resolution of the coincidence problem. Given the successes of these models, it is worth asking whether or not our analysis can shed any light on their consistency.
Of course, the stealth vacua do {\it not} include de Sitter branes. In fact, the vacuum brane is Minkowski and is known to be ghost free, in contrast to the de Sitter branes considered here. What we can say is that the introduction of a small brane cosmological constant introduces an instability in the stealth model. It is reasonable to extend this conclusion to any type of matter, at least for small $H$. It follows that the stealth model is unstable close to the asymptotically Minkowski limit. The question now remains: how dangerous is this instability?
A ghost will terrorize the vacuum if it couples to ordinary fields. The problem is that in a unitary theory, the ghost ought to carry negative energy, and can be produced in the vacuum along with ordinary fields without violating energy conservation. In a Lorentz invariant theory, the ghost-non ghost production rate is divergent, no matter how weak the coupling! This occurs because one can always use Lorentz invariance to perform a boost on the 3-momentum cut-off in loop integrals. However, a generic Friedmann-Robertson-Walker brane automatically breaks Lorentz invariance, so the stealth model does not necessarily suffer from this catastrophic instability (for a related discussion, see~\cite{izumi}). If the ghost only couples weakly to other fields, the ghost-non ghost production rate gets suppressed.
The stealth model contains a decoupling scenario where the would be ghost decouples from the spectrum as $H \to 0$. This corresponds to the case where we have $k_L=0, ~\theta_L=-1$ and $k_R>0, ~\theta_R=+1$, so the cosmological dynamics is governed by the following
\begin{equation}
\lab{eq:Friedmann}
\rho = F(H^2) = 6 M_4^2 H^2 - 6 M_R^3 \left( \sqrt{H^2+k_R^2}-k_R \right) + 6 M_L^3H.
\end{equation}
For small $H$, it is easy enough to check that $f(2H^2) \sim -2M_R^3 k_R<0$, from which we conclude that there is a radion ghost. We know from~\cite{cgp} that the radion decouples in the Minkowski limit, so it must be weakly coupled at small $H$. Given that the radion feeds into the brane bending mode, we can see this explicitly by considering the coupling of the brane bending mode to matter (see equation (\ref{mattercoupling})). The coupling strength is given by $1/F'(H^2) \sim H/3M_L^3$, which does indeed go to zero as $H \to 0$. We conclude that this particular stealth model will barely be affected by the ghost at small $H$, owing to the weakness of the coupling. At larger values of $H$, our de Sitter brane analysis suggests that the ghost coupling becomes significant, but we cannot be sure that these results apply to a general FRW brane.
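To make the small-$H$ behaviour quoted above explicit, note that (\ref{eq:Friedmann}) gives $F'(H^2)=6M_4^2-3M_R^3/\sqrt{H^2+k_R^2}+3M_L^3/H$, so that $F'(H^2)\simeq 3M_L^3/H$ for small $H$, while the general expression for $f(2H^2)$ reduces in this case to $f(2H^2)=-2M_R^3k_R+\tfrac{1}{2}M_L^3H+{\cal O}(H^2)$.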
~\\
{\large \bf Acknowledgements}\\
We would like to thank Ruth Gregory, Christos Charmousis and Takahiro Tanaka for useful discussions.
KK was supported by ERC, RCUK and STFC. AP was funded by a Royal Society University Research Fellowship.
FPS was supported by
``Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (Portugal)",
with the fellowship's reference number: SFRH/BD/27249/2006.
\section{Introduction}
This paper is a continuation of our study of the $V^G$-module category $\mathcal{C}_{V^G}$ for a regular (rational and $C_2$-cofinite) vertex operator algebra $V$ with a finite group $G$ of automorphisms (cf. \cite{DNR}). It is established that if $V^G$ is regular, the category $\mathcal{E}_{V^G}$ generated by the $V^G$-submodules of $V$ is a symmetric fusion category braided equivalent to the $G$-module category $\mathcal{E}={\rm Rep}(G).$ If $V$ is holomorphic, then the $V^G$-module category $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{E},$ and is equivalent to the Drinfeld center $\mathcal{Z}({\rm Vec}_G^{\alpha})$ or to the module category of the twisted Drinfeld double $D^{\alpha}(G)$ for some $\alpha\in H^3(G,S^1)$ with a canonical embedding of $\mathcal{E}$. This result has been conjectured in \cite{DPR} where the $D^{\alpha}(G)$ was introduced and studied. Moreover, the collection $\mathcal{M}_v(\mathcal{E})$ of equivalence classes of the minimal modular extensions $\mathcal{C}_{V^G}$ of $\mathcal{E}$ for some holomorphic vertex operator algebra $V$ with a $G$-action forms a group, which is isomorphic to a subgroup of $H^3(G,S^1).$ Furthermore, any pointed modular category $\mathcal{Z}({\rm Vec}_G^{\alpha})$ is equivalent to $\mathcal{C}_{V_L^G}$
for some positive definite even unimodular lattice $L.$ For any rational vertex operator algebra $U$ with a $G$-action, $\mathcal{C}_{U^G}$ is a minimal modular extension of the braided fusion subcategory $\mathcal{F}$ generated by the $U^G$-submodules of $U$-modules. Furthermore, the group $\mathcal{M}_v(\mathcal{E})$ acts freely on the set of equivalence classes $\mathcal{M}_v(\mathcal{F})$ of the minimal modular extensions $\mathcal{C}_{W^G}$ of $\mathcal{F}$ for any rational vertex operator algebra $W$ with a $G$-action.
It was proved in \cite{CM} that if $G$ is solvable, then the regularity of $V$ implies the regularity of $V^G.$ So we only need to assume $V$ is regular in this case.
More recently, the regularity of $V$ together with the $C_2$-cofiniteness of $V^G$ implies the rationality of $V^G$ \cite{Mc}.
We now give a detailed discussion of the contents of this paper.
A braided fusion category $\mathcal{C}$ over $\mathcal{E}={\rm Rep}(G)$, simply called a braided $\mathcal{E}$-category, is a pair $(\mathcal{C}, \eta)$ where $\mathcal{C}$ is a braided fusion category and $\eta: \mathcal{E} \to \mathcal{C}$ is a full and faithful braided tensor functor. Throughout this paper, we call any full and faithful braided tensor functor an \emph{embedding}. A braided $\mathcal{E}$-category $(\mathcal{C}, \eta)$ is said to be \emph{nondegenerate} if $\eta: \mathcal{E} \to \mathcal{C}'$ is an equivalence, where $\mathcal{C}'$ denotes the M\"uger center of $\mathcal{C}$. Note that $\mathcal{E}$ is a nondegenerate braided $\mathcal{E}$-category. We may simply write $\mathcal{C}$ for the braided $\mathcal{E}$-category $(\mathcal{C}, \eta)$ when there is no ambiguity.
An equivalence of braided $\mathcal{E}$-categories $(\mathcal{C}_1, \eta_1)$ and $(\mathcal{C}_2, \eta_2)$ is a braided tensor equivalence $F: \mathcal{C}_1 \to \mathcal{C}_2$ such that $\eta_2 \cong F \circ \eta_1$ as braided tensor functors.
A modular extension of a braided $\mathcal{E}$-category $\mathcal{C}$ is a pair $(\mathcal{D}, j_\mathcal{D})$ in which $\mathcal{D}$ is a modular tensor category and $j_\mathcal{D}: \mathcal{C} \to \mathcal{D}$ is an embedding. Similar to the equivalence of two braided $\mathcal{E}$-categories, two modular extensions $(\mathcal{D}_1, j_1), (\mathcal{D}_2, j_2)$ of $\mathcal{C}$ are said to be \emph{equivalent} if there exists a braided tensor equivalence $F: \mathcal{D}_1 \to \mathcal{D}_2$ such that $j_2\cong F \circ j_1$ as braided tensor functors. We may simply write $\mathcal{D}$ for the modular extension $(\mathcal{D}, j_\mathcal{D})$ of $\mathcal{C}$ if there is no ambiguity, and $[\mathcal{D}]$ for the equivalence class of $(\mathcal{D}, j_\mathcal{D})$.
A modular extension $\mathcal{D}$ of a nondegenerate braided $\mathcal{E}$-category $\mathcal{C}$ is called \emph{minimal} if ${\rm FPdim}(\mathcal{D})=o(G)\cdot{\rm FPdim}(\mathcal{C}).$
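Note that ${\rm FPdim}(\mathcal{E})={\rm FPdim}({\rm Rep}(G))=o(G)$, so a minimal modular extension of $\mathcal{E}$ itself has Frobenius-Perron dimension $o(G)^2$, which is exactly ${\rm FPdim}(\mathcal{Z}({\rm Vec}_G^{\alpha}))$ for the categories appearing below.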
According to \cite{BNRW} there are only finitely many inequivalent minimal modular extensions of $\mathcal{C}$ if there is one. Moreover, by \cite{LKW1}, the collection $\mathcal{M}(\mathcal{E})$ of equivalent classes $[\mathcal{C}]$ of minimal modular extensions of $\mathcal{E}$ forms a finite group isomorphic to $H^3(G,S^1)$ under
the relative Deligne tensor product $\mathcal{C}\otimes_\mathcal{E}\mathcal{D}$ for $[\mathcal{C}],[\mathcal{D}] \in \mathcal{M}(\mathcal{E})$.
In fact, any minimal modular extension of $\mathcal{E}$ is braided equivalent to the Drinfeld center $\mathcal{Z}({\rm Vec}_G^{\alpha})$ where ${\rm Vec}_G^{\alpha}$ is the fusion category of $G$-graded vector spaces over $\mathbb C$ whose associativity isomorphism is given by the 3-cocycle $\alpha.$ Note that $\mathcal{Z}({\rm Vec}_G^{\alpha})$ is braided equivalent to the module category of the twisted Drinfeld double
$D^{\alpha}(G).$
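For example, if $G=\mathbb Z_2$ then $H^3(G,S^1)\cong\mathbb Z_2$, and $\mathcal{E}={\rm Rep}(\mathbb Z_2)$ has exactly two inequivalent minimal modular extensions, namely the module categories of the untwisted double $D(\mathbb Z_2)$ and of the twisted double $D^{\alpha}(\mathbb Z_2)$ with $\alpha$ the nontrivial class; both are pointed with four simple objects and Frobenius-Perron dimension $4$.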
For any pseudounitary nondegenerate braided $\mathcal{E}$-category $\mathcal{F}$, if $\mathcal{F}$ has a minimal modular extension, then the collection $\mathcal{M}(\mathcal{F})$ of equivalence classes of minimal modular extensions of $\mathcal{F}$ admits a natural action of $\mathcal{M}(\mathcal{E})$ via the relative Deligne tensor product. Moreover, $\mathcal{M}(\mathcal{F})$ is a $\mathcal{M}(\mathcal{E})$-torsor \cite{LKW1}.
Our investigation of the $V^G$-module category $\mathcal{C}_{V^G}$ in terms of the minimal modular extensions of certain braided fusion category is influenced greatly by the work of \cite{LKW1}. Note from \cite{Hu} that $\mathcal{C}_{V^G}$ is a modular tensor category.
Associated to a rational vertex operator algebra $V$ with a $G$-action are two more braided fusion subcategories $\mathcal{E}_{V^G}$ and $\mathcal{F}_{V^G}$ of $\mathcal{C}_{V^G}$. Here, $\mathcal{E}_{V^G}$ is the full subcategory of $\mathcal{C}_{V^G}$ generated by the $V^G$-submodules of $V$, and $\mathcal{F}_{V^G}$ is the full subcategory of $\mathcal{C}_{V^G}$ generated by $V^G$-submodules of $V$-modules. $\mathcal{F}_{V^G}$ is a nondegenerate braided $\mathcal{E}$-category which satisfies
$$
C_{\mathcal{C}_{V^G}}(\mathcal{E}_{V^G}) = \mathcal{F}_{V^G} \quad \text{and}\quad C_{\mathcal{C}_{V^G}}(\mathcal{F}_{V^G}) = \mathcal{E}_{V^G}\,,
$$
where $C_\mathcal{C}(\mathcal{B})$ denotes the M\"uger centralizer of the subcategory $\mathcal{B}$ in the braided fusion category $\mathcal{C}$. These two categories are the same if and only if $V$ is holomorphic. The main idea is to put these categories in the context of the minimal modular extensions. Recall from \cite{DLM1} a Schur-Weyl type duality decomposition
$$V=\oplus_{\lambda\in {\rm Irr}(G)}W_{\lambda}\otimes V_{\lambda}$$
where $W_{\lambda}$ is the irreducible $G$-module with character $\lambda$ and the multiplicity spaces $V_{\lambda}$ are inequivalent irreducible $V^G$-modules. Then $\mathcal{E}_{V^G}$ is generated by $V_{\lambda}$ for $\lambda\in {\rm Irr}(G).$ Our first
result asserts that $\mathcal{E}_{V^G}$ is a symmetric fusion category braided equivalent to $\mathcal{E}$ for any rational vertex operator algebra $V$ via an embedding $F^{V,G}: \mathcal{E} \to \mathcal{C}_{V^G}$
(also see \cite{Ki}). In particular, $(\mathcal{C}_{V^G}, F^{V,G})$ is a braided $\mathcal{E}$-category.
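For instance, if $V$ is simple and $G=\langle g\rangle$ is cyclic of order $T$ acting faithfully, the decomposition above is simply the eigenspace decomposition $V=\oplus_{r\in \mathbb Z/T\mathbb Z}V^r$ recalled in Section 2, and each eigenspace $V^r$ is an irreducible $V^G$-module.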
In the case when $V$ is holomorphic, $(\mathcal{C}_{V^G}, F^{V,G})$ is a minimal modular extension of $\mathcal{E}.$ So $\mathcal{C}_{V^G}$ is braided
equivalent to $\mathcal{Z}({\rm Vec}_G^{\alpha})$ for some $\alpha\in H^3(G,S^1).$ Let ${\bf H}_G$ be the collection of holomorphic vertex operator algebras with a $G$-action. Then $\mathcal{M}_v(\mathcal{E})$, consisting of the equivalence classes of $(\mathcal{C}_{V^G}, F^{V,G})$ for $V\in {\bf H}_G$, is a subgroup of $\mathcal{M}(\mathcal{E}).$ We certainly believe that $\mathcal{M}_v(\mathcal{E})=\mathcal{M}(\mathcal{E})$. If $G$ is an abelian group generated by fewer than 3 elements or an odd dihedral group,
we show that $\mathcal{M}_v(\mathcal{E})=\mathcal{M}(\mathcal{E}).$
We also show that the group operation given in \cite{LKW1} can be realized from the tensor product of vertex operator algebras, i.e.
$\mathcal{C}_{V^G}\otimes^{F^{V,G}, F^{U,G}}_{\mathcal{E}} \mathcal{C}_{U^G} \cong (\mathcal{C}_{(V\otimes U)^G}, F^{V \otimes U, G})$ for $V, U\in {\bf H}_G,$ where the $G$-action on $V \otimes U$ is the diagonal action of $G \times G$.
Now we assume that $\mathcal{F}$ is an arbitrary pseudounitary nondegenerate braided $\mathcal{E}$-category. Let ${\bf R}_{G}^{\mathcal{F}}$ be the collection of rational vertex operator algebras $W$ with a $G$-action such that $\mathcal{F}_{W^G}$ is equivalent to $\mathcal{F}$ as braided $\mathcal{E}$-categories. If ${\bf R}_G^\mathcal{F}$ is not empty, we establish that $\mathcal{M}_v(\mathcal{E})$ acts freely on the set of equivalence classes $\mathcal{M}_v(\mathcal{F})=\{[\mathcal{C}_{W^G}]\,|\,W\in {\bf R}_G^{\mathcal{F}}\}$
such that $\mathcal{C}_{V^G}\otimes^{F^{V,G}, F^{W,G}}_{\mathcal{E}} \mathcal{C}_{W^G} \cong (\mathcal{C}_{(V\otimes W)^G}, F^{V \otimes W, G})$ for $V\in {\bf H}_G,$ $W\in {\bf R}_{G}^{\mathcal{F}},$ where the $G$-action on $V \otimes W$ is the diagonal action of $G \times G$. Again, it is desirable that $\mathcal{M}_v(\mathcal{F})=\mathcal{M}(\mathcal{F})$ and $\mathcal{M}_v(\mathcal{F})$ is a $\mathcal{M}_v(\mathcal{E})$-torsor whenever $\mathcal{M}_v(\mathcal{F}) \ne \emptyset$.
For any braided fusion category $\mathcal{C}$ with braiding isomorphism $c_{X,Y}:X\otimes Y\to Y\otimes X$, $\overline{c}_{X,Y}=c_{Y,X}^{-1}$ also defines a braiding on $\mathcal{C}$. We denote by $\overline{\mathcal{C}}$ the braided fusion category $\mathcal{C}$ equipped with the braiding $\overline{c}$. Note that $\overline{\mathcal{E}} = \mathcal{E}$ as braided tensor categories. Thus, if $(\mathcal{C}, j)$ is a braided $\mathcal{E}$-category, then so is $(\overline \mathcal{C}, j)$. Again, we will simply write $\overline\mathcal{C}$ for the braided $\mathcal{E}$-category $(\overline\mathcal{C}, j)$. The braided $\mathcal{E}$-category $\overline{\mathcal{C}}$ plays an essential role in the group structure of $\mathcal{M}(\mathcal{E})$
and the $\mathcal{M}(\mathcal{E})$-torsor structure on $\mathcal{M}(\mathcal{F}).$ In fact, if $\mathcal{M}\in\mathcal{M}(\mathcal{E})$, then its inverse is exactly
$\overline{\mathcal{M}}.$ The proof of free and transitive action of $\mathcal{M}(\mathcal{E})$ on $\mathcal{M}(\mathcal{F})$ in \cite{LKW2} also uses $\overline{\mathcal{N}}$ for
$\mathcal{N}\in \mathcal{M}(\mathcal{F}).$ So it is necessary and important to understand $\overline{\mathcal{C}_V}$ for a rational vertex operator algebra $V.$ For such $V$ we now have two modular tensor categories $\mathcal{C}_V$ and $\overline{\mathcal{C}_V}.$ In the setting of braiding isomorphisms, one needs to define $(-1)^n$ for a rational number $n.$ The braidings $c_{X,Y}$ and $c_{Y,X}^{-1}$ correspond to the choices $(-1)^n=e^{\pi in}$ and $(-1)^n=e^{-\pi in}.$
From the point of view of vertex operator algebras, we conjecture that the modular tensor category $\overline{\mathcal{C}_V}$ is braided equivalent to $\mathcal{C}_U$ for some rational vertex operator algebra $U.$ This is consistent with the reconstruction program. That is, any modular tensor category $\mathcal{C}$ can be realized as $\mathcal{C}_{W}$ for some rational vertex operator algebra $W.$ Using the language of vertex operator algebras, the conjecture is equivalent to the statement: If $V$ is a rational vertex operator algebra then there exists a holomorphic vertex operator algebra $H$ containing $V$ as a sub-VOA such that the double commutant $C_H(C_H(V))=V.$ Then $\overline{\mathcal{C}_V}$ and
$\mathcal{C}_{C_H(V)}$ are braided equivalent. We prove this conjecture for lattice vertex operator algebra $V_L,$ affine vertex operator algebras associated to the integrable highest weight representations and the Virasoro vertex operator algebras associated to the discrete series.
This paper is organized as follows: We review the twisted modules and $g$-rationality of vertex operator algebras following \cite{DLM3} in Section 2. Section 3 is a review of basics on the fusion categories, braided fusion categories, modular tensor categories and minimal modular extensions of a fusion category over $\mathcal{E}$ \cite{ENO,EGNO,KO}. We also present the main results concerning $\mathcal{M}(\mathcal{E})$ and its torsor $\mathcal{M}(\mathcal{C})$ from \cite{LKW1}. Section 4 is a review of the modular tensor category
$\mathcal{C}_V$ associated to a rational, $C_2$-cofinite vertex operator algebra $V$ \cite{HL1,HL2,HL3, Hu}. We
discuss how to realize $\overline{\mathcal{C}_V}$ as $\mathcal{C}_U$ with some conjecture and examples in the last half of this section. We also
give a necessary and sufficient condition for $\overline{\mathcal{C}_V}$ and $\mathcal{C}_U$ to be braided equivalent. In Section 5, we recall from \cite{DRX, DLXY} the classification of irreducible $V^G$-modules and related results. In Section 6 we prove that for any
rational, $C_2$-cofinite vertex operator algebra $V,$ $\mathcal{E}$ and $\mathcal{E}_{V^G}$ are braided equivalent, and
the regular commutative algebra $\mathbb C[G]^*$ in $\mathcal{E}$ corresponds to commutative algebra $V$ in $\mathcal{E}_{V^G}$ for any rational vertex operator algebra $V$ under the braided equivalence. Furthermore, $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{F}_{V^G}.$ In particular, if $V$ is holomorphic, then $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{E}.$ Section 7 is devoted to the proof that $\mathcal{M}_v(\mathcal{E})$ is a finite abelian group under the product $\mathcal{C}_{V^G}\cdot \mathcal{C}_{U^G}=\mathcal{C}_{(V\otimes U)^G}$ and $\mathcal{M}_v(\mathcal{E})$ acts on $\mathcal{M}_v(\mathcal{F})$ by $\mathcal{C}_{V^G}\cdot \mathcal{C}_{W^G}=\mathcal{C}_{(V\otimes W)^G}.$ We prove in Section 8 that if $\mathcal{Z}({\rm Vec}_G^{\alpha})$ is pointed, then $\mathcal{Z}({\rm Vec}_G^{\alpha})$ is equivalent to $\mathcal{C}_{V_L^G}$
for some positive definite even unimodular lattice $L.$
\section{Twisted modules}
Let $V$ be a vertex operator algebra and $g$ an automorphism of $V$ of finite order $T$. Then $V$ is a direct sum of eigenspaces of $g:$
$V=\bigoplus_{r\in \mathbb Z/T\mathbb Z}V^r$
where $V^r=\{v\in V|gv=e^{-2\pi ir/T}v\}$.
We use $r$ to denote both
an integer between $0$ and $T-1$ and its residue class \mbox{mod}\ $T$ in this
situation.
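For instance, in the simplest nontrivial case $T=2$ we have $V=V^0\oplus V^1,$ where $V^0=\{v\in V|gv=v\}$ is the fixed point subalgebra $V^{\langle g\rangle}$ and $V^1=\{v\in V|gv=-v\}.$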
A {\em weak $g$-twisted $V$-module} $M$ is a vector space equipped
with a linear map
\begin{equation*}
\begin{split}
Y_M: V&\to ({\rm End}\,M)[[z^{1/T},z^{-1/T}]]\\
v&\mapsto\displaystyle{ Y_M(v,z)=\sum_{n\in\frac{1}{T}\mathbb Z}v_nz^{-n-1}\ \ \ (v_n\in
{\rm End}\,M)},
\end{split}
\end{equation*}
which satisfies the following: for all $0\leq r\leq T-1,$ $u\in V^r$, $v\in V,$
$w\in M$,
\begin{eqnarray*}
& &Y_M(u,z)=\sum_{n\in \frac{r}{T}+\mathbb Z}u_nz^{-n-1} \label{1/2},\\
& &u_lw=0~~~
\mbox{for}~~~ l\gg 0,\label{vlw0}\\
& &Y_M({\mathbf 1},z)={\rm Id}_M,\label{vacuum}
\end{eqnarray*}
\begin{equation*}\label{jacobi}
\begin{array}{c}
\displaystyle{z^{-1}_0\delta\left(\frac{z_1-z_2}{z_0}\right)
Y_M(u,z_1)Y_M(v,z_2)-z^{-1}_0\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y_M(v,z_2)Y_M(u,z_1)}\\
\displaystyle{=z_2^{-1}\left(\frac{z_1-z_0}{z_2}\right)^{-r/T}
\delta\left(\frac{z_1-z_0}{z_2}\right)
Y_M(Y(u,z_0)v,z_2)},
\end{array}
\end{equation*}
where $\delta(z)=\sum_{n\in\mathbb Z}z^n$ and
all binomial expressions (here and below) are to be expanded in nonnegative
integral powers of the second variable.
A $g$-{\em twisted $V$-module} is
a $\mathbb C$-graded weak $g$-twisted $V$-module $M:$
\begin{equation*}
M=\bigoplus_{\lambda \in{\mathbb C}}M_{\lambda}
\end{equation*}
where $M_{\lambda }=\{w\in M|L(0)w=\lambda w\}$ and $L(0)$ is the component operator of $Y(\omega,z)=\sum_{n\in \mathbb Z}L(n)z^{-n-2}.$ We also require that
$\dim M_{\lambda }$ is finite and for fixed $\lambda ,$ $M_{\frac{n}{T}+\lambda }=0$
for all small enough integers $n.$ If $w\in M_{\lambda }$, we refer to $\lambda $ as the {\em weight} of
$w$ and write $\lambda ={\rm wt} w.$
We use $\mathbb Z_+$ to denote the set of nonnegative integers.
An {\em admissible} $g$-twisted $V$-module
is a $\frac1T{\mathbb Z}_{+}$-graded weak $g$-twisted $V$-module $M:$
\begin{equation*}
M=\bigoplus_{n\in\frac{1}{T}\mathbb Z_+}M(n)
\end{equation*}
satisfying
\begin{equation*}
v_mM(n)\subseteq M(n+{\rm wt} v-m-1)
\end{equation*}
for homogeneous $v\in V,$ $m,n\in \frac{1}{T}{\mathbb Z}.$
If $g={\rm Id}_V$, we have the notions of weak, ordinary and admissible $V$-modules \cite{DLM3}.
If $M=\bigoplus_{n\in \frac{1}{T}\mathbb Z_+}M(n)$
is an admissible $g$-twisted $V$-module, the contragredient module $M'$
is defined as follows:
\begin{equation*}
M'=\bigoplus_{n\in \frac{1}{T}\mathbb Z_+}M(n)^{*},
\end{equation*}
where $M(n)^*=\mbox{Hom}_{\mathbb C}(M(n),\mathbb C).$ The vertex operator
$Y_{M'}(a,z)$ is defined for $a\in V$ via
\begin{eqnarray*}
\langle Y_{M'}(a,z)f,w\rangle= \langle f,Y_M(e^{z L(1)}(-z^{-2})^{L(0)}a,z^{-1})w\rangle,
\end{eqnarray*}
where $\langle f,w\rangle=f(w)$ is the natural pairing $M'\times M\to \mathbb C.$
It follows from \cite{FHL} and \cite{X} that $(M',Y_{M'})$ is an admissible $g^{-1}$-twisted $V$-module. The $g^{-1}$-twisted $V$-module $M'=(M',Y_{M'})$ is called the contragredient module of the $g$-twisted $V$-module $M.$ Moreover, $M$ is irreducible if and only if $M'$ is irreducible.
A vertex operator algebra\ $V$ is called $g$-rational if the admissible $g$-twisted $V$-module category is semisimple. $V$ is called rational if $V$ is $1$-rational.
A vertex operator algebra\ $V$ is $C_2$-cofinite if $V/C_2(V)$ is finite dimensional, where $C_2(V)=\langle v_{-2}u|v,u\in V\rangle$ \cite{Z}. A vertex operator algebra\ $V$ is called regular if every weak $V$-module is a direct sum of irreducible $V$-modules \cite{DLM2}. It is proved in \cite{ABD} that if $V$ is of CFT type, then regularity is equivalent to rationality and $C_2$-cofiniteness. Also $V$ is regular if and only if the weak module category is semisimple \cite{DYu}.
The following results about $g$-rational vertex operator algebras \ are well-known \cite{DLM3}, \cite{DLM4}.
\begin{thm}\label{grational}
If $V$ is $g$-rational, then:
\begin{enumerate}
\item[\rm (1)] Any irreducible admissible $g$-twisted $V$-module $M$ is a $g$-twisted $V$-module. Moreover, there exists a number $\lambda \in \mathbb{C}$ such that $M=\oplus_{n\in \frac{1}{T}\mathbb{Z_+}}M_{\lambda +n}$ where $M_{\lambda}\neq 0.$ The $\lambda $ is called the conformal weight of $M;$
\item[\rm (2)] There are only finitely many irreducible admissible $g$-twisted $V$-modules up to isomorphism.
\item[\rm (3)] If $V$ is also $C_2$-cofinite and $g^i$-rational for all $i\geq 0$ then the central charge $c$ and the conformal weight $\lambda $ of any irreducible $g$-twisted $V$-module $M$ are rational numbers.
\end{enumerate}
\end{thm}
A vertex operator algebra\ $V=\oplus_{n\in \mathbb Z}V_n$ is said to be of CFT type if $V_n=0$ for negative $n$ and $V_0=\mathbb C {\bf 1}.$
\section{Fusion categories}
In this section we will review fusion categories and modular tensor categories following \cite{ENO}, \cite{EGNO}, \cite{KO}. A fusion category ${\cal C}$ is a $\mathbb C$-linear abelian semisimple, rigid monoidal category with finitely many inequivalent simple objects, and finite dimensional morphism spaces, together with a tensor product functor $\boxtimes: {\cal C}\times {\cal C}\to {\cal C},$ a unit object ${\bf 1}_\mathcal{C}$ satisfying certain axioms. We use $X'$ to denote the (left) dual object of $X\in\mathcal{C}$, and
${\cal O}(\cal C)$ denotes the set of equivalence classes of the simple objects. Throughout this paper, subcategories of $\cal C$ are always assumed to be full. A fusion subcategory of $\mathcal{C}$ is defined as expected.
A very useful concept is the so-called \emph{Frobenius-Perron dimension}. Let $K_0(\mathcal{C})$ be the Grothendieck ring of a fusion category $\mathcal{C}.$ Then there is a unique ring homomorphism ${\rm FPdim}: K_0(\mathcal{C})\to \mathbb R$ satisfying
${\rm FPdim}(M)\geq 1$ for any nonzero object $M.$ The Frobenius-Perron dimension of $\mathcal{C}$ is defined to be ${\rm FPdim}(\mathcal{C})=\sum_{M\in {\cal O}(\mathcal{C})}{\rm FPdim} (M)^2.$ In the case $\mathcal{C}$ is a fusion subcategory of the module category for
a vertex operator algebra $V$, the Frobenius-Perron dimension ${\rm FPdim} (M)$ is exactly the quantum dimension $\qdim_V(M)$ studied in \cite{DJX} and \cite{DRX}.
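For instance, for the representation category ${\rm Rep}(G)$ of a finite group $G$ (which plays a central role below), ${\rm FPdim}(W_{\lambda})=\dim W_{\lambda}$ for every irreducible $G$-module $W_{\lambda},$ so that
$${\rm FPdim}({\rm Rep}(G))=\sum_{\lambda\in {\rm Irr}(G)}(\dim W_{\lambda})^2=o(G).$$
We will use this elementary computation freely in later sections.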
A \emph{braided fusion category} is a fusion category $\mathcal{C}$ with a natural isomorphism $c_{X,Y} : X \boxtimes Y\to Y\boxtimes X$, called a \emph{braiding}, which satisfies certain compatibility conditions. Associated to a braided fusion category $\mathcal{C}$ is another braided fusion category $\overline{\mathcal{C}}$, which has the same underlying fusion category as $\mathcal{C}$ with the new braiding $\overline{c}_{X,Y}=c_{Y,X}^{-1}.$ A braided fusion category $\mathcal{C}$ is called \emph{symmetric} if $c_{Y,X}\circ c_{X,Y} =\operatorname{id}_{X\boxtimes Y}$
or $\mathcal{C}=\overline{\mathcal{C}}$ as braided fusion categories. For any collection $\mathcal{D}$ of objects in $\mathcal{C}$, the \emph{M\"uger centralizer} $C_{\mathcal{C}}(\mathcal{D})$ is the subcategory of $\mathcal{C}$ consisting of the objects $Y$ in $\mathcal{C}$ such that $c_{Y,X}\circ c_{X,Y} = \operatorname{id}_{X\boxtimes Y}$ for all $X$ in $\mathcal{D}.$ The subcategory $C_\mathcal{C}(\mathcal{D})$ is closed under the tensor product of $\mathcal{C}$ and hence a braided fusion subcategory of $\mathcal{C}$. The symmetric fusion category $C_{\mathcal{C}}(\mathcal{C})$ is called the \emph{M\"uger center} of $\mathcal{C}$, and denoted by $\mathcal{C}'$. For example, for any finite group $G,$ the finite dimensional $\mathbb C [G]$-module
category ${\rm Rep}(G)$ is a symmetric fusion category with the usual tensor product and braiding of $\mathbb C$-linear spaces. A symmetric fusion category $\mathcal{C}$
is called \emph{Tannakian} if there is a finite group $G$ such that $\mathcal{C}$ is equivalent to
${\rm Rep}(G)$ as braided fusion categories. According to \cite{De}, the braided fusion category $\mathcal{C}$ is Tannakian if and only if there exists a faithful braided tensor functor from
$\mathcal{C}$ to ${\rm Vec}$, where ${\rm Vec}$ denotes the category of finite dimensional $\mathbb C$-linear spaces with the usual tensor product and braiding.
A braided fusion category $\mathcal{C}$ is called \emph{nondegenerate} if $\mathcal{C}' \stackrel{\otimes}{\cong} {\rm Vec}$. A nondegenerate spherical braided fusion category is called a modular tensor category. This definition is equivalent to the condition that the corresponding $S$-matrix is nonsingular \cite{Mu2}. From \cite{DGNO} we know that if $\mathcal{C}$ is a braided fusion category and $\mathcal{B}$ is a fusion subcategory then
\begin{equation}\label{3.1}
{\rm FPdim}(\mathcal{B})\cdot {\rm FPdim} (C_\mathcal{C}(\mathcal{B}))={\rm FPdim}(\mathcal{C})\cdot {\rm FPdim}(\mathcal{C}'\cap\mathcal{B}).
\end{equation}
The Drinfeld center $\mathcal{Z}(\mathcal{C})$ of a fusion category $\mathcal{C}$ is a braided fusion category whose objects are pairs $(X, z_{X,-})$ in which $X\in\mathcal{C}$ and $z_{X,-}: X\boxtimes (-)\to (-)\boxtimes X$ a natural isomorphism, called \emph{a half-braiding}, satisfying certain conditions. Moreover, ${\rm FPdim}(\mathcal{Z}(\mathcal{C}))={\rm FPdim}(\mathcal{C})^2.$ If $\mathcal{C}$ is a spherical fusion category then $\mathcal{Z}(\mathcal{C})$ is a modular tensor category \cite{Mu2}. In particular, $\mathcal{Z}({\rm Rep}(G))$ is a modular tensor category.
Let $\mathcal{C}$ be a braided fusion category. Then $\mathcal{E}=\mathcal{C}'$ is a symmetric fusion category. A modular extension (ME) of $\mathcal{C}$ is a pair $(\mathcal{D}, \iota_\mathcal{D})$ where $\mathcal{D}$ is a modular tensor category and $\iota_\mathcal{D}: \mathcal{C} \to \mathcal{D}$ is a full and faithful braided tensor functor (simply called an \emph{embedding} in the sequel). The ME $(\mathcal{D}, \iota_\mathcal{D})$ is called \emph{minimal} (MME) if $C_{\mathcal{D}}(\mathcal{E})=\mathcal{C}$ under the identification of $\iota_\mathcal{D}(\mathcal{C})$ with $\mathcal{C}$. We will simply write $\mathcal{D}$ for an ME $(\mathcal{D}, \iota_\mathcal{D})$ of $\mathcal{C}$ and identify $\mathcal{C}$ with $\iota_\mathcal{D}(\mathcal{C})$ when the context is clear.
\begin{lem}\label{l3.1} A modular extension $\mathcal{D}$ of a braided fusion category $\mathcal{C}$ over $\mathcal{E}$ is minimal if and only if
$${\rm FPdim}(\mathcal{D})={\rm FPdim}(\mathcal{C})\cdot {\rm FPdim}(\mathcal{E}).$$
\end{lem}
\begin{proof} Since the modular tensor category $\mathcal{D}$ is nondegenerate, we see that $\mathcal{D}'={\rm Vec}$ and ${\rm FPdim}(\mathcal{D}'\cap \mathcal{E})=1.$
Applying equation (\ref{3.1}) to the fusion subcategory $\mathcal{E}$ of $\mathcal{D}$ then gives ${\rm FPdim}(C_{\mathcal{D}}(\mathcal{E}))={\rm FPdim}(\mathcal{D})/{\rm FPdim}(\mathcal{E}).$ Since $\mathcal{C}$ is always a fusion subcategory of $C_{\mathcal{D}}(\mathcal{E}),$ we have $C_{\mathcal{D}}(\mathcal{E})=\mathcal{C}$ if and only if ${\rm FPdim}(C_{\mathcal{D}}(\mathcal{E}))={\rm FPdim}(\mathcal{C}),$ that is, if and only if ${\rm FPdim}(\mathcal{D})={\rm FPdim}(\mathcal{C})\cdot {\rm FPdim}(\mathcal{E}).$
\end{proof}
Let $\mathcal{C}$ be any braided fusion category over $\mathcal{E}$. Two modular extensions $(\mathcal{D}_1,\iota_1)$ and $(\mathcal{D}_2,\iota_{2})$ of $\mathcal{C}$ are equivalent if there is a braided equivalence $F: \mathcal{D}_1\to \mathcal{D}_2$ such that $F \circ \iota_{1} \cong \iota_2$ as braided tensor functors. Let $\mathcal{M}(\mathcal{C})$ be the set of equivalence classes of MMEs of $\mathcal{C}.$ Then $\mathcal{M}(\mathcal{C})$ is a finite set as every MME of $\mathcal{C}$ has the same
Frobenius-Perron dimension ${\rm FPdim}(\mathcal{C})\cdot {\rm FPdim}(\mathcal{E})$ and there are only finitely many modular tensor categories up to equivalence for any fixed
Frobenius-Perron dimension \cite{BNRW}. The following important result was obtained in \cite{LKW1}.
\begin{thm}\label{LKW1}
Let $\mathcal{C}$ be a braided $\mathcal{E}$-category. Then $\mathcal{M}(\mathcal{E})$ is a finite abelian group and $\mathcal{M}(\mathcal{E})$ acts on $\mathcal{M}(\mathcal{C})$ freely and transitively provided $\mathcal{M}(\mathcal{C}) \ne \emptyset$.
In particular, the cardinality of $\mathcal{M}(\mathcal{C})$ is equal to the order of $\mathcal{M}(\mathcal{E})$ if $\mathcal{M}(\mathcal{C}) \ne \emptyset$.
\end{thm}
The definition of the product on $\mathcal{M}(\mathcal{E})$ and the action of $\mathcal{M}(\mathcal{E})$ on $\mathcal{M}(\mathcal{C})$ are quite complicated, and we will discuss them in detail in Section 6.
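For example, for $\mathcal{E}={\rm Rep}(\mathbb Z_2)$ one has $\mathcal{M}(\mathcal{E})\cong H^3(\mathbb Z_2,S^1)\cong \mathbb Z_2$ (see the end of this section), so $\mathcal{E}$ has exactly two inequivalent minimal modular extensions, namely $\mathcal{Z}({\rm Vec}_{\mathbb Z_2})$ and $\mathcal{Z}({\rm Vec}_{\mathbb Z_2}^{\alpha})$ with $\alpha$ the nontrivial class in $H^3(\mathbb Z_2,S^1).$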
An object $A$ in a braided fusion category $\mathcal{C}$ is called a \emph{commutative algebra} if there are morphisms
$\mu: A\boxtimes A\to A$ and $\eta: {\bf 1_\mathcal{C}}\to A$ such that $$\mu\circ(\mu\boxtimes \operatorname{id}_A)\circ \alpha_{A,A,A} = \mu\circ(\operatorname{id}_A\boxtimes\mu),\ \mu = \mu\circ c_{A,A}$$
$$\mu\circ(\eta\boxtimes \operatorname{id}_A)\circ l_A^{-1}=\operatorname{id}_A =\mu\circ(\operatorname{id}_A\boxtimes\eta) \circ r_A^{-1}$$
where $\alpha_{A,A,A}: A\boxtimes (A\boxtimes A) \to (A\boxtimes A)\boxtimes A$ is the associativity isomorphism, and $l_A: {\bf 1}_\mathcal{C}\boxtimes A\to A$ and $r_A:A \boxtimes {\bf 1}_\mathcal{C} \to A$ are respectively the left and the right unit isomorphisms. A commutative algebra $A$ in $\mathcal{C}$ is called \emph{connected} if $\dim \mbox{Hom}_\mathcal{C}({\bf 1}_\mathcal{C}, A)=1$.
Let $A$ be a connected commutative algebra in $\mathcal{C}$. A right $A$-module $M$ is an object in $\mathcal{C}$ with a morphism $\mu_M: M \boxtimes A \to M$ such that $\mu_M\circ ( \operatorname{id}_M \boxtimes \mu)=\mu_M\circ ( \mu_M \boxtimes \operatorname{id}_A)\circ \alpha_{M, A,A}$ and
$\mu_M\circ(\operatorname{id}_M \boxtimes \eta)=r_M$ where $r_M:M \boxtimes {\bf 1}_\mathcal{C} \to M$ is the right unit isomorphism. If $M, N$ are right $A$-modules, a morphism $f: M \to N$ in $\mathcal{C}$ is called an $A$-module morphism if $\mu_N \circ(f \boxtimes \operatorname{id}_A) = f\circ\mu_M$. We denote the category of right $A$-modules by $\mathcal{C}_A.$
The left $A$-modules are defined similarly. Since $A$ is commutative, a right $A$-module $M$ admits two natural left $A$-module structures, namely $m, \overline m : A \boxtimes M \to M$ defined by
$$
m= \mu_M \circ c_{A,M} \quad \text{and}\quad \overline m = \mu_M \circ \overline c_{A,M}\,.
$$
These left $A$-module structures on $M$ define two $A$-bimodule structures on $M$, and they coincide when $M \in C_\mathcal{C}(A)$. We will consider any right $A$-module as an $A$-bimodule under the left $A$-action $\overline m$ as discussed. An $A$-module $M$ is called \emph{local} if
$\mu_M\circ c_{M,A}\circ c_{A,M}=\mu_M.$ We denote the local $A$-module category by $\mathcal{C}_A^0.$ It is immediate to see that the $A$-modules in $C_\mathcal{C}(A)$ are local $A$-modules.
An algebra $A$ in $\mathcal{C}$ is an $A$-bimodule under its product map $\mu$. If $\mu: A \boxtimes A \to A$ splits as an $A$-bimodule morphism in $\mathcal{C}$, then $A$ is called \emph{separable}. Following the terminology in \cite{LKW1}, an algebra $A$ in a ribbon tensor category $\mathcal{C}$ is said to be \emph{condensable} if $A$ is a commutative, separable and connected algebra in $\mathcal{C}$ with $\dim(A)\ne 0$ and $\theta_A=\operatorname{id}_A$.
The $A$-module category, $\mathcal{C}_A$, is a fusion category with the tensor product $M\boxtimes_A N$, where $N$ is considered as an $A$-bimodule under the preceding convention. Moreover, the category $\mathcal{C}_A^0$ of local $A$-modules is a braided fusion category; if $\mathcal{C}$ is a modular tensor category and $A$ is a condensable algebra in $\mathcal{C}$, then $\mathcal{C}_A^0$ is modular \cite{KO}. We will only consider pseudounitary fusion categories $\mathcal{C}$ with $\dim(X) = {\rm FPdim}(X)$ for every object $X \in \mathcal{C}$. In this case, the condition $\dim(A) \ne 0$ is satisfied automatically. Moreover, an embedding of (pseudounitary) braided fusion categories preserves the canonical pivotal structures and hence their ribbon structures.
The following identities give relations among dimensions of relevant categories $\mathcal{C}$ and condensable algebras $A$ in $\mathcal{C}$:
$${\rm FPdim}(\mathcal{C}_A) =\frac{{\rm FPdim}(\mathcal{C})}{{\rm FPdim}(A)},\ {\rm FPdim}(\mathcal{C}_A^0) =\frac{{\rm FPdim}(\mathcal{C})}{{\rm FPdim}(A)^2}$$
where the first identity holds for any braided fusion category $\mathcal{C}$ and the second identity requires that $\mathcal{C}$ is modular \cite{DMNO}.
We have mentioned that for a finite group $G,$ ${\rm Rep}(G)$ is a symmetric fusion category. The algebra $A=\mathbb C[G]^*$ is a condensable algebra in ${\rm Rep}(G)$, called the \emph{regular algebra} of ${\rm Rep}(G)$. Moreover, ${\rm Rep}(G)_A={\rm Rep}(G)_A^0$ is equivalent to the category ${\rm Vec}$ of finite dimensional vector spaces. Furthermore, any
condensable algebra in ${\rm Rep}(G)$ is given by $\mathbb C[G/H]^*$ where $H$ is a subgroup of $G$ \cite{KO}.
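As an illustration of the first dimension identity displayed above, take $\mathcal{C}={\rm Rep}(G)$ and $A=\mathbb C[G]^*.$ Then ${\rm FPdim}(A)=o(G)$ and ${\rm FPdim}({\rm Rep}(G)_A)=o(G)/o(G)=1,$ which is consistent with the equivalence ${\rm Rep}(G)_A\cong {\rm Vec}.$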
We now discuss the modular extension of $\mathcal{E}={\rm Rep}(G).$ Let $(\mathcal{M},\iota_\mathcal{M})$ be an MME of $\mathcal{E}.$ Then the regular algebra $A$ of $\mathcal{E}$ is a condensable algebra in $\mathcal{M}$. Following \cite{DGNO}, $\mathcal{M}_A$ is a pointed fusion category equivalent to ${\rm Vec}_G^\alpha$ for some $\alpha\in H^3(G,S^1).$
That is, $\mathcal{M}_A=\bigoplus_{g\in G}(\mathcal{M}_A)_g$ and each $(\mathcal{M}_A)_g\cong {\rm Vec}$ as $\mathbb C$-linear categories and
$$(\mathcal{M}_A)_g\boxtimes_A (\mathcal{M}_A)_h\cong (\mathcal{M}_A)_{gh}.$$ We denote the simple
object of $(\mathcal{M}_A)_g$ by $e(g)$ up to isomorphism. Then
$$\alpha: (e(g)\boxtimes_A e(h))\boxtimes_A e(k) \to e(g)\boxtimes_A (e(h) \boxtimes_A e(k))$$
gives the associativity isomorphism of $\mathcal{M}_A$. Moreover, $\mathcal{M}$ and $\mathcal{Z}({\rm Vec}_G^\alpha)$ are braided equivalent. Furthermore, $\mathcal{Z}({\rm Vec}_G^\alpha)$ is an MME of ${\rm Rep}(G)$ for any $\alpha\in H^3(G,S^1).$ It is well known that $\mathcal{Z}({\rm Vec}_G^\alpha)$ is braided equivalent to the representation category of the twisted Drinfeld double $D^{\alpha}(G)$ of $G$ \cite{DPR}. It was proved in \cite{LKW1} that $(\mathcal{M}, \iota_\mathcal{M})$ is equivalent to $(\mathcal{Z}({\rm Vec}_G^\alpha), \iota_\alpha)$ where $\iota_\alpha: \mathcal{E} \to \mathcal{Z}({\rm Vec}_G^\alpha)$ is the canonical embedding, which can be described as follows:
Recall that the center $\mathcal{Z}({\rm Vec}_G^\alpha )$ of ${\rm Vec}_G^\alpha $ consists of the pairs $(X, c_{X, -})$, in which $X \in {\rm Vec}_G^\alpha $ and $c_{X, Y}: X \otimes Y\to Y \otimes X$, called an \emph{half-braiding}, is a natural isomorphism for $Y \in {\rm Vec}_G^\alpha $ satisfying compatibility conditions (cf. \cite{EGNO} for the center construction). For each $X \in {\rm Rep}(G)$, we consider $X$ as a homogeneous vector space of grading 1, and we define the half-braiding $c_{X,-}$ by setting $c_{X, e(g)} : X \otimes e(g) \to e(g) \otimes X$, $x \otimes 1 \mapsto 1 \otimes g^{-1} x$ for $x \in X$. This assignment $X \mapsto (X, c_{X, -})$ can be extended to an embedding, i.e., a faithful and full braided tensor functor, $\iota_\alpha : \mathcal{E} \to \mathcal{Z}({\rm Vec}_G^\alpha )$.
By \cite{LKW1}, $(\mathcal{M},\iota_\mathcal{M}) \cong (\mathcal{Z}({\rm Vec}^\alpha _G), \iota_\alpha )$ as braided $\mathcal{E}$-categories, and the map $\Phi_G: \alpha \mapsto (\mathcal{Z}({\rm Vec}^\alpha _G), \iota_\alpha )$ defines a group isomorphism from $H^3(G, S^1)$ to $\mathcal{M}(\mathcal{E})$.
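Note that the minimality of $(\mathcal{Z}({\rm Vec}_G^\alpha),\iota_\alpha)$ can also be checked on the level of Frobenius-Perron dimensions: since ${\rm FPdim}({\rm Vec}_G^\alpha)=o(G),$ we have ${\rm FPdim}(\mathcal{Z}({\rm Vec}_G^\alpha))=o(G)^2={\rm FPdim}(\mathcal{E})\cdot {\rm FPdim}(\mathcal{E}),$ which is exactly the dimension required of a minimal modular extension of $\mathcal{E}={\rm Rep}(G)$ by Lemma \ref{l3.1}.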
\section{Modular categories associated to vertex operator algebras}
Let $V$ be a rational, $C_2$-cofinite vertex operator algebra of CFT type such that the conformal weight of any irreducible $V$-module $M$ is nonnegative, and is zero if and only if $M=V.$ Then the $V$-module category $\mathcal{C}_V$ is a modular tensor category
\cite{Hu}. For the purpose of later discussion we need an explicit description of the tensor product of two modules. So we first give a brief review of the construction of the tensor product of $V$-modules from \cite{HL1,HL2,HL3,Hu}.
For any complex number $z\in\mathbb C^{\times}=\mathbb C\setminus \{0\},$
there is a tensor product $W_1\boxtimes_{P(z)} W_2$ for any $V$-modules $W_1,W_2$ together with a canonical intertwining operator $I=I_{\boxtimes_{P(z)}}$ of type $\left(_{W_1,\ \ W_2}^{W_1\boxtimes_{P(z)} W_2}\right)$ satisfying some universal property such that for $w_i\in W_i,$ there is a tensor element
$${w_1\boxtimes_{P(z)}w_2}=I(w_1,z)w_2\in\overline{W_1\boxtimes_{P(z)} W_2}$$
where $\overline{W_1\boxtimes_{P(z)} W_2}$ is the formal completion of $W_1\boxtimes_{P(z)} W_2$. The operator $I(w_1,z)$ is understood to be $\sum_{n \in \mathbb R} (w_1)_n z^{-n-1}$ where $(w_1)_n \in \mbox{Hom}(W_2, W_1\boxtimes_{P(z)} W_2)$, and $z^n=e^{n\log z}$ for any $n\in\mathbb R$ with $\log z=\log |z|+i {\rm arg}z$ and $0\leq{\rm arg}z<2\pi.$
Moreover, $W_1\boxtimes_{P(z)} W_2$ is spanned by the coefficients
of $z^n$ for all $w_i$ and $n\in \mathbb R.$ If we have three $V$-modules $W_i$ for $i=1,2,3$ there is an associativity isomorphism
$$ A_{z_1,z_2}: W_1\boxtimes_{P(z_1)}(W_2\boxtimes_{P(z_2)}W_3) \to (W_1\boxtimes_{P(z_1-z_2)}W_2)\boxtimes_{P(z_2)}W_3$$
characterized by
$$w_1\boxtimes_{P(z_1)}(w_2\boxtimes_{P(z_2)}w_3) \mapsto (w_1\boxtimes_{P(z_1-z_2)}w_2)\boxtimes_{P(z_2)}w_3$$
for $|z_1|>|z_2|>|z_1-z_2|>0.$ The tensor product in the modular tensor category $\mathcal{C}_V$ is given by $\boxtimes=\boxtimes_{P(1)}.$ We will simply denote the corresponding intertwining operator $I_{\boxtimes_{P(1)}}$ by $I.$
To discuss the braiding and associativity isomorphism in $\mathcal{C}_V$ we also need the natural parallel transport isomorphisms.
Fix $V$-modules $W_1,W_2$ and nonzero complex numbers $z_1,z_2.$ For any continuous path $\gamma$ in $\mathbb C^\times$ from $z_1$ to $z_2,$ the parallel transport isomorphism
$T_{\gamma}:W_1\boxtimes_{P(z_1)} W_2\to W_1\boxtimes_{P(z_2)} W_2 $ is determined by the extension
\begin{eqnarray*} \overline{T_{\gamma} }: &\overline{W_1\boxtimes_{P(z_1)} W_2} &\to \overline{W_1\boxtimes_{P(z_2)} W_2} \\
&w_1\boxtimes_{P(z_1)} w_2 &\mapsto I_{\boxtimes_{P(z_2)}}(w_1,e^{l(z_1)})w_2
\end{eqnarray*}
where $l(z_1)$ is the value of $\log z_1$ determined uniquely by $\log z_2$ with ${\rm arg}z_2\in [0,2\pi)$ and the path.
The braiding isomorphism $c_{W_1,W_2}: W_1\boxtimes W_2\to W_2\boxtimes W_1$ is determined by
$$\overline{c_{W_1,W_2}}(w_1\boxtimes w_2)=e^{L(-1)}\overline{T_{\gamma^-}}(w_2\boxtimes_{P(-1)}w_1)=e^{L(-1)}I(w_2, e^{\pi i})w_1$$
where $\gamma^-$ is a path from $-1$ to $1$ in the upper half plane avoiding $0.$ Then $c_{W_2,W_1}^{-1}$ is determined
by
$$\overline{c_{W_2,W_1}^{-1}}(w_1\boxtimes w_2)=e^{L(-1)}\overline{T_{\gamma^+}}(w_2\boxtimes_{P(-1)}w_1)=e^{L(-1)}I(w_2, e^{-\pi i})w_1$$
where $\gamma^+$ is a path from $-1$ to $1$ in the lower half plane avoiding $0.$
The associativity isomorphism
$$A_{W_1,W_2,W_3}: W_1\boxtimes(W_2\boxtimes W_3) \to (W_1\boxtimes W_2)\boxtimes W_3$$
is given by
$$ A_{W_1,W_2,W_3}= T_{\gamma_3} \circ (T_{\gamma_4} \boxtimes_{P(z_2)}\operatorname{id}_{W_3} ) \circ A_{z_1,z_2} \circ (\operatorname{id}_{W_1}\boxtimes_{ P(z_1) }T_{\gamma_2} )\circ T_{\gamma_1}$$
where $z_1>z_2>z_1-z_2>0,$ $\gamma_1,\gamma_2$ are paths in $\mathbb R^\times=\mathbb R\setminus\{0\}$ from $1$ to $z_1,$ $z_2,$ respectively, and $\gamma_3,\gamma_4$ are paths in $\mathbb R^\times$ from $z_2,$ $z_1-z_2$ to $1,$ respectively.
We now investigate more on the modular tensor category $\overline{\mathcal{C}_V}$ with braiding $c_{W_2,W_1}^{-1}.$
The difference between these two braidings is how we choose $(-1)^n$ for any rational number $n.$ For braiding $c_{W_1,W_2},$
$(-1)^n$ is understood to be $e^{\pi in}$ and for braiding $c_{W_2,W_1}^{-1},$
$(-1)^n$ is understood to be $e^{-\pi in}$ (see the proof of Theorem \ref{t5.2}). So the two different braidings on the $V$-module category really come from the two different ways of choosing the skew symmetry
for intertwining operators which define the tensor product.
There is a twist $\theta_W=e^{2\pi i L(0)}: W\to W$ for any $W\in \mathcal{C}_V.$ If $W$ is irreducible then $\theta_W=e^{2\pi i \Delta_W}$ where
$\Delta_W$ is the weight of $W.$ Then the twist in the modular tensor category $\overline{\mathcal{C}_V}$ is given by $\overline{\theta}_W=e^{-2\pi iL(0)}.$
If $W$ is irreducible, $\overline{\theta}_W$ is exactly the complex conjugate of $\theta_W$ as $\Delta_W$ is a rational number. The relation
$\overline{\theta}_{W_1\boxtimes W_2}=\overline{c}_{W_2,W_1}\circ \overline{c}_{W_1,W_2}\circ (\overline{\theta}_{W_1}\boxtimes \overline{\theta}_{W_2})$
is immediate by taking the inverse of the relation $\theta_{W_1\boxtimes W_2}=c_{W_2,W_1}\circ c_{W_1,W_2}\circ (\theta_{W_1}\boxtimes \theta_{W_2}).$
Here is a natural question: Assume that $V$ is a rational, $C_2$-cofinite vertex operator algebra. Is there a rational vertex operator algebra $U$ such that $\overline{\mathcal{C}_V}$ and $\mathcal{C}_U$ are braided equivalent?
\begin{conj}\label{conjecture4.1} Assume that $V$ is a rational, $C_2$-cofinite vertex operator algebra. Then there is a rational, $C_2$-cofinite vertex operator algebra $U$ such that $\overline{\mathcal{C}_V}$ and $\mathcal{C}_U$ are braided equivalent.
\end{conj}
Let $W=(W,Y,{\bf 1},\omega)$ be a vertex operator algebra and
$U=(U,Y,{\bf 1},\omega^1)$ be a vertex operator subalgebra of $W$ such that $L(1)\omega^1=0.$ The {\em commutant} $C_W(U)$ of $U$ is defined to be
$$C_W(U)=\{w\in W|u_nw=0, u\in U,n\geq 0\}.$$
Set $\omega^{2}=\omega-\omega^{1}.$ Then $(C_W(U), Y, {\bf 1},\omega^2)$ is also a vertex operator subalgebra of $W$ \cite{GKO,FZ}. Here is a characterization of $\overline{\mathcal{C}_V}:$
\begin{thm}\label{t4.2} Let $V,U$ be as before. Then $\overline{\mathcal{C}_V}$ and $\mathcal{C}_U$ are braided equivalent if and only if there exists a holomorphic vertex operator algebra $W$ such that $V\otimes U$ is a conformal subalgebra of $W$ satisfying
$C_{W}(V)=U$ and $C_W(U)=V.$
\end{thm}
\begin{proof} It is well known that $\oplus_{X\in {\cal O}(\mathcal{C}_V)}X\otimes X'$ is a condensable algebra in the modular tensor category $\mathcal{C}_V\otimes \overline{\mathcal{C}_V}.$ If $\overline{\mathcal{C}_V}$ and $\mathcal{C}_U$ are braided equivalent, let ${\cal F}: \overline{\mathcal{C}_V}\to \mathcal{C}_U$ be a braided tensor functor giving the equivalence. Then ${\cal O}(\mathcal{C}_U)=\{{\cal F}(X')|X\in {\cal O}(\mathcal{C}_V)\}$ and
$W=\oplus_{X\in {\cal O}(\mathcal{C}_V)}X\otimes {\cal F}(X')$ is a condensable algebra in $\mathcal{C}_V\otimes \mathcal{C}_U$, which is equivalent to $\mathcal{C}_{V\otimes U}$ as braided tensor categories.
It follows from \cite{HKL} that $W$ is a vertex operator algebra which is an extension of $V\otimes U.$ Since ${\rm FPdim}((\mathcal{C}_{V\otimes U})_W^{0})=1,$ one concludes immediately that $W$ is a holomorphic vertex operator algebra. It is clear that
$C_{W}(V)=U$ and $C_W(U)=V$ from the construction of $W.$
Now assume that there exists a holomorphic vertex operator algebra $W$ such that $V \otimes U$ is a conformal subalgebra of $W$ satisfying $C_{W}(V)=U$ and $C_W(U)=V.$ Using a result in \cite{KM} we know that every irreducible $V$-module and every irreducible $U$-module appears in $W,$ as $W$ is holomorphic.
By Theorem 3.3 of \cite{Lin} we conclude that $\overline{\mathcal{C}_{V}}$ and $\mathcal{C}_{U}$ are braided equivalent.
\end{proof}
The following result asserts that Conjecture \ref{conjecture4.1} holds for lattice vertex operator algebras; it can also be obtained from Proposition \ref{pointed}.
\begin{prop}\label{p4.3} Let $L$ be a positive definite even lattice, then $\overline{\mathcal{C}_{V_L}}$ is braided equivalent to $\mathcal{C}_{V_K}$ for some positive definite even lattice $K.$
\end{prop}
\begin{proof} First, the lattice vertex operator algebra $V_L$ \cite{B,FLM} is rational \cite{D1,DLM2} and $C_2$-cofinite \cite{Z,DLM4}. So $\mathcal{C}_{V_L}$ is a modular tensor category. If $L$ is also unimodular, $V_L$ is holomorphic and $\mathcal{C}_{V_L}$
is braided equivalent to ${\rm Vec}$ and $\overline{\mathcal{C}_{V_L}}=\mathcal{C}_{V_L}.$ Now we assume that $L$ is not unimodular.
By Theorem 5.5 of \cite{GH}, $L$ can be embedded in a positive definite even unimodular lattice $E$ and $L$ is a direct summand of $E$ as abelian groups and $O(L)$ embeds in $O(E)$ where $O(L)$ is the isometry group of $L.$ Then $V_E$ is a holomorphic vertex operator algebra. Let $K=L^{\perp}$ be the orthogonal complement of $L$ in $E.$ Then $V_{L\oplus K}=V_L\otimes V_K$ is a conformal subalgebra of $V_E$ in the sense that $V_{L\oplus K}$ and $V_E$ have
the same Virasoro element.
Let $C_{V_E}(V_L)$ be the commutant of $V_L$ in $V_E.$ We claim that $C_{V_E}(V_L)=V_K$ and $C_{V_E}(V_K)=V_L.$ Clearly,
$V_K$ is a subalgebra of $C_{V_E}(V_L).$ Recall from \cite{D1} that the irreducible $V_K$-modules are given by $\{V_{K+\alpha_i}|i\in K^{\circ}/K\}$ where $K^{\circ}$ is the dual lattice of $K$ and $\alpha_i$ are the representatives of
cosets of $K$ in $K^{\circ}.$ So $C_{V_E}(V_L)$ is a simple current extension of $V_K$ and there is an even sublattice $K_1$ of $E$ containing $K$ such that $K_1$ and $K$ have the same rank, and $C_{V_E}(V_L)=V_{K_1}.$ This implies that $K_1$ is orthogonal to $L,$ $K_1=K$ and $C_{V_E}(V_L)=V_{K}.$ Similarly, $C_{V_E}(V_K)=V_{L_1}$ for an even sublattice $L_1$ of $E$ such that
$L_1$ contains $L$ and $L_1,L$ have the same rank. Thus $L_1/L$ is a finite abelian group. The fact that $L$ is a direct summand
of $E$ as abelian groups forces $L_1=L.$ The result now follows from Theorem \ref{t4.2}.
\end{proof}
\begin{ex}
{\rm
Let $L_{A_1}=\mathbb Z\alpha$ with $(\alpha,\alpha)=2.$ Then $V_{L_{A_1}}$ has two irreducible modules $V_{L_{A_1}}$ and $V_{L_{A_1}+\alpha/2}.$ Let $K=L_{E_7}$
be the root lattice of type $E_7.$ Recall from \cite{Hum} that the root lattice $L_{E_8}$ of type $E_8$ is spanned by
$\alpha_i$ for $i=1,...,8$ where $\alpha_1=\frac{1}{2}(\epsilon_1+\epsilon_8-(\epsilon_2+\epsilon_3+\cdots+\epsilon_7)),$ $\alpha_2=\epsilon_1+\epsilon_2,$
$\alpha_i=\epsilon_{i-1}-\epsilon_{i-2}$ for $i=3,...,8$ and $\{\epsilon_i| i=1,...,8\}$ is the standard orthonormal basis of $\mathbb R^8.$ Then $L_{E_7}$ can be identified with the sublattice $\oplus_{i=1}^7\mathbb Z\alpha_i$ of $L_{E_8}$ and
$L_{A_1}$ can be identified with sublattice $\mathbb Z(\epsilon_7+\epsilon_8)$ of $L_{E_8}.$ It is easy to see that $\oplus_{i=1}^7\mathbb Z\alpha_i$ and $\mathbb Z(\epsilon_7+\epsilon_8)$ are orthogonal.
We claim that $L_{E_7}+L_{A_1}$ has index 2 in $L_{E_8}.$ Clearly, $\alpha_8=\epsilon_7-\epsilon_6$ does not lie in $L_{E_7}+L_{A_1}.$ So it is good enough to show that $2\alpha_8$ lies in $L_{E_7}+L_{A_1}.$ Observe that $2\epsilon_i$ belongs to
$L_{E_7}+L_{A_1}$ for all $i.$ Thus $L_{E_8}=(L_{E_7}+L_{A_1})\cup (L_{E_7}+L_{A_1}+\alpha_8),$ as claimed. One can verify that
$C_{V_{L_{E_8}}}(V_{L_{E_7}})=V_{L_{A_1}}$ and $C_{V_{L_{E_8}}}(V_{L_{A_1}})=V_{L_{E_7}}$ by noting that $\alpha_8$ is orthogonal to neither $L_{E_7}$ nor $L_{A_1}.$ It is immediate from Theorem \ref{t4.2} that $\overline{\mathcal{C}_{V_{L_{A_1}}}}$ and
$\mathcal{C}_{V_{L_{E_7}}}$ are braided equivalent.
}
\end{ex}
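As a consistency check for this example, note that $\mathcal{C}_{V_{L_{A_1}}}$ and $\mathcal{C}_{V_{L_{E_7}}}$ have $|L_{A_1}^{\circ}/L_{A_1}|=2$ and $|L_{E_7}^{\circ}/L_{E_7}|=2$ simple objects respectively, all of quantum dimension $1.$ Hence the condensable algebra $W=V_{L_{E_8}}$ appearing in the proof of Theorem \ref{t4.2} has ${\rm FPdim}(W)=2$ in $\mathcal{C}_{V_{L_{A_1}}}\otimes \mathcal{C}_{V_{L_{E_7}}},$ and
$${\rm FPdim}\big((\mathcal{C}_{V_{L_{A_1}}\otimes V_{L_{E_7}}})_W^0\big)=\frac{2\cdot 2}{2^2}=1,$$
as it must be since $V_{L_{E_8}}$ is holomorphic.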
\section{$V^G$-modules }
In the rest of this paper we assume the following:
\begin{enumerate}
\item[(V1)] $V=\oplus_{n\geq 0}V_n$ is a simple vertex operator algebra of CFT type,
\item[(V2)] $G$ acts faithfully on $V$ as automorphisms such that $V^G$ is regular,
\item[(V3)] The conformal weight of any irreducible $V^G$-module $N$ is nonnegative and is zero if and only if $N=V^G.$
\end{enumerate}
If $\sigma \in {\rm Aut}(G)$, then the action of $G$ on $V$ can be twisted by $\sigma$, i.e., $g\cdot v = \sigma(g)v$. This twisted $G$ action on $V$ defines another automorphism group of $V.$ In general, if $G$ is an abstract group, it is possible to embed $G$ into ${\rm Aut}(V)$ in different ways.
Assumption (V2) implies that $V$ is $C_2$-cofinite \cite{ABD} and $V$ is $g$-rational for all $g\in G$ \cite{ADJR}. The assumption
(V3) implies that
the conformal weight of any irreducible $g$-twisted $V$-module except $V$ is positive, and that both $V^G$ and $V$ are self-dual.
We remark that if $G$ is solvable, then $V^G$ is regular if and only if $V$ is regular. For arbitrary $G$, the regularity of $V$ together with the $C_2$-cofiniteness of $V^G$ implies the rationality of $V^G$ \cite{Mc}.
From our assumptions, both $\mathcal{C}_{V}$ and $\mathcal{C}_{V^G}$ are modular tensor categories.
Moreover,
$V$ is a condensable algebra in $\mathcal{C}_{V^G}$ \cite{HKL}. To simplify the notation, we use ${\rm Rep}(V)$ to denote the $V$-module category
$(\mathcal{C}_{V^G})_V$ in $\mathcal{C}_{V^G}$ \cite{KO}. Then ${\rm Rep}(V)$ consists of every $V^{G}$-module
$W$ together with a $V^{G}$-intertwining operator $Y_W(\cdot,z)$ of type $\binom{W}{V\ W}$
such that the following conditions are satisfied:
1. (Associativity) For any $u,v\in V,$ $w\in W$ and $w'\in W'$, the formal series
\[
\langle w',Y_{W}(u,z_{1})Y_{W}(v,z_{2})w\rangle
\]
and
\[
\langle w',Y_{W}(Y(u,z_{1}-z_{2})v,z_{2})w\rangle
\]
converge on the domains $|z_{1}|>|z_{2}|>0$ and
$|z_{2}|>|z_{1}-z_{2}|>0$, respectively, to multivalued analytic functions
which coincide on their common domain.
2. (Unit) $Y_{W}({\bf 1},z)={\rm Id}_{W}.$
It is proved in \cite{DLXY} that if $V^G$ satisfies conditions (V1)-(V3) then
$${\rm Rep}(V)=\bigoplus_{g\in G}{\rm Rep}(V)_g$$
where ${\rm Rep}(V)_g$ is the $g$-twisted $V$-module category. Moreover, ${\rm Rep}(V)$ is a fusion category \cite{KO}, \cite{CKM}, \cite{EGNO} with tensor product $\boxtimes_{{\rm Rep}(V)}.$ Furthermore, ${\rm Rep}(V)_1,$ which is denoted by ${\rm Rep}(V)^0,$ is exactly the
modular tensor category $\mathcal{C}_V$ by \cite{HKL}.
We now discuss the connection between the Frobenius-Perron dimension and the quantum dimension defined in \cite{DJX}, \cite{DRX} for a $g$-twisted
$V$-module $M=\oplus_{n\in\frac{1}{T}\mathbb Z_+} M_{\lambda +n}$ where $T$ is the order of $g.$ Define the character of $M$
by
$$\chi_M(\tau)=q^{-c/24}\sum_{n\in\frac{1}{T}\mathbb Z_+}\dim M_{\lambda +n}q^{\lambda +n}$$
where $q=e^{2\pi i\tau}$ and $\tau$ lies in the upper half plane. Then $\chi_M(\tau)$ is a modular function on a congruence subgroup \cite{Z}, \cite{DLN}, \cite{DR}. The quantum dimension of $M$ over $V$ is defined as
$$\qdim_V(M)=\lim_{\tau\to 0}\frac{\chi_M(\tau)}{\chi_V(\tau)}$$
which is always a positive algebraic number greater than or equal to 1. Note that $M$ is an object in the fusion category ${\rm Rep}(V).$
It turns out that $\qdim_V(M)={\rm FPdim}(M).$
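For example, if $V=V_L$ for a positive definite even lattice $L,$ then it is well known that $\qdim_{V_L}(V_{L+\alpha_i})=1$ for every $i\in L^{\circ}/L,$ and hence ${\rm FPdim}(\mathcal{C}_{V_L})=|L^{\circ}/L|.$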
Our goal is to understand various fusion categories associated to $V^G.$ We first present a result from \cite{DRX} on the classification of irreducible $V^G$-modules, that is, a determination of ${\cal O}(\mathcal{C}_{V^G}).$
For this purpose, we need the action of $G$ on ${\rm Rep}(V)$ \cite{DLM4}. Let $g, h$ be two automorphisms of $V$ with $g$ of finite order. If $(M, Y_M)$ is a $g$-twisted $V$-module, there is a $h^{-1}gh$-twisted $V$-module $(M\circ h, Y_{M\circ h})$ where $M\circ h\cong M$ as vector spaces and
\begin{equation*}
Y_{M\circ h}(v,z)=Y_M(hv,z)
\end{equation*}
for $v\in V.$
This defines a right action of $G$ on the twisted $V$-modules and on isomorphism classes of twisted $V$-modules. Similarly, we can define a left action of $G$ on the twisted $V$-modules and on isomorphism classes of twisted $V$-modules such that
$h\circ M=M$ as vector spaces and $Y_{h\circ M}(v,z)=Y_M(h^{-1}v,z)$ for $v\in V.$ Then $G$ acts on ${\rm Rep}(V)$
as monoidal functors and ${\rm Rep}(V)$ is a braided $G$-crossed category \cite{Mc}.
If $g,h$ commute, $h$ clearly acts on the $g$-twisted modules.
Denote by $\mathscr{M}(g)$ the equivalence classes of irreducible $g$-twisted $V$-modules and set $\mathscr{M}(g,h)=\{M \in \mathscr{M}(g)| M\circ h\cong M\}.$ Note from Theorem \ref{grational} that
if $V$ is $g$-rational, both $\mathscr{M}(g)$ and $\mathscr{M}(g,h)$ are finite sets.
For any $M\in \mathscr{M}(g,h),$ there is a $g$-twisted $V$-module isomorphism
\begin{equation*}
\phi(h) : M\to M\circ h.
\end{equation*}
The linear map $\phi(h)$ is unique up to a nonzero scalar. If $h=1$ we simply take $\phi(1)={\rm Id}_M.$
Let $M=(M,Y_M)$ be an irreducible $g$-twisted $V$-module.
We define a subgroup $G_M$ of $G$ consisting of $h\in G$ such that $M\circ h$ and $M$ are isomorphic.
As we mentioned in Section 2 there is a projective
representation $h\mapsto \phi(h)$ of $G_M$ on $M$ such that
$$
\phi(h)Y_M(v,z)\phi(h)^{-1}=Y_M(hv,z)
$$
for $h\in G_M$ and $v\in V.$
Let $\alpha _M$ be the corresponding 2-cocycle in $C^2(G_M,\mathbb C^{\times}).$
Then $\phi(h)\phi(k)=\alpha _M(h,k)\phi(hk)$
for all $h,k\in G_M$. We may assume $\alpha_M$ has finite order. That is, there is a fixed positive integer $n$ such that $\alpha_M(h,k)^n=1$ for all $h,k\in G_M.$ Let $\mathbb C^{\alpha _M}[G_M]=\oplus_{h\in G_M} \mathbb C\bar h$ be the twisted group algebra with product $\bar h\bar k=\alpha_M(h,k)\overline{hk}.$ It is well known that $\mathbb C^{\alpha _M}[G_M]$ is a semisimple associative algebra. It follows that $M$ is a $\mathbb C^{\alpha _M}[G_M]$-module.
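For example, if $G_M\cong \mathbb Z_2\times \mathbb Z_2$ and $\alpha_M$ represents the nontrivial class in $H^2(G_M,\mathbb C^{\times})\cong \mathbb Z_2,$ then $\mathbb C^{\alpha_M}[G_M]$ is isomorphic to the matrix algebra $M_2(\mathbb C)$ and has a unique irreducible module, which is $2$-dimensional.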
Let $\Lambda_{M}$ (which was denoted by $\Lambda_{G_M,\alpha_M}$ in \cite{DRX}) be the set of all irreducible characters $\lambda$
of $\mathbb C^{\alpha _M}[G_M]$. Denote
the corresponding simple module by $W_{\lambda }.$
Using the fact that $M$ is a semisimple
$\mathbb C^{\alpha _M}[G_M]$-module, we let $M^{\lambda}$ be the sum of simple
$\mathbb C^{\alpha _M}[G_M]$-submodules of $M$ isomorphic
to $W_{\lambda }.$ Then
$$M=\bigoplus_{\lambda\in \Lambda_{M}}M^{\lambda}.$$
Moreover, $M^{\lambda}=W_{\lambda}\otimes M_{\lambda }$
where $M_{\lambda }=\mbox{Hom}_{\mathbb C^{\alpha _M}[G_M]}(W_{\lambda },M)$ is the multiplicity
of $W_{\lambda }$ in $M.$ As in \cite{DLM1}, we can,
in fact, realize $M_{\lambda }$ as a subspace of $M$ in the following way. Let $w\in W_{\lambda }$
be a fixed nonzero vector. Then we can identify
$\mbox{Hom}_{\mathbb C^{\alpha _M}[G_M]}(W_{\lambda },M)$ with the subspace
$$\{f(w) |f\in \mbox{Hom}_{\mathbb C^{\alpha _M}[G_M]}(W_{\lambda },M)\}$$
of $M^{\lambda }.$ This gives a decomposition
\begin{equation}\label{decom}
M=\bigoplus_{\lambda\in \Lambda_{M}}W_{\lambda }\otimes M_{\lambda}
\end{equation}
and each $M_{\lambda }$ is a module for the vertex operator subalgebra $V^{G_M}.$
Recall that the group $G$ acts on the set ${\cal S}=\cup_{g\in G}\mathscr{M}(g)$ and $ M\circ h$ and $M$ are isomorphic
$V^G$-modules for any $h\in G$ and $M\in {\cal S}.$ It is clear that the cardinality of the $G$-orbit $|M\circ G|$ of $M$ is
$[G:G_M].$ Let $J$ be a set of orbit representatives of $\cal S$ under this action.
Then we have the following results \cite{DRX}, \cite{DJX}.
\begin{thm}\label{mthm1}
Assume that $V$ satisfies (V1)-(V3).
\begin{enumerate}
\item[\rm (1)]
The set
$$\{M_{\lambda }\mid M\in J,\,\lambda \in \Lambda_{M}\}$$
gives a complete list of inequivalent irreducible $V^G$-modules. That is,
any irreducible $V^G$-module is isomorphic to an irreducible $V^G$-submodule $M_{\lambda }$ for some $M\in J$ and $\lambda\in \Lambda_M.$
\item[\rm (2)] We have a relation between quantum dimensions
$$\qdim_{V^G}(M_\lambda )=\dim W_{\lambda }\cdot [G:G_M]\cdot \qdim_V(M)$$
where $M$ is an irreducible $g$-twisted $V$-module
and $\lambda\in \Lambda_{M}.$ In particular, $\Lambda_{V}={\rm Irr}(G)$ is the set
of irreducible characters of $G$ and $\qdim_{V^G}(V_\lambda )=\dim W_{\lambda }$ for $\lambda \in {\rm Irr}(G)$.
\end{enumerate}
\end{thm}
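To illustrate Theorem \ref{mthm1}, suppose in addition that $V$ is holomorphic. It is well known that in this case there is, up to isomorphism, a unique irreducible $g$-twisted $V$-module $V(g)$ for each $g\in G.$ Then $V(g)\circ h\cong V(h^{-1}gh),$ so the $G$-orbits on ${\cal S}$ correspond to the conjugacy classes of $G$ and $G_{V(g)}=C_G(g)$ is the centralizer of $g.$ Hence the irreducible $V^G$-modules are parametrized by pairs consisting of a conjugacy class of $G$ and an irreducible $\alpha_{V(g)}$-projective character of $C_G(g),$ which matches the well-known parametrization of the simple modules of the twisted Drinfeld double $D^{\alpha}(G)$ \cite{DPR} and is consistent with Theorem \ref{t5.2} below.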
\section{ Fusion categories $\mathcal{F}_{V^G}$ and $\mathcal{E}_{V^G}$}
Let $\mathcal{F}_{V^G}$ be the subcategory of $\mathcal{C}_{V^G}$ generated by $V^G$-submodules of $V$-modules and $\mathcal{E}_{V^G}$
the subcategory of $\mathcal{C}_{V^G}$ generated by $V^G$-submodules of $V.$ We prove in this section that $\mathcal{E}_{V^G}$ is equivalent to ${\rm Rep}(G)$ as braided tensor categories, and $\mathcal{F}_{V^G}$ is a braided fusion subcategory of
$\mathcal{C}_{V^G}$ such that the Mu\"ger center $\mathcal{F}_{V^G}'$ of $\mathcal{F}_{V^G}$ is exactly $\mathcal{E}_{V^G}.$
\begin{thm}\label{theorem2} $\mathcal{E}_{V^G}$ is a fusion subcategory of $\mathcal{C}_{V^G}$ equivalent to the symmetric fusion category ${\rm Rep}(G)$ via a canonical braided tensor functor $F^{V,G}: {\rm Rep}(G) \to \mathcal{E}_{V^G}$. In particular, $\mathcal{E}_{V^G}$ is a symmetric fusion category.
\end{thm}
\begin{proof} First we prove that $\mathcal{E}_{V^G}$ is a braided fusion subcategory of $\mathcal{C}_{V^G}.$ Since $\mathcal{E}_{V^G}$ is a semisimple category and
each simple object is isomorphic to $V_{\lambda }$ for some $\lambda \in {\rm Irr}(G),$ it suffices to show that $V_\lambda \boxtimes V_\mu$ lies in $\mathcal{E}_{V^G}$ for
any $\lambda,\mu\in {\rm Irr}(G).$ From \cite{T}, \cite{DRX} we know that
$$V_\lambda \boxtimes V_\mu=\sum_{\nu\in {\rm Irr}(G)}N_{\lambda ,\mu}^{\nu}V_{\nu}$$
where the fusion rules $N_{\lambda ,\mu}^{\nu}$ are given by the tensor product decomposition of $G$-modules
$$W_\lambda \otimes W_\mu=\sum_{\nu\in {\rm Irr}(G)}N_{\lambda ,\mu}^{\nu}W_{\nu}.$$
Thus, $\mathcal{E}_{V^G}$ is closed under the tensor product and is a braided fusion subcategory of $\mathcal{C}_{V^G}.$
Second, we establish that $\mathcal{E}_{V^G}$ is a symmetric braided fusion category. Equivalently we need to show that
$$c_{V_{\mu}, V_{\lambda }}\circ c_{V_{\lambda },V_{\mu}}=\operatorname{id}_{V_{\lambda }\boxtimes V_{\mu}}$$
for any $\lambda ,\mu.$ Since $\theta_{V_{\nu}}=1$ for all $\nu\in{\rm Irr}(G)$ we see that
$$\operatorname{id}_{V_{\lambda }\boxtimes V_\mu}=\theta_{V_{\lambda }\boxtimes V_\mu}=c_{V_{\mu}, V_{\lambda }}\circ c_{V_{\lambda },V_{\mu}}\circ (\theta_{V_{\lambda }}\boxtimes \theta_{V_{\mu}})=c_{V_{\mu}, V_{\lambda }}\circ c_{V_{\lambda },V_{\mu}}.$$
Finally, we show that $\mathcal{E}_{V^G}$ is braided equivalent to the symmetric braided fusion category ${\rm Rep}(G)$. The categorical dimension $\dim(X)$ of an object $X$ in a spherical fusion category is defined as the trace of the identity morphism of $X$. Under our assumptions, $\dim(X)={\rm FPdim}(X)$ is always positive for any object $X$ in $\mathcal{C}_{V^G}$ or ${\rm Rep}(V)$. The positivity of $\dim(V_\lambda )$ together with the fact that $\theta_{V_{\lambda }}=1$ implies that $\mathcal{E}_{V^G}$ is a Tannakian category. By \cite{De}, $\mathcal{E}_{V^G}$ is braided equivalent to
${\rm Rep}(H)$ for a finite group $H.$ However, this argument alone does not show that $H$ is isomorphic to $G.$
We now prove that ${\rm Rep}(G)$ is braided equivalent to $\mathcal{E}_{V^G}$ directly. For this purpose we define a functor $F=F^{V,G}$ from ${\rm Rep}(G)$ to $\mathcal{E}_{V^G}$ as follows. On objects we set
$$F(X)=\mbox{Hom}_{G}(X^*,V)=\bigoplus_{n\geq 0}\mbox{Hom}_{G}(X^*,V_n)$$ where $X^*$ is the dual of $X.$ It is easy to see that $F(X)$ is a $V^G$-module
such that for $a\in V^G$ and $f\in F(X),$ $(Y(a,x)f)(u)=Y(a,x)fu$ for $u\in X^*$ as $X$ is a $G$-module.
Now we prove that $F(X)$ lies in category $\mathcal{E}_{V^G}.$
It suffices to assume $X=W_{\lambda}$ for some $\lambda\in {\rm Irr}(G).$ Note that $W_{\lambda}^*$ is isomorphic to $W_{\lambda^*}$ where $\lambda^*$ is the dual character of $\lambda.$ One can easily see that $F(X)$ is isomorphic to $V_{\lambda^*}$ and lies in $\mathcal{E}_{V^G}.$ Moreover, $\mbox{Hom}_G(X^*,V_n)$ is the eigenspace of $L(0)$
with eigenvalue $n.$
We also need to deal with morphisms. Let $X,Y$ be two $G$-modules and $\alpha:X\to Y$ be a $G$-module morphism. Let $\alpha':Y^*\to X^*$
be the adjoint map. For $f\in F(X)$ define $F(\alpha)(f)=f\alpha': F(X)\to F(Y)$. We assert that $F(\alpha)$ is a $V^G$-module homomorphism.
Let $f\in F(X)$. For any $v\in Y^*$ and $a\in V^G$ we see that $$F(\alpha )(Y(a,x)f)(v)=Y(a,x)f\alpha'(v)=Y(a,x)F(\alpha)(f)(v).$$
So $F$ is a functor from ${\rm Rep}(G)$ to $\mathcal{E}_{V^G}$.
Next we show that $F$ is a braided tensor functor. Let $X,Y$ be $G$-modules. We use the natural identification between $(X\otimes Y)^*$ with $Y^*\otimes X^*.$ For $f\in F(X)$ and $g\in F(Y),$ one can show that
$${\cal Y}(f,x)g(v\otimes u)=Y(fu,x)gv$$
for $v\otimes u\in Y^*\otimes X^*$ defines an intertwining operator of type $\left(_{F(X),F(Y)}^{F(X\otimes Y)}\right).$ In fact, for any $a\in V^G$ and formal variables $x_0,x_1,x_2$ we have to prove that
\begin{align*}
& x_{0}^{-1}\delta\left(\frac{x_{1}-x_{2}}{x_{0}}\right)Y(a,x_{1}){\cal Y}(f,x_{2})g-x_{0}^{-1}\delta\left(\frac{x_{2}-x_{1}}{-x_{0}}\right){\cal Y}(f,x_{2})Y(a,x_{1})g\\
& =x_{2}^{-1}\delta\left(\frac{x_{1}-x_{0}}{x_{2}}\right){\cal Y}\left(Y\left(a,x_{0}\right)f,x_{2}\right)g
\end{align*}
or
\begin{align*}
& x_{0}^{-1}\delta\left(\frac{x_{1}-x_{2}}{x_{0}}\right)Y(a,x_{1})Y(fu,x_{2})gv-x_{0}^{-1}\delta\left(\frac{x_{2}-x_{1}}{-x_{0}}\right)Y(fu,x_{2})Y(a,x_{1})gv\\
& =x_{2}^{-1}\delta\left(\frac{x_{1}-x_{0}}{x_{2}}\right)Y\left(Y\left(a,x_{0}\right)fu,x_{2}\right)gv.
\end{align*}
This is obvious as the last identity is just the Jacobi identity in $V.$ Thus ${\cal Y}(f,z)g$ is an intertwining map of type $\left(_{F(X),F(Y)}^{F(X\otimes Y)}\right)$ for any $z\in\mathbb C^\times.$
By the universal mapping property of the tensor product
$\boxtimes_{P(z)}$ we have a unique $V^G$-module homomorphism
$$J_{X,Y}: F(X)\boxtimes_{P(z)} F(Y) \to F(X\otimes Y)=\mbox{Hom}_G(Y^*\otimes X^*, V)$$
characterized by
$$\overline{J_{X,Y}}(f\boxtimes_{P(z)} g)(v\otimes u)=Y(fu,z)gv$$
for any $u\in X^*, v\in Y^*.$
Then we have
\begin{eqnarray*}
\overline{J_{X,Y}}(f\boxtimes_{P(z)} g)(v\otimes u)&=&Y(fu,z)gv\\
&=&e^{zL(-1)}Y(gv,-z)fu \\
&=&e^{L(-1)z}\overline{J_{Y,X}}(g\boxtimes_{P(-z)} f)(u\otimes v)\\
&=&\overline{J_{Y,X}}(e^{L(-1)z}g\boxtimes_{P(-z)}f)(u\otimes v)\\
&=&\overline{J_{Y,X}}\overline{c_{F(X),F(Y)}}(f\boxtimes_{P(z)} g)(u\otimes v).
\end{eqnarray*}
In particular,
$$\overline{J_{X,Y}}(f\boxtimes g)(v\otimes u)=\overline{J_{Y,X}}\overline{c_{F(X),F(Y)}}(f\boxtimes g)(u\otimes v).$$
Let
$\pi_{X,Y}: X\otimes Y\to Y\otimes X$ be the natural braiding of the vector space tensor product. Then $F( \pi_{X,Y}) \overline{J_{X,Y}}(f\boxtimes g)(u\otimes v)= \overline{J_{X,Y}}(f\boxtimes g)(v\otimes u).$ That is,
$J_{X,Y}$ is a natural isomorphism such that the following commuting diagram holds
for objects $X,Y$ in ${\rm Rep}(G):$
\begin{equation*}
\begin{tikzcd}
F(X)\boxtimes F(Y) \arrow[r, "c_{F(X),F(Y)}"] \arrow[d,"J_{X,Y}"]&F(Y)\boxtimes F(X) \arrow[d,"J_{Y,X}"]\\
F(X\otimes Y) \arrow[r, "F(\pi_{X,Y})"] &F(Y\otimes X).
\end{tikzcd}
\end{equation*}
Let $\epsilon : V^G \to F^{V,G}(\underline{1})$ be the natural map $a \mapsto f_a$ with $f_a(1)=a$ for $a \in V^G$, where $\underline{1}$ denotes the trivial $G$-module $\mathbb C$. Then $\epsilon $ is clearly a $V^G$-module isomorphism.
Now we prove that $(F, J, \epsilon )$ is a monoidal functor. That is, we need to verify the following commuting diagrams
\begin{equation}\label{e1}
\begin{tikzcd}
F(X)\boxtimes (F(Y) \boxtimes F(Z))\arrow[r,"\operatorname{id}\otimes J_{Y,Z}"] \arrow[d, "A_{F(X),F(Y),F(Z)}"] &F(X)\boxtimes F(Y\otimes Z)\arrow[r,"J_{X, Y\otimes Z}"] & F(X\otimes (Y\otimes Z)) \arrow[d, "F(a_{X,Y,Z})"] \\
(F(X)\boxtimes F(Y))\boxtimes F(Z) \arrow[r, "J_{X,Y}\otimes \operatorname{id}"] & F(X\otimes Y)\boxtimes F(Z) \arrow[r, "J_{X\otimes Y,Z}"] &F((X\otimes Y)\otimes Z)\ \end{tikzcd}
\end{equation}
\begin{equation}\label{e2}
\begin{tikzcd}
V^G \boxtimes F(X) \arrow[r,"\epsilon \boxtimes \operatorname{id}"] \arrow[d,"l_{F(X)}"] & F(\underline{1}) \boxtimes F(X) \arrow[d,"J_{\underline{1}, X}"] \\
F(X) & F(\underline{1} \otimes X) \arrow[l,"F(l_X)"]
\end{tikzcd} \quad\text{and}\quad
\begin{tikzcd}
F(X) \boxtimes V^G \arrow[r,"\operatorname{id} \boxtimes \epsilon "] \arrow[d,"r_{F(X)}"] & F(X) \boxtimes F(\underline{1}) \arrow[d,"J_{X, \underline{1}}"] \\
F(X) & F(X\otimes \underline{1}) \arrow[l,"F(r_X)"]
\end{tikzcd} \quad
\end{equation}
where $a_{X,Y,Z}$, $l_X$, $r_X$ are respectively the associativity, left and right unit isomorphisms of vector spaces, and $l_{F(X)}$, $r_{F(X)}$ respectively denote the left and the right unit isomorphisms of $V^G$-mod.
Since the proofs of two commuting diagrams in (\ref{e2})
are similar, we only give a proof of the first commuting diagram. Note that $l_{F(X)}: V^G \boxtimes F(X)\to F(X)$
is characterized by $l_{F(X)}(\mathbf{1}\boxtimes g)=g$ for $g\in F(X).$ That is,
$l_{F(X)}(\mathbf{1}\boxtimes g)(u)=g(u)$ for $u\in X^*.$
On the other hand,
\begin{align*}
(F(l_X)\circ J_{\underline{1},X}\circ (\epsilon \boxtimes \operatorname{id}))({\bf 1}\boxtimes g)(u)&=(J_{\underline{1},X}\circ (\epsilon \boxtimes \operatorname{id}))({\bf 1}\boxtimes g)(u\otimes 1)\\
&=J_{\underline{1},X}(f_{{\bf 1}}\boxtimes g) (u\otimes 1)\\
&=Y({\bf 1},1)g(u)\\
&=g(u).
\end{align*}
That is, $F(l_X)\circ J_{\underline{1},X}\circ \epsilon \boxtimes \operatorname{id}=l_{F(X)}.$
We now prove (\ref{e1}). Let $z_1, z_2>0$ and let $\gamma$ be a path in $\mathbb R^\times$ from $z_1$ to $z_2.$ Then we have the following commuting diagram
\begin{equation*}
\begin{tikzcd}
F(X)\boxtimes_{P(z_1)} F(Y) \arrow[r, "J_{X,Y}"] \arrow[d,"T_\gamma"]&F(X\otimes Y) \arrow[d,"\operatorname{id}"]\\
F(X)\boxtimes_{P(z_2)}F(Y) \arrow[r, "J_{X,Y}"] &F(X\otimes Y)
\end{tikzcd}
\end{equation*}
by noting that
\begin{equation*}
\begin{split}
(\overline{J_{X,Y}}\circ \overline{T_{\gamma}})(f \boxtimes_{P(z_1)}g)(v\otimes u)&=\overline{J_{X,Y}}I_{\boxtimes_{P(z_2)}}(f,e^{l(z_1)})g(v\otimes u)\\
&=\overline{J_{X,Y}}I_{\boxtimes_{P(z_2)}}(f,z_1)g(v\otimes u)\\
&=Y(fu,z_1)gv
\end{split}
\end{equation*}
where we have used the fact that $I_{\boxtimes_{P(z_2)}}$ only involves integral powers of $z.$
Let $z_1>z_2>z_1-z_2>0$ and let $\gamma_i$ be as before for $i=1,2,3,4.$ So it is good enough to show that the following diagram is commutative:
\begin{equation*}
\begin{tikzcd}
F(X)\boxtimes (F(Y) \boxtimes F(Z)) \arrow[r,"\operatorname{id}\boxtimes J_{Y,Z}"] \arrow[d, "(\operatorname{id}\boxtimes_{ P(z_1) }T_{\gamma_2} )\circ T_{\gamma_1}"] &F(X)\boxtimes F(Y\otimes Z) \arrow[r,"J_{X, Y\otimes Z}"] \arrow[d,"T_{\gamma_1}"]&F(X\otimes (Y\otimes Z)) \arrow[d, "\operatorname{id}"] \\
F(X)\boxtimes_{P(z_1)} (F(Y) \boxtimes_{P(z_2)} F(Z)) \arrow[r,"\operatorname{id}\otimes J_{Y,Z}"] \arrow[d, "A_{z_1,z_2}"] &F(X)\boxtimes_{P(z_1)} F(Y\otimes Z) \arrow[r,"J_{X, Y\otimes Z}"]
&F(X\otimes (Y\otimes Z)) \arrow[d, "F(a_{X,Y,Z})"] \\
(F(X)\boxtimes_{P(z_1-z_2)} F(Y)) \boxtimes_{P(z_2)} F(Z) \arrow[r,"J_{X,Y}\otimes \operatorname{id}"] \arrow[d, "T_{\gamma_3} \circ (T_{\gamma_4} \boxtimes_{P(z_2)}\operatorname{id} )"] &F(X\otimes Y)\boxtimes_{P(z_2)} F( Z) \arrow[r,"J_{X\otimes Y, Z}"] \arrow[d,"T_{\gamma_3}"]
&F((X\otimes Y)\otimes Z) \arrow[d, "\operatorname{id}"] \\
(F(X)\boxtimes F(Y))\boxtimes F(Z) \arrow[r, "J_{X,Y}\otimes \operatorname{id}"] &F(X\otimes Y)\boxtimes F(Z) \arrow[r, "J_{X\otimes Y,Z}"] &F((X\otimes Y)\otimes Z)
\end{tikzcd}
\end{equation*}
From the discussion above, the sub-diagrams involving the first two rows and the last two rows are commutative.
Now, we discuss the commutativity of the sub-diagram involving the second and third rows. Let $f\in F(X), g\in F(Y), h\in F(Z)$ and $u\in X^*,v\in Y^*, w\in Z^*.$ Then
\begin{equation*}
\begin{split}
& \ \ \ (\overline{F(a_{X,Y,Z})}\circ\overline{J_{X,Y\otimes Z}}\circ \overline{\operatorname{id}\otimes J_{Y,Z}} )(f\boxtimes_{P(z_1)}(g\boxtimes_{P(z_2)}h))(w\otimes(v\otimes u))\\
&=Y(fu,z_1)Y(gv,z_2)hw, \quad \text{and}
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
& \ \ \ (\overline{J_{X\otimes Y, Z}}\circ \overline{J_{X,Y}\otimes\operatorname{id}} \circ \overline{A_{z_1,z_2}})(f\boxtimes_{P(z_1)}(g\boxtimes_{P(z_2)}h))(w\otimes(v\otimes u))\\
&=(\overline{J_{X\otimes Y, Z}}\circ \overline{J_{X,Y}\otimes\operatorname{id}})((f\boxtimes_{P(z_1-z_2)} g)\boxtimes_{P(z_2)}h)(w\otimes(v\otimes u))\\
&=Y(Y(fu,z_1-z_2)gv,z_2)hw.
\end{split}
\end{equation*}
Since $Y(fu,z_1)Y(gv,z_2)hw$ and $Y(Y(fu,z_1-z_2)gv,z_2)hw$ agree for $z_1>z_2>z_1-z_2>0$ by the associativity of vertex operators, this gives the commutativity of the diagram
\begin{equation*}
\begin{tikzcd}
F(X)\boxtimes_{P(z_1)} (F(Y) \boxtimes_{P(z_2)} F(Z)) \arrow[r,"\operatorname{id}\otimes J_{Y,Z}"] \arrow[d, "A_{z_1,z_2}"] &F(X)\boxtimes_{P(z_1)} F(Y\otimes Z) \arrow[r,"J_{X, Y\otimes Z}"]
&F(X\otimes (Y\otimes Z)) \arrow[d, "F(a_{X,Y,Z})"] \\
(F(X)\boxtimes_{P(z_1-z_2)} F(Y)) \boxtimes_{P(z_2)} F(Z) \arrow[r,"J_{X,Y}\otimes \operatorname{id}"] &F(X\otimes Y)\boxtimes_{P(z_2)} F( Z) \arrow[r,"J_{X\otimes Y, Z}"] &F((X\otimes Y)\otimes Z) \,.
\end{tikzcd}
\end{equation*}
Finally we prove that $F$ is an equivalence. It is clear that $F$ is injective on morphisms. Since $F(W_\lambda) \cong V_{\lambda^*}$ for any irreducible character $\lambda$, $F$ is essentially surjective and $\dim_\mathbb C \mbox{Hom}_G(X,Y) = \dim_\mathbb C \mbox{Hom}_{V^G}(F(X), F(Y))$. Therefore, $F$ is bijective on morphism spaces, and $F$ is an equivalence.
\end{proof}
From Theorem \ref{theorem2}, $F^{V,G} : {\rm Rep}(G) \to \mathcal{C}_{V^G}$ is an embedding for any vertex operator algebra $V$ satisfying the assumptions (V1)-(V3), and its image is equivalent to $\mathcal{E}_{V^G}$. We may simply denote ${\rm Rep}(G)$ by $\mathcal{E}$ in the sequel, and the pair $(\mathcal{C}_{V^G}, F^{V,G})$ defines a braided $\mathcal{E}$-category.
\begin{thm}\label{t5.2} With the assumptions (V1)-(V3), we have
\begin{enumerate}
\item[\rm (1)]
${\rm FPdim} (\mathcal{F}_{V^G})=o(G) \cdot {\rm FPdim} (\mathcal{C}_V)$ and ${\rm FPdim} (\mathcal{C}_{V^G})=o(G) \cdot {\rm FPdim} (\mathcal{F}_{V^G}).$
\item[\rm (2)] $\mathcal{F}_{V^G}$ is a braided fusion category.
\item[\rm (3)] $\mathcal{F}_{V^G}'=\mathcal{E}_{V^G}.$ That is, the M\"uger center of $\mathcal{F}_{V^G}$ is the symmetric fusion category $\mathcal{E}_{V^G}$, and $(\mathcal{F}_{V^G}, F^{V,G})$ is a nondegenerate braided $\mathcal{E}$-category.
\item[\rm (4)] $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{F}_{V^G}$. If $V$ is holomorphic, $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{E}_{V^G}$ and is braided equivalent to ${\cal Z}({\rm Vec}_G^{\alpha})$ for some $\alpha\in H^3(G,S^1).$
\item[\rm (5)]
$\overline{\mathcal{C}_{V^G}}$ is a minimal modular extension of $\overline{\mathcal{F}_{V^G}}.$ If $V$ is holomorphic, $\overline{\mathcal{C}_{V^G}}$ is a minimal modular extension of $\mathcal{E}_{V^G}$ and is braided equivalent to ${\cal Z}({\rm Vec}_G^{\bar{\alpha}})$ where $\bar\alpha=\alpha^{-1}.$
\end{enumerate}
\end{thm}
\begin{proof} (1) Let $J_0$ be the orbit representatives consisting of irreducible $V$-modules. Then by Theorem \ref{mthm1},
$$ {\rm FPdim} (\mathcal{F}_{V^G})=\sum_{M\in J_0}\sum_{\lambda \in \Lambda_{M}}\qdim_{V^G} (M_{\lambda})^2,$$
and
$$\qdim_{V^G} (M_{\lambda})=[G:G_{M}]\dim (W_{\lambda}) \cdot \qdim_{V} (M),$$
$$o(G_M)=\sum_{\lambda \in \Lambda_{M}} \dim (W_{\lambda})^2.$$
Thus,
\begin{align*}
{\rm FPdim} (\mathcal{F}_{V^G})
=&\sum_{M\in J_0}\sum_{\lambda \in \Lambda_{M}}
[G:G_{M}]^2 \dim (W_{\lambda})^2\cdot\qdim_{V} (M)^2\\
=&\sum_{M\in J_0}[G:G_M]^2 o(G_M)\qdim_{V} (M)^2\\
=&o(G) \sum_{M\in J_0}[G:G_M] \qdim_{V} (M)^2\\
=&o(G)\sum_{M\in J_0}\sum_{N\in G\cdot M}\qdim_{V} (N)^2\\
=&o(G)\cdot {\rm FPdim} (\mathcal{C}_V).
\end{align*}
The identity ${\rm FPdim} (\mathcal{C}_{V^G})=o(G)\cdot {\rm FPdim} (\mathcal{F}_{V^G})$ follows from ${\rm FPdim} (\mathcal{C}_{V^G})=o(G)^2\cdot {\rm FPdim} (\mathcal{C}_V)$ \cite{DRX}.
(2) Since $\mathcal{F}_{V^G}$ is a subcategory of the modular tensor category $\mathcal{C}_{V^G},$ it suffices to show that for any $X,Y$ in $\mathcal{F}_{V^G},$
$X\boxtimes_{V^G} Y$ is also in $\mathcal{F}_{V^G}.$
Recall the fusion category ${\rm Rep}(V)=\oplus_{g\in G}{\rm Rep}(V)_g$ from Section 4. There is an induction functor
$${\rm Ind}_{V^G}^V: \mathcal{C}_{V^G}\to {\rm Rep}(V)$$
such that ${\rm Ind}_{V^G}^V(X)=V\boxtimes_{V^G} X$ for any object $X$ in $ \mathcal{C}_{V^G}$. It follows from \cite{KO} that ${\rm Ind}_{V^G}^V$ is a tensor functor, and it has a right adjoint $\mbox{Res}^{V}_{V^G}: {\rm Rep}(V) \to \mathcal{C}_{V^G}$, which is the restriction functor. In particular, the following holds:
(i) $\mbox{Hom}_V({\rm Ind}_{V^G}^V(X), M)$ and $\mbox{Hom}_{V^G}(X, \mbox{Res}^{V}_{V^G}(M))$ are naturally isomorphic for any $V^G$-module $X$ and $M\in {\rm Rep}(V),$ and
(ii) ${\rm Ind}_{V^G}^V(W_1\boxtimes_{V^G} W_2)$ and ${\rm Ind}_{V^G}^V(W_1)\boxtimes_{{\rm Rep}(V)} {\rm Ind}_{V^G}^V(W_2)$ are naturally isomorphic for any $V^G$-modules $W_1, W_2$.
If $M$ is an irreducible $g$-twisted $V$-module and $\lambda\in\Lambda_{M}$ we claim that
\begin{equation}\label{eq:iso}
{\rm Ind}_{V^G}^V(M_{\lambda}) \cong \bigoplus_{N\in M\circ G}W_{\lambda}\otimes N=\bigoplus_{i=1}^{[G:G_M]} W_{\lambda}\otimes M\circ g_i
\end{equation}
as $V$-modules, where $\{g_1, \dots, g_{[G:G_M]}\}$ is a set of representatives of the right cosets of $G_M$ in $G$. Note from Theorem \ref{mthm1} that for any irreducible twisted $V$-module $N$, we have $\mbox{Hom}_{V^G}(M_{\lambda},N)=0$ if $N \not\in M \circ G$,
and $\mbox{Hom}_{V^G}(M_{\lambda},N)\cong W_{\lambda}$ if $N \in M\circ G$. Using (i),
we immediately conclude the isomorphism in \eqref{eq:iso}.
Let $M,N$ be two irreducible $V$-modules and $\lambda\in \Lambda_M$, $\mu\in \Lambda_N$. We claim that $M_{\lambda}\boxtimes_{V^G}N_{\mu}$ lies in
$\mathcal{F}_{V^G}.$ First, for any $\lambda \in \Lambda_M$ and any irreducible $g$-twisted $V$-module $X$ with $g \ne 1$,
$$
\mbox{Hom}_{{\rm Rep}(V)}({\rm Ind}_{V^G}^V M_\lambda, X) \cong \mbox{Hom}_{V^G}(M_\lambda, \mbox{Res}^V_{V^G}X) = 0
$$
by (i) and Theorem \ref{mthm1}. Therefore, ${\rm Ind}_{V^G}^V M_\lambda \in \mathcal{C}_V$. By (ii), for any $\mu \in \Lambda_N$, we have
$$
{\rm Ind}_{V^G}^V(M_{\lambda}\boxtimes_{V^G}N_{\mu}) \cong
{\rm Ind}_{V^G}^V (M_{\lambda})\boxtimes_{{\rm Rep}(V)}{\rm Ind}_{V^G}^V(N_{\mu}) \in\mathcal{C}_{V}.
$$
It follows from (i) that $\mbox{Hom}_{V^G}(M_{\lambda}\boxtimes_{V^G}N_{\mu}, X)=0$ for any $g$-twisted $V$-module $X$ and $1\ne g\in G.$ This implies that $M_{\lambda}\boxtimes_{V^G}N_{\mu}$ lies in $\mathcal{F}_{V^G}.$ Thus $\mathcal{F}_{V^G}$ is a fusion subcategory of $\mathcal{C}_{V^G}$.
(3) We first prove that for any $\lambda\in {\rm Irr}(G),$ $V_{\lambda}$ lies in $\mathcal{F}_{V^G}'$, and hence ${\rm FPdim}(\mathcal{F}_{V^G}')\ge o(G).$ Equivalently, we need to show that
$$c_{V_{\lambda},M_{\mu}}\circ c_{M_{\mu},V_{\lambda}}=\operatorname{id}_{M_{\mu}\boxtimes V_{\lambda}}$$
for any irreducible $V$-module $M,$ $\lambda\in {\rm Irr}(G)$ and $\mu\in \Lambda_M.$ It follows from \eqref{eq:iso} that
$$
{\rm Ind}_{V^G}^V(V_\lambda) \cong W_\lambda \otimes V
$$
as $V$-modules. Therefore,
\begin{align*}
{\rm Ind}_{V^G}^V(M_\mu \boxtimes_{V^G} V_\lambda) & \cong {\rm Ind}_{V^G}^V(M_\mu) \boxtimes_{{\rm Rep}(V)} {\rm Ind}_{V^G}^V(V_\lambda) \\
& \cong W_\lambda \otimes {\rm Ind}_{V^G}^V(M_\mu) \cong W_\lambda\otimes W_{\mu} \otimes \bigoplus_{i=1}^{[G:G_M]} M\circ g_i
\end{align*}
By (i) and Theorem \ref{mthm1}, we find that $M_{\mu}\boxtimes V_{\lambda}$ is isomorphic
to a direct sum of irreducible $V^G$-submodules of $M$, since $M\circ g_i$ and $M$ are isomorphic as $V^G$-modules.
This implies that $\theta_M=\theta_{M_{\mu}}=\theta_{M_\mu\boxtimes V_{\lambda}}$ as complex numbers. Using the fact that $\theta_{V_{\lambda }}=1$ and the relation
$$\theta_{M_\mu\boxtimes V_\lambda}=c_{V_{\lambda }, M_{\mu}}\circ c_{M_{\mu},V_{\lambda }}\circ (\theta_{M_{\mu}}\boxtimes \theta_{V_{\lambda }}),$$
we conclude $c_{V_{\lambda},M_{\mu}}\circ c_{M_{\mu},V_{\lambda}}=\operatorname{id}_{M_{\mu}\boxtimes V_{\lambda}}.$
As ${\cal C}_{V^G}$ is modular, it follows from (\ref{3.1}) that
$${\rm FPdim} (\mathcal{C}_{V^G})={\rm FPdim}(\mathcal{F}_{V^G})\cdot {\rm FPdim}(C_{\mathcal{C}_{V^G}}(\mathcal{F}_{V^G})).$$
From (1) we know that
$${\rm FPdim} (\mathcal{C}_{V^G})={\rm FPdim}(\mathcal{F}_{V^G})\cdot o(G)$$
which forces ${\rm FPdim}(C_{\mathcal{C}_{V^G}}(\mathcal{F}_{V^G}))=o(G).$ Since $\mathcal{E}_{V^G}$ is a full subcategory of $C_{\mathcal{C}_{V^G}}(\mathcal{F}_{V^G})$ and they have the same dimension, we
immediately obtain
$$\mathcal{F}_{V^G}'=C_{\mathcal{C}_{V^G}}(\mathcal{F}_{V^G})=\mathcal{E}_{V^G}.$$
(4) By (1)-(3) and Lemma \ref{l3.1}, $\mathcal{C}_{V^G}$ is a minimal modular extension of $\mathcal{F}_{V^G}.$ If $V$ is holomorphic, then $\mathcal{F}_{V^G} = \mathcal{E}_{V^G}$ and the statement follows from \cite[Thm. 4.22]{LKW1}.
(5) By Theorem \ref{theorem2}, $\overline{\mathcal{E}_{V^G}}=\mathcal{E}_{V^G}$ is a subcategory of $\overline{\mathcal{F}_{V^G}}.$ It follows from (1)-(4) that $\overline{\mathcal{C}_{V^G}}$ is a minimal modular extension of $\overline{\mathcal{F}_{V^G}}.$ If $V$ is holomorphic, then $\overline{\mathcal{F}_{V^G}} = \mathcal{E}_{V^G}$ and the statement follows from \cite[Thm. 4.22]{LKW1}.
\end{proof}
Note that $\mathbb C[G]^*$ is a regular algebra in ${\rm Rep}(G)$, which is the dual $G$-module of $\mathbb C[G]$, and is a commutative associative algebra over $\mathbb C$ with a basis $\{e_a \mid a \in G\}$ of complete orthogonal idempotents given by $e_a(b)=\delta_{a,b}$. It is easy to see that $a \cdot e_b = e_{ab}$, and the product $\mu: \mathbb C[G]^*\otimes \mathbb C[G]^*\to \mathbb C[G]^*$ defined by $e_ae_b=\delta_{a,b}e_a$ is a $G$-module homomorphism. The unit map of this algebra is given by $i_{\mathbb C[G]^*}: \underline{1} \to \mathbb C[G]^*, 1 \mapsto 1_{\mathbb C[G]^*}=\sum_{a \in G} e_a$. Now, we can prove the following result:
\begin{thm}\label{t5.3} Let $V$ and $G$ be as before. Then
\begin{enumerate}
\item[\rm (1)]
$F^{V,G}(\mathbb C[G]^*)$ is an algebra isomorphic to $V$ in category
$\mathcal{E}_{V^G}.$
\item[\rm (2)]
For any subgroup $H$ of $G$, $F^{V,G}(\mathbb C[G/H]^*)$ is a subalgebra of $F^{V,G}(\mathbb C[G]^*)$ isomorphic to $V^H$ in category $\mathcal{E}_{V^G}.$
\end{enumerate}
\end{thm}
\begin{proof}
(1) For short we set $F=F^{V,G}$ in this proof. We identify $\mathbb C[G]^{**}$ with $\mathbb C[G]$ in the usual way as $G$-modules, which means $b(e_a) = e_{a}(b) =\delta_{a,b}$. Since $\mathbb C[G]$ is a free $G$-module generated by $1$, for any $G$-module $W$, we have the natural isomorphism of vector spaces
$\mbox{Hom}_G(\mathbb C[G], W) \cong W$, which implies
$\mbox{Hom}_G(\mathbb C[G], W) = \{f_w \mid w \in W\}$ where $f_w(a) =aw$ for $a \in G$. In particular, we have $F(\mathbb C[G]^*)=\{f_v \mid v\in V\}.$
Let $U_n=\mbox{Hom}(\mathbb C[G], V_n) = \{f_v\mid v\in V_n\}$ for $n\geq 0.$
We now show that the algebra $U=\oplus_{n\geq 0}U_n=F(\mathbb C[G]^*)$ with product map $\mu_U= F(\mu)\circ J_{\mathbb C[G]^*, \mathbb C[G]^*}: U\boxtimes U\to U$ and unit map $i_U:=F(i_{\mathbb C[G]^*})\circ \epsilon : V^G \to U$ is isomorphic to $V$ in $\mathcal{E}_{V^G}.$ Note that the adjoint map $\mu': \mathbb C[G]\to \mathbb C[G]\otimes \mathbb C[G]$ of $\mu$ is determined by $\mu'(a)=a \otimes a$ for any $a \in G$. Thus $F(\mu): F(\mathbb C[G]^*\otimes \mathbb C[G]^*)\to F(\mathbb C[G]^*)$ is given by $F(\mu)(f)=f\mu'$
for $f\in F(\mathbb C[G]^*\otimes \mathbb C[G]^*)$. It follows from the braided tensor equivalence $F$ that $i_U:= F(i_{\mathbb C[G]^*}) \circ \epsilon $ is the unit map of the algebra $U$ in $\mathcal{E}_{V^G}$. For any $u, v\in V,$ and $a\in G$,
\begin{align*}
\mu_U (f_u\boxtimes f_v)(a) & =(F(\mu)\circ J_{\mathbb C[G]^*, \mathbb C[G]^*})(f_u\boxtimes f_v)(a)\\
& =Y(au,1)av=aY(u,1)v=f_{Y(u,1)v}(a)
\end{align*}
where $f_{Y(u,1)v}$ is understood to be $\sum_{n\in\mathbb Z}f_{u_nv}.$
Therefore, $\mu_U(f_u\boxtimes f_v)=f_{Y(u,1)v}$.
Recall from \cite{HKL} that $V$ is also an algebra in $\mathcal{E}_{V^G}$ with the algebra product map
$$\mu_V(u\boxtimes v)=Y(u,1)v.$$
One can define the $\mathbb C$-linear isomorphism $\phi: v\mapsto f_v$ for $v\in V$ from $V$ to $U$. Then $\phi$ is a $V^G$-module map by the definition of $U$, and it satisfies $\mu_U \circ (\phi \boxtimes \phi) = \phi \circ \mu_V$ and $i_U =\phi\circ i_V$, where the unit map $i_V: V^G \to V$ of $V$ is the inclusion map.
In particular, $U$ is a vertex operator algebra isomorphic to $V.$ In fact, one can see directly that the vertex operator algebra structure on $U$ is given by $Y(f_u,x)f_v=f_{Y(u,x)v}=\sum_{n\in\mathbb Z}f_{u_nv}x^{-n-1}$
for $u,v\in V.$ Since $F$ is a braided tensor equivalence, $U=F(\mathbb C[G]^*)$ is a regular algebra of $\mathcal{E}_{V^G}$ isomorphic to $V$.
(2) For any subgroup $H$ of $G$, $\iota: \mathbb C[G/H]^* \to \mathbb C[G]^*, e_{aH} \mapsto \sum_{h \in H} e_{ah}$ is an algebra embedding in $\mathcal{E}$, where $e_{aH}(bH) = \delta_{aH, bH}$. Therefore, $F(\mathbb C[G/H]^*) \xrightarrow{F(\iota)}F(\mathbb C[G]^*)$ is an algebra embedding in $\mathcal{E}_{V^G}$.
We also identify $\mathbb C[G/H]^{**}$ with $\mathbb C[G/H]$ as $G$-modules.
From (1) we see that
$$F(\mathbb C[G/H]^*)=\mbox{Hom}_G(\mathbb C[G/H],V)=\{f_v|v\in V, f_v(ah)=f_v(a)\ \forall a\in G, h\in H\}.$$
So $f_v\in F(\mathbb C[G/H]^*)$ if and only if $ahv=av$ for any $a\in G, h\in H.$ This forces $v\in V^H$, and hence $F(\mathbb C[G/H]^*)=\{f_v|v\in V^H\}.$ Recall from (1) that $\phi(v)=f_v$ for $v\in V$ is an algebra isomorphism from $V$ to $F(\mathbb C[G]^*).$ It is clear now that
the restriction of $\phi$ to $V^H$ gives an algebra
isomorphism from $V^H$ to $F(\mathbb C[G/H]^*),$ as desired.
\end{proof}
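For instance, for $G=\mathbb Z_2=\{1,g\}$ the algebra $\mathbb C[G]^*$ has idempotent basis $\{e_1,e_g\}$ with $g\cdot e_1=e_g$, and as a $G$-module
$$\mathbb C[\mathbb Z_2]^*=\mathbb C(e_1+e_g)\oplus \mathbb C(e_1-e_g)$$
is the sum of the trivial and the sign representations; accordingly, $F^{V,G}(\mathbb C[\mathbb Z_2]^*)\cong V=V^G\oplus V^{-}$ in $\mathcal{E}_{V^G}$, where $V^{-}$ denotes the $(-1)$-eigenspace of $g$ on $V$.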
\begin{rem} Theorem \ref{t5.3} gives a categorical interpretation of the Galois correspondence established in \cite{DM2}, \cite{HMT}: there is a bijection between the subgroups $H$ of $G$ and the vertex operator subalgebras of $V$ containing $V^G$, given by sending $H$ to $V^H.$
Combining this with a result in \cite{HKL}, we know that the condensable algebras in $\mathcal{E}_{V^G}$ are exactly $V^H$ for subgroups $H$ of $G.$ On the other hand, the condensable algebras in ${\rm Rep}(G)$ are given by $\mathbb C[G/H]^*$ for subgroups $H$ of $G$ \cite{KO}. It is easy to see from Theorem \ref{t5.3} that $F(\mathbb C[G/H]^*)$ is isomorphic to $V^H.$
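For instance, if $G=\mathbb Z_4$, the chain of subgroups $1\subset \mathbb Z_2\subset \mathbb Z_4$ corresponds to the chain of condensable algebras $\mathbb C[\mathbb Z_4]^*\supset \mathbb C[\mathbb Z_4/\mathbb Z_2]^*\supset \mathbb C$ in ${\rm Rep}(\mathbb Z_4)$ and, via $F^{V,G}$, to the chain of vertex operator algebras $V\supset V^{\mathbb Z_2}\supset V^{\mathbb Z_4}=V^G.$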
\end{rem}
\section{The group $\mathcal{M}_v(\mathcal{E})$ and $\mathcal{M}_v(\mathcal{E})$-sets}
It is known from \cite{LKW1,LKW2} that $\mathcal{M}(\mathcal{E})$ is an abelian group. Our goal is to understand this group structure from the point of view of vertex operator algebras.
We need more notation and results on braided fusion categories. Let $\mathcal{C}$ and $\mathcal{D}$ be braided fusion categories. We denote by $\mathcal{C}\otimes \mathcal{D}$ the Deligne tensor product of $\mathcal{C}$ and $\mathcal{D}$. Then $L_\mathcal{C}=\oplus_{X\in {\cal O}(\mathcal{C})}X\otimes X^*$ is a condensable algebra in $\mathcal{C}\otimes \overline{\mathcal{C}}$ \cite{DMNO}. We also need a fact from \cite{KO} that the right adjoint of the tensor functor $\mathcal{E}\otimes \mathcal{E} \xrightarrow{\otimes} \mathcal{E}$ defines a braided tensor equivalence $R: \mathcal{E} \to \mathcal{E} \otimes_\mathcal{E} \mathcal{E} = (\mathcal{E} \otimes \mathcal{E})_{L_\mathcal{E}}$ and $R({\bf 1})\cong L_\mathcal{E}$ as algebras in $\mathcal{E} \otimes \mathcal{E}$. Now let $(\mathcal{C},\iota_{\mathcal{C}})$ and $(\mathcal{D},\iota_{\mathcal{D}})$ be braided $\mathcal{E}$-categories with embeddings $\iota_\mathcal{C}: \mathcal{E}\to \mathcal{C}$ and $\iota_\mathcal{D}: \mathcal{E}\to \mathcal{D}.$
Then
$$\iota_\mathcal{C}\otimes_\mathcal{E}\iota_\mathcal{D}: \mathcal{E} \xrightarrow{R} (\mathcal{E}\otimes \mathcal{E})_{L_\mathcal{E}}\xrightarrow{\iota_\mathcal{C} \otimes \iota_\mathcal{D}} (\mathcal{C}\otimes \mathcal{D})_{A}^0$$
defines an embedding of $\mathcal{E}$ into $(\mathcal{C}\otimes \mathcal{D})_{A}^0$, where $A =(\iota_\mathcal{C} \otimes \iota_\mathcal{D})R({\bf 1}) \cong (\iota_\mathcal{C} \otimes \iota_\mathcal{D})(L_\mathcal{E})$. Following \cite{DNO}, \cite{LKW1} one can define the tensor product of braided $\mathcal{E}$-categories as
\begin{equation} \label{eq:product}
\mathcal{C}\otimes_{\mathcal{E}}^{(\iota_\mathcal{C},\iota_\mathcal{D})}\mathcal{D}:=((\mathcal{C}\otimes \mathcal{D})_{A}^0, \iota_\mathcal{C}\otimes_\mathcal{E}\iota_\mathcal{D})\,.
\end{equation}
Let $G$ be a finite group and $V$ a vertex operator algebra satisfying conditions (V1)-(V3) such that $G$ acts faithfully on $V$ as automorphisms of $V$.
We say that two such vertex operator algebras $V_1, V_2$ are \emph{equivalent} if there exists an isomorphism $\sigma : V_1 \to V_2$ of vertex operator algebras which commutes with their $G$-actions. In this case, it is easy to see that $V_1^{G}$ and $V_2^{G}$ are isomorphic and their module categories are braided equivalent.
So the number of inequivalent faithful $G$-actions on $V$ as automorphisms is bounded by the cardinality of the conjugacy classes of $G$ in ${\rm Aut}(V).$ For example, there are two inequivalent faithful $\mathbb Z_2$ actions on the Moonshine vertex operator algebra $V^{\natural}$ \cite{FLM}.
We denote by ${\bf R}_G$ the collection of vertex operator algebras satisfying conditions (V1)-(V3) with $G$ acting faithfully as automorphisms. The subcollection of ${\bf R}_G$ consisting of holomorphic vertex operator algebras is denoted by ${\bf H}_G$.
The collection ${\bf H}_G$ of holomorphic vertex operator algebras can be generalized to non-holomorphic ones as follows: Fix a nondegenerate pseudounitary braided $\mathcal{E}$-category $\mathcal{F}$. Let
${\bf R}_G^\mathcal{F}$ be the collection of vertex operator algebras $V\in {\bf R}_G$ such that $\mathcal{F}_{V^G}$ is braided equivalent to $\mathcal{F}$. The underlying braided equivalence $j^{V, G}: \mathcal{F}_{V^G} \to \mathcal{F}$ induces an $\mathcal{E}$-braided equivalence $j^{V, G}: (\mathcal{F}_{V^G}, F^{V,G}) \to (\mathcal{F}, j^{V, G} \circ F^{V,G}).$
In particular, if $\mathcal{F}=\mathcal{E}$, ${\bf R}^{\mathcal{E}}_G = {\bf H}_G$. We will use the notation $[\mathcal{C}_{V^G}]$ to denote the equivalence class of the braided $\mathcal{E}$-category $(\mathcal{C}_{V^G}, F^{V,G})$ for any $V \in {\bf R}_G^\mathcal{F}$.
By Theorem \ref{t5.2}, $\mathcal{C}_{V^G}$
is a minimal modular extension of $\mathcal{E}$ for $V\in {\bf H}_G$, and $\mathcal{C}_{U^G}$ is a minimal modular extension of $\mathcal{F}$ for $U\in {\bf R}_G^\mathcal{F}.$
Set $\mathcal{M}_{v}(\mathcal{E})=\{[\mathcal{C}_{V^G}] \mid V\in {\bf H}_G\}$, and
$\mathcal{M}_{v}(\mathcal{F})=\{[\mathcal{C}_{U^G}] \mid U\in {\bf R}^\mathcal{F}_G\}$.
By Theorem \ref{t5.2}, we have the inclusions $\mathcal{M}_{v}(\mathcal{E})\subseteq \mathcal{M}(\mathcal{E})$ and $\mathcal{M}_{v}(\mathcal{F})\subseteq \mathcal{M}(\mathcal{F}).$
\begin{thm}\label{t7.1} Fix a finite group $G$ and let $\mathcal{E}={\rm Rep}(G)$. Then:
\begin{enumerate}
\item[\rm (1)] The product
$$[\mathcal{C}_{V^G}]\cdot [\mathcal{C}_{U^G}]=[\mathcal{C}_{(V\otimes U)^G}]$$
for $U,V\in {\bf H}_G$ on $\mathcal{M}_{v}(\mathcal{E})$ coincides with
the product of $\mathcal{M}(\mathcal{E})$ given by \eqref{eq:product},
and hence $\mathcal{M}_{v}(\mathcal{E})$ is a subgroup of
$\mathcal{M}(\mathcal{E})$.
Moreover, $\mathcal{M}_{v}(\mathcal{E})$ is isomorphic to a subgroup of $H^3(G, S^1)$, and $\mathcal{C}_{V^G}$ is braided equivalent to ${\cal Z}({\rm Vec}_G^{\alpha})$
for some $\alpha\in H^3(G,S^1)$ if $V\in {\bf H}_G.$
\item[\rm (2)] For any (pseudounitary) braided $\mathcal{E}$-category $\mathcal{F}$, if ${\bf R}_G^\mathcal{F}$ is not an empty collection, then the product $[\mathcal{C}_{V^G}]\cdot [\mathcal{C}_{W^G}]:=[\mathcal{C}_{(V\otimes W)^G}]$
for $V\in{\bf H}_G$ and $W\in {\bf R}_G^\mathcal{F}$ defines a free action of $\mathcal{M}_{v}(\mathcal{E})$ on $\mathcal{M}_{v}(\mathcal{F})$ with at most
$[{\cal M}(\mathcal{E}):{\cal M}_v(\mathcal{E})]$ many orbits.
\item[\rm (3)] The group $\mathcal{M}_{v}(\mathcal{E})$ acts freely
on $\{[\mathcal{C}_{W^G}]|W\in {\bf R}_G\}$ such that
$[\mathcal{C}_{V^G}]\cdot [\mathcal{C}_{W^G}]:=[\mathcal{C}_{(V\otimes W)^G}].$
\end{enumerate}
\end{thm}
\begin{proof} (1)
Let $W$ be any holomorphic vertex operator algebra. Then there exists a positive integer $n$ such that $G$ can be realized as a subgroup of the symmetric group $S_n$. Thus $G$ is an automorphism group of the holomorphic vertex operator algebra $V=W^{\otimes n}.$ That means $[\mathcal{C}_{V^G}]$ lies in $\mathcal{M}_{v}(\mathcal{E})$, and so $\mathcal{M}_{v}(\mathcal{E})$ is not empty.
All the statements in the theorem are consequences of the equality:
$$
[\mathcal{C}_{V^G}]\cdot [\mathcal{C}_{U^G}]=[\mathcal{C}_{(V\otimes U)^G}]
$$
for any $V \in \mathbf{H}_G$ and $U \in {\bf R}_G$. It amounts to proving that
\begin{equation}\label{7.1}
(\mathcal{C}_{(V\otimes U)^G}, F^{{V\otimes U, G}})\cong \mathcal{C}_{V^G}\otimes_\mathcal{E}^{(F^{V,G},F^{U,G})} \mathcal{C}_{U^G},
\end{equation}
as braided $\mathcal{E}$-categories, which means we need to find a braided tensor equivalence $\tilde\phi : (\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G})_A^0 \to \mathcal{C}_{(V \otimes U)^G}$ such that $\tilde\phi \circ (F^{V,G}\otimes_\mathcal{E} F^{U,G})\cong F^{V \otimes U, G}$ as tensor functors, where $A =(F^{V,G} \otimes F^{U,G})(L_\mathcal{E})$.
To find such a braided tensor equivalence $\tilde\phi$, we first show that $(V \otimes U)^G$ and $A$ are isomorphic algebras in $\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G}$ under an isomorphism $\phi$. This algebra isomorphism induces a (strict) tensor equivalence $\tilde\phi : (\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G})_A \to (\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G})_{(V \otimes U)^G}$ and hence a braided tensor equivalence $\tilde\phi: (\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G})^0_A \to (\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G})^0_{(V \otimes U)^G}=\mathcal{C}_{(V \otimes U)^G}$.
By Theorem \ref{t5.3}, there exists an algebra isomorphism
$$
V \otimes U \xrightarrow[\phi]{\sim} F^{{V\otimes U, G \times G}}(\mathbb C[G \times G]^*)
$$
in $\mathcal{E}_{(V \otimes U)^{G \times G}}$. Consider $G$ as the diagonal subgroup of $G \times G$. Then $\mathbb C[(G \times G)/G]^*$ is a subalgebra of $\mathbb C[G \times G]^*$ in ${\rm Rep}(G \times G)$. It follows from Theorem \ref{t5.3} (2) that the restriction of $\phi$ defines an isomorphism
$$
(V \otimes U)^G \xrightarrow[\phi]{\sim} F^{{V\otimes U, G \times G}}(\mathbb C[(G \times G)/G]^*)
$$
in $\mathcal{E}_{{(V \otimes U)^{G \times G}}}$. Note that if one identifies ${\rm Rep}(G \times G)$ with
$\mathcal{E} \otimes \mathcal{E}$, then $\mathbb C[(G \times G)/G]^* = L_\mathcal{E}$ and $F^{{V\otimes U, G \times G}} = F^{V,G} \otimes F^{U,G}$. Therefore, $\phi$ defines an isomorphism of algebras in $\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G}$ from $(V\otimes U)^G$ to $A.$ Since $A$ is an algebra in $\mathcal{E}_{V^G} \otimes \mathcal{E}_{U^G}$, so is $(V\otimes U)^G$.
Now, the algebra isomorphism $\phi$ in $\mathcal{C}_{V^G} \otimes \mathcal{C}_{U^G}$ induces a braided tensor equivalence $\tilde \phi: (\mathcal{C}_{V^G}\otimes \mathcal{C}_{U^G})_{A}^0 \to \mathcal{C}_{(V\otimes U)^G}$, and $\phi: (V \otimes U)^G \to \tilde \phi(A)$ is an isomorphism of $(V \otimes U)^G$-modules, and $(\tilde \phi \circ (F^{V,G} \otimes_\mathcal{E} F^{U,G}), J', \phi)$ is a braided tensor functor, where $J'= J^{V \otimes U,G \times G}.$ To complete the proof of \eqref{7.1}, we need to show that $(\mathcal{E}_{V^G}\otimes\mathcal{E}_{U^G})_{(V\otimes U)^G} = \mathcal{E}_{(V\otimes U)^G}$ and the equivalence of the two braided tensor functors:
$$(\tilde \phi \circ (F^{V,G} \otimes_\mathcal{E} F^{U,G}), J', \phi) \cong (F^{{V\otimes U, G }}, J^{V\otimes U, G}, \epsilon ).$$
where $J^{V \otimes U, G}$ and $\epsilon $ are the monoidal structure of the functor $F^{V \otimes U, G}$ defined in Theorem \ref{theorem2}.
Note that
the simple $(V \otimes U)^G$-submodules $(V \otimes U)_\lambda$ of $V\otimes U$ lie in $(\mathcal{E}_{V^G} \otimes \mathcal{E}_{U^G})_{(V\otimes U)^G}$ for $\lambda \in {\rm Irr}(G)$. Since ${\rm FPdim} ((\mathcal{E}_{V^G} \otimes \mathcal{E}_{U^G})_{(V\otimes U)^G}) = |G|$, we find $(\mathcal{E}_{V^G}\otimes\mathcal{E}_{U^G})_{(V\otimes U)^G} = \mathcal{E}_{(V\otimes U)^G}$.
Using the identification
$\mathcal{E}\otimes \mathcal{E}={\rm Rep}(G\times G)$, we can write each object
$X\in \mathcal{E}\otimes_\mathcal{E} \mathcal{E}$ as a $G \times G$-module with a decomposition $X=\oplus_{x\in G\times G/G}X_x$
such that $(g,h)X_x=X_{(g,h)x}$ for $(g,h) \in G \times G$ \cite{KO}. In particular, $X_1$ is a $G$-module, where $1$ denotes the diagonal subgroup of $G \times G$. Moreover, the induction functor ${\rm Ind}_G^{G \times G} : \mathcal{E} \to \mathcal{E}\otimes_\mathcal{E} \mathcal{E}$ is the corresponding right adjoint of the braided equivalence $\mathcal{E} \otimes_\mathcal{E} \mathcal{E} \xrightarrow{\otimes} \mathcal{E}$. In this convention, $\tilde\phi \circ (F^{V,G} \otimes F^{U,G}) \circ {\rm Ind}_G^{G \times G} \stackrel{\psi}\cong F^{V \otimes U, G}$, where $\psi$ is given by the natural isomorphism,
\begin{align*}
(\tilde \phi \circ (F^{V,G} \otimes F^{U,G}) \circ {\rm Ind}_G^{G \times G})(X) & = \mbox{Hom}_{G \times G}(({\rm Ind}_G^{G \times G}(X))^*, V \otimes U) \\
& \cong \mbox{Hom}_{G \times G}({\rm Ind}_G^{G \times G}(X^*), V \otimes U) \\
& \cong \mbox{Hom}_{G}(X^*, V \otimes U) \\
& =F^{V \otimes U, G} (X)
\end{align*}
for $X \in \mathcal{E}$. With the identification of $X^*$ and $({\rm Ind}_G^{G \times G} (X))^*_1$, the inverse $\psi^{-1} : F^{V \otimes U, G} (X) \to \mbox{Hom}_{G \times G}(({\rm Ind}_G^{G \times G}(X))^*, V \otimes U)$ is given by
$$\psi^{-1}(f)(yu)=yf(u)$$
for any $u \in X^*$, $y \in G \times G$ and $f \in F^{V \otimes U, G} (X)$. In particular, $\psi^{-1}(f)(yu) = f(yu)$ if $y \in G$.
Now we need to show that $\psi$ is an isomorphism of tensor functors, which requires proving the commutativity of the following diagrams for $X,Y\in \mathcal{E}$:
\begin{equation*}
\begin{tikzcd}
F^{V \otimes U, G}(X)\boxtimes F^{V \otimes U, G}(Y) \arrow[r, "J^{V \otimes U, G}_{X,Y}"] \arrow[d,"\psi^{-1}\boxtimes \psi^{-1}"]& F^{V \otimes U, G}(X\otimes Y)\arrow[d,"\psi^{-1}"] \\
F'(X)\boxtimes F'(Y) \arrow[r, "J'_{X,Y}"] &F'(X\otimes Y)
\end{tikzcd} \text{ and }
\begin{tikzcd}
(V \otimes U)^G \arrow[r, "\epsilon "] \arrow[rd, "\phi"'] & F^{V \otimes U, G}(\uline{1}) \arrow[d, "\psi^{-1}"] \\
& F'(\uline{1})
\end{tikzcd}
\end{equation*}
where $F'=\tilde\phi \circ (F^{V,G} \otimes F^{U,G}) \circ {\rm Ind}_G^{G \times G}.$ Let $f\in F^{V \otimes U, G}(X), g\in F^{V \otimes U, G}(Y)$
and $u\in X^*, v\in Y^*.$ We know from the proof of Theorem \ref{theorem2} that $\psi^{-1}\circ J_{X,Y}^{V\otimes U, G}$
is characterized by
$$( \overline{\psi^{-1}\circ J_{X,Y}^{V\otimes U, G}})(f\boxtimes g)(v\otimes u)=Y(fu,1)gv.$$
Since $J'=J^{V\otimes U, G\times G}=J^{V,G}\otimes J^{U,G}$ we immediately see that $J'_{X,Y}$ is characterized by
$$\overline{J'_{X,Y}}(\psi^{-1}(f)\boxtimes \psi^{-1}(g))(v\otimes u)=Y(\psi^{-1}(f)u,1)\psi^{-1}(g)v=Y(fu,1)gv,$$
which proves the commutativity of the first diagram. Note that the $(V \otimes U)^G$-module isomorphism $\phi: (V \otimes U)^G \to F'(\uline{1})$ is unique up to a scalar. Since $\psi^{-1}(\epsilon (x \otimes y))(1) = x \otimes y =\phi(x \otimes y)(1) $ for any $x \in V^G$ and $y \in U^G$, the second commutativity follows.
Therefore, $\mathcal{M}_{v}(\mathcal{E})$ is closed under the product of $\mathcal{M}(\mathcal{E})$, and hence $\mathcal{M}_{v}(\mathcal{E})$ is a subgroup of $\mathcal{M}(\mathcal{E})$. Following the preceding discussion, there exists a unique $\alpha \in H^3(G, S^1)$ such that $(\mathcal{C}_{V^G}, F^{V,G})$ is equivalent to $(\mathcal{Z}({\rm Vec}^\alpha _G), \iota_\alpha )$. In particular,
$\mathcal{C}_{V^G}$ is equivalent to $\mathcal{Z}({\rm Vec}^\alpha _G)$ as modular tensor categories.
(2) From the proof of (1), for $V \in {\bf H}_G$ and $W \in {\bf R}_G^\mathcal{F}$ for some (pseudounitary) nondegenerate braided $\mathcal{E}$-category $\mathcal{F}$, $[\mathcal{C}_{W^G}] \in \mathcal{M}_v(\mathcal{F})$ and the pair $(\mathcal{C}_{(V\otimes W)^G}, F^{V\otimes W, G}) \cong \mathcal{C}_{V^G}\otimes_\mathcal{E}^{(F^{V,G},F^{W,G})}\mathcal{C}_{W^G}.$ According to \cite{LKW1}, $\mathcal{C}_{V^G}\otimes_\mathcal{E}^{(F^{V,G},F^{W,G})}\mathcal{C}_{W^G}$ is a minimal modular extension of $\mathcal{F}$, and so is $(\mathcal{C}_{(V\otimes W)^G}, F^{V\otimes W, G})$. Therefore, $V \otimes W \in {\bf R}_G^\mathcal{F}$ and $\mathcal{M}_v(\mathcal{F})$ admits an $\mathcal{M}_v(\mathcal{E})$-action defined by $[\mathcal{C}_{V^G}][\mathcal{C}_{W^G}] :=[\mathcal{C}_{(V \otimes W)^G}]$ which coincides with the $\mathcal{M}(\mathcal{E})$-action on $\mathcal{M}(\mathcal{F})$. By \cite{LKW1}, $\mathcal{M}(\mathcal{F})$ is an $\mathcal{M}(\mathcal{E})$-torsor. So, the
action of $\mathcal{M}_{v}(\mathcal{E})$ on $\mathcal{M}_{v}(\mathcal{F})$ is free. Since
the cardinality $|\mathcal{M}(\mathcal{F})|$ is equal to $o(\mathcal{M}(\mathcal{E})),$ it is immediate
that the number of $\mathcal{M}_{v}(\mathcal{E})$-orbits on $\mathcal{M}_{v}(\mathcal{F})$
is less than or equal to the index $[{\cal M}(\mathcal{E}):{\cal M}_v(\mathcal{E})].$
(3) follows directly from (2).\end{proof}
\begin{rem}
We now explain how to associate a 3-cocycle $\alpha\in Z^3(G,S^1)$ to a holomorphic vertex operator algebra $V$ satisfying conditions (V1)-(V3) such that $\mathcal{C}_{V^G}$ and $\mathcal{Z}({\rm Vec}_G^\alpha)$ are braided equivalent. By Theorem \ref{t5.3}, $V$ is a condensable algebra in $\mathcal{C}_{V^G}.$ Since $V$ is holomorphic, for every $g\in G$ there is a unique irreducible $g$-twisted $V$-module $V(g)$ up to equivalence \cite{DLM4}.
According to \cite{DLXY}, every simple object in $(\mathcal{C}_{V^G})_V$ is isomorphic to $V(g)$ for some $g\in G.$ From the discussion in Section 4, $(\mathcal{C}_{V^G})_V$ is a $G$-graded fusion category such that
$((\mathcal{C}_{V^G})_V)_g$ is generated by $V(g).$ The associativity isomorphism
$$(V(g)\boxtimes_{{\rm Rep}(V)} V(h))\boxtimes_{{\rm Rep}(V)}V(k)\to V(g)\boxtimes_{{\rm Rep}(V)}(V(h)\boxtimes_{{\rm Rep}(V)}V(k)) $$
determines an $\alpha\in H^3(G,S^1).$
\end{rem}
\begin{rem} It is definitely desirable that ${\cal M}(\mathcal{E})={\cal M}_v(\mathcal{E})$, but we could not supply a proof of this claim. If this is true and ${\cal M}_v(\mathcal{F})\ne \emptyset$ then ${\cal M}_v(\mathcal{F})={\cal M}(\mathcal{F})$ is an ${\cal M}_v(\mathcal{E})$-torsor.
\end{rem}
\begin{rem} It is worth mentioning that for any $V\in {\bf H}_G,$ the inverse of $[\mathcal{C}_{V^G}]$ in $\mathcal{M}_{v}(\mathcal{E})$ is $[\overline{\mathcal{C}_{V^G}}].$ By Theorem \ref{t7.1}, $\overline{\mathcal{C}_{V^G}}$ is braided equivalent to
$\mathcal{C}_{(V^{\otimes{m-1}})^G}$ where $m$ is the order of $[\mathcal{C}_{V^G}].$ So Conjecture \ref{conjecture4.1} holds for rational vertex operator algebra $V^G$ for $V\in {\bf H}_G.$
\end{rem}
It is proved in \cite{EG} that if $G$ is solvable, then for any $\alpha\in H^3(G,S^1)$ there is a regular vertex operator algebra $V$ such that
$\mathcal{C}_V$ is braided equivalent to $\mathcal{Z}({\rm Vec}^\alpha _G).$ But it is not clear to us that this $V$ can be realized as $U^G$ for some holomorphic vertex operator algebra $U$ with a faithful $G$-action. In the next section
we give a proof when $G$ is a dihedral group or an abelian group with at most two generators.
\section{Lattice vertex operator algebras and pointed minimal extensions}
We explain in this section how to use lattice vertex operator algebras to realize the pointed modular categories. In particular, if $\mathcal{Z}({\rm Vec}^\alpha _G)$ is pointed for some $\alpha \in H^3(G,S^1)$,
we prove that there exists a positive definite even unimodular lattice $L$ such that $G$ can be realized as an automorphism group of the lattice vertex operator algebra $V_L,$ $V_L^G$ is also a lattice vertex operator algebra and $(\mathcal{Z}({\rm Vec}^\alpha _G),\iota_\alpha )\cong (\mathcal{C}_{V_L^G}, F^{V_L,G})$ as minimal modular extensions of ${\rm Rep}(G)$.
For this purpose we need to recall the construction of the vertex operator algebra $V_L$ associated to a positive definite even lattice $L$ with a bilinear form
$(\cdot , \cdot)$ and its irreducible modules following \cite{D1, DL1, FLM}. As usual we denote by $L^\circ$ the dual lattice of $L.$ Then there exists a positive even integer $m$
and an alternating $\mathbb Z$-bilinear function
$$c: L^{\circ}\times L^{\circ }\to \<\zeta_m\>$$
such that $c(\alpha,\beta)=(-1)^{(\alpha,\beta)}$ for $\alpha,\beta\in L$ where $\zeta_m=e^{2\pi i/m}$
(see \cite[Remark 12.18]{DL1}). In fact, $c(\alpha,\beta)=\epsilon (\alpha ,\beta)\epsilon (\beta,\alpha)^{-1}$ for all $\alpha,\beta\in L^{\circ}$, where
$\epsilon : L^{\circ}\times L^{\circ }\to \<\zeta_m\>$ is some $\mathbb Z$-bilinear function.
Consider the corresponding central extension $\widehat{L^{\circ}}$ of $L^{\circ}$ by the cyclic group $\langle\zeta_m\rangle$:
\[
1\rightarrow \langle\zeta_m\rangle\rightarrow \widehat{L^{\circ}}\mathop{\rightarrow}\limits^{-}L^{\circ}\rightarrow 0
\]
with commutator map $c(\cdot\,,\cdot).$ Let $e:L^\circ \to\widehat{L^\circ},\,\lambda\mapsto e_{\lambda}$ be a section such that $e_0=1$ and $e_{\alpha}e_{\beta}=\epsilon(\alpha,\beta)e_{\alpha+\beta}$ for any $\alpha,\,\beta\in L^{\circ}$.
We can assume that $\epsilon(\alpha,\alpha)=(-1)^{\frac{(\alpha,\alpha)}{2}}$
for any $\alpha\in L.$ Then the twisted group algebra $\mathbb C^\epsilon [L^{\circ}]=\sum_{\alpha\in L^{\circ}}\mathbb C e^{\alpha}$ with product $e^\alpha\cdot e^{\beta}=\epsilon(\alpha,\beta)e^{\alpha+\beta}$
for $\alpha,\beta\in L^{\circ}$ is a quotient of the group algebra $\mathbb C[\widehat{L^{\circ}}]$ by identifying
$\zeta_m\in \widehat{L^{\circ}}$ with $\zeta_m\in\mathbb C.$
It is easy to see that $\mathbb C^\epsilon [L^{\circ}]=\oplus_{i\in L^\circ/L}\mathbb C^\epsilon [L+\lambda_i]$ where
$\mathbb C^\epsilon [L+\lambda_i]=\oplus_{\alpha\in L}\mathbb C e^{\lambda_i+\alpha}$ and $L^{\circ}/L=\{L+\lambda_i\mid i\in L^{\circ}/L\}.$
Let ${\frak h}=\mathbb C\otimes_\mathbb Z L$ be an abelian Lie algebra and extend the form $(\cdot,\cdot)$ to ${\frak h}$ by $\mathbb C$-linearity. Let $\{h_1,...,h_d\}$ be an orthonormal basis of $\frak h$ where $d$ is the rank of $L.$ Then the affine Lie algebra
$$\widehat{\frak h}={\frak h}\otimes \mathbb C[t,t^{-1}]\oplus \mathbb C k$$
has a unique irreducible module
$$M(1)=\mathbb C[h_i(-n)\mid i=1,...,d,\ n>0]$$
such that $h_i(n)$ acts as $n\frac{\partial}{\partial h_i(-n)}$ if $n>0,$ as multiplication operator $h_i(n)$
if $n<0$ and $0$ if $n=0,$ and $k$ acts as $1$ where $h(n)=h\otimes t^n$ for $h\in {\frak h}$ and $n\in\mathbb Z.$
Set $$V_{L^{\circ}}=M(1)\otimes \mathbb C^\epsilon [L^{\circ}]=\bigoplus_{i\in L^{\circ}/L}V_{L+\lambda_i}$$ where
$V_{L+\lambda_i}=M(1)\otimes \mathbb C^\epsilon [L+\lambda_i].$ Then
$V_L$ is a vertex operator algebra and $\{V_{L+\lambda_i}\mid i\in L^{\circ}/L\}$ is a complete list of inequivalent irreducible $V_L$-modules. Moreover, the ribbon structure of $\mathcal{C}_{V_L}$ is given by
$\theta_{V_{L+\lambda_i}}=e^{\pi i(\lambda_i,\lambda_i)}.$
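For instance, if $L=\mathbb Z\alpha$ is a rank one lattice with $(\alpha,\alpha)=2k$ for a positive integer $k$, then $L^{\circ}/L\cong \mathbb Z_{2k}$ with coset representatives $\lambda_j=\frac{j}{2k}\alpha$ for $j=0,\dots,2k-1$, and
$$\theta_{V_{L+\lambda_j}}=e^{\pi i(\lambda_j,\lambda_j)}=e^{\pi i j^2/(2k)}.$$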
For any $\alpha\in \mathbb Q\otimes_\mathbb Z L$, one can define an automorphism
$\sigma_\alpha $ of finite order of $V_L$ by setting
$$\sigma_\alpha (u\otimes e^\beta)=e^{2\pi i(\alpha,\beta)}u\otimes e^\beta$$
for $u\in M(1)$ and $\beta\in L.$ This type of automorphism
will be useful in the following discussions.
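Note that $\sigma_\alpha$ is the identity if and only if $(\alpha,\beta)\in\mathbb Z$ for all $\beta\in L$, that is, $\alpha\in L^{\circ}$; consequently,
$$o(\sigma_\alpha)=\min\{n\in\mathbb Z_{>0}\mid n\alpha\in L^{\circ}\}.$$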
We first show that ${\cal M}_v(\mathcal{E})\cong H^3(G,S^1)$ when
$G$ is a cyclic group or a dihedral group by using
concrete lattice vertex operator algebras associated
to the Niemeier lattice
$L$ of type $A_1^{24}.$
\begin{prop}\label{p7.2} Let $V,G, \mathcal{F}, \mathcal{E}$ be as before. If $G\cong \mathbb Z_n$ is a cyclic group or $G\cong D_{2m}$ is a dihedral group of order $2m$ with $m$ being odd, then ${\cal M}_v(\mathcal{E})={\cal M}(\mathcal{E})\cong H^3(G,S^1)$ and
$\mathcal{M}_v(\mathcal{F})$ is a ${\cal M}_v(\mathcal{E})$-torsor if $\mathcal{M}_v(\mathcal{F})$ is not empty.
\end{prop}
\begin{proof}
(1) $G\cong \mathbb Z_n.$ In this case $H^3(G,S^1) \cong \mathbb Z_n$. Therefore, it suffices to show that $\mathcal{M}_{v}(\mathcal{E})$ has an element of order $n.$ Consider the holomorphic lattice
vertex operator algebra $V_L$ associated to the Niemeier lattice
$L$ of type $A_1^{24}$ \cite{FLM}. Then
$$L=\sum_{C\in G_{24}}\mathbb Z \frac{1}{2}\alpha_C+Q$$
where $G_{24}$ is the Golay code based on the set $\Omega=\{1,...,24\},$ $Q=\sum_{i=1}^{24}\mathbb Z\alpha_i$ is a positive definite lattice with $(\alpha_i,\alpha_j)=2\delta_{i,j}$ and $\alpha_C=\sum_{i\in C}\alpha_i.$
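Note that $L$ is indeed even: every codeword $C\in G_{24}$ has weight divisible by $4$, so $(\tfrac{1}{2}\alpha_C,\tfrac{1}{2}\alpha_C)=\tfrac{1}{2}|C|\in 2\mathbb Z$, while $(\tfrac{1}{2}\alpha_C,\alpha_i)\in\mathbb Z$ for all $i$; since $L$ is also unimodular, $V_L$ is holomorphic.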
Let $G$ be the cyclic group generated by $\sigma=\sigma_{\alpha_1/n}.$ Then $V_L^G=V_E$ where $E$ is the sublattice of $L$ given by $E=\{\alpha\in L\mid (\alpha_1,\alpha)\in n\mathbb Z\}.$
Moreover, there is a unique
irreducible $\sigma$-twisted module $V_L(\sigma)=M(1)\otimes \mathbb C^{\epsilon }[L]e^{-\alpha_1/n}$ \cite{DM1}.
Note that $V_{E-\alpha_1/n}=M(1)\otimes \mathbb C^{\epsilon }[E]e^{-\alpha_1/n}$ is an irreducible $V_E$-module
with $\theta_{V_{E-\alpha_1/n}}=e^{2\pi i/n^2}.$ In particular, the order of $\tilde t=\diag(\theta_{M} \mid {M\in {\cal O}(\mathcal{C}_{V_L^G})})$, denoted by $\operatorname{FSexp}(\mathcal{C}_{V^G_L})$, is a multiple of $n^2$.
By Theorem \ref{t7.1}, $(\mathcal{C}_{V_L^G}, F^{V_L, G}) \cong (\mathcal{Z}({\rm Vec}_G^\omega), \iota_\omega)$ as braided $\mathcal{E}$-categories for some $\omega \in H^3(G, S^1)$.
By \cite{NS}, $\operatorname{FSexp}(\mathcal{Z}({\rm Vec}_G^\omega))$ is equal to the least common multiple of $\operatorname{ord}(\omega|_H) \cdot o(H)$, where $\omega|_H$ is the restriction of $\omega$ to $H$, and $H$ runs through the maximal cyclic subgroups of $G$. It follows from the preceding paragraph that $\operatorname{ord}(\omega)=n$. Therefore, $[\mathcal{C}_{V_L^G}] \in \mathcal{M}_v(\mathcal{E})$ is of order $n,$ as desired.
(2) $G\cong D_{2m}$ for some odd integer $m$. Then $H^3(G,S^1)\cong\mathbb Z_{2m}.$ Recall from \cite[Theorems 10.1.2, 10.1.5]{FLM} that the Golay code $G_{24}$ is built from the Hamming codes $\mathcal{C}_1$ and $\mathcal{C}_2.$
In fact,
$$G_{24}=\<(S,S, \emptyset), (S,\emptyset, S), (T,T,T)|S\in \mathcal{C}_1,T\in\mathcal{C}_2\>.$$
Let $\Omega=\{1,...,24\}$ and $w=\{1,2,11,12\}.$ Then
$|w\cap C|$ is always even for any $C\in G_{24}.$
Define an isometry $\nu$ of $L$ such that
$$\nu (\sum_{i=1}^{24} k_i\alpha_i)= -\sum_{i\in w}k_i\alpha_i+
\sum_{i\notin w}k_i\alpha_i$$
where $k_i\in \frac{1}{2}\mathbb Z.$ Then $(\nu(\alpha),\alpha)\in 2\mathbb Z$ for all $\alpha\in L.$
So $\nu$ satisfies the assumption in \cite{Le} and can be lifted to an automorphism $\tau$ of $V_L$ of order $2.$ Let $V_L(\tau)$ be the unique irreducible $\tau$-twisted $V_L$-module. Then
$V_L(\tau)$ has a gradation $V_L(\tau)=\oplus_{n\geq 0}V_L(\tau)_{\frac{1}{4}+\frac{1}{2} n}$, since the eigenspace of
$\nu$ on ${\frak h}$ with eigenvalue $-1$ has dimension $4$ \cite{DL2}. Recall that $\sigma_m=\sigma_{\alpha_1/m}$ is an automorphism of $V_L$ of order $m.$ It is easy to see that the group $G$ generated by $\sigma_m$ and $\tau$ is isomorphic to $D_{2m}.$ Note that
for any $h=\sigma_m^s$ with $s \not \equiv 0 \mod m$,
$$V_L(h)=\oplus_{n\geq0}V_L(h)_{\frac{1}{o(h)^2}+\frac{1}{o(h)}n}, $$
$$V_L(h\tau)=\oplus_{n\geq0}V_L(h\tau)_{\frac{1}{4}n}$$
by \cite[Theorem 5.11]{EMS}. In particular, $\theta_{V_{E-\alpha_1/m}}= e^{2 \pi i/m^2} $ and $\theta_{V_L(\sigma_m \tau)^+}= i$ in $\mathcal{C}_{V_L^G}$
where $E$ is defined as in (1) and $V_L(\sigma_m \tau)^+
=\oplus_{n\geq0}V_L(\sigma_m\tau)_{\frac{1}{4}+n}.$
Therefore, $\operatorname{FSexp}(\mathcal{C}_{V_L^G})$ is a multiple of $4m^2$.
By Theorem \ref{t7.1} again, $(\mathcal{C}_{V_L^G}, F^{V_L, G}) \cong (\mathcal{Z}({\rm Vec}_G^\omega), \iota_\omega)$ as braided $\mathcal{E}$-categories for some $\omega \in H^3(G, S^1)$. Therefore, $\operatorname{FSexp}(\mathcal{Z}({\rm Vec}_G^\omega))=\operatorname{FSexp}(\mathcal{C}_{V_L^G})$ is a multiple of $4m^2$. By \cite{NS}, $\operatorname{FSexp}(\mathcal{Z}({\rm Vec}_G^\omega))$ is the least common multiple of $\operatorname{ord}(\omega |_H) \cdot o(H)$ where $H$ runs through all the maximal cyclic subgroups of $G$. Therefore, $\operatorname{ord}(\omega|_{\< \sigma_m\>})=m$ and $\operatorname{ord}(\omega|_{\< \sigma_m^i \tau\>})=2$ for some $i$. Since $m$ is odd, $\operatorname{ord}(\omega)=2m$ and hence $[\mathcal{C}_{V_L^G}] \in \mathcal{M}(\mathcal{E})$ is of order $2m$.
The proof is complete.
\end{proof}
A fusion category $\mathcal{C}$ is called \emph{pointed} if ${\rm FPdim}(V)=1$ for any simple object $V$ (cf. \cite{EGNO} for more details). For any pointed fusion category $\mathcal{C}$, there exists a canonical spherical structure on $\mathcal{C}$ such that $\dim(V)>0$ for each nonzero object $V$, and this implies $\dim(V)=1$ if $V$ is simple. This condition on the positivity of categorical dimensions has been assumed throughout this paper. The set $A = {\rm Irr}(\mathcal{C})$ forms a group under the tensor product of simple objects, and $\mathcal{C}$ is equivalent to ${\rm Vec}_A^\omega$ for some $\omega \in Z^3(A, \mathbb C^\times)$ as fusion categories. If, in addition, $\mathcal{C}$ is braided, then $A$ is abelian and there exists a normalized 2-cochain $c: A \times A \to \mathbb C^\times$ such that the scalar $c(g,h): e(g) \otimes e(h) \to e(h) \otimes e(g)$ defines a braiding on ${\rm Vec}_A^\omega$. Let ${\rm Vec}_A^{(\omega, c)}$ denote this braided fusion category and hence a ribbon category with the underlying spherical structure. The pair $(\omega, c)$ also defines an Eilenberg-MacLane \emph{abelian} 3-cocycle, and the cohomology class $[(\omega, c)]$ in the corresponding cohomology group $H^3_{ab}(A, \mathbb C^\times)$ uniquely determines the braided equivalence class of ${\rm Vec}_A^{(\omega, c)}$ \cite{JS}.
Let $(A, q)$ denote the quadratic form $q: A \to \mathbb C^\times$ on $A$. Then the set ${\operatorname{Quad}}(A)$ of quadratic forms on $A$ forms a group under the pointwise multiplication. The cohomology group $H^3_{ab}(A, \mathbb C^\times)$ is isomorphic to ${\operatorname{Quad}}(A)$ via the \emph{trace map} $[(\omega, c)] \mapsto q_c$ \cite{EM1, EM2}, where $q_c(a)= c(a,a)$ for $a \in A$. The ribbon category ${\rm Vec}_A^{(\omega, c)}$ is modular if and only if the quadratic form $(A,q_c)$ is nondegenerate.
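For instance, for $A=\mathbb Z_2$ a quadratic form is determined by $q(1)\in\{\pm 1,\pm i\}$, so
$$H^3_{ab}(\mathbb Z_2,\mathbb C^\times)\cong {\operatorname{Quad}}(\mathbb Z_2)\cong \mathbb Z_4;$$
the two forms with $q(1)=\pm i$ are nondegenerate, the form with $q(1)=-1$ gives the symmetric category of super vector spaces, and $q(1)=1$ gives ${\rm Rep}(\mathbb Z_2)$.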
For any quadratic form $(A, q)$, we denote by $\mathcal{C}(A, q)$ a ribbon category ${\rm Vec}_A^{(\omega, c)}$ with $q_c= q$ and so $\theta_a:=\theta_{e(a)} = q(a)$. In particular, $\mathcal{C}(A, q_0)$, where $q_0(a)=1$ for all $a \in A$, is equivalent to the Tannakian category ${\rm Rep}(\hat{A})$. By \cite{NS1}, tensor equivalences of pseudounitary fusion categories preserve the canonical spherical structures. So, for any quadratic forms $(A,q)$ and $(A', q')$, the ribbon categories $\mathcal{C}(A, q)$ and $\mathcal{C}(A', q')$ are equivalent if and only if $(A,q)$ and $(A', q')$ are equivalent quadratic forms, i.e., there exists an isomorphism $f: A \to A'$ of groups such that $q' \circ f = q$.
We call a rational vertex operator algebra $V$ \emph{pointed} if every simple module of $V$ is a simple current or, equivalently, $\mathcal{C}_V$ is pointed. In general, for every lattice vertex operator algebra $V_L$ of a positive definite even lattice $L$, the category $\mathcal{C}_{V_L}$ is a pointed modular category given by the quadratic form $(L^\circ/L, q_L)$ where $q_L(L+\lambda) = \theta_{V_{L+\lambda}} = e^{\pi i (\lambda, \lambda)}$. The converse is stated in the following proposition.
\begin{prop}\label{pointed}
Let $\mathcal{C}$ be a pointed modular category (with positive dimensions). Then $\mathcal{C} \cong \mathcal{C}_{V_L}$ as modular categories for some positive definite even lattice $L$. In particular, for any pointed vertex operator algebra $V$, $\mathcal{C}_V \cong \mathcal{C}_{V_L}$ as modular categories for some positive definite even lattice $L$.
\end{prop}
\begin{proof}
If $\mathcal{C}$ is a pointed modular category with positive dimensions and ribbon structure $\theta$, then $\mathcal{C} \cong \mathcal{C}(A, q)$ where $A$ is the abelian group ${\rm Irr}(\mathcal{C})$ under the tensor product, and a nondegenerate quadratic form $q:A \to \mathbb C^\times$ is given by $q(a) = \theta_a$. The multiplicative central charge $c$ of $\mathcal{C}$ is an 8-th root of unity (cf. \cite[Prop.6.7 (ii)]{DLN}). Therefore, $c = e^{2 \pi i n/8}$ for some unique element $n \in \mathbb{Z}_8$, called the signature of $(A,q)$. Note that $q(A) \subset S_r^1$, where $S_r^1$ is the group of all roots of unity in $\mathbb C$. Let $\ell$ be the minimal number of generators of $A$. It follows from \cite[Corollary 1.10.2]{Ni} that there exists a positive definite even lattice $L$ with $\ell < {\operatorname{rank}}(L) \equiv n \mod 8$, and a group isomorphism $j: A \to L^\circ/L$ such that
$$
q(a) = e^{\pi i (j(a), j(a))} = \theta_{V_{L+j(a)}} = q_L(j(a))\,.
$$
Therefore, $(A, q)$ and $(L^\circ /L, q_L)$ are equivalent quadratic forms, and
$$
\mathcal{C} \cong \mathcal{C}(A, q) \cong \mathcal{C}(L^\circ /L, q_L) \cong \mathcal{C}_{V_L}
$$
as modular categories.
\end{proof}
\begin{rem} Proposition \ref{p4.3} now follows from Proposition \ref{pointed} as $\overline{\mathcal{C}_{V_L}}$ is pointed.
\end{rem}
We now turn our attention to the case when $G$ is a finite abelian group and will prove that Proposition \ref{p7.2} holds if $G$ is generated by two elements. Set
$$\mathbf{H}^{{\operatorname{pt}}}_G = \{V \in \mathbf{H}_G\mid V^G \text{ is pointed}\},$$
$$
{\mathcal{M}^{\operatorname{pt}}}(\mathcal{E}) =\{[\mathcal{C}] \in \mathcal{M}(\mathcal{E}) \mid \mathcal{C} \text{ is pointed}\}, $$
$$
{\mathcal{M}_v^{\operatorname{pt}}}(\mathcal{E})=\{[\mathcal{C}_{V^G}] \in \mathcal{M}_v(\mathcal{E}) \mid V \in {\mathbf{H}_G^{\operatorname{pt}}}\}.$$
We first observe that $\mathbf{H}^{{\operatorname{pt}}}_G$ is closed under the tensor product of vertex operator algebras:
\begin{lem}
Let $G$ be a finite abelian group and $\mathcal{E}={\rm Rep}(G)$. For any $U, V \in \mathbf{H}_G^{\operatorname{pt}}$, $U \otimes V \in {\mathbf{H}_G^{\operatorname{pt}}}$. Hence, ${\mathcal{M}_v^{\operatorname{pt}}}(\mathcal{E})$ is a subgroup of ${\mathcal{M}_v}(\mathcal{E})$.
\end{lem}
\begin{proof} Since $V$ is holomorphic, there is a unique irreducible $g$-twisted $V$-module $V(g)$ up to isomorphism. Recall from Section 5 that $V(g)\circ h\cong V(g)$ for any $h\in G$ as $G$ is abelian. That is, $G_{V(g)}=G$ and $V(g)$ is a $\mathbb C^{\alpha_{V(g)}}[G]$-module with decomposition
$$V(g)=\oplus_{\lambda\in \Lambda_{V(g)}}W_{\lambda}\otimes V(g)_{\lambda}.$$
From Theorem \ref{mthm1} we know that $\{V(g)_{\lambda}\mid g\in G, \lambda\in \Lambda_{V(g)}\}$ gives a complete list of irreducible $V^G$-modules and
$\qdim_{V^G}(V(g)_\lambda )=\dim (W_{\lambda })\cdot [G:G_{V(g)}]\cdot\qdim_V(V(g)).$
Since $V^G$ is pointed, $\qdim_{V^G}(V(g)_\lambda )=1$. This implies $\qdim_V(V(g))=1$, $[G:G_{V(g)}]=1$ and $\dim (W_{\lambda})=1$ for all $\lambda$. Thus, $\mathbb C^{\alpha_{V(g)}}[G]$ is a commutative semisimple algebra for $g\in G.$ Similarly, $\mathbb C^{\alpha_{U(g)}}[G]$ is a commutative semisimple algebra.
Identify $G\times G$ as a subgroup of ${\rm Aut}(U\otimes V)$. Then $(U\otimes V)((g, g))\cong U(g)\otimes V(g)$ and
$\mathbb C^{\alpha_{(U\otimes V)((g, g))}}[G\times G]= \mathbb C^{\alpha_{U(g)}}[G]\otimes \mathbb C^{\alpha_{V(g)}}[G]$
is a commutative semisimple algebra for $g\in G.$ Regarding $G$ as a diagonal subgroup of $G\times G$ we can realize $\mathbb C^{\alpha_{(U\otimes V)((g, g))}}[G]$ as a subalgebra of
$\mathbb C^{\alpha_{(U\otimes V)((g, g))}}[G\times G].$ Thus, $\mathbb C^{\alpha_{(U\otimes V)((g, g))}}[G]$ is a commutative semisimple algebra of dimension $o(G).$ Using Theorem \ref{mthm1}
again gives
$$\qdim_{(U\otimes V)^G}(U\otimes V)(g)_\lambda =\dim W_{\lambda }[G:G_{(U\otimes V)((g, g))}]\qdim_{U\otimes V}(U\otimes V)((g, g))=1$$
for $\lambda\in \Lambda_{(U\otimes V)((g, g))}.$
That is, $(U\otimes V)(g)_\lambda = (U\otimes V)((g, g))_\lambda $ is a simple
current and $\mathcal{C}_{(U\otimes V)^G}$ is pointed, as desired.
\end{proof}
We now turn to understanding the group $\mathcal{M}_v^{{\operatorname{pt}}}(\mathcal{E})$ for any finite abelian group $G$.
Let
$$
H^3(G,S^1)_{{\operatorname{pt}}} = \{\alpha \in H^3(G, S^1)\mid \mathcal{Z}({\rm Vec}_G^\alpha ) \text{ is pointed}\}.
$$
The subgroup $H^3(G, S^1)_{ab}$ of $H^3(G, S^1)$ defined in \cite[p3471]{MN1} is shown to be the same as $H^3(G,S^1)_{{\operatorname{pt}}}$ by \cite[Corollary 3.6]{MN1} with slightly different terminology. Therefore, $H^3(G,S^1)_{{\operatorname{pt}}}$ is a subgroup of $H^3(G, S^1)$ isomorphic to $\mathcal{M}^{{\operatorname{pt}}}(\mathcal{E})$.
\begin{lem} \label{l:pt_iso}
Let $G$ be a finite abelian group and $\mathcal{E}={\rm Rep}(G)$. Then
$$H^3(G, S^1)_{{\operatorname{pt}}} \stackrel{\Phi_G}{\cong} \mathcal{M}^{{\operatorname{pt}}}(\mathcal{E}).$$
\end{lem}
\begin{proof}
Recall the isomorphism $\Phi_G : H^3(G, S^1) \to \mathcal{M}(\mathcal{E})$ from Section 3. By definition, $\Phi_G(H^3(G, S^1)_{{\operatorname{pt}}})$ is a subgroup of $\mathcal{M}^{{\operatorname{pt}}}(\mathcal{E})$. Conversely, suppose $(\mathcal{Z}({\rm Vec}_G^\alpha ), \iota)$ is a minimal modular extension of $\mathcal{E}$ such that $\mathcal{Z}({\rm Vec}_G^\alpha )$ is pointed. There exists $\alpha ' \in H^3(G, S^1)$ such that $(\mathcal{Z}({\rm Vec}_G^\alpha ), \iota)$ is equivalent to $(\mathcal{Z}({\rm Vec}_G^{\alpha '}), \iota_{\alpha '})$. In particular, $\mathcal{Z}({\rm Vec}_G^{\alpha '})$ is pointed and hence $\alpha ' \in H^3(G, S^1)_{{\operatorname{pt}}}$. Now, we have
$$
\Phi_G(\alpha ') =[(\mathcal{Z}({\rm Vec}_G^{\alpha '}), \iota_{\alpha '})] =[(\mathcal{Z}({\rm Vec}_G^\alpha ), \iota)], $$
and so $\Phi_G(H^3(G, S^1)_{{\operatorname{pt}}})=\mathcal{M}^{{\operatorname{pt}}}(\mathcal{E})$.
\end{proof}
Since $G$ is abelian, for any $\alpha =[\omega] \in H^3(G, S^1)_{{\operatorname{pt}}}$, the embedding $\iota_\omega: \mathcal{E} \to \mathcal{Z}({\rm Vec}_G^\omega)$ is equivalent to the inclusion of quadratic forms $i_\omega: (\hat G, q_0) \to (\Gamma^\omega, q_\omega)$ where $\Gamma^\omega = {\rm Irr}(\mathcal{Z}({\rm Vec}_G^\omega))$ and $q_\omega(x) = \theta_x$ for $x \in \Gamma^\omega$ (cf. \cite[Thm. 3.3]{JS}). By \cite[Prop. 5.2]{MN2} or \cite[Prop. 3.5 and Cor. 3.6]{MN1}, we have an exact sequence of abelian groups
$$
1 \to \hat{G} \xrightarrow{i_\omega} \Gamma^\omega \to G \to 1\,,
$$
and its corresponding cohomology class in $H^2(G, \hat{G})$ is determined by $\alpha $. The triple $(\Gamma^\omega, q_\omega, i_\omega)$, in fact, depends on the choice of representative $\omega$ of $\alpha $, but its equivalence class does not. This will be explained in the following discussion.
We consider triples $(\Gamma, q, i)$, where $q: \Gamma \to \mathbb C^\times$ is a nondegenerate quadratic form on a finite abelian group $\Gamma$ of order $|G|^2$ and $i: \hat{G}\to\Gamma$ is a group monomorphism whose image is an isotropic subgroup. Let $b_q$ be the associated nondegenerate symmetric bicharacter of $\Gamma$. For any coset $\overline x$ of $i(\hat{G})$ in $\Gamma$ represented by $x$, $b_q(i(\chi), x) = b_q(i(\chi), x')$ for any $x' \in \overline x$ and $\chi \in \hat{G}$. There exists a unique element $g \in G$ such that $\chi(g) =b_q(i(\chi), x)$ for all $\chi \in \hat{G}$. The assignment $p: \Gamma/i(\hat{G}) \to G$, $\overline x \mapsto g$, defines a group monomorphism. Since $o(\Gamma/i(\hat{G})) = o(G)$, $p$ is an isomorphism and we have an exact sequence of abelian groups:
$$
1 \to \hat{G} \xrightarrow{i} \Gamma \to G \to 1\,.
$$
Two such triples $(\Gamma, q, i), (\Gamma', q', i')$ are called \emph{equivalent} if there exists a group isomorphism $j: \Gamma \to \Gamma'$ such that $q'\circ j = q$ and $i' = j \circ i$. Let $Q(G)$ be the set of equivalence classes $[\Gamma, q, i]$ of the triples $(\Gamma, q, i)$. One can define the product of two such triples $(\Gamma, q, i), (\Gamma', q', i')$ as follows: Let $H$ be the subgroup $\{(i(\chi), i'(\overline{\chi})) \mid \chi \in \hat{G}\}$ of $\Gamma \times \Gamma'$, and
$$
H^\perp= \{(x, y) \in \Gamma \times \Gamma' \mid b(x, i(\chi)) b'(y, i'(\overline{\chi})) = 1 \text{ for all } \chi \in \hat{G}\}
$$
where $b$ and $b'$ are nondegenerate bicharacters associated with $q, q'$ respectively. Then the quadratic form $q\perp q'$ on $\Gamma \times \Gamma'$ induces a nondegenerate quadratic form $q''$ on $H^\perp/H$, and $i'' (\chi) = (i(\chi), 1) H$ for $\chi \in \hat{G}$ defines an embedding of quadratic forms from $\hat{G}$ into $H^\perp/H$. Then $Q(G)$ forms an abelian group with the multiplication given by
$$
[\Gamma, q, i]\cdot [\Gamma', q', i'] = [H^\perp/H, q'', i'']\,,
$$
and we have the exact sequence
$$1 \to \hat{G} \xrightarrow{i''} H^\perp/H \to G \to 1\,.
$$
The quadratic form $ev: \hat{G} \times G \to \mathbb C^\times$, given by the evaluation map $ev(\chi, g) = \chi(g)$, together with the embedding $i_0: \hat{G}\to \hat{G} \times G,\, \chi \mapsto (\chi, 1)$, defines the identity class $[\hat{G} \times G, ev, i_0]$ of $Q(G)$.
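For example, the identity class is realized by the Drinfeld center $\mathcal{Z}({\rm Vec}_G)$ with the trivial 3-cocycle: for abelian $G$ its simple objects are parametrized by $\hat{G}\times G$ with $\theta_{(\chi,g)}=\chi(g)$, so the associated triple is $(\hat{G}\times G, ev, i_0)$.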
For any $\omega, \omega' \in Z^3(G, S^1)_{{\operatorname{pt}}}$ such that $\alpha = [\omega]=[\omega']$, the triples
$(\Gamma^\omega, q_\omega, i_\omega)$ and $(\Gamma^{\omega'}, q_{\omega'}, i_{\omega'})$ are equivalent \cite[Prop. 3.5]{MN1}, and their common equivalence class will be denoted by
$[\Gamma^\alpha , q_\alpha , i_\alpha ]$. The function $\phi_G: H^3(G, S^1)_{{\operatorname{pt}}} \to Q(G), \alpha \mapsto [\Gamma^\alpha , q_\alpha , i_\alpha ]$, will be shown to be an isomorphism of groups.
Note that $Q(G)$ is a finite group. Any class $[\Gamma, q, i]$ determines an equivalent class of pointed minimal modular extension $[(\mathcal{C}(\Gamma, q), \iota)]$ of $\mathcal{E}$ with the embedding $\iota: \mathcal{E} \to \mathcal{C}(\Gamma, q)$ given by $i$. Let $\psi: Q(G) \to \mathcal{M}^{{\operatorname{pt}}}(\mathcal{E})$ be the mapping $[\Gamma, q, i] \mapsto [(\mathcal{C}(\Gamma, q), \iota)]$ (cf. \cite[Thm. 3.3]{JS}). Since every pointed minimal modular extension of $\mathcal{E}$ is an image of $\psi$, $\psi$ is a bijection. It is easy to see that $\psi$ preserves the products of these groups, and so
we have proved the first assertion of the following proposition.
\begin{prop}\label{p:Q(G)}
The map $\psi: Q(G) \to \mathcal{M}^{{\operatorname{pt}}}(\mathcal{E})$ is an isomorphism of groups, and we have the commutative diagram:
$$
\begin{tikzcd}
H^3(G,S^1)_{{\operatorname{pt}}} \arrow[r,"\Phi_G"] \arrow[rd,"\phi_G"'] & \mathcal{M}^{{\operatorname{pt}}}(\mathcal{E}) \arrow[d, "{\psi^{-1}}"] \\
& Q(G)
\end{tikzcd}
$$
Hence, $\phi_G$ is an isomorphism of groups. In particular, for any $\alpha , \alpha ' \in H^3(G, S^1)_{{\operatorname{pt}}}$, we have
$$[\Gamma^\alpha , q_\alpha , i_\alpha ] \cdot [\Gamma^{\alpha '}, q_{\alpha '}, i_{\alpha '}] = [\Gamma^{\alpha \alpha '}, q_{\alpha \alpha '}, i_{\alpha \alpha '}]\,.
$$
\end{prop}
\begin{proof}
The equality $
\psi^{-1} \circ \Phi_G = \phi_G
$ follows directly from the definitions of $\phi_G$, $\Phi_G$ and $\psi$, and the fact that $\psi^{-1}([\mathcal{Z}({\rm Vec}_G^\alpha ), \iota_\alpha ])= [\Gamma^\alpha , q_\alpha , i_\alpha ]$ for $\alpha \in H^3(G, S^1)_{{\operatorname{pt}}}$. Since $\Phi_G$ and $\psi$ are isomorphisms of groups, so is $\phi_G$. The last statement is a consequence of the commutative diagram.
\end{proof}
\begin{thm} \label{t:pt}
Let $G$ be a finite abelian group and $\mathcal{E}={\rm Rep}(G)$. Then for any $\alpha \in H^3(G,S^1)_{{\operatorname{pt}}}$, there exists a positive definite even unimodular lattice $E$ such that $V_E$ admits an automorphism group isomorphic to $G$ and
$(\mathcal{C}_{V_E^G}, F^{V_E, G}) \cong (\mathcal{Z}({\rm Vec}_G^\alpha ), \iota_\alpha )$ as minimal modular extensions of $\mathcal{E}$. Moreover, $\mathcal{M}_v^{\operatorname{pt}}(\mathcal{E}) = \mathcal{M}^{{\operatorname{pt}}}(\mathcal{E}) \cong H^3(G,S^1)_{{\operatorname{pt}}}$.
\end{thm}
\begin{proof}
Let $[\Gamma, q, i] \in Q(G)$. By \cite[Cor. 1.10.2]{Ni}, there exists a positive definite even lattice $L$ such that $(\Gamma, q) \stackrel{j}{\cong} (L^\circ /L, q_L)$ for some isomorphism $j$ of quadratic forms. Let $E$ be the subgroup of $L^\circ$ containing $L$ such that $j \circ i(\hat{G})=E/L$. Then we have the following row exact commutative diagram:
$$
\begin{tikzcd}
1 \arrow[r] & \hat{G}\arrow[equal]{d} \arrow[r, "i"] & \Gamma \arrow[d, "j"] \\
1 \arrow[r] & \hat{G} \arrow[r, "j\circ i"] & L^\circ/L
\end{tikzcd}\,.
$$
In particular, $[\Gamma, q, i] = [L^\circ /L, q_L, j\circ i]$ in $Q(G)$.
For any $x \in E$, $q_L(L+x) = 1$, that is, $(x,x)$ is an even integer. Thus, $E$ is a positive definite even lattice. Since $E/L \cong \hat{G}$, $[E:L]=o(G)$ and so
$$
|\det(E)| = |\det(L)|/[E:L]^2 = o(G)^2/o(G)^2=1\,.
$$
Therefore, $E$ is unimodular.
Now, we identify $L^\circ/E$ with $G$ via the isomorphism $p: L^\circ/E \to G$ with $p(E+x)=g$ given by $b_L(j\circ i(\chi), L+x) = \chi(g)$ for all $\chi \in \hat{G}$ where $b_L$ is the associated bicharacter of $q_L$. We consider $G$ as an automorphism group of the lattice vertex operator algebra $V_E = M(1) \otimes \mathbb C^\epsilon [E]$ via $p$, namely $g=\sigma_{x_g}$ where $p(E+x_g) = g$. Then $V_E^G = V_L$, and
$$
V_E = \bigoplus _{L+x \in E/L} V_{L+x}=\bigoplus _{\chi \in \hat{G}} V_{j\circ i(\chi)}
$$
as a $V_L$-module. So, ${\rm Irr}(\mathcal{E}_{V_E^G}) =\{ V_{j\circ i(\chi)} \mid \chi \in \hat{G}\}$ and $F^{V_E, G}(\chi) = V_{-j\circ i(\chi)}$.
Thus, $\psi^{-1}([\mathcal{C}_{V_E^G}, F^{V_E, G}])=[L^\circ/L, q_L, i']$ where $i'(\chi) = -j \circ i (\chi)$ for $\chi \in \hat{G}$. However, $(L^\circ/L, q_L, i') \cong (L^\circ/L, q_L, j \circ i)$ under the automorphism $x \mapsto -x$ in ${\rm Aut}(L^\circ/L, q_L)$. Therefore,
$$
\psi^{-1}([(\mathcal{C}_{V_E^G}, F^{V_E, G})])=[L^\circ/L, q_L, i'] = [L^\circ/L, q_L, j \circ i] = [\Gamma, q, i]\,,
$$
and hence $\psi^{-1}(\mathcal{M}_v^{{\operatorname{pt}}}(\mathcal{E})) = Q(G)$. The remaining statement follows immediately from Lemma \ref{l:pt_iso} and Proposition \ref{p:Q(G)}.
\end{proof}
Recall from \cite[p3480]{MN1} the group epimorphism $\varphi^*: H^3(G, S^1) \to \mbox{Hom}(\bigwedge^3 G, S^1)$ defined by
$$
\varphi^*([\omega])(a, b, c) = \frac{\omega(a,b,c)\omega(b,c,a)\omega(c,a,b)}{\omega(b,a,c)\omega(a,c,b)\omega(c, b, a)}
$$
for $a, b, c \in G$ and $\omega \in Z^3(G, S^1)$. The definition of $\varphi^*$ is independent of the choice of representatives of the cohomology class $[\omega]$, and its kernel was characterized in \cite[Lem. 7.4]{MN1} as
$$
H^3(G, S^1)_{{\operatorname{pt}}} = \ker \varphi^*\,.
$$
This gives us the following corollary.
\begin{coro}\label{c8.8}
If $G$ is a finite abelian group generated by two elements, then $\mathcal{M}_v(\mathcal{E}) \cong H^3(G, S^1)$.
\end{coro}
\begin{proof}
Suppose $G$ is generated by $a,b$. Then, for any $x,y,z \in G$ and $\alpha \in H^3(G, S^1)$, the value
$\varphi^*(\alpha )(x, y, z)$ is a product of the values of $\varphi^*(\alpha )$ at the following triples:
$$
(a, a, a), (a, a, b), (a, b, a), (b, a, a), (b, b, a), (b, a, b), (a, b, b), (b, b, b)\,.
$$
Since $\varphi^*(\alpha )\in \mbox{Hom}(\bigwedge^3 G, S^1)$ is alternating and each of these triples has a repeated entry, they are all equal to 1, and hence $\alpha \in H^3(G, S^1)_{{\operatorname{pt}}}$ by \cite[Lem. 7.4]{MN1}. So $H^3(G, S^1)_{{\operatorname{pt}}} = H^3(G, S^1)$. Now, the result follows from Theorem \ref{t:pt}.
\end{proof}
\begin{rem} The embedding $F^{V,G}:\mathcal{E}\hookrightarrow\mathcal{C}_{V^G}$
in $(\mathcal{C}_{V^G}, F^{V,G})$ plays an essential role
in identifying $(\mathcal{C}_{V^G}, F^{V,G})$ with the corresponding $\alpha \in H^3(G,S^1).$ It is possible that the modular tensor categories $\mathcal{C}_{V^G}$ and $\mathcal{C}_{U^G}$ are braided equivalent but $(\mathcal{C}_{V^G}, F^{V,G})$ and $(\mathcal{C}_{U^G}, F^{U,G})$ give
two different elements in the group $\mathcal{M}_v(\mathcal{E}).$ Or equivalently,
there are two inequivalent embeddings $\mathcal{E}\hookrightarrow\mathcal{C}_{V^G}.$ The following example explains this in detail.
\end{rem}
\begin{ex}\label{e8.10}{\rm
Let $L=\Gamma_{16}$ be the spin lattice of rank 16. We now give an automorphism group $G\cong \mathbb Z_2\times \mathbb Z_2$ of $V_L$ such that
there are three inequivalent embeddings $\mathcal{E}\hookrightarrow \mathcal{C}_{V_L^G}$. The main idea is to find
an even sublattice $K$ of $L$ such that $L/K\cong \mathbb Z_2\times \mathbb Z_2$
and $K^{\circ}/K\cong \mathbb Z_4\times \mathbb Z_4$ where $K^{\circ}$ is the dual lattice of $K$ as usual. We thank Griess Jr. for providing us with such a $K.$
Let $\{\epsilon _1, \cdots, \epsilon _{16}\}$ be the standard orthonormal basis of ${\Bbb R}^{16}.$ Recall the root lattice $L_{D_{16}}=\sum_{i=1}^{16}\mathbb Z\alpha_i$ of type $D_{16}$, where
$\alpha_i=\epsilon _{i}-\epsilon _{i+1}$ for $i=1,...,15$ and $\alpha_{16}=\epsilon _{15}+\epsilon _{16}.$ Then $L=L_{D_{16}}+\mathbb Z w = L_{D_{16}}\cup (L_{D_{16}}+w)$
where $w=\frac{1}{2}(\epsilon _1+\cdots +\epsilon _{16}).$ Also let $u=\frac{1}{2}(\epsilon _1+\cdots +\epsilon _8)$ and $v=\frac{1}{2}(\epsilon _8-\epsilon _{16}).$
Then $(u,u)=2,$ $(v,v)=\frac{1}{2}$ and $(u,v)=\frac{1}{4}.$
Let
$$K=\{\alpha\in L\mid (u,\alpha)\in\mathbb Z \ {\rm and}\ (v,\alpha)\in\mathbb Z\}$$
be a sublattice of $L.$
It is easy to see that
$$L/K=\{K, K+2u, K+2v, K+2u+2v\}\cong \mathbb Z_2\times \mathbb Z_2$$
and
$$K^{\circ}=\bigcup_{i,j=0}^3(K+iu+jv), \quad K^{\circ}/K=\<u+K,v+K\>\cong \mathbb Z_4\times \mathbb Z_4.$$
Recall that $V_L=M(1)\otimes \mathbb C^{\epsilon }[L].$ We have automorphisms $\sigma_u,\sigma_v\in {\rm Aut}(V_L).$ Then $G=\<\sigma_{u},\sigma_{v}\>$ is isomorphic to $L/K\cong \mathbb Z_2\times \mathbb Z_2.$ Moreover,
$V_L^G=V_K$ and the irreducible $V_K$-modules are the $V_{K+z}$ with $K+z\in K^{\circ}/K,$
and
$$
\theta_{V_{K+z}}=e^{2\pi i(s^2+\frac{1}{4}(t^2+st))} = i^{(s+t)t}
$$
for
$z=su+tv$ with $s,t=0,...,3.$ The pair $(K^\circ/K, q)$ defines a quadratic form with $q(K+z) =\theta_{V_{K+z}}$ and $\mathcal{C}_{V_L^G}$ is equivalent to the pointed modular category $\mathcal{C}(K^\circ/K, q)$.
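Indeed, $\theta_{V_{K+z}}=e^{\pi i(z,z)}$ with
$$(z,z)=s^2(u,u)+2st(u,v)+t^2(v,v)=2s^2+\tfrac{1}{2}st+\tfrac{1}{2}t^2,$$
which gives the displayed value since $e^{2\pi i s^2}=1$ for $s\in\mathbb Z$.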
Consider the generating set $\{x, y\}$ of $K^\circ/K$ where $x = u+K$ and $y = -u +v+K$. For any $K+ z \in K^\circ/K, K+z = s x + t y = (s -t)u+ tv+K$ for some $s, t =0, \dots, 3$, and so
$$
q(K+z) = \theta_{V_{K+z}}= i^{st}\,.
$$
By direct computation, the automorphisms of the quadratic form $(K^\circ/K, q)$ are given by
\begin{equation}\label{eq:auto}
{\rm Aut}(K^\circ/K, q)= \{f \in {\rm Aut}(K^\circ/K) \mid q = q\circ f \} = \{\pm id, \pm \kappa\} \cong \mathbb{Z}_2 \times \mathbb{Z}_2
\end{equation}
where $\kappa(s x + t y) = t x + s y$.
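The description of ${\rm Aut}(K^\circ/K, q)$ in \eqref{eq:auto} can also be confirmed by an exhaustive search over ${\rm GL}_2(\mathbb Z_4)$. The following short Python script is an illustrative sketch of such a check (it is not part of the argument); it encodes $q$ through its exponent modulo $4$ and recovers exactly the four automorphisms $\pm id$ and $\pm\kappa$.
\begin{verbatim}
import itertools

# Brute-force determination of Aut(K^0/K, q) for K^0/K = Z4 x Z4 with
# q(s x + t y) = i^{s t}; the form is encoded by its exponent modulo 4.

def q_exp(s, t):
    return (s * t) % 4

autos = []
for a, b, c, d in itertools.product(range(4), repeat=4):
    # candidate map: x -> a x + c y,  y -> b x + d y;
    # it is an automorphism of Z4 x Z4 iff its determinant is a unit mod 4
    if (a * d - b * c) % 2 == 0:
        continue
    if all(q_exp((a * s + b * t) % 4, (c * s + d * t) % 4) == q_exp(s, t)
           for s in range(4) for t in range(4)):
        autos.append(((a, b), (c, d)))

print(autos)
# four matrices survive: the identity, its negative, kappa (the coordinate
# swap), and -kappa, in agreement with the displayed description
\end{verbatim}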
Recall that the Tannakian category $\mathcal{E} = {\rm Rep}(G)$ is equivalent to $\mathcal{C}(\hat{G}, q_0)$ where $q_0$ is the trivial quadratic form given by $q_0=1$. Let $\psi_1, \psi_2 \in \hat{G}$ such that
$$
\psi_1(\sigma_{u})=-1, \, \psi_1(\sigma_{v})=1 \quad \text{and}\quad \psi_2(\sigma_{u})=1, \, \psi_2(\sigma_{v})=-1\,.
$$
Then $F^{V_L, G}(\psi_1)= 2(x+y)$, $F^{V_L, G}(\psi_2)= 2x$, and $F^{V_L, G}(\psi_1\psi_2) =2y$, so $F^{V_L, G}$ induces an embedding $F^{V_L, G}: (\hat{G}, q_0) \to (K^\circ/K, q)$ of quadratic forms.
Now we twist the $G$-action on $V_L$ by an automorphism $\gamma$ of $G$, meaning that $g\cdot a =\gamma(g)(a)$ for $g \in G$ and $a \in V_L$, and we denote this new $G$-module by $V_L^\gamma$. Note that $(V_L^\gamma)^G = V_L^G$ as vertex operator algebras. The automorphism $\gamma$ also acts on $\hat{G}$ by composition, and the corresponding embedding of quadratic forms $F^{V_L^\gamma, G}: (\hat{G}, q_0) \to (K^{\circ}/K, q)$ is given by $F^{V_L^\gamma, G}(\psi) =F^{V_L, G}(\psi \circ \gamma^{-1})$ for $\psi \in \hat{G}$.
The automorphism group of $G$ is isomorphic to $S_3$, and each automorphism $\gamma$ is completely determined by the images of $\sigma_{u}$ and $\sigma_{v}$. The equivalence $(\mathcal{C}_{V_L^G}, F^{V_L , G}) \cong (\mathcal{C}_{V_L^G}, F^{V_L^\gamma, G})$ implies that the embeddings of quadratic forms $F^{V_L^\gamma, G} , F^{V_L, G} : (\hat{G}, q_0) \to (K^{\circ}/K, q)$ are equivalent, i.e., there exists an automorphism $f \in {\rm Aut}(K^{\circ}/K, q)$ such that $f \circ F^{V_L, G}=F^{V_L^\gamma, G}$. By \eqref{eq:auto}, $\gamma =\operatorname{id}_G$ or
$$
\gamma: \sigma_v \mapsto \sigma_v , \quad \sigma_u \mapsto \sigma_u \sigma_v \,.
$$
Let $\delta$ be the automorphism of $G$ given by the 3-cycle $(\sigma_u, \sigma_u \sigma_v, \sigma_v)$ in $S_3$. Then
$(\mathcal{C}_{V_L^G}, F^{V_L^\delta, G})$ and $(\mathcal{C}_{V_L^G}, F^{V_L^{\delta^2}, G})$ are not equivalent to $(\mathcal{C}_{V_L^G}, F^{V_L , G})$. One can further show directly from \eqref{eq:auto} that $(\mathcal{C}_{V_L^G}, F^{V_L^\delta, G}) \not\cong (\mathcal{C}_{V_L^G}, F^{V_L^{\delta^2}, G})$. These three inequivalent embedding of $\mathcal{E}$ correspond to three different cohomology classes $\alpha \in H^3(G, S^1)$ such that $\mathcal{Z}({\rm Vec}_G^\alpha )$ are equivalent modular tensor categories.
}
\end{ex}
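The count of inequivalent embeddings in Example \ref{e8.10} can also be verified by a small combinatorial computation. The sketch below is illustrative only: it records each twisted embedding by the images of the three nontrivial characters in $K^\circ/K$ (coordinates taken with respect to $\{x,y\}$) and identifies two embeddings when they differ by an element of ${\rm Aut}(K^\circ/K, q)$ as in \eqref{eq:auto}; it reports three equivalence classes.
\begin{verbatim}
from itertools import permutations

# The three nontrivial characters psi1, psi2, psi1*psi2 of G are sent to the
# order-two elements 2(x+y), 2x, 2y of K^0/K = Z4 x Z4; twisting by Aut(G)
# permutes the characters, and two embeddings are identified when they differ
# by an element of Aut(K^0/K, q) = {+-id, +-kappa}.  Illustrative sketch.

values = [(2, 2), (2, 0), (0, 2)]          # F(psi1), F(psi2), F(psi1 psi2)

def kappa(v):                              # swap the two coordinates
    return (v[1], v[0])

def neg(v):                                # -id on Z4 x Z4
    return ((-v[0]) % 4, (-v[1]) % 4)

auts = [lambda v: v, neg, kappa, lambda v: neg(kappa(v))]

embeddings = list(permutations(values))    # the six twists F^gamma
classes = {frozenset(tuple(f(v) for v in E) for f in auts) for E in embeddings}
print(len(classes))                        # prints 3
\end{verbatim}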
Example \ref{e8.10} also gives the following result.
\begin{prop} If $G\cong\mathbb Z_2\times \mathbb Z_{2}\times \mathbb Z_{2}$ then ${\cal M}_v(\mathcal{E})={\cal M}(\mathcal{E})\cong H^3(G,S^1).$
\end{prop}
\begin{proof} From the discussion before Corollary \ref{c8.8}, we see that $H^3(G,S^1)/H^3(G,S^1)_{{\operatorname{pt}}}$ is isomorphic to $\mbox{Hom}(\bigwedge^3 G, S^1) \cong \mathbb{Z}_2$, and so $[H^3(G,S^1): H^3(G,S^1)_{{\operatorname{pt}}}] =2$. By Theorem \ref{t:pt}, ${\cal M}^{{\operatorname{pt}}}_v(\mathcal{E})\cong H^3(G,S^1)_{{\operatorname{pt}}}.$ It is enough to show that
there exists $[\mathcal{C}_{V_L^G}]\in {\cal M}_v(\mathcal{E})$ such that $\mathcal{C}_{V_L^G}$ is not pointed. Let $V_L$ be the lattice vertex operator algebra defined in Example \ref{e8.10}. Let $\tau\in {\rm Aut}(V_L)$ such that
$$\tau(h_{i_1}(n_1)\cdots h_{i_k}(n_k)\otimes e^{\alpha})=(-1)^kh_{i_1}(n_1)\cdots h_{i_k}(n_k)\otimes e^{-\alpha}$$
for $i_1,...,i_k\in\{1,...,16\},$ $n_i<0$ and $\alpha\in L.$
Then $G\cong\<\sigma_u,\sigma_v,\tau\>$
and
$$V_L^G=V_K^+=\{a\in V_K\mid \tau(a)=a\}.$$
Since $V_{K+u}$ is an irreducible
$V_K^+$-module (see \cite{ADL}) and ${\rm FPdim}(V_{K+u})=\qdim_{V_K^+}(V_{K+u})=2$, we conclude that
$V_{K+u}$ is not a simple current and so $\mathcal{C}_{V_L^G}$ is not pointed.
\end{proof}
It is worth noting that for any nonabelian group $H$ of order 8, $\mathcal{Z}({\rm Rep}(H)) \cong \mathcal{Z}({\rm Vec}_{H})$ is braided equivalent to some nonpointed $\mathcal{Z}({\rm Vec}_G^\alpha )$ where $G=\mathbb{Z}_2^3$ (cf. \cite{GMN}). Thus, there exists an embedding $\iota: {\rm Rep}(H) \to \mathcal{Z}({\rm Vec}_G^\alpha )$ so that $(\mathcal{Z}({\rm Vec}_G^\alpha ), \iota)$ is a minimal modular extension of ${\rm Rep}(H)$. In general, for any finite group $A$, $\mathcal{Z}({\rm Vec}_A^\alpha )$ is a minimal modular extension of any symmetric fusion subcategory $\mathcal{E}$ of $\mathcal{Z}({\rm Vec}_A^\alpha )$ with $\dim(\mathcal{E}) = |A|$, i.e., a Lagrangian subcategory of $\mathcal{Z}({\rm Vec}_A^\alpha )$. If $\mathcal{E}$ is Tannakian, then $\mathcal{E}$ is braided equivalent to ${\rm Rep}(B)$ for some uniquely determined group $B$, and $\mathcal{Z}({\rm Vec}_A^\alpha )$ is braided equivalent to $\mathcal{Z}({\rm Vec}_B^{\alpha '})$ for some $\alpha ' \in H^3(B, S^1)$.\\
\noindent{{\bf Acknowledgement:}} We would like to thank Robert Griess Jr. for his crucial suggestion on Example \ref{e8.10}. The second author would also like to acknowledge that this joint work began while he was a member in residence at MSRI in the Spring of 2020 for the program on Quantum Symmetry supported by the NSF under the Grant No. DMS-1440140.
\section{Introduction} \label{Int}
The logical structure of General Relativity (GR) is one of the greatest achievements in theoretical physics. The current results from gravitational wave astronomy cement GR as the appropriate theory to describe the gravitational interaction. Although the open problems of dark energy and dark matter are compatible with GR (if one proposes new exotic sources of matter and energy), current observations do not rule out alternative theories of gravity. Furthermore, considering that after many decades of research a complete quantum theory of gravity is still missing, there is strong motivation to look at alternatives to GR. One can follow the traditional line of reasoning by considering gravity as a fundamental interaction and, from some fundamental principle, write down the corresponding theory (e.g. $f(R)$ \cite{Sotiriou:2008rp}, massive gravity \cite{deRham:2014zqa}, Horndeski, etc.). Another approach is to consider gravity not as a fundamental interaction but as an emergent phenomenon \cite{Jacobson:1995ab}.
During this decade there has been renewed interest in this idea. It started with Verlinde's proposal in \cite{Verlinde:2010hp}, where he claims that Newtonian gravity is an entropic force, in the sense of the emergent forces that appear in the study of polymers. This approach is motivated by ideas from holography together with the area relation for the entropy of black holes. Since in this formulation Newtonian gravity has an entropic origin, one can propose modifications to Newtonian gravity by analyzing modifications to the entropy-area law \cite{Modesto:2010rm,Martinez-Merino:2017xzn}.
Research on black hole physics has uncovered several mysteries: Why is the statistical
black hole entropy proportional to the horizon area? What happens to the information in black
hole evaporation? These and other questions have been extensively studied in the
literature; however, a definitive microscopic description of black holes has yet to be found.
For supersymmetric theories there is a bound on the mass $M$ of the states related to the supercharges $Q$, known as the BPS bound $M\ge Q$. The masses of the states that saturate this bound (BPS states) are protected and do not receive quantum corrections. Charged black holes that satisfy $M=Q$ are called extremal black holes; they turn out to have no Hawking temperature, are quantum mechanically stable, and their microscopic description is exact.
Unfortunately, this approach cannot be applied to the Schwarzschild black hole. In \cite{LopezDominguez:2009ue,LopezDominguez:2011zz}, the authors propose a supersymmetric generalization of the Schwarzschild and Schwarzschild-(anti)de Sitter black holes. Their approach begins with the known diffeomorphism between the Kantowski-Sachs (KS) cosmological solution and the Schwarzschild black hole solution. For this model they find the Wheeler-DeWitt (WDW) equation and show that after a WKB approximation the black hole solution is recovered. Then, using the methods of supersymmetric quantum cosmology \cite{Graham:1991av,Obregon:1999wt},
a supersymmetric generalization of the Schwarzschild black hole can be constructed. In \cite{Obregon:2000zd},
the authors use the Feynman-Hibbs path integral procedure \cite{feynman} to calculate the temperature and entropy of the Schwarzschild black hole. This approach allows us to incorporate quantum
corrections to the partition function through the {\it
``corrected"} potential and has been successfully applied to the study of the thermodynamics of different black hole models \cite{LopezDominguez:2006wd,Bastos:2009ae}.
Using this method, we calculate the entropy-area relationship for the supersymmetric generalization of the Schwarzschild black hole. Furthermore, using {Verlinde's proposal} we find the corrections to Newton's law of gravitation and show that the contributions to Newtonian gravity give the correct rotation curves for galaxies.
\section{The supersymmetric WDW equation for Schwarzschild metric} \label{difeo}
In order to obtain the supersymmetric generalization of the WDW equation for the Schwarzschild black hole we will use its relationship with the KS model. The
Schwarzschild black hole is described by the metric
\begin{eqnarray}
ds^{2}&=&-\left ( 1-\frac{2m}{r} \right )dt^{2}+\left ( 1-\frac{2m}{r} \right )^{-1}dr^{2}\\
&+&r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2} \right),\nonumber
\end{eqnarray}
for the case $r<2m$ the $g_{rr}$ and $g_{tt}$ components of the metric change sign and $\partial_{t}$ becomes a space-like vector; hence, if we perform the transformation $t\leftrightarrow r$,
and compare with the Misner parametrization of the KS metric
we identify
\begin{equation}
N^{2}=\left(\frac{2m}{t}-1\right)^{-1}, e^{2\sqrt{3}\xi }=\frac{2m}{t}-1, e^{-2\sqrt{3}\left ( \xi +\Omega \right )}=t^{2},\label{iden}
\end{equation}
this establishes the diffeomorphism with KS. Using this result in the Einstein-Hilbert action
and performing an integration over the spatial coordinates, we get an effective Lagrangian from which it is straightforward to obtain the Hamiltonian constraint
\begin{equation}
H=p_{\xi}^{2}-p_{\Omega}^{2}-48e^{-2\sqrt{3}\Omega}.\label{hamilton}
\end{equation}
From standard canonical quantization, with the usual identifications for the canonical momenta $p_{\xi}=-i\frac{\partial }{\partial \xi}$, $p_{\Omega}=-i\frac{\partial }{\partial \Omega}$, the WDW equation derived from the Hamiltonian constraint is
\begin{equation}
\left ( -\frac{\partial^2 }{\partial \Omega ^2}+\frac{\partial^2 }{\partial \xi ^2}+48e^{-2\sqrt{3}\Omega } \right )\Psi (\Omega ,\xi )=0,
\label{wdwks}
\end{equation}
this equation has been used to study quantum black holes \cite{ryan}. Using the plane wave solution for the variable $\xi$, the WDW equation takes the form
\begin{equation}
\left ( -\frac{d^{2}}{d\Omega^{2}} +48e^{-2\sqrt{3}\Omega }\right )\chi(\Omega)=3\nu^{2}\chi(\Omega).\label{QE}
\end{equation}
We now proceed to obtain the supersymmetric version of the WDW equation Eq.\eqref{wdwks}, for this purpose we will follow \cite{Graham:1991av}.
For homogeneous models the Hamiltonian $H_{0}$ can be written as
\begin{equation}
2H_{0}=\mathcal{G}^{\mu \nu }p_{\mu }p_{\nu }+\mathcal{U}(q^{\mu}),\label{H022}
\end{equation}
where $q^{\mu }$ are the minisuperspace coordinates, $\mathcal{G}^{\mu \nu }$ is the minisuperspace metric and $\mathcal{U}(q^{\mu})$ is the potential. Also, it is possible to find a function $\Phi(q^{\nu})$ satisfying
\begin{equation}\mathcal{G}^{\mu \nu }\frac{\partial \Phi }{\partial q^{\mu }}\frac{\partial \Phi }{\partial q^{\nu }}=\mathcal{U}(q^{\alpha}).\label{ec}\end{equation}
To construct the supersymmetric Hamiltonian first we need the
supercharges
\begin{equation}
\mathcal{Q}=\psi^{\mu } \left ( p_{\mu } +i\frac{\partial \Phi }{\partial q^{\mu }}\right ),\quad \bar{\mathcal{Q}}=\bar{\psi }^{\mu }\left ( p_{\mu }-i\frac{\partial \Phi}{\partial q^{\mu }} \right ), \label{spcosm}
\end{equation}
where $\bar{\psi }^{\mu}$, and $\psi^{\nu }$ are Grassmann variables and satisfy the algebra
\begin{equation}\left \{ \bar{\psi }^{\mu } ,\bar{\psi }^{\nu } \right \}=\left \{\psi ^{\mu},\psi ^{\nu } \right \}=0, \quad \left \{ \bar{\psi }^{\mu },\psi ^{\nu } \right \}=\mathcal{G}^{\mu \nu }.\label{alg}\end{equation}
The supersymmetric Hamiltonian is obtained from the algebra of the supercharges $2H_{S}=\{\mathcal{Q},\bar{\mathcal{Q}}\}$, this gives
\begin{equation}
2H_{S}=\mathcal{G}^{\mu \nu }p_{\mu }p_{\nu }+\mathcal{U}(q^{\mu})+\frac{\partial ^{2}\Phi }{\partial q^{\mu }\partial q^{\nu }}\left [ \bar{\psi }^{\mu},\psi^{\nu } \right ].\label{hsusycosm}
\end{equation}
This supersymmetric Hamiltonian is the supersymmetric generalization of Eq.(\ref{H022}); it is the sum of the ``bosonic'' Hamiltonian and the contribution $\frac{\partial ^{2}\Phi }{\partial q^{\mu }\partial q^{\nu }}\left [ \bar{\psi }^{\mu},\psi^{\nu } \right ]$.
This Hamiltonian is fully determined once we adopt a suitable representation of the Grassmann variables
\begin{equation}
\begin{aligned}
&\psi ^{\xi}=\left(\begin{smallmatrix}
0 &0 &0 &0 \\
1 &0 &0 &0 \\
0&0 &0 &0 \\
0&0 &\text{-}1 &0
\end{smallmatrix}\right),
\quad \bar{\psi }^{\xi}=\left(\begin{smallmatrix}
0 &1 &0 &0 \\
0&0 &0 &0 \\
0&0 &0 &\text{-}1 \\
0&0 &0 &0
\end{smallmatrix}\right),\\
&\psi ^{\Omega}=\left(\begin{smallmatrix}
0 &0 &0 &0 \\
0&0 &0 &0 \\
1&0 &0 &0 \\
0&1 &0 &0
\end{smallmatrix}\right), \quad \bar{\psi }^{\Omega}=\left(\begin{smallmatrix}
0 &0 &\text{-}1 &0 \\
0&0 &0 &\text{-}1 \\
0&0 &0 &0 \\
0&0 &0 &0
\end{smallmatrix}\right).
\end{aligned}
\end{equation}
From Eq.\eqref{wdwks} we identify $\mathcal{U} (\Omega,\xi)=48e^{-2\sqrt{3}\Omega }$ and from Eq.\eqref{ec} we obtain a differential equation for $\Phi$ whose solution is given by
\begin{align}
\Phi=&-4\left [ \sqrt{e^{-2\sqrt{3}\Omega }+\epsilon ^{-2/3}}-\epsilon ^{-1/3}{\rm arcsinh}\left (\epsilon ^{-1/3} e^{\sqrt{3}\Omega } \right ) \right ]\nonumber\\
&+4\sqrt{3}\epsilon^{-1/3}\xi,
\end{align}
with $\epsilon={\rm constant}$. Since $\left[ \bar{\psi }^{\Omega},\psi^{\Omega } \right]={\rm diag}(-1,-1,1,1)$, the supersymmetric Hamiltonian will have two independent components which only differ in the sign of the modified potential. Now the proposed WDW equation for the supersymmetric quantum Schwarzschild black hole is
\begin{align}\label{susyec}
&\left[ -\frac{\partial^2}{\partial\Omega^2}+\frac{\partial^2}{\partial\xi^2}\right. \\
&\left. +12\left( 4\pm\frac{1}{\sqrt{e^{-2\sqrt{3}\Omega}+\epsilon^{-2/3}}} \right)e^{-2\sqrt{3}\Omega} \right] \Psi_{\pm}^{S}(\Omega, \xi)=0.\nonumber
\end{align}
The wave function has four components, although only two are linearly independent; the contributions of supersymmetry are encoded in the modified potential. Finally, it is worth mentioning that Eq.(\ref{wdwks}) is recovered from Eq.(\ref{susyec}) by taking the limit $\epsilon\to 0$.
\section{Modified Entropy-Area relationship} \label{BHT}
Let us start by reviewing the calculation of the entropy for the Schwarzschild black hole, using the Feynman-Hibbs procedure. This approach was originally applied to the Schwarzschild black hole \cite{Obregon:2000zd} and subsequently used for different black hole models \cite{LopezDominguez:2006wd,Bastos:2009ae,mena}.
In the limit of small $\Omega$, and taking $x=l_{P}(\sqrt{6}\Omega-1/\sqrt{2})$, Eq.(\ref{QE}) can be written as
\begin{equation}
\left ( -\frac{1}{2}l_{P}^{2}E_{P}\frac{d^{2}}{dx^{2}}+4\frac{E_{P}}{l_{P}^{2}}x^{2} \right )\chi (x)= E_{P}\left ( \frac{\nu ^{2}}{4}-2 \right )\chi (x),\label{eccuan}
\end{equation}
we can see that Eq.(\ref{eccuan}) is the usual quantum harmonic oscillator if we identify $\hbar \omega=\sqrt{\frac{3}{2\pi}}E_P$ and $\frac{\hbar ^2}{m}=l_{P}^{2}E_{P}$.
\noindent To compute the ``corrected'' partition function of the black hole we apply the Feynman-Hibbs procedure. This approach is based on exploiting the similarities of the expression of the density matrix and the kernel of Feynman's path integral approach to quantum mechanics. By doing a Wick rotation $t\to i\beta$, we get the Boltzmann factor and the kernel is transformed to the density matrix. The kernel is calculated along the paths that go from $x_1$ to $x_2$, if we consider small $\Delta t$ (small $\beta$). Then, when calculating the partition function, only the paths that stay near $x_1$ have a non-negligible contribution (the exponential in the expression for the density matrix gives a negligible contribution to the sum from the other paths). Therefore, the potential to a first-order approximation can be written as $V(x)\approx V(x_1)$ for all the contributing paths.
In this approximation, we can formally establish a map from the path integral formulation of quantum mechanics to the classical canonical partition function. To introduce quantum effects, we must incorporate the changes to the potential along the path; in particular, we are interested in the first-order effects. For this, we start by doing a Taylor expansion around the mean position $\tilde x$ along any path. Calculating the kernel with $\tilde x$ and doing the Wick rotation, we get the modified partition function. This partition function is calculated in a classical manner, but with the corrected potential, which is a mean value of the potential $V(x)$ averaged over points near $\tilde x$ with a Gaussian distribution. Therefore, to calculate the partition function with quantum corrections \cite{Obregon:2000zd,LopezDominguez:2006wd,Bastos:2009ae}, we use the corrected potential.
According to this procedure, the corrected partition function is
\begin{equation}
Z=\sqrt{\frac{m}{2\pi\beta\hbar^2}}\int^{\infty}_{-\infty}e^{-\beta U(\tilde{x})}d\tilde{x},\label{fundep}
\end{equation}
where $\beta=1/k_{B}T$ and $U(\tilde{x})$ is the corrected potential given by
\begin{equation}
U(\tilde{x})=\sqrt{\frac{12m}{2\pi \beta \hbar^{2}}}\int_{-\infty}^{\infty}V(\tilde{x}+y)e^{-6 y^{2}m/\beta \hbar^{2}}dy.\label{potc}
\end{equation}
Now we substitute the potential of the WDW equation in Eq.\eqref{fundep} and Eq.\eqref{potc}, this gives the corrected partition function
\begin{equation}
Z=\sqrt{\frac{2\pi}{3}}\frac{1 }{\beta E_{P}}e^{-\beta ^{2}E_{P}^{2}/16\pi}.\label{FP}
\end{equation}
From the partition function it is straightforward to calculate the temperature and entropy for the Schwarzschild black hole.
We begin with the internal energy
\begin{equation}
{E}=-\frac{\partial}{\partial \beta}\ln Z,\label{EI2}
\end{equation}
which gives a relation between the corrected black hole temperature $\beta$ and its mass $M$. In terms of the Hawking temperature $\beta_{H}=\frac{8\pi Mc^{2}}{E_{P}^{2}}$, the corrected temperature of the black hole is
\begin{equation}
\beta=\beta_{H}\left ( 1-\frac{1}{\beta _{H}}\frac{1}{Mc^{2}} \right ),\label{temp}
\end{equation}
where we can observe an extra contribution to the temperature proportional to $\beta_{H}^{-1}$.
\noindent To calculate the entropy we use
\begin{equation}\label{relacion}
\frac{S}{k_{B}}=\ln{Z}+\beta {E},
\end{equation}
by relating the Bekenstein-Hawking entropy to the Hawking temperature as $Mc^{2}\beta_{H}=2\frac{S_{BH}}{k_{B}}$, we get the entropy
\begin{equation}
\frac{S}{k_{B}}=\frac{S_{BH}}{k_{B}}-\frac{1}{2}\ln \frac{S_{BH}}{{k_{B}}}+\mathcal{O}(S_{BH}^{-1}).\label{entrop}
\end{equation}
This result has the interesting feature that the logarithmic correction agrees with the one obtained in string theory as well as in loop quantum gravity \cite{Domagala:2004jt,Mukherji:2002de,Sen:2012dw}.
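A quick numerical illustration of this logarithmic correction can be obtained directly from the partition function \eqref{FP}. The following Python sketch (in Planck units, for illustration only) computes $S/k_B$ from Eqs.~\eqref{EI2} and \eqref{relacion} and fits the coefficient of $\ln (S_{BH}/k_B)$ in $S-S_{BH}$, recovering the value $-1/2$ up to an additive constant.
\begin{verbatim}
import numpy as np

# Numerical check of the logarithmic correction, in Planck units
# (E_P = c = k_B = 1).  Illustrative sketch only.

def entropy(M):
    # physical (large-beta) root of 1/beta + beta/(8*pi) = M, cf. Eq. (EI2)
    beta = 4 * np.pi * (M + np.sqrt(M**2 - 1.0 / (2 * np.pi)))
    lnZ = 0.5 * np.log(2 * np.pi / 3) - np.log(beta) - beta**2 / (16 * np.pi)
    return lnZ + beta * M                  # Eq. (relacion)

M = np.linspace(10.0, 1000.0, 50)
S_BH = 4 * np.pi * M**2                    # Bekenstein-Hawking entropy
slope = np.polyfit(np.log(S_BH), entropy(M) - S_BH, 1)[0]
print(slope)                               # close to -1/2, as in Eq. (entrop)
\end{verbatim}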
Now we apply this method to obtain the temperature and entropy of the supersymmetric Schwarzschild black hole. Since the potential in Eq.\eqref{susyec} depends only on the $\Omega$ coordinate, we use the plane wave solution for $\xi$. Following the same procedure and similar approximations as in the original case, the equation takes the form
\begin{equation}\left [ -\frac{1}{2}l_{P}^{2}E_{P}\frac{d^{2}}{dx^{2}}+4\frac{E_{P}}{l_{P}^{2}}x^{2}\pm\frac{1}{2}\frac{E_{P}}{l_{P}^{4}}\epsilon x^{4} \right ]\chi_{\pm}^{S} (x)=\eta^{2}\chi_{\pm}^{S} (x). \label{ecaprox}
\end{equation}
From now on we will take the $(-)$ case; the $(+)$ case follows straightforwardly by transforming $\epsilon\to-\epsilon$.
Now we apply the Feynman-Hibbs procedure to a potential of the form $V(x)=\frac{m\omega^2}{2}x^2+\lambda x^4$. Following the bosonic case, a straightforward calculation gives the corrected partition function for the supersymmetric model
\begin{eqnarray}\label{FPSUSY}
Z_{S}&=&\sqrt{\frac{2 \pi}{3}}\frac{1}{\beta E_{P}}\exp{\left ( -\frac{\beta^{2}E_{P}^{2}}{16\pi}-\frac{\beta^{3}E_{P}^{3}\epsilon}{96} \right )}\\
&&\times \left ( 1+\frac{\pi \beta E_{P} \epsilon}{3} \right )^{-1/2}.\nonumber
\end{eqnarray}
As before, this partition function reduces to the original one in the limit $\epsilon\to 0$.
For the temperature we proceed as before; using the partition function Eq.\eqref{FPSUSY}, the internal energy relation becomes
\begin{equation}
\frac{1}{\beta}+\frac{E_{P}^{2}\beta}{8 \pi}+\frac{E_{P}^{3}\beta^{2}}{32}\epsilon+\frac{\pi E_{P}}{6}\epsilon-\frac{\pi^{2}E_{P}^{2}\beta}{18}\epsilon^{2}=Mc^2, \label{ec1}
\end{equation}
solving for $\beta$ in terms of the Hawking temperature gives
\begin{equation}\label{Tsusy}
\beta=\beta_{H}\left [ 1-\frac{1}{\beta _{H}Mc^{2}}+f(\epsilon ) \frac{1}{\left ( \beta _{H}Mc^{2} \right )^{1/2}}\right ],
\end{equation}
where $f(\epsilon)= \frac{2}{3}\epsilon^{3}-\epsilon$ and, for convenience, the parameter $\epsilon$ has been redefined by $\epsilon \rightarrow \frac{2^{1/2}\pi ^{3/2}}{3}\epsilon$. We see that the supersymmetric contribution to the temperature is proportional to $\beta_{H}^{-1/2}$; in the limit $\epsilon \to 0$ we recover Eq.~\eqref{temp}.
\noindent From the partition function Eq.\eqref{FPSUSY} and the temperature, the entropy for the supersymmetric model is given by
\begin{equation}
\frac{S}{k_{B}}=-\frac{1}{2}\ln \frac{3E_{P}^{2}\beta ^{2}}{2\pi }+\frac{E_{P}^{2}\beta ^{2}}{16\pi }+\frac{E_{P}^{3}\beta ^{3}}{48}\epsilon -\frac{\pi ^{2}E_{P}^{2}\beta ^{2}}{18}\epsilon^{2}+1.
\end{equation}
Solving in terms of the Bekenstein-Hawking entropy $S_{BH}/k_B=A/4l_P^2$, we arrive at the entropy-area relationship
\begin{eqnarray}
\frac{S[A]}{k_B}&=&\frac{\left [1+\Delta(\epsilon) \right ]}{4l_{P}^{2}}A-\frac{1}{2}\ln{\frac{A}{4l_{P}^{2}}}\nonumber\\
&+&\Gamma(\epsilon)\left(\frac{A}{4l_{P}^{2}}\right)^{1/2}+\epsilon\left(\frac{A}{2l_{P}^{2}}\right)^{3/2},\label{entr}
\end{eqnarray}
where $\Gamma(\epsilon)=2^{1/2}f(\epsilon )-3\cdot 2^{1/2}\epsilon +3\cdot 2^{1/2}\epsilon f^{2}(\epsilon )-2^{5/2}\epsilon ^{2}f(\epsilon )$ is an odd function of $\epsilon$, and $\Delta(\epsilon)=6\epsilon f(\epsilon )-4\epsilon ^{2}$ is an even function.\\%Therefore the calculation of the entropy for the other case is obtained by the change $\epsilon\to-\epsilon$ in Eq.(\ref{entr}).
The modifications to the entropy of the proposed supersymmetric Schwarzschild model can be understood as follows.
The first term is the usual Bekenstein-Hawking entropy. The logarithmic correction seems to be universal and has been derived in different approaches to the study of black holes \cite{Obregon:2000zd,Domagala:2004jt,Mukherji:2002de,Sen:2012dw}. The third term scales linearly with the horizon radius and can be interpreted as an effective contribution due to a self-gravitating gas; this scaling is consistent with the entropy calculated for a self-gravitating gas, where $S\sim V^{1/3}$ \cite{deVega:2001zk}. Finally, the last term is proportional to the volume, which is the usual behavior of the entropy for non-gravitational systems, and corresponds to effective volumetric short-distance interactions. This type of volumetric modification, in the context of emergent gravity, was first studied in \cite{Modesto:2010rm}, where the authors justify the introduction of this term by using arguments from
loop quantum gravity.
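For concreteness, the relative size of the different contributions in Eq.~\eqref{entr} can be explored numerically. The following sketch (Planck units, $l_P=1$, with illustrative values of $\epsilon$) evaluates $S[A]/k_B$ and reduces to the non-supersymmetric result when $\epsilon\to 0$.
\begin{verbatim}
import numpy as np

# Evaluation of the entropy-area relationship (entr) for a few illustrative
# values of the (redefined) supersymmetric parameter epsilon; l_P = 1.

def f(e):
    return 2.0 * e**3 / 3.0 - e

def Gamma(e):
    return np.sqrt(2) * (f(e) - 3 * e + 3 * e * f(e)**2) - 2**2.5 * e**2 * f(e)

def Delta(e):
    return 6 * e * f(e) - 4 * e**2

def S_over_k(A, e):
    a = A / 4.0
    return (1 + Delta(e)) * a - 0.5 * np.log(a) \
           + Gamma(e) * np.sqrt(a) + e * (A / 2.0)**1.5

A = 1.0e4
for e in (0.0, 1e-4, 1e-3, 1e-2):
    print(e, S_over_k(A, e))
# e = 0 reproduces S_BH/k_B - (1/2) ln(S_BH/k_B), the bosonic result
\end{verbatim}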
\section{Emergent Modified Newtonian Gravity}\label{eg}
The use of a relation between entropy and gravity to derive Einstein's equations was put forward by Jacobson \cite{Jacobson:1995ab}. In more recent work, Verlinde showed a relation between the entropic force and Newtonian gravity. He proposed that gravity is an effective force which emerges from the entropy, similar to the emergent entropic forces in polymers. These forces are connected to the entropy via the thermodynamic equation of state $F\Delta x=T \Delta S$.
Verlinde's approach relates the entropy to the information contained in a surface $\mathcal{S}$ that surrounds a mass $M$ and lies very close to a test mass $m$; from this setup the entropic derivation of Newtonian gravity is obtained \cite{Verlinde:2010hp}. These ideas have inspired several models that attempt to modify Newtonian gravity. This has been achieved by proposing quantum modifications to the entropy or by using new definitions of entropy \cite{Martinez-Merino:2017xzn} to find modifications to Newton's gravitational force and, in some cases, modifications to gravity in the cosmological scenario \cite{Sheykhi:2010yq}. We can write a generic modification to the entropy-area relationship as
\begin{equation}
\frac{S}{k_{B}}=\frac{A}{4l_{P}^{2}}+\mathfrak{s}(A),\label{moden}
\end{equation}
the first term corresponds to the usual area law and the second term, $\mathfrak{s}(A)$, includes the other contributions to the entropy. Using Verlinde's entropic Newtonian gravity, modifications to Newton's law of gravitation \cite{Modesto:2010rm} can be obtained from
\begin{equation}\label{verlinde}
\mathbf{F}_{M}=-\frac{GMm}{R^{2}}\left[1+4l_{P}^{2}\frac{\partial \mathfrak{s}}{\partial A}\right]_{A=4\pi R^{2}}\mathbf{\hat{R}}.
\end{equation}
With this in mind, we will find the emergent modified Newtonian force associated with the entropy-area relationship derived in the previous section. In order to proceed, we separate the entropy of the supersymmetric model in accordance with Eq.\eqref{moden}. The calculation of the entropic Newtonian force is straightforward
\begin{eqnarray}
\mathbf{F}_{M}&=&-\frac{G_{eff}Mm}{R^{2}}\left[1+\frac{3\sqrt{2\pi} }{l_{P}(1+\Delta(\epsilon))} \epsilon R \right.\nonumber\\
&+&\left.\frac{l_{P}\Gamma(\epsilon)}{2\sqrt{\pi}(1+\Delta(\epsilon))}\frac{1}{R}
-\frac{l_{P}^{2}}{2\pi(1+\Delta(\epsilon))}\frac{1}{R^2}\right]\mathbf{\hat{R}},\label{modforce}
\end{eqnarray}
where $G_{eff}=(1+\Delta(\epsilon))G$ is the effective gravitational constant.
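As a consistency check, the bracketed correction in Eq.~\eqref{modforce} can be reproduced by inserting the non-area part of Eq.~\eqref{entr} into Eq.~\eqref{verlinde} and differentiating numerically. The following sketch uses Planck units ($l_P=1$) and an illustrative value of $\epsilon$; the helper functions are repeated so that the snippet is self-contained.
\begin{verbatim}
import numpy as np

# Check that 1 + 4 l_P^2 ds/dA at A = 4 pi R^2 equals the correction factor of
# Eq. (modforce), with s(A) the non-area part of Eq. (entr).  l_P = 1.

eps = 1e-3

def f(e):      return 2.0 * e**3 / 3.0 - e
def Gamma(e):  return np.sqrt(2) * (f(e) - 3 * e + 3 * e * f(e)**2) - 2**2.5 * e**2 * f(e)
def Delta(e):  return 6 * e * f(e) - 4 * e**2

def s(A):      # entropy minus the bare area law A/4
    a = A / 4.0
    return Delta(eps) * a - 0.5 * np.log(a) + Gamma(eps) * np.sqrt(a) + eps * (A / 2.0)**1.5

def bracket(R):   # full correction factor of Eq. (modforce), including G_eff/G
    return (1 + Delta(eps)) + 3 * np.sqrt(2 * np.pi) * eps * R \
           + Gamma(eps) / (2 * np.sqrt(np.pi) * R) - 1.0 / (2 * np.pi * R**2)

R = 50.0
A = 4 * np.pi * R**2
h = 1e-4 * A
ds_dA = (s(A + h) - s(A - h)) / (2 * h)        # central finite difference
print(1 + 4 * ds_dA, bracket(R))               # the two numbers agree
\end{verbatim}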
We can derive the modified gravitational potential by integrating Eq.\eqref{modforce}; up to an arbitrary integration constant $\sigma$, it is
\begin{eqnarray}
\Phi_{M}&=&-\frac{G_{eff}M}{R}\left[1-\frac{3\sqrt{2\pi}}{l_{P}(1+\Delta(\epsilon))} \epsilon R\ln{\frac{R}{\sigma}}\right.\\
&+&\left.\frac{l_{P}\Gamma(\epsilon)}{4\sqrt{\pi}(1+\Delta(\epsilon))}\frac{1}{R}-\frac{l_{P}^{2}}{6\pi(1+\Delta(\epsilon))}\frac{1}{R^{2}}\right].\nonumber
\end{eqnarray}
It is also straightforward to obtain an effective matter density such that the Poisson equation $\nabla ^{2}\Phi_{M}=4\pi\rho _{eff}$ is satisfied
\begin{equation}
\rho_{eff}=\frac{G M}{2\pi R}\frac{l^2_P}{2\pi R^4}\left[1-\frac{\sqrt{\pi}}{2 l_P}\Gamma(\epsilon) R+\frac{3}{2}\left(\frac{\sqrt{2\pi}}{l_P}\right)^3\epsilon R^3\right].
\end{equation}
There are different interpretations of this modified matter density: one can consider that $\rho_{eff}$ arises as a consequence of the presence of a point mass $M$ through some unknown process. The other possibility is to consider a point particle of mass $M$ that generates the potential $\Phi_M$ in the appropriate limit of some unknown theory of gravity.
\noindent If we consider a particle of mass $m$ in a circular orbit of radius $R$, according to Eq.\eqref{modforce}, the velocity of the particle is given by
\begin{eqnarray}\label{velocidad2}
\frac{mv^2}{R}&=&\frac{G_{eff}Mm}{R^{2}}\left[1+\frac{3\sqrt{2\pi} }{l_{P}(1+\Delta(\epsilon))} \epsilon R \right.\\
&+&\left.\frac{l_{P}\Gamma(\epsilon)}{2\sqrt{\pi}(1+\Delta(\epsilon))}\frac{1}{R}
-\frac{l_{P}^{2}}{2\pi(1+\Delta(\epsilon))}\frac{1}{R^2}\right].\nonumber
\end{eqnarray}
Such is the case of a star in circular motion around the center of a galaxy. The second term in Eq.\eqref{velocidad2}
is the leading-order term for very large $R$ and corresponds to the entropic volumetric correction. At this point, we observe that our model has one free parameter, $\epsilon$, which can be fit to reproduce the observed galactic rotation curves. On the other hand, Modified Newtonian Dynamics (MOND) is a one-parameter phenomenological theory that reproduces the observed galactic rotation curves \cite{mond}. Of particular interest is the behavior for large $R$, since in this limit the velocity obtained in our model is a constant, like the velocity one obtains from MOND in the same limit. We can exploit this fact in order to relate our parameter $\epsilon$ with the characteristic quantities of MOND.
In the limit of large $R$ the velocity is given by
\begin{equation}
v^2\approx\frac{3\sqrt{2\pi}G M}{l_P}\epsilon,
\end{equation}
where it is worth noting that the velocity depends on the ordinary gravitational constant $G$ and is linear in the parameter $\epsilon$. Comparing with the velocity obtained for large $R$ in MOND, $v^2=\sqrt{G Ma_0}$, we can see that the results agree if
\begin{equation}
\epsilon=\frac{1}{6\pi}\sqrt{\frac{a_0h}{M c^3}}.
\end{equation}
We can obtain an upper bound for $\epsilon$ by using the Planck mass $M_{P}$. Considering the characteristic acceleration of MOND, $a_0\approx10^{-10}\,{\rm m/s^2}$, we find $\epsilon < 10^{-30}$.
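To illustrate these estimates, the sketch below evaluates $\epsilon$ for the Planck mass and for a hypothetical galactic mass, and then uses Eq.~\eqref{velocidad2} (keeping only the terms relevant at galactic radii) to display the flattening of the rotation curve. The chosen mass $M=10^{11}M_\odot$ is an illustrative assumption, not a fit.
\begin{verbatim}
import numpy as np

# SI constants
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
h = 2 * np.pi * hbar
l_P = np.sqrt(hbar * G / c**3)
a0 = 1.0e-10                                    # MOND acceleration scale, m/s^2

def epsilon(M):
    return np.sqrt(a0 * h / (M * c**3)) / (6 * np.pi)

print(epsilon(2.18e-8))                         # Planck mass: ~1e-32 < 1e-30

M = 1.0e11 * 1.989e30                           # hypothetical galactic mass, kg
eps = epsilon(M)
kpc = 3.086e19
for R in np.array([1.0, 5.0, 10.0, 20.0, 50.0]) * kpc:
    # the Planck-suppressed 1/R and 1/R^2 terms of Eq. (velocidad2) are
    # negligible at these radii and are dropped
    v2 = (G * M / R) * (1 + 3 * np.sqrt(2 * np.pi) * eps * R / l_P)
    print(R / kpc, np.sqrt(v2) / 1e3)           # radius in kpc, speed in km/s
# for large R the speed approaches (G M a0)^(1/4), the MOND asymptotic value
\end{verbatim}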
\section{Conclusions and final remarks }\label{conclusiones}
The complexity of defining a quantum theory of gravity has led to considering gravity as an emergent phenomenon. This opens a new paradigm for understanding the origin of the gravitational interaction. By assuming an entropic origin of Newtonian gravity, we can study new effects by considering modifications to the Bekenstein-Hawking entropy.
In this work,
we used the supersymmetric minisuperspace approach for the Schwarzschild black hole to construct a supersymmetric generalization, and from the Feynman-Hibbs approach we obtained the entropy-area relationship. It is worth mentioning that, except for the logarithmic term, which has a quantum origin, the remaining terms are related to the supersymmetric modification. Assuming an entropic origin of gravity, we constructed a generalized Newtonian force, the modified gravitational potential, and the effective matter density.
When considering circular orbits of very large radius, this modified theory of gravity can account for the anomalous galaxy rotation curves. Therefore one can conjecture that, if gravity is emergent, supersymmetry can (under some conditions) replace the need for dark matter to explain the rotation curves.
Although these results are encouraging, further exploration is needed to establish this model as a replacement for dark matter.
\section*{Acknowledgements}
This work is supported by CONACyT grants 257919, 258982. M. S. is supported by the CONACyT program ``Estancias sab\'aticas en el extranjero'', grant 31065. J.C.L-D. is supported by UAZ grant UAZ-2016-37235. I. D-S. thanks CONACyT support.
\section{Introduction}\label{introduction}
Recently, in \cite{ DarbyHagerRao11,DarbyHagerRao10, FrancolinHagerRao13,
GargHagerRao11b, GargHagerRao11a, GargHagerRao10a, PattersonHagerRao14},
a class of methods was developed for solving optimal control problems
using collocation at either Gauss or Radau quadrature points.
In \cite{HagerHouRao15b} and \cite{HagerHouRao15c} an
exponential convergence rate is established for these schemes.
The analysis is based on a bound for the inverse of a linearized operator
associated with the discretized problem, and an
estimate for the residual one gets when substituting the solution to the
continuous problem into the discretized problem.
This paper focuses on the estimation of the residual.
We show that the residual in the sup-norm is bounded by the sup-norm distance
between the derivative of the solution to the continuous problem and
the derivative of the interpolant of the solution.
By Markov's inequality \cite{Markov1916},
this distance can be bounded in terms of the Lebesgue
constant for the point set and the error in best polynomial approximation.
A classic result of Jackson \cite{jackson} gives an estimate for
the error in best approximation.
The Lebesgue constant that we need to analyze corresponds to the
roots of a Jacobi polynomial on $(-1, +1)$
augmented by either $\tau = +1$ or $\tau = -1$.
The effects of the added endpoints were analyzed by
V\'{e}rtesi in \cite{Vertesi81}.
For either the Gauss quadrature points
on $(-1, +1)$ augmented by $\tau = +1$ or the Radau quadrature points on
$(-1, +1]$ or on $[-1, +1)$, the bound given in \cite[Thm. 2.1]{Vertesi81}
for the Lebesgue constants is $O(\log (N) \sqrt{N})$,
where $N$ is the number of quadrature points.
We sharpen this bound to $O(\sqrt{N})$.
To motivate the relevance of the Lebesgue constant to collocation methods,
let us consider the scalar first-order differential equation
\begin{equation} \label{de}
\dot{x}(\tau)=f\left(x(\tau)\right), \quad \tau \in [-1, +1],
\quad x(-1) = x_0,
\end{equation}
where $f : \mathbb{R}\rightarrow\mathbb{R}$.
In a collocation scheme for (\ref{de}),
the solution $x$ to the differential equation
(\ref{de}) is approximated by a polynomial $x$
that is required to satisfy the differential
equation at the collocation points.
Let us consider a scheme based on collocation at the Gauss quadrature
points $-1 < \tau_1 < \tau_2 < \ldots < \tau_N < +1$, the roots of the
Legendre polynomial of degree $N$.
In addition, we introduce the noncollocated point $\tau_0 = -1$.
The discretized problem is to find $x \in \C{P}_{N}$,
the space of polynomials of degree at most $N$, such that
\begin{equation}\label{collocated}
\dot{x}(\tau_k) = f(x(\tau_k)), \quad 1 \le k \le N,
\quad x(-1) = x_0.
\end{equation}
A polynomial of degree at most $N$ is uniquely specified by
$N+1$ parameters such as its coefficients.
The $N$ collocation equations and the boundary condition in (\ref{collocated})
yield $N+1$ equations for the polynomial.
The convergence of a solution of the collocated problem (\ref{collocated})
to a solution of the continuous problem (\ref{de})
ultimately depends on how accurately a polynomial interpolant of a
continuous solution satisfies the discrete equations (\ref{collocated}).
The Lagrange interpolation polynomials for the point set
$\{\tau_0, \tau_1, \ldots , \tau_N\}$ are defined by
\begin{equation}\label{lag}
L_i(\tau)=\prod_{\substack{j=0\\ j\neq i}}^N\frac{\tau-\tau_j}
{\tau_i-\tau_j}, \quad 0 \le i \le N.
\end{equation}
The interpolant $x^N$ of a solution $x$ to (\ref{de}) is given by
\[
x^N (\tau) = \sum_{j=0}^N x (\tau_j) L_j(\tau).
\]
The residual in (\ref{collocated})
associated with a solution of (\ref{de}) is the vector with components
\begin{equation}\label{res}
r_0 = x^N(-1) - x_0, \quad r_k = \dot{x}^N(\tau_k) - f(x^N(\tau_k)), \quad
1 \le k \le N.
\end{equation}
For the Gauss scheme,
$r_0 = 0$ since $x$ satisfies the boundary condition in (\ref{de}).
The potentially nonzero components of the residual are $r_k$, $1 \le k \le N$.
As we show in Section~\ref{residual}, the residual can be bounded
in terms of a Lebesgue constant and the error in best approximation for $x$ and
its derivative.
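Before turning to the Lebesgue constant, it is instructive to compute the residual (\ref{res}) for a concrete example. The following Python sketch (illustrative only; a Chebyshev basis is used purely for numerical stability) interpolates the exact solution of $\dot{x}=-x$, $x(-1)=1$, on the Gauss points augmented by $\tau_0=-1$ and evaluates the residual at the collocation points; the residual decays rapidly as $N$ grows.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial.legendre import leggauss

def max_residual(N):
    gauss = leggauss(N)[0]                     # zeros of the Legendre polynomial P_N
    nodes = np.concatenate(([-1.0], gauss))    # tau_0 = -1 is not collocated
    x = np.exp(-(nodes + 1.0))                 # exact solution of x' = -x, x(-1) = 1
    coef = C.chebfit(nodes, x, deg=N)          # degree-N interpolant x^N
    dcoef = C.chebder(coef)
    r = C.chebval(gauss, dcoef) + np.exp(-(gauss + 1.0))   # x^N'(tau_k) - f(x(tau_k))
    return np.abs(r).max()

for N in (4, 8, 12, 16):
    print(N, max_residual(N))
# the maximal residual decreases faster than any fixed power of 1/N
\end{verbatim}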
The Lebesgue constant $\Lambda_N$ relative to the point set
$\{\tau_0, \tau_1, \ldots , \tau_N\}$ is defined by
\begin{equation}\label{ln1}
\Lambda_N=\max \left\{
\sum_{j=0}^N\left|L_j(\tau)\right|: \tau\in[-1,1] \right\} .
\end{equation}
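The Lebesgue constant (\ref{ln1}) of any node set can be estimated numerically by maximizing the Lebesgue function over a fine grid. The sketch below is illustrative only (the grid maximum slightly underestimates the true value); it reproduces, for instance, the familiar contrast between equally spaced nodes and Chebyshev nodes.
\begin{verbatim}
import numpy as np

def lebesgue_constant(nodes, samples=20001):
    nodes = np.asarray(nodes, dtype=float)
    t = np.cos(np.linspace(0.0, np.pi, samples))   # grid clustered at the endpoints
    lam = np.zeros_like(t)
    for i, ti in enumerate(nodes):
        others = np.delete(nodes, i)
        lam += np.abs(np.prod((t[:, None] - others) / (ti - others), axis=1))
    return lam.max()

print(lebesgue_constant(np.linspace(-1.0, 1.0, 11)))          # equispaced: about 30
print(lebesgue_constant(np.cos(np.pi * np.arange(11) / 10)))  # Chebyshev extrema: about 2
\end{verbatim}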
The article \cite{Brutman97} of Brutman gives a comprehensive survey on the
analysis of Lebesgue constants, while the book \cite{Mastroianni08}
of Mastroianni and Milovanovi\'{c} covers more recent results.
The paper is organized as follows.
In Section~\ref{residual}, we show how the Lebesgue constant enters
into the residual associated with the discretized problem (\ref{collocated}).
Section~\ref{szego} summarizes results of Szeg\H{o} used in the analysis.
Section~\ref{gauss+} analyzes the Lebesgue constant for the
Gauss quadrature points augmented by $\tau = -1$,
while Section~\ref{radau+} analyzes Radau quadrature points.
Finally, Section~\ref{tight} examines the tightness of the estimates
for the Lebesgue constants.
{\bf Notation.}
$\mathcal{P}_N$ denotes the space of polynomials of degree at most $N$
and $\|\cdot\|$ denotes the sup-norm on the interval $[-1, +1]$.
The Jacobi polynomial $P_N^{(\alpha, \beta)}(\tau)$,
$N \ge 1$, is an $N$-th degree polynomial, and for fixed $\alpha > -1$ and
$\beta > -1$, the polynomials are orthogonal on the interval $[-1, +1]$
relative to the weight function $(1-\tau)^\alpha(1+\tau)^\beta$.
$P_N$ stands for the Jacobi polynomial $P_N^{(0,0)}$, or equivalently,
the Legendre polynomial of degree $N$.
\section{Analysis of the residual}
\label{residual}
As discussed in the introduction,
a key step in the convergence analysis of collocation schemes
is the estimation of the residual defined in (\ref{res}).
The convergence of a discrete solution to the
solution of the continuous problem ultimately depends on
how quickly the residual approaches 0 as $N$ tends to infinity;
for example, see Theorem~3.1 in \cite{DontchevHager97},
Proposition~5.1 in \cite{Hager99c}, or Theorem~2.1 in \cite{Hager02b}.
Since a solution $x$ of (\ref{de}) satisfies the differential equation
on the interval $[-1, +1]$, it follows that
$\dot{x}(\tau_k) = f(x(\tau_k))$, $1 \le k \le N$.
Hence, the potentially nonzero components of the residual can be expressed
$r_k = \dot{x}^N (\tau_k) - \dot{x}(\tau_k)$, $1 \le k \le N$.
In other words, the size of the residual depends on the difference between
the derivative of the interpolating polynomial at the collocation
points and the derivative of the continuous solution at the collocation points.
Hence, let us consider the general problem of estimating the
difference between the derivative of an interpolating polynomial on the
point set $\tau_0 < \tau_1 < \ldots < \tau_N$ contained in $[-1, +1]$
and the derivative of the original function.
\smallskip
\begin{proposition}\label{L1}
If $x$ is continuously differentiable on $[-1, +1]$, then
\begin{eqnarray}
\left\|\dot{x}-\dot{x}^N\right\|
&\le& \left(1+2N^2\right)
\inf_{q \in \mathcal{P}_{N}}\left\|\dot{x}-\dot{q}\right\| \nonumber \\
&& \quad + N^2(1+\Lambda_N)
\inf_{p \in \mathcal{P}_{N}}\left\|x-p\right\|
\label{diffy}
\end{eqnarray}
where $x^N \in \C{P}_N$ satisfies $x^N(\tau_k) = x(\tau_k)$,
$0 \le k \le N$, and $\Lambda_N$ is the Lebesgue constant relative to
the point set $\{ \tau_0, \tau_1, \ldots, \tau_N \}$.
\end{proposition}
\begin{proof}
Given $p \in \mathcal{P}_N$, the triangle inequality gives
\begin{equation}
\left\|\dot{x}-\dot{x}^N\right\|\leq \|\dot{x}-\dot{p}\|+\left\|\dot{p}
-\dot{x}^N\right\|.\label{dify}
\end{equation}
By Markov's inequality \cite{Markov1916}, we have
\begin{eqnarray}
\left\|\dot{p}-\dot{x}^N\right\|
&\leq& N^2 \left\|p-x^N\right\|=N^2 \left\|\sum_{i=0}^N(p(\tau_i)-x(\tau_i))
L_i(\tau)\right\|\nonumber \\
&\leq & N^2 \Lambda_N\max_{0\leq i\leq N}|p(\tau_i)-x(\tau_i)|
\le N^2\Lambda_N \|p-x\|. \label{qminusy}
\end{eqnarray}
Let $q \in \C{P}_{N}$ with $q(-1) = x(-1)$.
Again, by the triangle and Markov inequalities, we have
\begin{eqnarray}
\|\dot{x}-\dot{p}\| &\le& \|\dot{x} - \dot{q} \| + \|\dot{q} - \dot{p}\| \le
\|\dot{x} - \dot{q} \| + N^2 \|q - p\| \nonumber \\
&\le&
\|\dot{x} - \dot{q} \| + N^2 (\|q - x\| + \|x - p\|). \label{h71}
\end{eqnarray}
By the fundamental theorem of calculus,
\begin{equation}\notag
\left|q(t)-x(t)\right|=\left|\int_{-1}^{t}
\left(\dot{q}(s)-\dot{x}(s)\right)ds\right|\leq \int_{-1}^{t}
\left|\dot{q}(s)-\dot{x}(s)\right|ds\leq 2\|\dot{q}-\dot{x}\|.
\end{equation}
We combine this with (\ref{h71}) to obtain
\begin{equation}\label{h72}
\|\dot{x}-\dot{p}\| \le (1 + 2N^2) \|\dot{x} - \dot{q} \| + N^2 \|x - p\| .
\end{equation}
To complete the proof, combine (\ref{dify}), (\ref{qminusy}), and (\ref{h72})
and exploit the fact that
\[
\left\{\dot{q}: q(-1) = x(-1), \;\; q \in \C{P}_N \right\} =
\left\{\dot{q}: q \in \C{P}_N \right\}.
\]
\end{proof}
An estimate for the right side of \eqref{diffy} follows from results
on best uniform approximation by polynomials, which
originate from work of Jackson \cite{jackson}.
For example, the following result employs an estimate from Rivlin's
book \cite{Rivlin1969}.
\begin{lemma}\label{L2}
If $x$ has $m$ derivatives on $[-1, +1]$ and $N > m$, then
\begin{equation}\label{jackson}
\inf_{p\in \mathcal{P}_N}\|x-p\|\leq
\left( \frac{12}{m+1} \right) \left( \frac{6e}{N} \right)^m
\|x^{(m)}\|,
\end{equation}
where $x^{(m)}$ denotes the $m$-th derivative of $x$.
\end{lemma}
\begin{proof}
It is shown in \cite[Thm. 1.5]{Rivlin1969} that
\begin{equation}\label{yp}
\inf_{p\in \mathcal{P}_N}\left\|x-p\right\|\leq
\left( \frac{6}{m+1} \right) \left( \frac{6e}{N} \right)^m
\omega_m \left(\frac{1}{N-m}\right),
\end{equation}
where $\omega_m$ is the modulus of continuity of $x^{(m)}$.
By the definition of the modulus of continuity, we have
\[
\omega_m\left(\frac{1}{N-m}\right)=\sup\left\{\left|x^{(m)}(\tau_1)
-x^{(m)}(\tau_2)\right|: {\tau_1, \tau_2 \in[-1,1], |\tau_1-\tau_2|
\leq
\frac{1}{N-m}}\right\}.
\]
Since
\[
|x^{(m)}(\tau_1)-x^{(m)}(\tau_2) |\leq 2
\|x^{(m)}\| ,
\]
(\ref{jackson}) follows from (\ref{yp}).
\end{proof}
If $\Lambda_N = O(N)$ and $m \ge 4$, then
Proposition~\ref{L1} and Lemma~\ref{L2} imply that the components
of the residual approach zero as $N$ tends to infinity.
Moreover, if $x$ is infinitely differentiable and
there exists a constant $c$ such that $\|x^{(m)}\| \le c^m$,
then we take $m = N-1$ in Lemma~\ref{L2} to obtain
\[
\inf_{p\in \mathcal{P}_N}\|x-p\|\leq
\left( \frac{2}{ec} \right) \left( \frac{6ec}{N} \right)^N.
\]
Hence, the convergence is extremely fast due to the $1/N^N$ factor.
\section{Some results of Szeg\H{o}}
\label{szego}
We now summarize several results developed by Szeg\H{o} in \cite{Szego1939}
for Jacobi polynomials that are used in the analysis.
The page and equation numbers that follow refer to the 2003 edition
of Szeg\H{o}'s book published by the American Mathematical Society.
First, at the bottom of page 338, Szeg\H{o} makes the following observation:
\smallskip
\begin{theorem}\label{jacobi}
The Lebesgue constant for the roots of the Jacobi polynomial
$P_N^{(\alpha, \beta)}(\tau)$ is $O(N^{0.5+\gamma})$
if $\gamma := \max(\alpha, \beta) > -1/2$,
while it is $O(\log N)$ if $\gamma \le-1/2$.
\end{theorem}
\smallskip
For the Gauss quadrature points, $\alpha = \beta = 0$, $\gamma = 0$,
and $\Lambda_N = O(\sqrt{N})$.
The result that we state as Theorem~\ref{jacobi}
is based on a number of additional properties of Jacobi polynomials
which are useful in our analysis.
The following identity is a direct consequence of the Rodrigues formula
\cite[p. 67]{Szego1939} for $P_N^{(\alpha,\beta)}$.
\smallskip
\begin{proposition}\label{flip}
For any $\alpha$ and $\beta \in \mathbb{R}$, we have
\begin{equation}\label{eq8}
P_N^{(\alpha, \beta)}(\tau)=(-1)^NP_N^{(\beta, \alpha)}(-\tau)
\quad \mbox{for all } \tau \in [-1, +1].
\end{equation}
\end{proposition}
\smallskip
The following proposition provides some bounds for Jacobi polynomials.
\smallskip
\begin{proposition}\label{pro1}
For any $\alpha$ and $\beta \in \mathbb{R}$
and any fixed constant $c_1 > 0$,
we have
\[
P_N^{(\alpha,\beta)}(\cos\theta)=\left\{
\begin{array}{clcccl}
O\left(N^\alpha\right) &\mbox{if } \theta \in
[&0&,& c_1N^{-1} &],\\[.05in]
\theta^{-\alpha-0.5}O\left(N^{-1/2}\right)
&\mbox{if } \theta \in [ &c_1N^{-1} &, & \pi/2 &],\\[.05in]
(\pi-\theta)^{-\beta-0.5}O\left(N^{-1/2}\right)
&\mbox{if } \theta \in [&\pi/2&,& \pi- c_1N^{-1}&],\\[.05in]
O\left(N^\beta\right) &\mbox{if } \theta \in [&\pi- c_1N^{-1}&,& \pi&].
\end{array}
\right.
\]
\end{proposition}
\smallskip
\begin{proof}
The bounds for $\theta \in [0, c_1N^{-1}]$ and for
$\theta \in [c_1N^{-1}, \pi/2]$ appear in \cite[(7.32.5)]{Szego1939}.
If $\theta \in \left[\pi/2, \pi\right]$, then
$\pi-\theta \in \left[0, \pi/2 \right]$ and by \eqref{eq8},
\begin{equation}\label{h1}
P_N^{(\alpha, \beta)}(\cos \theta)=P_N^{(\alpha, \beta)}(-\cos(\pi- \theta))
=(-1)^NP_N^{(\beta, \alpha)}(\cos(\pi- \theta)).
\end{equation}
Hence, for $\theta \in [\pi/2, \pi]$,
the first two estimates in the proposition applied to the right
side of (\ref{h1}) yield the last two estimates.
\end{proof}
The next proposition provides an estimate for the derivative of a
Jacobi polynomial at a zero.
\smallskip
\begin{proposition}\label{pro2}
If $\alpha>-1$ and $\beta>-1$, then there exist constants
$\gamma_2 \ge \gamma_1 > 0$, depending only on $\alpha$ and $\beta$, such that
\[
\gamma_1 i^{-\beta - 1.5} N^{\beta + 2} \le
\left|\dot{P}_N^{(\alpha, \beta)}(\tau_i)\right| \le
\gamma_2 i^{-\beta - 1.5} N^{\beta + 2}
\]
whenever $\tau_i \le 0$ where
$\tau_1 < \tau_2 < \ldots < \tau_N$ are the zeros of $P_N^{(\alpha, \beta)}$
(the smallest zero is indexed first).
Moreover, if $\theta_i \in [0, \pi]$ is defined by
$\cos \theta_i = \tau_i$, then there exist constants
$\gamma_4 \ge \gamma_3 > 0$, depending only on $\alpha$ and $\beta$, such that
\begin{equation}\label{h9}
\gamma_3 \sqrt{N} (\pi - \theta_i)^{-\beta - 1.5} \le
\left|\dot{P}_N^{(\alpha, \beta)}(\tau_i)\right| \le
\gamma_4 \sqrt{N} (\pi -\theta_i)^{-\beta - 1.5}
\end{equation}
whenever $\theta_i \in [\pi/2, \pi]$.
\end{proposition}
\smallskip
\begin{proof}
In \cite[(8.9.2)]{Szego1939}, it is shown that there exist
$\gamma_2 \ge \gamma_1 > 0$, depending only on $\alpha$ and $\beta$, such that
\begin{equation}\label{h7}
\gamma_1 i^{-\beta - 1.5} N^{\beta + 2} \le
\left|\dot{P}_N^{(\beta, \alpha)}(\sigma_i)\right| \le
\gamma_2 i^{-\beta - 1.5} N^{\beta + 2}
\end{equation}
whenever $\sigma_i \ge 0$ where
$\sigma_1 > \sigma_2 > \ldots > \sigma_N$ are the zeros of
$P_N^{(\beta, \alpha)}$ (the largest zero is indexed first).
By Proposition~\ref{flip}, $\tau_i$ is a zero of $P_N^{(\alpha,\beta)}$
if and only if $-\tau_i$ is a zero of $P_N^{(\beta,\alpha)}$.
Hence, the zeros of $P_N^{(\beta,\alpha)}$ are
$-\tau_1 > -\tau_{2} > \ldots > -\tau_N$.
Moreover,
\begin{equation}\label{h7.5}
\dot{P}_N^{(\alpha,\beta)}(\tau) = \pm
\dot{P}_N^{(\beta,\alpha)}(-\tau).
\end{equation}
The bound given in the proposition for
$|\dot{P}_N^{(\alpha,\beta)}(\tau_i)|$ with $\tau_i \le 0$ is exactly the
bound (\ref{h7}) for
$|\dot{P}_N^{(\beta,\alpha)}(\sigma_i)|$ with $\sigma_i \ge 0$.
It is shown in \cite[(8.9.7)]{Szego1939}, that there exist constants
$\gamma_4 \ge \gamma_3 > 0$, depending only on $\alpha$ and $\beta$, such that
\begin{equation}\label{h8}
\gamma_3 \sqrt{N} \phi_i^{-\beta - 1.5} \le
\left|\dot{P}_N^{(\beta, \alpha)}(\sigma_i)\right| \le
\gamma_4 \sqrt{N} \phi_i^{-\beta - 1.5}
\end{equation}
whenever $\phi_i \in [0, \pi/2]$ where $\cos \phi_i = \sigma_i$.
Since $\cos \phi_i = \sigma_i = -\tau_i = \cos (\pi - \theta_i)$,
it follows that $\phi_i = \pi - \theta_i$, and
(\ref{h7.5}) and (\ref{h8}) yield (\ref{h9}).
\end{proof}
\section{Lebesgue constant for Gauss quadrature points augmented by $-1$}
\label{gauss+}
In this section we estimate the Lebesgue constant for
the Gauss quadrature points augmented by $\tau_0 = -1$.
Due to the symmetry of the Gauss quadrature points, the same
estimate holds when the Gauss quadrature points are augmented by $+1$
instead of $-1$.
The Gauss quadrature points are the zeros of the Jacobi polynomial
$P_N^{(0, 0)}(\tau)$, which is abbreviated as $P_N(\tau)$.
By Theorem~\ref{jacobi}, the Lebesgue constant for the Gauss
quadrature points themselves is $O(\sqrt{N})$.
The effect of adding the point $\tau_0 = -1$ to the Gauss quadrature
points is not immediately clear due to the new factor $(1 + \tau_i)$
in the denominator of the Lagrange polynomials;
this factor can approach 0 since roots of $P_N$
approach $-1$ as $N$ tends to infinity.
Nonetheless, with a careful grouping of terms,
Szeg\H{o}'s bound in Theorem~\ref{jacobi}
for the Gauss quadrature points can be extended to handle the new
point $\tau_0 = -1$.
\smallskip
\begin{theorem}\label{gausstheom}
The Lebesgue constant for the point set consisting of the Gauss
quadrature points $-1 < \tau_1 < \tau_2 < \ldots < \tau_N < +1$
$($the zeros of $P_N)$ augmented with $\tau_0 = -1$ is $O(\sqrt{N})$.
\end{theorem}
\smallskip
\begin{proof}
Define
\[l(\tau)=(\tau-\tau_1)(\tau-\tau_2)\dots (\tau-\tau_N),
\quad \mbox{and}\quad L(\tau)=(\tau+1)l(\tau).
\]
The derivative of $L(\tau)$ at $\tau_i$ is
\[
\dot{L}(\tau_i)=l(\tau_i)+(\tau_i+1)\dot{l}(\tau_i)=\left\{
\begin{array}{cl}\displaystyle
l(-1), & i = 0, \\[.1in]
(\tau_i+1)\dot{l}(\tau_i), &i> 0.
\end{array}
\right.
\]
Hence, the Lagrange polynomials $L_i(\tau)$ associated with the
point set $\{\tau_0 , \tau_1, \ldots, \tau_N\}$ can be expressed as
\begin{equation}\label{Li}
L_i(\tau)=\frac{L(\tau)}{\dot{L}(\tau_i)(\tau-\tau_i)}=\left\{
\begin{array}{cl}
l(\tau)/l(-1), &i=0, \\[.1in]
\displaystyle\frac{L(\tau)}{(\tau_i+1)\dot{l}(\tau_i)(\tau-\tau_i)},
& i> 0.
\end{array}
\right.
\end{equation}
Since $P_N$ is a multiple of $l$ (it has the same zeros), it follows that
\[
L_i(\tau)=\left\{
\begin{array}{cl}
P_N(\tau)/P_N(-1), &i=0,\\[.1in]
\displaystyle
\frac{(\tau+1)P_N(\tau)}{(\tau_i+1)\dot{P}_N(\tau_i)(\tau-\tau_i)},
&i > 0.
\end{array}
\right.
\]
By \cite[(7.21.1)]{Szego1939},
$|P_N(\tau)| \le 1$ for all $\tau \in [-1, +1]$, and by
\cite[(4.1.4)]{Szego1939}, $P_N(-1) = (-1)^N$, so $|P_N(-1)| = 1$.
We conclude that $|L_0 (\tau)| \le 1$ for all $\tau \in [-1, +1]$.
Hence, the proof is complete if
\begin{equation}\label{h3}
\max \left\{ \sum_{i=1}^N|L_i(\tau)| : \tau \in[-1, 1] \right\} = O(\sqrt{N}) .
\end{equation}
For any $\tau \in [-1, +1]$, the integers $i \in [1, N]$ are partitioned
into the four disjoint sets
\begin{eqnarray*}
\C{I}_1 &=& \{ i \in [1,N]: \tau_i \ge 0 \}, \\
\C{I}_2 &=& \{ i \in [1,N]: -1 < \tau_i < 0, \; \tau_i > \tau \}, \\
\C{I}_3 &=& \{ i \in [1,N]: -1 < \tau_i < 0, \; \tau_i \le \tau, \;
\tau - \tau_i \le \tau_i + 1 \}, \\
\C{I}_4 &=& \{ i \in [1,N]: -1 < \tau_i < 0, \; \tau_i \le \tau, \;
\tau - \tau_i > \tau_i + 1 \}.
\end{eqnarray*}
Let $\C{I}_{123}$ denote $\C{I}_1 \cup \C{I}_2 \cup \C{I}_3$.
Observe that for any $i \in \C{I}_{123}$ and $\tau \in [-1, +1]$,
$(\tau+1)/(\tau_i + 1) \le 2$.
Consequently, for all $i \in \C{I}_{123}$,
\[
|L_i (\tau)| =
\left| \frac{(\tau+1)P_N(\tau)}{(\tau_i+1)\dot{P}_N(\tau_i)(\tau-\tau_i)}
\right| \le
\frac{2|P_N(\tau)|}{|\dot{P}_N(\tau_i)(\tau-\tau_i)|} .
\]
This bound together with Theorem~\ref{jacobi} imply that
\[
\sum_{i \in \C{I}_{123}} |L_i(\tau)| \le
\sum_{i \in \C{I}_{123}}
\frac{2|P_N(\tau)|}{|\dot{P}_N(\tau_i)(\tau-\tau_i)|} \le
2 \sum_{i=1}^N
\frac{|P_N(\tau)|}{|\dot{P}_N(\tau_i)(\tau-\tau_i)|} = O(\sqrt{N})
\]
since the terms in the final sum are the Lagrange
polynomials for the Gauss quadrature points.
To complete the proof, we need to analyze the terms in (\ref{h3})
associated with the indices in $\C{I}_4$.
These terms are more difficult to analyze since $\tau_i + 1$
in the denominator of $L_i$ could approach 0 while $\tau +1$ in
the numerator remains bounded away from 0.
For $i \in \C{I}_4$, we have
\[
\tau + 1 = (\tau - \tau_i) + (\tau_i + 1) \le 2 (\tau - \tau_i)
\]
since $\tau - \tau_i > \tau_i + 1$.
Hence,
\[
|L_i (\tau)| \le \frac{2|P_N(\tau)|}{|(\tau_i + 1) \dot{P}_N (\tau_i)|} \le
\frac{2}{|(\tau_i + 1) \dot{P}_N (\tau_i)|}
\]
since $|P_N(\tau)| \le 1$ for all $\tau \in [-1, +1]$
by \cite[(7.21.1)]{Szego1939}.
It follows that
\begin{equation}\label{h6}
\sum_{i \in \C{I}_{4}} |L_i(\tau)| \le
\sum_{i \in \C{I}_{4}}
\frac{2}{|(\tau_i + 1) \dot{P}_N (\tau_i)|} \le
\sum_{-1 < \tau_i < 0 }
\frac{2}{|(\tau_i + 1) \dot{P}_N (\tau_i)|} .
\end{equation}
Given $\theta \in [\pi/2, \pi]$, define $\phi = \pi - \theta$.
Observe that
\[
\left|\frac{\phi^2}{1+\cos \theta}\right|
=\frac{\phi^2}{2\cos^2(\theta/2)}
=\frac{2(\phi/2)^2}{\sin^2 (\phi/2)}
\leq \max_{x\in [0, \pi/4]}
\frac{2x^2}{\sin^2 x} =\frac{\pi^2}{4}.
\]
Hence, for $\theta \in [\pi/2, \pi]$, we have
\begin{equation}\label{h5}
1 + \cos \theta \ge \left( \frac{4}{\pi^2} \right) \phi^2 =
\frac{4}{\pi^2} (\pi - \theta)^2 .
\end{equation}
By the bounds \cite[(6.21.5)]{Szego1939} for the roots of the
Jacobi polynomial $P_N^{(\alpha, \beta)}$ when
$\alpha$ and $\beta \in [-0.5, +0.5]$, it follows that
\begin{equation}\label{*}
\left(\frac{2i-1}{2N+1}\right) \pi \leq \pi-\theta_i
\leq \left(\frac{2i}{2N+1}\right) \pi, \quad
1 \le i \le N,
\end{equation}
where $\cos \theta_i = \tau_i$.
This implies the lower bound
\begin{equation}\label{h4}
\pi - \theta_i \ge
\left(\frac{2i-1}{2N+1}\right) \pi \ge
\left( \frac{i}{3N} \right)\pi > \frac{i}{N} .
\end{equation}
We combine (\ref{h5}) and (\ref{h4}) to obtain
\begin{equation}\label{eq3}
1+\tau_i \ge \frac{4}{\pi^2}(\pi-\theta_i)^2\geq\frac{4}{\pi^2}
\left(\frac{i}{N}\right)^2.
\end{equation}
By Proposition~\ref{pro2},
\[
|\dot{P}_N(\cos \theta_i )| \ge \gamma_1 i^{-1.5} N^2.
\]
This lower bound for the derivative and the lower bound (\ref{eq3}) for
the root imply that
\[
\frac{1}{(1+\tau_i)|\dot{P}_N(\tau_{i})|} \le
\left( \frac{\pi^2}{4 \gamma_1} \right) i^{-1/2} .
\]
Hence, we obtain the following bound for the $\C{I}_4$ sum in (\ref{h6}):
\[
\sum_{-1<\tau_i<0}\frac{2}{(1+\tau_i)|\dot{P}_N(\tau_{i})|} \le
\left( \frac{\pi^2}{2 \gamma_1} \right)
\sum_{i = 1}^N i^{-1/2} \le
\left( \frac{\pi^2}{2 \gamma_1} \right)
\int_0^N i^{-1/2} di = O(\sqrt{N}) .
\]
This bound inserted in (\ref{h6}) completes the proof.
\end{proof}
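A numerical illustration of Theorem \ref{gausstheom} is given by the following sketch: the ratios $\Lambda_N/\sqrt{N}$ computed below remain bounded as $N$ grows. The maximum of the Lebesgue function is taken over a fine grid, so the printed values slightly underestimate $\Lambda_N$; the script is illustrative only.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss

def lebesgue_constant(nodes, samples=50001):
    t = np.cos(np.linspace(0.0, np.pi, samples))   # dense grid, clustered at +-1
    lam = np.zeros_like(t)
    for i, ti in enumerate(nodes):
        others = np.delete(nodes, i)
        lam += np.abs(np.prod((t[:, None] - others) / (ti - others), axis=1))
    return lam.max()

for N in (10, 20, 40, 80):
    nodes = np.concatenate(([-1.0], leggauss(N)[0]))   # Gauss points plus tau_0 = -1
    print(N, lebesgue_constant(nodes) / np.sqrt(N))    # ratios stay bounded
\end{verbatim}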
\section{Lebesgue constants for the Radau quadrature points}
\label{radau+}
Next, we estimate the Lebesgue constant for the Radau quadrature scheme.
There are two versions of the Radau quadrature points depending on whether
$\tau_1 = -1$ or $\tau_N = +1$.
Since these two schemes have quadrature points that are the
negatives of one another, the Lebesgue constants are the same.
The analysis is carried out for the case $\tau_N = +1$.
In this case, the Radau quadrature points are the $N-1$ roots of
$P_{N-1}^{(1,0)}$ augmented by $\tau_N = 1$.
Szeg\H{o} shows that the Lebesgue constant for the roots of
$P_{N-1}^{(1,0)}$ is $O(N^{3/2})$.
We show that when the quadrature point $\tau_N = 1$ is included,
the Lebesgue constant drops to $O(\sqrt{N})$.
The analysis requires an estimate for the location of the zeros of
$P_{N-1}^{(1,0)}$.
Our estimate is based on some relatively recent results on
interlacing properties for the zeros of Jacobi polynomials obtained by
Driver, Jordaan, and Mbuyi in \cite{DriverJordaanMbuyi2008}.
Let $\tau_i'$ and $\tau_i''$, $i\geq 1$, be zeros of
$P_{N-1}$ and $P_{N}$ respectively, arranged in increasing order.
Applying \cite[Thm. 2.2]{DriverJordaanMbuyi2008}, we have
\[
\tau_i'' < \tau_i < \tau_{i}' ,
\]
$i = 1, 2, \ldots, N-1$, where
$-1 < \tau_1 < \tau_2 < \ldots < \tau_{N-1} < +1$ are the zeros of
$P_{N-1}^{(1,0)}$.
Let $\theta_i \in [0, \pi]$ be defined by $\cos \theta_i = \tau_i$.
By the estimate (\ref{*}) for the zeros of $P_N$, it follows that
the zeros of $P_{N-1}^{(1,0)}$ have the property that
\begin{equation}\label{zeros}
\left( \frac{2i-1}{2N-1} \right) \pi < \theta_{N-i} <
\left( \frac{2(i+1)}{2N+1} \right) \pi, \quad
1 \le i \le N-1.
\end{equation}
When $i$ is replaced by $N-i$, these bounds become
\begin{equation}\label{zeros*}
\left( \frac{2i-1}{2N+1} \right) \pi < \pi - \theta_{i} <
\left( \frac{2i}{2N-1} \right) \pi, \quad
1 \le i \le N-1.
\end{equation}
Together, (\ref{zeros}) and (\ref{zeros*}) imply that
\begin{equation}\label{phibounds}
\pi - \theta_{i} > i/N \quad \mbox{and} \quad \theta_{N-i} > i/N,
\quad 1 \le i \le N-1;
\end{equation}
moreover, taking into account both the upper and lower bounds, we have
\begin{eqnarray}
\theta_i - \theta_{i+1} &<& \left( \frac{4(i+N)+2N+1}{4N^2 - 1}\right) \pi
\le \left( \frac{10N - 7}{4N^2 - 1} \right) \pi \nonumber \\
&<& \left( \frac{5(2N - 1)}{4N^2 - 1} \right) \pi < \frac{2.5\pi}{N},
\quad 1 \le i \le N-2.
\label{separation}
\end{eqnarray}
Thus, the interlacing properties for the zeros lead to explicit
bounds for the separation of the zeros; for comparison,
Theorem~8.9.1 in \cite{Szego1939} yields $\theta_i - \theta_{i+1} = O(1)/N$,
while (\ref{separation}) yields an explicit constant $2.5\pi$.
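The estimates (\ref{zeros}), (\ref{zeros*}), (\ref{phibounds}), and (\ref{separation})
are easily checked by machine.
The following short {\sc Python} sketch (an illustration only, not used in any of the
proofs; the sampled values of $N$ are arbitrary) confirms them using the Jacobi roots
from SciPy.
\begin{verbatim}
# Numerical sanity check of the bounds for the zeros of P_{N-1}^{(1,0)};
# an illustration only, not part of the argument.
import numpy as np
from scipy.special import roots_jacobi

for N in (10, 40, 160):
    tau = np.sort(roots_jacobi(N - 1, 1.0, 0.0)[0])  # zeros, increasing order
    theta = np.arccos(tau)                           # theta_1 > ... > theta_{N-1}
    i = np.arange(1, N)
    assert np.all((2*i - 1) / (2*N + 1) * np.pi < np.pi - theta)  # (zeros*), lower
    assert np.all(np.pi - theta < (2*i) / (2*N - 1) * np.pi)      # (zeros*), upper
    assert np.all(np.pi - theta > i / N)                          # (phibounds)
    gaps = theta[:-1] - theta[1:]                    # theta_i - theta_{i+1}
    assert np.all(gaps < 2.5 * np.pi / N)            # (separation)
    print(N, "largest gap times N/pi:", round(gaps.max() * N / np.pi, 3))
\end{verbatim}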
These estimates for the zeros of $P_{N-1}^{(1,0)}$ are used
to derive the following result.
\smallskip
\begin{theorem}\label{radau}
The Lebesgue constant for the Radau quadrature points
\[
-1 < \tau_1 < \tau_2 < \ldots < \tau_N = 1
\]
$($the zeros of $P_{N-1}^{(1,0)}$ augmented by $\tau_N = +1)$ is $O(\sqrt{N})$.
\end{theorem}
\smallskip
\begin{proof}
The Lagrange interpolating polynomials $R_i$, $1 \le i \le N$,
associated with the Radau quadrature points are given by
\[
R_i(\tau)= \left( \frac{1-\tau}{1-\tau_i} \right)
\prod_{\substack{j=1\\ j\neq i}}^{N-1}\frac{\tau-\tau_j}
{\tau_i-\tau_j}, \quad 1 \le i \le N-1, \quad
R_N(\tau) =
\prod_{\substack{j=1}}^{N-1}\frac{\tau-\tau_j}
{1-\tau_j}.
\]
Similar to (\ref{Li}), the $R_i$ can be expressed as
\begin{equation}\label{Ri}
R_i(\tau)=\left\{
\begin{array}{cl}
\displaystyle\frac{(1-\tau)P_{N-1}^{(1,0)}(\tau)}
{(1-\tau_i)\dot{P}_{N-1}^{(1,0)}(\tau_i)(\tau-\tau_i)},
&i < N, \\[.20in]
\displaystyle{\frac{P_{N-1}^{(1,0)}(\tau)}{P_{N-1}^{(1,0)}(1)}},
&i=N.\\
\end{array}
\right.
\end{equation}
By \cite[(4.1.1)]{Szego1939} and \cite[(7.32.2)]{Szego1939}, we have
\begin{equation}\label{h22}
P_{N-1}^{(1,0)}(1)= N \quad \mbox{and} \quad
|P_{N-1}^{(1,0)}(\tau)|\le N \mbox{ for all } \tau \in [-1, +1] .
\end{equation}
We conclude that $|R_N (\tau)| \le 1$ for all $\tau \in [-1, +1]$.
Hence, the proof is complete if
\begin{equation}\label{h10}
\max \left\{ \sum_{i=1}^{N-1}|R_i(\tau)| : \tau \in [-1, +1] \right\}
=O(\sqrt{N}) .
\end{equation}
Let $\delta > 0$ be a small constant.
Technically, any $\delta$ satisfying $0 < \delta < 1/2$ is
small enough for the analysis.
Szeg\H{o} establishes the following bounds when analyzing the
Lebesgue constants associated with the roots of Jacobi polynomials:
\begin{equation}\label{radaulebesgue}
\sum_{i = 1}^N \left| \frac{P_{N}^{(1,0)}(\tau)}
{\dot{P}_{N}^{(1,0)}(\tau_i)(\tau-\tau_i)} \right| =
\left\{
\begin{array}{ll}
O(\sqrt{N}) & \mbox{if } \tau \in [-1, \delta-1], \\
O(\log N) & \mbox{if } \tau \in [\delta-1 , 1 - \delta], \\
O(N^{3/2}) & \mbox{if } \tau \in [1 - \delta, 1].
\end{array} \right.
\end{equation}
Szeg\H{o} considers the general Jacobi polynomials
$P_N^{(\alpha, \beta)}$ on pages 336--338 of \cite{Szego1939},
while here we only state the results
corresponding to $\alpha = 1$ and $\beta = 0$.
We first show that (\ref{h10}) holds when $\tau \in [-1, 1-\delta]$.
Observe that $(1 - \tau)/(1-\tau_i) \le 4/\delta$
when $\tau_i \le 1 - \delta/2$ and $\tau \in [-1, +1]$.
It follows from (\ref{radaulebesgue}) that
\begin{eqnarray}
\sum_{\tau_i \le 1-\delta/2} |R_i(\tau)| &\le& \left( \frac{4}{\delta} \right)
\sum_{\tau_i \le 1-\delta/2}
\left| \frac{P_{N-1}^{(1,0)}(\tau)}
{\dot{P}_{N-1}^{(1,0)}(\tau_i)(\tau-\tau_i)} \right| \nonumber \\
&=& \left\{ \begin{array}{ll}
O(\sqrt{N}), & \tau \in [-1, \delta-1], \\
O(\log N), & \tau \in [\delta-1, 1 - \delta] .
\end{array} \right. \label{h11}
\end{eqnarray}
When $\tau_i > 1-\delta/2$ and $\tau \in [-1, +1-\delta]$, we have
$|\tau - \tau_i| \ge \delta/2$; hence,
\begin{equation}\label{h12}
\sum_{1 > \tau_i > 1-\delta/2} |R_i(\tau)| \le \left( \frac{4}{\delta} \right)
\sum_{1 > \tau_i > 1-\delta/2}
\left| \frac{P_{N-1}^{(1,0)}(\tau)}
{(\tau_i - 1)\dot{P}_{N-1}^{(1,0)}(\tau_i)} \right| .
\end{equation}
We have the following bounds for the factors on the right side of (\ref{h12}):
\begin{itemize}
\item[(a)]
By Proposition~\ref{pro1},
$|P_{N-1}^{(1,0)} (\tau)| = O(1)$ if $\tau \in [-1, \delta - 1]$ and
$|P_{N-1}^{(1,0)} (\tau)| = O(N^{-1/2})$ if $\tau \in [\delta - 1, 1-\delta]$.
\item[(b)]
By (\ref{h8}),
$|\dot{P}_{N-1}^{(1, 0)}(\tau_i)| \ge
\gamma_3 \theta_i^{-5/2} \sqrt{N-1}$, where $\cos \theta_i = \tau_i \ge 0$.
\item[(c)]
By a Taylor expansion around $\theta = 0$,
\begin{equation}\label{1-cos}
\theta^2/4 \le 1 - \cos \theta \le \theta^2/2, \quad \theta \in [0, \pi/2].
\end{equation}
\end{itemize}
By (b) and the lower bound in (c) at $\theta = \theta_i$, we have
\begin{equation}\label{h100}
(1-\tau_i) |\dot{P}_{N-1}^{(1,0)}(\tau_i)| \ge
0.25 \gamma_3 \theta_i^{-1/2} \sqrt{N-1} .
\end{equation}
We combine this with (a) and (\ref{h12}) to obtain
\[
\sum_{1 > \tau_i > 1-\delta/2} |R_i(\tau)| =
\left\{ \begin{array}{lll}
O(N^{-1/2})\displaystyle\sum_{i = 1}^N \sqrt{\theta_i} &= O(\sqrt{N}),
& \tau \in [-1, \delta - 1], \\
O(N^{-1})\displaystyle\sum_{i = 1}^N \sqrt{\theta_i} &= O(1),
& \tau \in [\delta-1, 1-\delta] ,
\end{array} \right.
\]
since $\theta_i \in [0, \pi]$.
Combined with (\ref{h11}), this establishes the bound in (\ref{h10}) for all $\tau \in [-1, 1-\delta]$.
To complete the proof of (\ref{h10}), we need to consider
$\tau \in (1-\delta, 1]$.
The analysis becomes more complex since
Szeg\H{o}'s estimate (\ref{radaulebesgue}) is $O(N^{3/2})$ in this region,
while we are trying to establish a much smaller bound in (\ref{h10});
in fact, the bound in this region is $O(\log N)$ as we will show.
For the numerator of $R_i (\tau)$ and
$\tau \in (1-\delta, 1]$,
Proposition~\ref{pro1} and (\ref{1-cos}) yield
\begin{eqnarray}
(1-\tau) |P_{N-1}^{(1,0)} (\tau)| &=&
(1-\cos \theta)|P_{N-1}^{(1,0)}(\cos \theta)| =
\left\{ \begin{array}{ll}
\theta^2 O(N), & \theta \in [0, 1/N], \\
\theta^{1/2} O(N^{-1/2}), & \theta \in [1/N, \pi/2],
\end{array} \right. \nonumber \\
&=&
O(N^{-1/2}). \label{h14}
\end{eqnarray}
Given $\tau \in (1-\delta, 1]$,
let us first focus on those $i$ in (\ref{h10}) for which
$|\tau-\tau_i| \ge \delta$.
In this case, $\tau_i \le 1-\delta$, that is,
$1-\tau_i \ge \delta$, and (\ref{h14}) gives
\begin{eqnarray}
\sum_{|\tau-\tau_i|\ge\delta}|R_i(\tau)| &=&
\sum_{|\tau-\tau_i|\ge\delta}
\left| \frac{(\tau-1)P_{N-1}^{(1,0)}(\tau)}
{(\tau_i-1)\dot{P}_{N-1}^{(1,0)}(\tau_i)(\tau-\tau_i)} \right| \nonumber \\
&\le& \frac{O(N^{-1/2})}{\delta^2} \sum_{|\tau-\tau_i|\ge\delta}
\frac{1}{|\dot{P}_{N-1}^{(1,0)} (\tau_i)|}. \label{h50}
\end{eqnarray}
The lower bounds (\ref{h9}) and (\ref{h8}) imply that
\begin{equation}\label{h51}
\sum_{|\tau-\tau_i|\ge\delta}|R_i(\tau)| =
O(N^{-1}) \sum_{\tau_i \ge 0} \theta_i^{5/2}
+
O(N^{-1}) \sum_{\tau_i < 0} |\pi - \theta_i|^{3/2} = O(1),
\end{equation}
since the terms in the sums are uniformly bounded and there are at
most $N$ terms.
The next step in the proof of (\ref{h10}) for $\tau \in (1-\delta, 1]$
is to consider those terms corresponding to $|\tau-\tau_i|<\delta$.
Since $\delta$ is small, it follows that both $\tau$ and $\tau_i$ are
near 1, and consequently, $\theta$ and $\theta_i$
are small and nonnegative,
where $\cos \theta = \tau$ and $\cos \theta_i = \tau_i$.
In particular, $0 \le \theta_i \le \pi/2$.
In this case where $\tau_i$ is near $\tau$,
it is important to take into account the fact that
$\tau - \tau_i$ is a divisor of the numerator $P_{N-1}^{(1,0)}(\tau)$.
To begin, we combine the lower bound in (\ref{h8}) and the bounds in
(\ref{1-cos}) to obtain
\begin{equation}\label{h19}
\frac{(1-\tau)}
{(1-\tau_i)|\dot{P}_{N-1}^{(1,0)}(\tau_i)|} \le
\frac{2\theta^2}{\theta_i^2 (\gamma_3 \sqrt{N}) \theta_i^{-5/2}} =
O(N^{-1/2}) \theta^2\sqrt{\theta_i} .
\end{equation}
It follows from (\ref{Ri}) that
\begin{equation}\label{h15}
|R_i(\tau)|= O(N^{-1/2}) \theta^2\sqrt{\theta_i}
\left| \frac{P_{N-1}^{(1,0)}(\tau)}{\tau-\tau_i} \right| .
\end{equation}
The mean value theorem and the formula
\cite[(4.21.7)]{Szego1939} for the derivative of
$P_{N-1}^{(\alpha, \beta)}(\tau)$ in terms of
$P_{N-2}^{(\alpha+1, \beta+1)}(\tau)$ give the identity
\begin{equation} \label{h17}
\left|\frac{P_{N-1}^{(1,0)}(\tau)}{\tau-\tau_i}\right|
= \left|\frac{P_{N-1}^{(1,0)}(\tau)-P_{N-1}^{(1,0)}
(\tau_i)}{\tau-\tau_i}\right|
=\left(\frac{N+1}{2}\right)\left|P_{N-2}^{(2,1)}(\cos\eta_i)\right|,
\end{equation}
where $\eta_i$ lies between $\theta$ and $\theta_i$.
Together, (\ref{h15}) and (\ref{h17}) imply that
\begin{equation}\label{h20}
|R_i(\tau)| = O(N^{1/2}) \theta^2 \sqrt{\theta_i}
\left|P_{N-2}^{(2,1)}(\cos\eta_i)\right|.
\end{equation}
The estimate (\ref{h20}) is useful when $\tau_i$ is near $\tau$.
When $\tau_i$ is not near $\tau$, we proceed as follows.
Use the identity
\[
\cos\alpha-\cos\beta
=-2\sin\displaystyle{\frac{(\alpha+\beta)}{2}}
\sin\displaystyle{\frac{(\alpha-\beta)}{2}},
\]
to deduce that
\begin{equation}\label{h70}
|\tau - \tau_i| = |\cos \theta - \cos \theta_i| \ge
\frac{2}{\pi^2} \left| \theta^2 - \theta_i^2 \right|
\end{equation}
when $|\theta + \theta_i| \le \pi$, which is satisfied since
both $\theta$ and $\theta_i$ are near 0.
Exploiting this inequality in (\ref{h15}) yields
\begin{equation}\label{h21}
|R_i(\tau)|= O(N^{-1/2}) \theta^2\sqrt{\theta_i}
\left| \frac{P_{N-1}^{(1,0)}(\tau)}{ \theta^2 - \theta_i^2} \right| .
\end{equation}
Recall that we now need to analyze the interval $\tau \in [1-\delta, 1]$
and those $i$ for which $|\tau-\tau_i| < \delta$.
Our analysis works with the variable $\theta \in [0, \pi/2]$,
where $\cos \theta = \tau$.
The interval $\theta \in [0, \pi/2]$ corresponds to
$\tau \in [0,1]$ which covers the target interval $[1-\delta, 1]$ when
$\delta$ is small.
By \cite[(7.32.2)]{Szego1939}, we have
\[
|P_{N-2}^{(2,1)}(\cos\eta_i)| \le N(N-1)/2 .
\]
If $\theta \in [0, c/N]$, where $c$ is a fixed constant independent of $N$,
then it follows from
(\ref{h20}) that $|R_i(\tau)| = O(N^{1/2}) \sqrt{\theta_i}$.
Moreover, if $\theta_i \le 2 \theta \le 2c/N$, then
$|R_i (\tau)| = O(1)$.
By the bounds (\ref{phibounds}),
the number of roots that
satisfy $\theta_{N-i} \le 2c/N$ is at most $2c$, independent of $N$.
On the other hand, if $\theta_i > 2 \theta$, then $\theta< \theta_i/2$ and
\[
\left|\theta_i^2-\theta^2\right|=\theta_i^2-\theta^2\geq
(3/4) \theta_i^2 .
\]
With this substitution in (\ref{h21}), we have
\[
|R_i(\tau)|= O(N^{-1/2}) \theta^2\theta_i^{-3/2}
\left| {P_{N-1}^{(1,0)}(\tau)} \right| .
\]
By (\ref{h22}), $| P_{N-1}^{(1,0)}(\tau)| \le N$.
Hence, if $\theta \in [0, c/N]$, then by (\ref{phibounds}), we have
\begin{eqnarray*}
\sum_{|\tau-\tau_i| < \delta} |R_i(\tau)| &=&
O(N^{-3/2}) \sum_{|\tau-\tau_i| < \delta} \theta_i^{-3/2} =
O(N^{-3/2}) \sum_{i=1}^{N-1} \theta_i^{-3/2} \\
&=& O(N^{-3/2}) \sum_{i=1}^{N-1} \theta_{N-i}^{-3/2} =
O(1) \sum_{i=1}^{N-1} i^{-3/2} = O(1),
\end{eqnarray*}
for all $\theta \in [0, c/N]$.
Finally, suppose that $\theta \in [c/N, \pi/2]$.
By (\ref{separation}) the separation between adjacent zeros
$\theta_i$ and $\theta_{i+1}$ is at most $2.5\pi/N$.
Hence, if $\theta_i$ is within $k$ zeros of $\theta$, then
$\eta_i \ge \theta - \gamma N^{-1}$, $\gamma = 2.5\pi k$.
Here $k \ge 2$ is an arbitrary fixed integer.
By Proposition~\ref{pro1}, we have
\[
\left|P_{N-2}^{(2,1)}(\cos\eta_i)\right| =
(\theta- \gamma N^{-1})^{-5/2}O(N^{-1/2}) .
\]
Choose $c > 2\gamma$.
If $\theta \in [c/N, \pi/2]$,
then $\theta/2 \ge c/(2N) \ge \gamma/N$.
Hence, $\theta- \gamma /N \ge \theta/2$ and
\[
\left|P_{N-2}^{(2,1)}(\cos\eta_i)\right| =
(\theta/2)^{-5/2}O(N^{-1/2}) =
\theta^{-5/2}O(N^{-1/2}) .
\]
Combine this with (\ref{h20}) to obtain
\[
|R_i(\tau)| = O(1) \sqrt{\theta_i/\theta}
\]
when $\theta \in [c/N, \pi/2]$ and $\theta_i$ is within $k$
zeros of $\theta$.
If $\theta_i \le \theta$, then $R_i (\tau) = O(1)$.
If $\theta_i > \theta$ and $\theta_i$ is within $k$ zeros of $\theta$,
then $\theta_i - \theta \le \gamma/N$, and
\[
\theta_i/\theta \le (\theta + \gamma/N)/\theta \le 1 + \gamma/c
\]
when $\theta \in [c/N, \pi/2]$.
Thus $|R_i (\tau)| = O(1)$ when $\theta \in [c/N, \pi/2]$ and
$\theta_i$ is within $k$ zeros of $\theta$.
This analysis of $R_i$ when $\theta_i$ is close to $\theta$ needs to
be complemented with an analysis of $R_i$ when $\theta_i$ is not
close to $\theta$ and $\theta \in [c/N, \pi/2]$.
For $\theta$ in this interval, Proposition~\ref{pro1} yields
$|P_{N-1}^{(1,0)} (\cos \theta)| = \theta^{-3/2}O(N^{-1/2})$.
By (\ref{h21}), we have
\begin{equation}\label{h23}
|R_i (\tau)| = O(N^{-1}) \frac{\sqrt{\theta} \sqrt{\theta_i}}
{|\theta^2 - \theta_i^2|} .
\end{equation}
If $\theta \ge 2\theta_i$, then
$\theta^2 - \theta_i^2 \ge (3/4)\theta^2$ and
\[
|R_i (\tau)| = O(N^{-1}) \theta^{-3/2} \theta_i^{1/2} .
\]
By (\ref{zeros}), we have
\[
|R_{N-i} (\tau)| = O((N\theta)^{-3/2}) \sqrt{i+1} .
\]
Recall that we are focusing on those $i$ for which $\theta_{N-i} \le \theta/2$.
The lower bound $\theta_{N-i} \ge i/N$ from (\ref{phibounds}) implies that
$i \le N\theta/2$ whenever $\theta_{N-i} \le \theta/2$.
Hence, the set of $i$
satisfying $i \le N\theta$ is a superset of the $i$ that we need to consider,
and we have
\begin{eqnarray*}
\sum_{\theta_i \le \theta/2} |R_i (\tau)| &=&
\sum_{\theta_{N-i} \le \theta/2} |R_{N-i} (\tau)| =
O((N\theta)^{-3/2}) \sum_{i \le N\theta} \sqrt{i+1} \\
&=& O((N\theta)^{-3/2}) (N\theta + 1)^{3/2} = O(1) .
\end{eqnarray*}
On the other hand, if $\theta < 2 \theta_i$, then we have
\[
\frac{\sqrt{\theta} \sqrt{\theta_i}}
{|\theta^2 - \theta_i^2|} =
\frac{\sqrt{\theta} \sqrt{\theta_i}}
{|(\theta - \theta_i)(\theta + \theta_i)|} \le
\frac{\sqrt{\theta} \sqrt{\theta_i}}
{|(\theta - \theta_i)\theta_i|} \le
\frac{\sqrt{2}}
{|\theta - \theta_i|} .
\]
Combine this with (\ref{h23}) to obtain
\[
|R_i (\tau)| = \frac{O(1)}{|N\theta - N\theta_i|} .
\]
Earlier we showed that
$|R_i (\tau)| = O(1)$ for those $i$ where the associated $\theta_i$
is within $k$ zeros of $\theta$.
When $\theta_i$ is more than $k$ zeros away from $\theta$,
we exploit the estimate (\ref{zeros}) for the zeros to deduce that
$|N\theta - N\theta_i|$ behaves like an arithmetic sequence of natural numbers.
Hence, the sum of the $|R_i (\tau)|$ over these natural numbers,
where we avoid the singularity, is bounded by a multiple of $\log N$.
This completes the proof.
\end{proof}
\section{Tightness of estimates}\label{numerical}
\label{tight}
At the bottom of page 110 in \cite{Vertesi81}, V\'{e}rtesi states
some lower bounds for the Lebesgue function.
In the case of the Gauss quadrature points augmented by $\tau_{N+1} = +1$
and the Radau quadrature points with $\tau_N = +1$, the associated
Lebesgue function is of order $\sqrt{N}$ at
$\tau = (\tau_1 + \tau_{2})/2$,
the midpoint between the two smallest quadrature points.
It follows that the $O(\sqrt{N})$ estimates for the Lebesgue constant are tight.
To study the tightness of the estimates,
the Lebesgue constants were evaluated numerically and
fit by curves of the form $a \sqrt{N} + b$, $10 \le N \le 100$
(see Figures~\ref{graphgauss}--\ref{graphradau}).
A fast and accurate method for evaluating the Gauss quadrature points,
which could be extended to the Radau quadrature points,
is given by Hale and Townsend in \cite{HaleTownsend13}.
Figures~\ref{graphgauss}--\ref{graphradau} indicate that
a curve of the form $a \sqrt{N} + b$ is a good fit to the Lebesgue constant.
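A rough sketch of such a computation is given below; it is only meant to indicate how
fits of this kind can be obtained, and the uniform evaluation grid and the sampled
values of $N$ are arbitrary choices, so the fitted coefficients are approximate.
\begin{verbatim}
# Approximate Lebesgue constants for (i) the Gauss points augmented by -1 and
# (ii) the Radau points (zeros of P_{N-1}^{(1,0)} plus +1), with a least
# squares fit by a*sqrt(N)+b.  Illustration only.
import numpy as np
from scipy.special import roots_legendre, roots_jacobi

def lebesgue_constant(nodes, ngrid=4000):
    t = np.linspace(-1.0, 1.0, ngrid)
    total = np.zeros_like(t)
    for i, xi in enumerate(nodes):
        Li = np.ones_like(t)
        for j, xj in enumerate(nodes):
            if j != i:
                Li *= (t - xj) / (xi - xj)
        total += np.abs(Li)
    return total.max()

Ns = np.arange(10, 101, 10)
families = [("Gauss plus -1",
             lambda N: np.append(roots_legendre(N)[0], -1.0)),
            ("Radau, tau_N = +1",
             lambda N: np.append(roots_jacobi(N - 1, 1.0, 0.0)[0], 1.0))]
for name, make in families:
    lams = np.array([lebesgue_constant(np.sort(make(N))) for N in Ns])
    a, b = np.polyfit(np.sqrt(Ns), lams, 1)
    print(name, ": Lambda_N ~", round(a, 3), "* sqrt(N) +", round(b, 3))
\end{verbatim}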
Another Lebesgue constant which enters into the analysis of
the Radau collocation schemes studied in \cite{HagerHouRao15c} is the
Lebesgue constant for the Radau quadrature points on $(-1, +1]$
augmented by $\tau_0 = -1$.
As given by V\'{e}rtesi in \cite[Thm. 2.1]{Vertesi81}, the
Lebesgue constant is $O(\log N)$.
Trefethen \cite{Trefethen13} points out that the Lebesgue constant
on any point set has the lower bound
\[
\Lambda_N \ge \left( \frac{2}{\pi} \right) \log (N) + 0.52125\ldots ,
\]
due to Erd\H{o}s \cite{Erdos61} and Brutman \cite{Brutman78}.
For comparison, Figure~\ref{graphradau-1} plots this lower bound
along with the computed Lebesgue constant.
When the number of interpolation points ranges between 10 and 100,
the Lebesgue constant for the Radau quadrature
points augmented by the point $-1$ differs from the smallest possible Lebesgue
constant by between 0.70 and 0.84.
\begin{figure}
\centering
\includegraphics[scale=.4]{gauss.eps}
\caption{Least squares approximation to the Lebesgue constant for
the point set corresponding to the Gauss quadrature points augmented by $-1$
using curves of the form $a\sqrt{N}+b$}
\label{graphgauss}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.4]{radau.eps}
\caption{Least squares approximation to the Lebesgue constant for
the point set corresponding to the Radau quadrature points using curves of the
form $a\sqrt{N}+b$}
\label{graphradau}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.4]{radau1.eps}
\caption{Least squares approximation to the Lebesgue constant for
the point set corresponding to the Radau quadrature points on
$(-1, +1]$ augmented by $-1$ using curves of the form $a\log N +b$}
\label{graphradau-1}
\end{figure}
\section{Conclusions}
\label{conclusions}
In Gauss and Radau collocation methods for unconstrained control problems
\cite{HagerHouRao15b, HagerHouRao15c},
the error in the solution to the discrete problem is bounded by the
residual for the solution to the continuous problem inserted in the
discrete equations.
In Section~\ref{residual}, we observe that the residual in the sup-norm
is bounded by the distance between the derivative of the continuous
solution interpolant and the derivative of the continuous solution.
Proposition~\ref{L1} bounds this distance in terms of the error in
best approximation and the Lebesgue constant for the point set.
We show that the Lebesgue constant for the point sets associated with
the Gauss and Radau collocation methods is $O(\sqrt{N})$, and
by the plots of Section~\ref{numerical}, the Lebesgue constants are
closely fit by curves of the form $a\sqrt{N}+b$.
\section*{Acknowledgments}
Special thanks to Lloyd N. Trefethen for pointing out Brutman's paper
\cite{Brutman97} and for providing a copy when we had trouble locating
the journal.
Also, we thank a reviewer for pointing out the book \cite{Mastroianni08}
which contains newer results as well as additional references.
\bibliographystyle{siam}
\section{Introduction}
A fast coherent instability of horizontal betatron oscillations of the bunched
proton beam has been observed in the Fermilab Recycler since 2014, as described
in Ref.~\cite{MAIN}.
It was shown in that paper that the instability is caused by an electron
cloud which arises from ionization of the residual gas by protons
and later grows due to breeding of the electrons at collisions with the beam pipe
walls.
A theoretical model of the instability has been proposed in
Ref.~\cite{MY}.
The electron cloud is treated as a set of ``snakes'', each of which
appears as a footprint of some proton bunch.
The snakes are immovable in the horizontal plane due to the strong vertical
magnetic field.
However, the electrons are very mobile in the vertical direction because they
move between the beam pipe walls under the influence of the electric field
of the protons.
They can breed or perish at collisions with the beam pipe walls.
The model provides a suitable description of the initial part of the instability,
including the dependence of the bunch amplitude on time and on the position in the batch.
However, it predicts an unrestricted growth of the bunch amplitudes,
a statement which is in conflict with the experimental evidence.
It follows from the experiment that the amplitude increases with a variable growth
rate within 60--80 turns and becomes approximately stable after that.
It was suggested in \cite{MY} that nonlinearity of the e-cloud field can
be responsible for such behavior of the proton beam, and several examples
were presented there.
The development of this idea is the subject of this paper.
It is shown that this is a way to bring the calculation into accordance with the
experimental evidence.
\section{Electron cloud model}
\begin{figure}[t!]
\includegraphics[width=100mm]{01_snake.eps}
\caption{Top view of the e-cloud. Each proton bunch gives rise to an
immovable e-snake.
The snakes coincide with each other if their parent proton bunches
have the same injection conditions, and differ otherwise~(\#2 in the picture).
The local density of each snake depends on time.}
\end{figure}
It has been shown in \cite{MY} that horizontal motion of electrons in the
cloud is strongly obstructed by the Recycler magnetic field.
Vertical motion between the walls and the electron breeding in the walls
result in the creation of vertical strips \cite{MAIN}
and in the formation of the ``snake'', as shown in Fig.~1.
Each proton bunch creates a wake which follows the bunch in accordance with
the injection error.
The bunch wakes coincide if the bunches are injected with the same error
(\#0, 1, 3, 4 in Fig.~1).
Any wake has a steady shape but a variable, time-dependent density.
According to this model, the electron density at distance $\,s\,$ from the
beginning of the batch can be represented in the form
\begin{equation}
\rho_e(x,s,t) = \int_0^s w\left(\frac{s-s'}{v}\right)\,\bar\rho
\left(x-X\Big(s,t-\frac{s-s'}{v}\Big)\right)\lambda(s')\,ds'
\end{equation}
where $\,\bar\rho(x)\,$ is the normalized projection of the proton steady-state
distribution on the $\,x\,$ axis, $\,X(s,t)\,$ is the beam coherent displacement,
and $\,\lambda(s)\,$ is its linear density.
The coefficient $\,w(\tau)\,$ describes the evolution of the snake local density,
which has been considered in Refs.~\cite{FUR}-\cite{PIV}.
Calculation of this function is not a subject of this paper, and it will be
treated below as a phenomenological parameter.
Because the electron distribution is flat in the $(y$-$z)$ plane, and the effect of
the walls is small within the proton beam, the electric field of the electron cloud is
\begin{eqnarray}
E_e(x,s,t) = e\int_0^s w\left(\frac{s-s'}{v}\right)\,F\left(x-X\Big(s,t
-\frac{s-s'}{v}\Big)\right)\lambda(s')\,ds'
\end{eqnarray}
with the function F satisfying the equation
\begin{eqnarray}
F'(x) = 4\pi \bar\rho(x)
\end{eqnarray}
If the beam consists of short identical bunches, the integral turns into the sum
\begin{equation}
E_n(t,x) = eN_b\sum_{m=0}^{n} w_{m} F\Big(x-X_{n-m}(t-mT_{RF})\Big)
\end{equation}
where $\,N_b\,$ is the bunch population, $\,T_{RF}\,$ is the time separation
of the bunches which are enumerated from the beam head (index 0) to the
current bunch (index $n$).
\section{Proton equation of motion}
With the cloud electric field taken into account,
the equation of horizontal betatron oscillations of a proton in the $\,n^{\rm th}$ bunch is
\begin{equation}
[\ddot x(t)+\omega_0^2x]_n =
-\frac{e^2N_b}{m\gamma}\sum_{m=0}^{n}w_{m}F\Big(x-X_{n-m}(t-mT_{RF})\Big)
\end{equation}
where $\omega_0$ is the betatron frequency without the e-cloud
(we do not consider here other factors which could affect the betatron
motion, for example chromaticity).
Because $\,\bar\rho(x)\,$ is an even function, an approximate solution of Eq.~(3)
including the lowest-order nonlinearity is
\begin{equation}
F(x)\simeq 4\pi \bar\rho(0)\left(x +\frac{\epsilon x^3}{3}\right),\qquad
\epsilon = \frac{1}{2\bar\rho(0)}\frac{d^2\bar\rho(0)}{dx^2}
\end{equation}
Therefore the equation of betatron oscillations
of a proton in the $\,n^{\rm th}\,$ bunch takes the form
\begin{equation}
[\ddot x(t)+\omega_0^2x(t)]_n=-2\omega_0\sum_{m=0}^n W_m\xi_m
\left(1+\frac{\epsilon_m\xi_m^2}{3}\right),
\qquad\xi_m = x(t)-X_{n-m}(t-mT_{RF})
\end{equation}
where $\,W_m=4\pi e^2\bar\rho_m(0)w_m/(m\gamma\omega_0)$.
Without coherent oscillations, that is, at $\,X_j=0\,$, the equation of small
incoherent oscillations of protons in the $\,n^{\rm th}\,$ bunch is
\begin{equation}
\ddot x(t)+\omega_n^2x(t)=0, \qquad\omega_n=\omega_0+\Delta Q_n, \qquad
\Delta Q_n=\sum_{m=0}^nW_m.
\end{equation}
It means that $\Delta Q_n$ is the incoherent tune shift of protons in
the $\,n^{\rm th}$ bunch caused by the e-cloud produced by all foregoing bunches,
and $\,W_m\,$ is the contribution of bunch \#$(n-m)$.
\section{Linear approximation}
At $\,\epsilon_m=0$, Eq.~(7) can be averaged over all particles of
the $\,n^{\rm th}\,$ bunch,
resulting in a series of equations for coherent oscillations of the bunches
\begin{equation}
\ddot X_n(t)+\omega_0^2X_n=-2\omega_0\sum_{m=0}^{n}
W_{m}\Big[X_n(t)-X_{n-m}(t-mT_{RF})\Big]
\end{equation}
This series has been investigated in detail in Ref.~\cite{MY}.
The main conclusions of the paper are summarized below and illustrated
by Fig.~2.
1. Injection errors are the root cause of the ``instability''.
The initial amplitude can increase in time as well as from bunch to bunch
along the batch.
2. Some spread of the errors is another condition for the instability.
Otherwise the solution of Eq.~(9) is $\,X_n(t)=X_0(t-nT_{RF})$,
that is, all bunches move one by one along the same stable trajectory.
Coherent interaction of the bunches is absent under such conditions.
3. A variability of the wake is another condition for the instability because the
bunches have different eigentunes, and their resonant interaction is impossible
at $\,W_m=\,$const.
4. With restricted wakes, the eigentunes have the same value in the batch tail
where the amplitude growth should be maximal.
This statement is in agreement with experimental evidence.
5. The dependence of the amplitude on time is generally non-exponential
and differs from bunch to bunch.
6. However, the growth of the amplitudes is ultimately unrestricted, a conclusion which
contradicts the experimental evidence.
Therefore this point requires an analysis beyond the scope
of the linear approximation.
\begin{figure*}[b!]
\hspace{-20mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{04_1.eps}
\end{center}
\end{minipage}
\hspace{-5mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{04_2.eps}
\end{center}
\end{minipage}
\caption{E-cloud instability in the linear approximation.
The left-hand graph represents the effect of the one-step wake $\,W_n=W\delta_{n,1}$;
the right-hand graph refers to the 5-step wake.
Index $\,n\,$ is the bunch number in the batch,
$\,t_n=t-nT_{RF}\,$ with $\,t\,$ as the current time.
It is assumed that the leading bunches are not oscillating:
$\,A_0=0\,$ in the left-hand plot,
and $\,A_{0-4}=0\,$ in the right-hand one.
The initial amplitude of the other bunches is $\,A_n(0)=1\,$.}
\end{figure*}
\section{Nonlinear consideration}
We will represent the variables $\,x\,$ and $\,X\,$ in Eq.~(7) with the help of
the complex amplitudes $\,a\,$ and $\,A$:
\begin{equation}
x(t) = a(t)\exp\big(i\omega_0[t-nT]\big)+c.c.,\qquad
X_m(t) = A_m(t)\exp\big(i\omega_0[t-mT]\big)+c.c.
\end{equation}
Substituting these values in Eq.~(7) and applying the standard method of averaging,
one obtains the following equation for the amplitude of a proton inside
the $\,n^{\rm th}$ bunch \cite{MY}
\begin{equation}
\dot a(t) = i\sum_{m=0}^n W_m\eta_m (1+\epsilon_m|\eta_m|^2),
\qquad \eta_m=a(t)-A_{n-m}(t-mT).
\end{equation}
A one-step wake will be investigated further: $\,W_n=W\delta_{n1}$.
Note that the condition $\,W_0=0$ follows from this definition;
it is very reasonable because a noticeable e-cloud cannot appear at
the leading bunch without secondary electrons.
Therefore any proton has a constant betatron amplitude in this bunch,
and the same is valid for the bunch coherent amplitude as well.
The latter can be taken as $\,A_0=0\,$ because the difference of the bunch amplitudes
is the only crucial circumstance.
With these approximations, the equation of motion of any proton inside
the $\,n^{\rm th}$ bunch is
\begin{equation}
\dot a(t) = iW\big[\,a(t)-A_{n-1}(t-T)\,\big]
\big[\,1+\epsilon\big|a(t)-A_{n-1}(t-T)|^2\big],
\qquad A_0=0.
\end{equation}
The following steps are used for the numerical solution of these equations
(a sketch of such a calculation is given below):
1. Generate a random initial distribution of particles in the first bunch $(n=1)$.
The bunch central amplitude should be $\,A_1(0)\ne 0\,$ to begin the process.
2. Calculate the function $\,a(t)\,$ for each particle of the first bunch
$(n=1)$ by solving Eq.~(12) with the known value of the amplitude
$A_{n-1}=A_0=0$.
3. Calculate the central amplitude $\,A_1(t)\,$ as a function of time
by averaging over all particles of the bunch.
4. Repeat the operation for the second bunch with known $\,A_1(t)$, etc.
\\
Results of the calculation are presented below.
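A minimal Python sketch of this procedure is shown here.
It is only an illustration, not the code used for the figures;
the number of bunches, the number of particles, the step size, and the water-bag
sampling are arbitrary choices, and the integration is written in each bunch's own
normalized time $\,s=W(t-nT_{RF})$, in which the delayed argument
$\,A_{n-1}(t-T_{RF})\,$ of Eq.~(12) is taken at the same value of $\,s$.
With zero beam radius it corresponds to the thin-beam case considered below.
\begin{verbatim}
import numpy as np

def evolve_batch(nbunch=25, eps=-0.001, smax=10.0, nstep=2000,
                 a_inj=1.0, radius=0.0, npart=200, seed=1):
    """Steps 1-4 above for Eq. (12), in normalized time s = W t."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, smax, nstep + 1)
    h = s[1] - s[0]

    def rhs(a, Aprev):            # right-hand side of Eq. (12), in units of W
        return 1j*(a - Aprev)*(1.0 + eps*np.abs(a - Aprev)**2)

    A = np.zeros((nbunch + 1, nstep + 1), dtype=complex)  # A[0] = 0, leading bunch
    for n in range(1, nbunch + 1):
        # step 1: particles of bunch n (uniform disc of radius 'radius' around a_inj)
        r, phi = radius*np.sqrt(rng.random(npart)), 2*np.pi*rng.random(npart)
        a = a_inj + r*np.exp(1j*phi)
        traj = np.empty((npart, nstep + 1), dtype=complex)
        traj[:, 0] = a
        # step 2: midpoint-rule integration against the stored A_{n-1}(s)
        for k in range(nstep):
            Amid = 0.5*(A[n-1, k] + A[n-1, k+1])
            a_half = a + 0.5*h*rhs(a, A[n-1, k])
            a = a + h*rhs(a_half, Amid)
            traj[:, k+1] = a
        # step 3: central amplitude of bunch n = average over its particles
        A[n] = traj.mean(axis=0)
        # step 4: proceed to the next bunch
    return s, A

s, A = evolve_batch()
print("saturation level max|A_n/A_1| =", np.abs(A[1:]/A[1, 0]).max())
\end{verbatim}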
\newpage
\subsection{Physics of the phenomenon}
The linear approximation for the one-step wake was discussed
in Sec.~IV and is illustrated by the left-hand plot of Fig.~2.
Now the same case will be investigated with the nonlinear additions
taken into account.
The initial amplitude of all bunches except the leading one is $\,A_{n\ne0}(0)=1$,
and the nonlinear parameter given by Eq.~(6) is taken to be
$\epsilon=-0.001$.
The proton beam is considered to be a thin one, that is, its radius is assumed
to be small in comparison with the injection errors.
The obtained coherent amplitudes of the bunches are plotted in Fig.~3
against the normalized time.
In the beginning, the behavior is about the same as that shown in Fig.~2.
However, the further behavior is very different.
It is seen that the growth of the bunch amplitudes ceases at about
$\,|A_n/A_1|=20$ - 30, a limit which is reached at $\,Wt=8$ - 10.
\begin{figure*}[b!]
\includegraphics[width=85mm]{06_1.eps}
\caption{Instability with nonlinear e-cloud field.
The same conditions as in the left-hand plot of Fig.~2,
but the nonlinear Eq.~(12) is used with the nonlinear parameter
$\,\epsilon A_1^2=-0.001$.}
\end{figure*}
The saturation cannot be treated as Landau damping because a thin proton
beam with negligible incoherent tune spread cannot be subject to this
phenomenon.
Therefore the nonlinearity does not prevent the instability in this case,
but merely restricts its growth.
This statement is illustrated by Fig.~4, where the behavior of the second bunch of the
batch is considered in more detail.
The leading bunch does not oscillate, as was assumed, and the first bunch has a
constant amplitude because there is no external force to excite it.
The relative amplitude of the second bunch is shown in the left-hand graph against time
at different nonlinearities, and several phase trajectories are presented in the
right-hand figure.
This is the typical behavior of a nonlinear oscillator excited by a periodic external
field.
\begin{figure*}[h!]
\hspace{-10mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{07_1.eps}
\end{center}
\end{minipage}
\hspace{0mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{07_2.eps}
\end{center}
\end{minipage}
\caption{The second bunch in the train.
The leading bunch does not oscillate, the first bunch has a constant amplitude,
and the second one has the same initial amplitude: $\,A_2(0)=A_1$.
Its relative amplitude is shown in the left-hand graph against time
at different nonlinearities, and several phase trajectories are presented in the
right-hand figure.}
\vspace{-5mm}
\end{figure*}
\subsection{Dependence on the value of the nonlinearity}
Two more examples are presented in Fig.~5.
In this case, the batch has the same arrangement and initial conditions as in
Fig.~3 but other values of the nonlinearity parameter:
$\,|\epsilon|=10^{-4}\,$ and $\,10^{-2}$.
As one could expect, a larger nonlinearity results in a smaller coherent amplitude.
The ultimate amplitude can be estimated from the relation $\,\epsilon A^2\simeq-1$,
and it is attained at about $\,Wt=8-10$.
\begin{figure*}[h!]
\hspace{-20mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{08_1.eps}
\end{center}
\end{minipage}
\hspace{-5mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{08_3.eps}
\end{center}
\end{minipage}
\caption{Instability with nonlinear e-cloud field.
The same conditions as in Fig.~3 with other nonlinearity:
left-hand graph: $\,|\epsilon A_1^2|=10^{-4}$,
right-hand one: $\,|\epsilon A_1^2|=10^{-2}$.}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=85mm]{08_4.eps}
\caption{The betatron amplitude (solid lines) and its growth rate (dashed lines)
averaged across the batch at different nonlinearities.
Figs.~2, 3, and 5 are used as the sources.}
\end{figure*}
The results are summarized in Fig.~6, where the parameters averaged across the batch
are shown.
Solid lines represent the averaged coherent amplitude, and dashed lines --
its instantaneous growth rate
(this is just the quantity which has been measured in the experiment \cite{MAIN},
and the corresponding comparison will be made later).
Four cases are considered in this example, taken from Fig.~2 (left), 3,
and 5.
It is seen that the amplitude growth has approximately exponential behavior
only at zero nonlinearity, and only at $\,Wt>\sim 5$.
The nonlinearity does not reveal itself at $\,Wt<\sim 3\,$
but restricts the amplitude growth at $\,Wt>\sim 6$ - 10.
The maximal growth rate is about $\,\sim 1/\ln|\epsilon| A_1^2$,
and the maximal amplitude is $\,A_{max}^2 \sim 0.5/|\epsilon|$.
\newpage
\subsection{Dependence on the beam radius}
\begin{figure*}[t!]
\hspace{-20mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{09_2.eps}
\end{center}
\end{minipage}
\hspace{-5mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{09_3.eps}
\end{center}
\end{minipage}
\caption{Instability of a thick beam at e-cloud nonlinearity $\,\epsilon=-0.001\,$.
The same conditions as in Fig.~3 are used with the proton beam radius
$\,R = A_1\,$ in the left-hand graph and $\,R=10A_1\,$ in the right-hand one.}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=85mm]{09_4.eps}
\caption{The betatron amplitude (solid lines) and its growth rate (dashed lines)
averaged across the batch at different beam radii.
Figs.~3 and 7 are used as the sources.}
\end{figure*}
A thick beam is considered in this subsection under the same conditions
as in the previous part.
The water-bag model of radius $\,R\,$ is used for the transverse distribution of the
proton beam.
The injection error is taken to be unity, and the nonlinearity parameter is
$\,\epsilon=-0.001\,$ in all cases.
The results are presented in Fig.~7 at $\,R/A_1=1\,$ and 10.
The corresponding averaged values are shown in Fig.~8, where the case $\,R=0\,$ is
added, taken from Fig.~3.
Comparison of these figures with Fig.~6 and 7 allows one to conclude that
the beam radius is a factor of secondary importance for the problem.
\section{Influence of the field free areas}
About half of the Recycler perimeter is occupied by field-free regions
where the dipole magnetic field is absent.
Electron production and breeding take place in these regions as well as
in the field-filled regions.
Therefore, there is no reason to think that the e-cloud density in the
field-free zones differs essentially from the density in the magnetic zones.
However, there is no effective mechanism in the field-free zones to correlate the
e-cloud position with the proton beam as firmly as the strong dipole magnetic field does.
Therefore the direct contribution of the field-free zones to the instability
is expected to be relatively small.
However, this part can affect the incoherent motion of protons, including
the linear and nonlinear tune shifts.
The latter is especially important because one cannot exclude an additional
restriction of the coherent amplitude due to this contribution.
Because this part of the cloud does not follow the proton beam, its distribution
should depend on $\,x\,$ but not on $\,X$, in the terminology used here.
Taking this into account, one can write the correspondingly modified Eq.~(7)
in the form
\begin{equation}
[\ddot x(t)+\omega_0^2x(t)]_n=-2\omega_0 W
\left(\xi+\frac{\epsilon_B\xi^3}{3}+\frac{\epsilon_F x^3}{3}\right)
\end{equation}
where $\,\xi = x(t)-X_{n-1}(t-T_{\rm RF})$.
This equation describes betatron oscillations of an arbitrary proton
in the $\,n^{\rm th}\,$ bunch.
A one-step wake is considered here, and only the cubic nonlinearity is
taken into account (the incoherent linear contribution can be included in $\,\omega_0$).
The coefficients $\,\epsilon_B\,$ and $\,\epsilon_F\,$ describe the nonlinearity
of the field-filled (B) and field-free (F) parts, with their relative lengths
taken into account.
\begin{figure*}[t!]
\hspace{-10mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{11_2.eps}
\end{center}
\end{minipage}
\hspace{0mm}
\begin{minipage}[h!]{0.45\linewidth}
\begin{center}
\includegraphics[width=85mm]{11_3.eps}
\end{center}
\end{minipage}
\caption{Instability with field-free regions taken into account.
Contributions of the field-filled parts and the field-free ones are marked
by the symbols B and F. The proton beam radius is $\,R=1$, and the first bunch oscillates
with amplitude $\,A_1=1$.
Left: $\,\epsilon_B A_1^2=0,\;\epsilon_F A_1^2=-0.001$; right:
$\,\epsilon_B A_1^2=\epsilon_F A_1^2=-0.001$. }
\end{figure*}
\begin{figure*}[h!]
\vspace{10mm}
\includegraphics[width=85mm]{11_4.eps}
\caption{The betatron amplitude (solid lines) and its growth rate (dashed lines)
averaged across the batch at different partial nonlinearities.
The beam radius $\,R=1$.
Red: $\,\epsilon_B A_1^2=-0.001,\;\epsilon_F A_1^2= 0$.
Green: $\,\epsilon_B A_1^2= 0,\;\epsilon_F A_1^2=-0.001$.
Blue: $\,\epsilon_B A_1^2=\epsilon_F A_1^2=-0.001$.
Figs.~3 and 9 are used as the sources.}
\end{figure*}
Results of the calculations are presented in Fig.~9.
The beam parameters used are: the beam radius is taken to be unity,
the leading bunch does not oscillate, and the injection error of the other bunches is
$\,A_n(0)=1\;(n\ne 0)$.
The left-hand plot of Fig.~9 shows the contribution of the field-free parts only:
$\,\epsilon_B=0,\;\epsilon_F=-0.001$.
It should be compared with the left-hand plot of Fig.~7, where the contribution of the
field-filled part is shown at the same nonlinear parameter.
It is seen that the nonlinearity of the field-free parts has less influence on the
proton coherent oscillations.
This is confirmed by the right-hand plot of Fig.~9, where equal nonlinearities of both
kinds are considered: $\,\epsilon_B=\epsilon_F=-0.001$.
It differs considerably from the left-hand figure and is rather similar to the left-hand
plot of Fig.~7.
The same conclusion follows from Fig.~10, where the averaged
beam parameters are plotted as in Figs.~6 and 8.
It is seen that the addition of the field-free regions only slightly
changes the results (the blue and the red lines).
\section{Comparison with the experiment}
\begin{figure*}[b!]
\includegraphics[width=85mm]{12_2_summa.ps}
\caption{One of the summary plots of Ref.~\cite{MAIN}, reproduced here for comparison;
the black curve has the same meaning as the dashed lines in Figs.~6 and 8.}
\end{figure*}
The presented results are in reasonably good agreement with the experimental
evidence reported in Ref.~\cite{MAIN}.
One of the summary plots of that paper is copied and shown here as Fig.~11.
The black curve in this plot has the same meaning as the dashed lines in
Figs.~6 and 8.
All of them show the instability rate as a function of parameters which
can be treated as time measured in different units.
The curves are similar in shape, and quantitative agreement
can be obtained with the following relation of the parameters:
$$
Wt=10\quad\mbox{corresponds to 80 revolutions, that is}\quad\,WT_{\rm rev}\simeq 1/8
$$
with $\,T_{\rm rev}\,$ as the Recycler revolution time.
On the other hand, it has been shown in Sec.~III that $W$ should be treated
as the betatron frequency shift of protons produced by the electron cloud.
It means that
$$
WT_{\rm rev}=2\pi\Delta Q\qquad\mbox{that is}\qquad
\Delta Q\simeq\frac{1}{16\pi}\simeq 0.02
$$
This result can be used to estimate the central density of the e-cloud $\,n_e$.
For the accepted model of the cloud, the relation is
\begin{equation}
\Delta Q=\frac{r_p\, n_e P^2}{2\pi Q\beta^2\gamma}
\end{equation}
where $\,r_p=1.54\times 10^{-18}\,$m is the electromagnetic proton radius,
$\,Q=25.45\,$ is the Recycler tune, $\,P=3319\,$m is its perimeter,
$\,\beta\simeq 1\,$,
and $\,\gamma=9.53\,$ is the normalized energy of protons.
It gives numerically
$$
\Delta Q \simeq \frac{n_e}{10^{14}\,{\rm m}^{-3}}\quad
\mbox {that is}\quad n_e\simeq 2\times 10^{12}{\rm m}^{-3}\quad
\mbox{at}\quad \Delta Q=0.02
$$
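The arithmetic can be reproduced with the short sketch below
(illustration only; the constants are those quoted above).
\begin{verbatim}
import math
r_p, P, Q = 1.54e-18, 3319.0, 25.45    # proton radius [m], perimeter [m], tune
beta, gamma = 1.0, 9.53
dq_per_ne = r_p*P**2/(2*math.pi*Q*beta**2*gamma)  # tune shift per unit density, m^3
print("dQ/n_e =", dq_per_ne, "m^3;  n_e(dQ=0.02) =", 0.02/dq_per_ne, "m^-3")
\end{verbatim}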
A measurement of the density was not performed in the experiment,
but a simulation with the code POSINST is presented in \cite{MAIN},
resulting in a 5--10 times larger density.
\section{Conclusion}
A model of the electron cloud in the form of a motionless snake
is considered in this paper.
Ionization of the residual gas by protons is the primary source of the electrons,
supplemented by their multiplication at the beam pipe walls.
The electron horizontal position is fixed by the strong vertical
magnetic field.
The model allows one to explain the electron-cloud instability of the bunched proton beam
in the Fermilab Recycler.
According to it, the instability is caused by injection errors which initiate
coherent betatron oscillations of the bunches,
and the electric field of the electron snake promotes an increase of their
amplitude in time, as well as from the batch head to its tail.
The nonlinearity of the e-cloud electric field is considered in detail
as the important factor restricting the amplitude growth.
The parts of the Recycler perimeter without dipole magnetic field are
included in the investigation as well.
However, it turns out that their contribution to the instability is negligible.
The results of the calculations are in reasonable agreement with the Recycler
experimental evidence.
\section{Introduction} \label{section.introduction}
This paper analyses the geometry and invariant theory of the
Hermite invariant for binary quintics. We begin by recalling the
elementary properties of this invariant; the main results are summarised
on pages~\pageref{section.results.summary}-\pageref{end.summary}
after the required notation is available.
We refer to~\cite{Glenn,GrYo} and~\cite{Salmon1} for foundational notions in
the classical invariant theory of binary forms, as well as the symbolic
method. Modern treatments of
this material may be found in~\cite{Dolgachev,Gurevich,Kung-Rota}
and~\cite{Olver}.
The encyclop{\ae}dia article~\cite{MacMahon} contains a very readable
introduction to the classical theory.
We will use~\cite[Lecture 11]{FH} and \cite[\S4.2]{Sturmfels}
for the basic representation theory of $SL_2$.
The discovery of the Hermite invariant was first reported in
\cite[Premi{\`e}re Partie, \S IV-VII]{Hermite1}.
\begin{figure}
\epsffile[-70 10 187 300]{conic_figure.ps}
\end{figure}
The results in Lemma~\ref{lemma.Hdegree} and
Proposition~\ref{prop.triple.intersection} below are classical; I
have included them for completeness of treatment.
\subsection{} \label{section.conic_diagram}
The base field will be $\mathbf C$. Let $V$ denote a two-dimensional
complex vector space with basis $\mathbf x = \{x_1,x_2\}$ and a natural action of
$SL(V)$. For $m \ge 0$, let $S_m = \text{Sym}^m \, V$ denote the
$(m+1)$-dimensional irreducible $SL(V)$-representation consisting of
binary $m$-ics in $\mathbf x$. Consider the quadratic Veronese imbedding
\[ \phi: \P \, V \longrightarrow \P S_2, \quad
[c_1 \, x_1 + c_2 \, x_2] \longrightarrow
[(c_1 \, x_1+c_2 \, x_2)^2], \]
whose image is a smooth conic $\phi(\P^1) = C \subseteq \P^2$. We identify
$\P^5$ with $\text{Sym}^5 \, C \simeq \P S_5$,
i.e., a point in $\P^5$ is alternately seen as
a degree $5$ effective divisor on $C$, or as a
binary quintic in $\mathbf x$ distinguished up to scalars.
\label{conic_diagram}
Let $\mathsf z$ be a point of $\P^2 \setminus C$, and $L_1,L_2$ two lines through
$\mathsf z$ intersecting $C$ in $\mathsf a_1,\mathsf b_1;\mathsf a_2,\mathsf b_2$. Let $\mathsf c \in C$ be
one of the two points such that the line $\overline{\mathsf c \, \mathsf z}$
is tangent to $C$, and now define a divisor
$\mathsf a_1 + \mathsf b_1 + \mathsf a_2 + \mathsf b_2 + \mathsf c \in \P^5$. As $\mathsf z,L_1,L_2$ move, let
$\mathcal H \subseteq \P^5$ denote the closure of the set of all such divisors.
(The closure includes all divisors of the form $3 \, \mathsf z + \mathsf a + \mathsf b$ for arbitrary
points $\mathsf z,\mathsf a,\mathsf b$ in $C$.)
There are $\infty^2$ possible positions for $\mathsf z$, and then
$\infty^1$ positions for each of the $L_i$ once $\mathsf z$ is fixed;
hence $\dim \mathcal H = 4$. By construction $\mathcal H$ is an irreducible variety.
The action of $SL(V)$ on $\P S_2$ induces an action
on $C$, moreover it takes a tangent line
to $C$ to another tangent line, hence $SL(V)$ acts on the imbedding
$\mathcal H \subseteq \P^5$. Consequently the equation of $\mathcal H$ is an invariant
of binary quintics, usually called the Hermite invariant $\mathbb H$. This defines
$\mathbb H$ only up to a multiplicative constant; but see formula~(\ref{defn.H}) below.
A point $\mathsf z \in \P^2 \setminus C$ defines an order
$2$ automorphism of $C$, sending $\mathsf a \in C$ to the other intersection
of $\overline{\mathsf z \, \mathsf a}$ with $C$.
The divisor $\mathsf z + \mathsf a_1 + \mathsf b_1 + \mathsf a_2 + \mathsf b_2$ is said to be
{\sl in involution} with respect to $\mathsf z$
since it is fixed by this automorphism.
\begin{Lemma} \sl The degree of $\mathcal H$ is $18$.
\label{lemma.Hdegree} \end{Lemma}
\noindent {\sc Proof.}\;
For $\mathsf p \in C$, let $\Gamma_\mathsf p \subseteq \P^5$ denote the hyperplane
defined by all the divisors containing $\mathsf p$. Given general points
$\mathsf p_1,\mathsf p_2,\mathsf p_3,\mathsf p_4$ in $C$, consider the intersection
$\Sigma = \mathcal H \cap \Gamma_{\mathsf p_1} \cap \dots \cap \Gamma_{\mathsf p_4}$. The
three points
\begin{equation} \overline{\mathsf p_1 \, \mathsf p_2} \cap \overline{\mathsf p_3 \, \mathsf p_4},
\quad
\overline{\mathsf p_1 \, \mathsf p_3} \cap \overline{\mathsf p_2 \, \mathsf p_4}, \quad
\overline{\mathsf p_1 \, \mathsf p_4} \cap \overline{\mathsf p_2 \, \mathsf p_3},
\label{triple-intersection} \end{equation}
give $6$ elements in $\Sigma$ (since two tangents to $C$ can be drawn
from each). Alternately, let the tangent to $C$ at $\mathsf p_1$ intersect
$\overline{\mathsf p_2 \, \mathsf p_3}$ at $\mathsf z$, and let
$\overline{\mathsf z \, \mathsf p_4}$ intersect $C$ in the additional point $\mathsf q$; which
gives $\mathsf p_1 + \dots + \mathsf p_4 + \mathsf q \in \Sigma$. This construction produces
$4 \times 3 = 12$ more elements in $\Sigma$, hence
$\text{card} \, (\Sigma) = 18$. \qed
\smallskip
\subsection{} \label{section.FQ}
With notation as in the diagram, write $\mathsf c = [\phi(x_1)]$ after a change of variables. Then
$\mathsf a_1,\mathsf b_1$ must equal $\phi([\alpha_1 x_1 + \alpha_2 x_2]),
\phi([\alpha_1 x_1 - \alpha_2 x_2])$ for some $[\alpha_1,\alpha_2] \in \P^1$, and
similarly for $\mathsf a_2,\mathsf b_2$. Hence $\mathsf a_1 + \mathsf a_2 + \mathsf b_1 + \mathsf b_2 + \mathsf c$ corresponds to the quintic
\begin{equation} \mathcal F_Q = x_1 \, (q_0 \, x_1^4 + 2 \, q_1 \, x_1^2 \, x_2^2 + q_2 \, x_2^4)
\end{equation}
for some $Q=[q_0,q_1,q_2] \in \P^2$. This `canonical form' will prove most useful for computations.
Since any $[F] \in \mathcal H$ lies in the $SL_2$-orbit of some $[\mathcal F_Q]$, any `equivariant' calculation which is valid for
$\mathcal F_Q$ is valid generally.
\smallskip
In the next few sections we will gather some needed preliminaries from
classical invariant theory; we will take up $\mathbb H$ once more on page~\pageref{defn.H}.
\subsection{Transvectants} \label{section.trans}
Given integers $m,n \ge 0$, we have a decomposition of
$SL(V)$-representations
\begin{equation}
S_m \otimes S_n \simeq \bigoplus\limits_{r=0}^{\min(m,n)} \,
S_{m+n-2r}. \label{Clebsch-Gordan} \end{equation}
Let $A,B$ denote binary forms in $\mathbf x$ of respective orders $m,n$. The $r$-th transvectant
of $A$ with $B$, written $(A,B)_r$, is defined to be the image of
$A \otimes B$ via the projection map
\[ \pi_r: S_m \otimes S_n \longrightarrow S_{m+n-2r} \, . \]
It is given by the formula
\begin{equation} (A,B)_r = \frac{(m-r)! \, (n-r)!}{m! \, n!} \,
\sum\limits_{i=0}^r \, (-1)^i \binom{r}{i} \,
\frac{\partial^r A}{\partial x_1^{r-i} \, \partial x_2^i} \,
\frac{\partial^r B}{\partial x_1^i \, \partial x_2^{r-i}}
\label{trans.formula} \end{equation}
(Some authors choose the initial
scaling factor differently, cf.~\cite[Ch.~5]{Olver}.)
By convention $(A,B)_r = 0$ if $r > \min \, (m,n)$. If we symbolically write
$A = \alpha_\mathbf x^m, B = \beta_\mathbf x^n$, then
$(A,B)_r = (\alpha \, \beta)^r \, \alpha_\mathbf x^{m-r} \, \beta_\mathbf x^{n-r}$.
There is a canonical isomorphism of representations
\begin{equation}
S_m \stackrel{\sim}{\longrightarrow} S_m^* \, ( \, = \text{Hom}_{SL(V)}(S_m,S_0))
\label{self-duality} \end{equation}
which sends $A \in S_m$ to the functional $B \longrightarrow (A,B)_m$. Hence
if $A$ is an order $m$ form such that $(A,B)_m=0$ for all $B \in S_m$, then
$A$ must be zero.
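Formula (\ref{trans.formula}) is straightforward to implement; the following
{\sc SymPy} sketch (an illustration only, using the scaling convention above) computes
$(A,B)_r$ and, as an example, the fourth transvectant of the generic binary quintic
with itself, which is a covariant of degree-order $(2,2)$.
\begin{verbatim}
# Transvectants of binary forms via formula (trans.formula); illustration only.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def transvectant(A, B, r, m, n):
    """(A,B)_r for A of order m and B of order n in x1, x2."""
    pref = sp.Rational(sp.factorial(m - r)*sp.factorial(n - r),
                       sp.factorial(m)*sp.factorial(n))
    return sp.expand(pref*sum((-1)**i*sp.binomial(r, i)
                              *sp.diff(A, x1, r - i, x2, i)
                              *sp.diff(B, x1, i, x2, r - i)
                              for i in range(r + 1)))

a = sp.symbols('a0:6')                               # generic binary quintic
F = sum(sp.binomial(5, i)*a[i]*x1**(5 - i)*x2**i for i in range(6))
print(transvectant(F, F, 4, 5, 5))                   # order 2, degree 2 covariant
\end{verbatim}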
\subsection{Gordan series} \label{section.Gordanseries}
Introduce a parallel set of letters
$\mathbf y = (y_1,y_2)$, and define Cayley's Omega operator
\[ \Omega_{\mathbf x \mathbf y} =
\frac{\partial^2}{\partial x_1 \, \partial y_2} -
\frac{\partial^2}{\partial x_2 \, \partial y_1}. \]
If we represent an element in $S_m \otimes S_n$ as a bihomogeneous form $G$
of orders $m,n$ in $\mathbf x,\mathbf y$, then \[ \pi_r (G) = \frac{(m-r)! \, (n-r)!}{m! \, n!} \,
\{\Omega_{\mathbf x \mathbf y}^r \circ G \}_{\mathbf y:=\mathbf x \, .} \]
A splitting to $\pi_r$ is given by the map
\[ \imath_r: \alpha_{\mathbf x}^{m+n-2r} \longrightarrow (\mathbf x \, \mathbf y)^r \,
\alpha_{\mathbf x}^{m-r} \alpha_{\mathbf y}^{n-r}, \] where
$(\mathbf x \mathbf y) = x_1 \, y_2 - x_2 \, y_1$. The decomposition $G = \sum\limits_r \, \imath_r \circ \pi_r(G)$ is called
the Gordan series for $G$. In general, it may be symbolically written as
\[ \alpha_\mathbf x^m \, \beta_\mathbf y^n =
\sum\limits_{r=0}^{\min(m,n)} \,
\frac{\binom{m}{r} \, \binom{n}{r}}{\binom{m+n-r+1}{r}} \,
(\mathbf x \, \mathbf y)^r \, {\theta_{(r)}}_{\mathbf x}^{m-r} \, {\theta_{(r)}}_{\mathbf y}^{n-r}, \]
where ${\theta_{(r)}}_\mathbf x^{m+n-2r}$ stands for $(\alpha \, \beta)^r \,
\alpha_\mathbf x^{m-r} \, \beta_\mathbf x^{n-r}$ (see~\cite[p.~55]{GrYo} or~\cite[\S 24.4]{Gurevich}).
\subsection{Wronskians}
Let $m,n \ge 0$ be integers such that $m \le n+1$.
Consider the following composite morphism of representations
\[ w: \wedge^m S_n \stackrel{\sim}{\longrightarrow} S_m(S_{n-m+1}) \longrightarrow S_{m(n-m+1)}, \]
where the first map is an isomorphism (see~\cite[\S2.5]{AC}) and the second is the natural surjection.
Given a sequence of binary $n$-ics $A_1,\dots,A_m$, define
their Wronskian $W(A_1,\dots,A_m)$ to be the
determinant
\[ (i,j) \longrightarrow \frac{\partial^{m-1} \, A_i}{\partial x_1^{m-j} \, \partial \, x_2^{j-1}}, \quad
(1 \le i,j \le m). \]
It equals the image $w(A_1 \wedge \dots \wedge A_m)$. We have
$W(A_1,\dots,A_m)=0$, iff the $A_i$ are linearly dependent over $\mathbf C$. (The `if' part is obvious. For
the converse, see~\cite[\S 1.1]{Meulien}.)
\begin{Lemma} \sl
Let $A_1,\dots,A_m$ be linearly independent forms of order $m$. Then
$W = W(A_1,\dots,A_m)$ is (up to scalar) the unique form of order $m$
such that $(W,A_i)_m=0$ for all $i$.
\label{lemma.wr} \end{Lemma}
\noindent {\sc Proof.}\;
Consider the composite morphism
\[ g: \wedge^{m+1} S_m \longrightarrow \wedge^m S_m \otimes S_m
\stackrel{\sim}{\longrightarrow} S_m \otimes S_m \longrightarrow \mathbf C, \]
where the first map is dual to the exterior product.
For any $i$, we have $(W,A_i)_m = g(A_1 \wedge \dots \wedge A_m \wedge A_i) = 0$.
The pairing
\[ S_m \times S_m \longrightarrow \mathbf C, \quad (A,B)\longrightarrow (A,B)_m \]
is nondegenerate, hence such a form is unique up to scalar. \qed
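As a quick illustration of Lemma~\ref{lemma.wr} (not needed for anything that follows),
the following {\sc SymPy} sketch takes an arbitrary choice of three linearly independent
cubics and checks that their Wronskian satisfies $(W,A_i)_3=0$; the small transvectant
routine from the sketch above is repeated so that the fragment runs on its own.
\begin{verbatim}
# Check of the Lemma for m = 3; illustration only.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def transvectant(A, B, r, m, n):
    pref = sp.Rational(sp.factorial(m - r)*sp.factorial(n - r),
                       sp.factorial(m)*sp.factorial(n))
    return sp.expand(pref*sum((-1)**i*sp.binomial(r, i)
                              *sp.diff(A, x1, r - i, x2, i)
                              *sp.diff(B, x1, i, x2, r - i)
                              for i in range(r + 1)))

m = 3
A = [x1**3 + x2**3, x1**2*x2 - 2*x1*x2**2, x1**3 + x1*x2**2]  # independent cubics
rows = [[sp.diff(Ai, x1, m - j, x2, j - 1) for j in range(1, m + 1)] for Ai in A]
W = sp.expand(sp.Matrix(rows).det())
print([transvectant(W, Ai, m, m, m) for Ai in A])    # expect [0, 0, 0]
\end{verbatim}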
\subsection{Covariants}
Reviving an old notation due to Cayley,
we will write $(\alpha_0,\dots,\alpha_n \,) \hspace{-1.6mm} ( \, u,v)^n$ for the expression
\[\sum\limits_{i=0}^n \; \binom{n}{i} \, \alpha_i \, u^{n-i} v^i. \]
In particular $\mathbb F = (a_0,\dots,a_d \,) \hspace{-1.6mm} ( \, x_1,x_2)^d$ denotes the {\sl generic} $d$-ic,
which we identify with the natural trace form in $S_d \, \otimes \, S_d^*$.
Using the duality in~(\ref{self-duality}), this amounts to the identification
of $a_i \in S_d^*$ with $\frac{1}{d!} \, x_2^{d-i} \, (-x_1)^i$.
Let $R$ denote the symmetric algebra
\[ \bigoplus\limits_{m \ge 0} \, S_m(S_d^*) = \bigoplus\limits_{m \ge 0} \, R_m =
\mathbf C \, [a_0,\dots,a_d], \]
and $\P^d = \P \, S_d = \text{Proj} \, \, R$.
A {\sl covariant} of degree-order $(m,q)$ (of binary $d$-ics) is by definition an $SL(V)$-equivariant imbedding
$S_0 \hookrightarrow S_m(S_d) \otimes S_q$.
Let $\Phi$ denote the image of $1$ via this map, then we may write
$\Phi = (\varphi_0,\dots,\varphi_q \,) \hspace{-1.6mm} ( \, x_1,x_2)^q$
where each $\varphi_i$ is a homogeneous degree $m$ form in the
$\{a_i\}$. The weight of $\Phi$ is defined to be $\frac{1}{2}(d \, m-q)$
(which is always a nonnegative integer).
A covariant of order $0$ is called an invariant.
E.g., $(\mathbb F,\mathbb F)_2$ is a covariant of degree-order $(2,2d-4)$, and for $d=4$,
the compound transvectant $((\mathbb F,\mathbb F)_2,\mathbb F)_4$ is an invariant of degree $3$. If
$\mathbb F$ is specialized to $F \in S_d$, then $\Phi$ gets specialized to $\Phi_F \in S_q$.
\subsection{} \label{section.map.Rmodules}
Let $\Phi$ denote a covariant of degree-order $(m,q)$. Let $a,b$ denote
nonnegative integers, and let $r= (a+q-b)/2$. For every $F \in S_d$, we have a map
\[ h_F: S_a \longrightarrow S_b, \quad G \longrightarrow (\Phi_F,G)_r. \]
Since the entries of the matrix describing $h_F$ are degree $m$ forms in the $\{a_i\}$, we
may see it as an $SL_2$-equivariant map of graded $R$-modules
\begin{equation} R \otimes S_a \longrightarrow R(m) \otimes S_b.
\label{map.Rmodules} \end{equation}
Conversely, every equivariant map of the form (\ref{map.Rmodules})
arises from a covariant. (Indeed, in degree zero it reduces to
a map of representations $S_a \longrightarrow S_m(S_d) \otimes S_b$.)
The numerical conditions are assumed to be such that the transvection is possible,
i.e., we must have $a+q-b$ nonnegative and even, and
$r \le \min(a,q)$.
If $a \le b$, then by the Wronskian of the map $h$ we mean
\[ W(h_\mathbb F(x_1^a),h_\mathbb F(x_1^{a-1} \, x_2), \dots, h_\mathbb F(x_2^a)), \]
which is a covariant of degree $m \, (a+1)$ and order $(a+1)(b-a)$. Its coefficients are
(up to signs) the maximal minors of $h_\mathbb F$.
\subsection{} \label{section.evectant}
We will let $\mathfrak I(\Phi) \subseteq R$ denote the ideal generated by the
coefficients of $\Phi$. E.g., if $d=3$, then $\mathfrak I((\mathbb F,\mathbb F)_2)$ is the
defining ideal of the twisted rational cubic curve.
If $\mathbb I(a_0,\dots,a_d)$ is an invariant of degree $m$, then its
{\sl evectant} is defined to be
\begin{equation}
\mathcal E_\mathbb I = \frac{1}{m} \sum\limits_{i=0}^d \,
\frac{\partial \mathbb I}{\partial a_i} \, (-x_2)^{d-i} \, x_1^i,
\label{formula.evectant} \end{equation}
which is a covariant of degree-order $(m-1,d)$. By Euler's formula we have an
identity $(\mathcal E_\mathbb I,\mathbb F)_d= \mathbb I$.
Let $\mathcal A \subseteq \mathbf Q [a_0,\dots,a_d;x_1,x_2]$
denote the subring of covariants, which is naturally bigraded by $(m,q)$.
By a fundamental theorem of Gordan, $\mathcal A$ is finitely generated. A
minimal set of generators of $\mathcal A$ is called a fundamental system for $d$-ics.
Moreover $\mathcal A$ is a unique factorization domain and
each of the minimal generators is a prime element of $\mathcal A$.
The number of linearly independent covariants of $d$-ics of degree-order $(m,q)$ is given
by the Cayley-Sylvester formula (see~\cite[Corollary 4.2.8]{Sturmfels}).
For integers $n,k,l$, let $p(n,k,l)$ denote the number of partitions of
$n$ into $k$ parts such that no part exceeds $l$. Then
\begin{equation} \zeta_{m,q} = \dim \mathcal A_{m,q} =
p \, (\frac{dm-q}{2},d,m)-p \, (\frac{dm-q-2}{2},d,m).
\label{formula.CS} \end{equation}
\begin{Example} \rm Let $d=5$, then
$\zeta_{4,8} = p(6,5,4)-p(5,5,4) = 2$.
A basis for the space $\mathcal A_{4,8}$ is given by
\[ (\mathbb F,\mathbb F)_2 \, (\mathbb F,\mathbb F)_4, \quad \mathbb F \, (\mathbb F,(\mathbb F,\mathbb F)_4)_2. \]
\end{Example}
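The partition counts in (\ref{formula.CS}) are easily generated by machine; in the
sketch below, $p(n,k,l)$ is read as the number of partitions of $n$ into at most $k$
parts, none exceeding $l$, which is the convention that reproduces the example above.
\begin{verbatim}
# Dimension count of (formula.CS); p(n,k,l) = partitions of n into at most k
# parts, each part at most l.  Illustration only.
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k, l):
    if n == 0:
        return 1
    if n < 0 or k == 0 or l == 0:
        return 0
    return p(n, k, l - 1) + p(n - l, k - 1, l)  # no part = l, or remove a part = l

def zeta(d, m, q):
    w = d*m - q
    if w < 0 or w % 2:
        return 0
    return p(w//2, d, m) - p(w//2 - 1, d, m)

print(zeta(5, 4, 8), zeta(5, 9, 5), zeta(5, 18, 0))  # 2, 5, 1 as quoted in the text
\end{verbatim}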
\subsection{Quintics}
We will make use of the fundamental system for quintics, which has been known
since the nineteenth century. The following table (adapted from~\cite[p.~131]{GrYo})
lists the degree-orders of the minimal generators of $\mathcal A$. For instance, there is
one generator in degree-order $(5,3)$ and none in $(3,7)$.
\[ \qquad \qquad \text{order} \]
\[ \text{degree} \; \;
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline
{} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 9 \\ \hline \hline
1 & {} & {} & {} & {} & {} & 1 & {} & {} & {} \\ \hline
2 & {} & {} & 1 & {} & {} & {} & 1 & {} & {} \\ \hline
3 & {} & {} & {} & 1 & {} & 1 & {} & {} & 1 \\ \hline
4 & 1 & {} & {} & {} & 1 & {} & 1 & {} & {} \\ \hline
5 & {} & 1 & {} & 1 & {} & {} & {} & 1 & {} \\ \hline
6 & {} & {} & 1 & {} & 1 & {} & {} & {} & {} \\ \hline
7 & {} & 1 & {} & {} & {} & 1 & {} & {} & {} \\ \hline
8 & 1 & {} & 1 & {} & {} &{} & {} & {} & {} \\ \hline
9 & {} & {} & {} & 1 & {} &{} & {} & {} & {} \\ \hline
11 & {} & 1 & {} & {} & {} &{} & {} & {} & {} \\ \hline
12 & 1 & {} & {} & {} & {} &{} & {} & {} & {} \\ \hline
13 & {} & 1 & {} & {} & {} &{} & {} & {} & {} \\ \hline
18 & 1 & {} & {} & {} & {} &{} & {} & {} & {} \\ \hline
\end{tabular} \]
\vspace*{1cm}
\medskip
We will frequently need the following covariants:
\begin{equation}
\begin{array}{lll}
\vartheta_{22}=(\mathbb F,\mathbb F)_4, & \vartheta_{26}=(\mathbb F,\mathbb F)_2, &
\vartheta_{33}=(\vartheta_{22},\mathbb F)_2, \\
\vartheta_{39} =(\mathbb F,\vartheta_{26})_1, &
\vartheta_{40}=(\vartheta_{22},\vartheta_{22})_2, &
\vartheta_{44}=(\vartheta_{22},\vartheta_{26})_2, \\
\vartheta_{51}=(\vartheta_{22}^2,\mathbb F)_4, &
\vartheta_{80}=(\vartheta_{22}^3,\vartheta_{26})_6.
\end{array} \end{equation}
The notation is so set up that $\vartheta_{m \,q}$ is a generator in
degree-order $(m,q)$. (The comma is omitted for ease of reading.)
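For instance, $\vartheta_{51} = (\vartheta_{22}^2,\mathbb F)_4$ is indeed of degree $2 \cdot 2 + 1 = 5$ and order $2 \cdot 2 + 5 - 2 \cdot 4 = 1$.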
\medskip \medskip
The computations which go into constructing such tables are generally very laborious, and of course
the classical invariant theorists carried them out without the aid of machines.
Hence, it is not unreasonable to worry about their correctness (also see the footnote
on~\cite[p.~131-132]{GrYo}). In the case of binary quintics however, I have thoroughly checked
that the table above is entirely correct.
Here is a typical instance of how the table is used: we have
\[ \zeta_{9,5} = p(20,5,9)-p(19,5,9) = 98-93 = 5, \]
i.e., $\mathcal A_{9,5}$ is $5$-dimensional. Notice that
\begin{equation} \label{basis.95}
B = \{ \vartheta_{51} \, \vartheta_{22}^2, \,
\vartheta_{51} \, \vartheta_{44}, \,
\vartheta_{40} \, \vartheta_{33} \, \vartheta_{22}, \,
\vartheta_{40}^2 \, \mathbb F, \, \vartheta_{80} \, \mathbb F \} \end{equation}
are all of degree-order $(9,5)$. Since they are linearly independent over $\mathbf Q$ (this can
be checked by specializing to $F= x_1^5 + x_2^5 + (x_1+x_2)^5$ and solving a system of linear
equations), $B$ is a basis of $\mathcal A_{9,5}$. This basis will be used in \S\ref{firstsyzygies.J}.
Since $\zeta_{18,0} = p(45,5,18)-p(44,5,18) = 967-966=1$, up to scalar, quintics have
a unique invariant of degree $18$. Hence, following~\cite[p.~131]{GrYo}, we will define
\begin{equation} \mathbb H = (\vartheta_{22}^7, \mathbb F \, \vartheta_{39})_{14}.
\label{defn.H} \end{equation}
(This merely requires checking that the transvectant is not identically zero, which can be
done by specializing $\mathbb F$ and calculating directly.) Usually $\mathbb H$ is called a skew-invariant
(since it is of odd weight). Indeed, $\mathbb H$ was the first skew-invariant to be discovered for any $d$.
(They do not occur for $d \le 4$.) For what it is worth, a
{\sc Maple} computation shows that $\mathbb H$ is a linear combination of
$848$ monomials in $a_0,\dots,a_5$.
\subsection{} \label{geometry1}
Let $u \in S_2$ be a nonzero vector. The duality in~(\ref{self-duality})
identifies the point $[u] \in \P S_2$ with its
polar line $\{[v] \in \P^2: (u,v)_2 =0\} \in \P S_2^*$.
The point lies on its own polar iff $(u,u)_2=0$, which happens iff $[u] \in C$.
If $[u]$ lies on the polar of $[v]$, then $[v]$ lies on the
polar of $[u]$. The pole of the line joining two points $[u],[v]$ is given by $[(u,v)_1]$.
Three points $[u],[v],[w]$ are collinear iff $((u,v)_1,w)_2=0$.
If $l \in S_1$, then the tangent to $\phi([l]) \in C$ is the line
$\{[l \, m]: m \in S_1 \}$. The line joining $\phi([l]),\phi([m])$
is (the polar of) $[l \, m]$.
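For instance, the pole of the line joining $\phi([x_1]) = [x_1^2]$ and $\phi([x_2]) = [x_2^2]$ is $[(x_1^2,x_2^2)_1] = [x_1 \, x_2]$; its polar is the pencil $\{[ \, c_1 \, x_1^2 + c_2 \, x_2^2 \, ]\}$, which visibly contains both points.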
\subsection{} \label{R=H}
The following proposition will be needed in \S\ref{section.sing.H}.
Let $G$ denote a binary quartic identified with four points
$\Pi = \{\mathsf a,\mathsf b,\mathsf c,\mathsf d\} \subseteq C$. Consider the three pairwise
intersections
$\overline{\mathsf a \, \mathsf b} \cap \overline{\mathsf c \, \mathsf d},
\overline{\mathsf a \, \mathsf c} \cap \overline{\mathsf b \, \mathsf d},
\overline{\mathsf a \, \mathsf d} \cap \overline{\mathsf b \, \mathsf c}$,
regarding each as a form in $S_2$.
\begin{Proposition} \sl
The product of the three points is given (of course up to scalar) by the
covariant $\mathbb T(G) = (G,(G,G)_2)_1$.
\label{prop.triple.intersection} \end{Proposition}
\noindent {\sc Proof.}\; Let us write $G = a_\mathbf x \, b_\mathbf x \, c_\mathbf x \, d_\mathbf x$, where
$a_\mathbf x = a_1 \, x_1 + a_2\, x_2$ and $\mathsf a = \phi([a_\mathbf x])$ etc. By \S\ref{geometry1}, the intersection
$\overline{\mathsf a \mathsf b} \cap \overline{\mathsf c \mathsf d}$ corresponds to
\[ ((a_\mathbf x^2,b_\mathbf x^2)_1,(c_\mathbf x^2,d_\mathbf x^2)_1)_1 =
(a \, b)(c \, d) \, (a_\mathbf x \, b_\mathbf x, c_\mathbf x \, d_\mathbf x)_1, \]
where $(a \, b) = a_1 b_2 - a_2 \, b_1$ etc. Hence, up to a factor, the product corresponds to
\begin{equation}
(a_\mathbf x \, b_\mathbf x, c_\mathbf x \, d_\mathbf x)_1 \, (a_\mathbf x \, c_\mathbf x, b_\mathbf x \, d_\mathbf x)_1 \,
(a_\mathbf x \, d_\mathbf x, b_\mathbf x \, c_\mathbf x)_1.
\label{product.3} \end{equation}
The last expression is of degree $3$ in the coefficients of $G$ (since each of the letters
$a, \dots, d$ occurs thrice), moreover it is a covariant since the underlying
geometric construction is compatible with the $SL(V)$-action. However, $\zeta_{3,6} = 1$ for
binary quartics, hence $\mathbb T(G)$ and (\ref{product.3}) are equal up to a scalar. \qed
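For instance, let $G = x_1^4 - x_2^4$, so that $\Pi$ consists of the points $\phi([x_1 \pm x_2])$ and $\phi([x_1 \pm \sqrt{-1} \, x_2])$. The three diagonal intersections are then $[x_1 \, x_2], \, [x_1^2 - \sqrt{-1} \, x_2^2]$ and $[x_1^2 + \sqrt{-1} \, x_2^2]$, and a direct calculation gives $\mathbb T(G) = \text{constant} \times x_1 \, x_2 \, (x_1^4 + x_2^4)$, which is (up to scalar) their product.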
\smallskip
The result remains true if $\Pi$ contains one double point, say $\mathsf a = \mathsf b$, with
$\overline{\mathsf a \, \mathsf b}$ interpreted as the tangent to $C$ at $\mathsf a$.
By~\cite[\S 3.5.2]{Glenn}, the covariant $\mathbb T(G)$ vanishes identically iff $\Pi$ consists of two (possibly coincident) double points,
say $\mathsf a = \mathsf b, \mathsf c = \mathsf d$. In this case the
geometric construction collapses, since $\overline{\mathsf a \, \mathsf c}
\cap \overline{\mathsf b \, \mathsf d}$ is no longer a determinate point.
\smallskip
This proposition can be used to give an alternate definition of $\mathbb H$.
Let $\mathfrak R$ denote the resultant $\text{Res}(\mathbb F,\vartheta_{33})$, defined
as the determinant of an $8 \times 8$ Sylvester matrix
(see~\cite[Ch.~V,\S 10]{Lang}).
By construction it is of degree $5 \times 3 + 3 \times 1 =18$ in the $\{a_i\}$.
\begin{Proposition} \sl
The hypersurface defined by $\mathfrak R$ coincides with $\mathcal H$.
\end{Proposition}
We will avoid using the fact that $\zeta_{18,0} =1$.
\smallskip
\noindent {\sc Proof.}\;
Let us first show that $\mathfrak R$ is not identically zero. Specialize to
$F = x_1^5 + 2 \, x_2^5 + (x_1+ x_2)^5$. Then
$\vartheta_{33} = -12 \, x_1 \, x_2 \, (x_1+x_2)$,
which has no common factor with $F$, hence $\mathfrak R \not\equiv 0$.
Now assume that $F$ and $\vartheta_{33}(F)$ have a
common linear factor, we may take it to be $x_1$ after a change of variables.
Let $F = x_1 \, G$, with $G = (a_0,a_1,a_2,a_3,a_4 \,) \hspace{-1.6mm} ( \, x_1,x_2)^4$.
Calculating directly, we have
\begin{equation}
\vartheta_{33}(F)|_{x_1:=0} =
\frac{24}{125} \, x_2^3 \, (2 \, a_3^3 + a_1 \, a_4^2 - 3 \, a_2 \, a_3 \, a_4),
\label{vartheta33.1} \end{equation}
which vanishes by hypothesis. Hence
\[ \mathbb T(G)|_{x_1:=0} =
- \, x_2^6 \, (2 \, a_3^3 + a_1 \, a_4^2 - 3 \, a_2 \, a_3 \, a_4) \]
must also vanish, i.e., $x_1$ must divide one of the three
intersection points coming from $G$. Denote this point by $\mathsf z =
[x_1 \, (\alpha \, x_1 + \beta \, x_2)]$. It is now immediate that
the divisor corresponding to $F$ is in involution with respect to $\mathsf z$, hence
$[F] \in \mathcal H$. Thus we have an inclusion of hypersurfaces
$\{[F] \in \P^5: \mathfrak R = 0 \} \subseteq \mathcal H$. Since the latter
is irreducible, they must be equal. \qed
\subsection{}
It will prove useful to introduce the following loci in $\P^5$.
If $\lambda=(\lambda_1,\dots,\lambda_r)$ is a partition of $5$,
let $X_\lambda$ denote the closed subvariety
\[ \{[F] \in \P^5: F = \prod l_i^{\lambda_i} \; \;
\text{for some $l_i \in S_1$} \}. \]
In other words, the divisor of $[F] \in X_\lambda$ is of the form
$\lambda_1 \mathsf a_1 + \dots + \lambda_r \, \mathsf a_r$ with some of the
$\mathsf a_i$ possibly coincident. The dimension of $X_\lambda$ equals the
number of (nonzero) parts in $\lambda$. There is an inclusion
$X_\mu \subseteq X_\lambda$ iff $\lambda$ is a refinement of $\mu$.
For instance, $X_{(5)}$ is the rational normal
quintic, $X_{(2,1,1,1)}$ is the discriminant hypersurface, and
$X_{(3,1,1)}$ is the locus of nullforms.
\subsection{A summary of results} \label{section.results.summary}
In \S\ref{section.sing.H} we will construct a desingularization of $\mathcal H$, and then show that
its singular locus $\mathcal B$ consists of three components
$\Omega_{(1)},\Omega_{(2)}$ and $X_{(3,1,1)}$. They are respectively the
$SL_2$-orbit closures of the forms
\[ x_1^5 + x_2^5, \quad
x_1 \, x_2 \, (x_1-x_2) \, (x_1^2 + x_1 \, x_2 + x_2^2), \quad
x_1^3 \, x_2 \, (x_1+x_2). \]
Their degrees are $6,10$ and $9$, hence $\mathcal B$ is of degree $25$
and pure codimension two. Next we show that the ideal
$I_\mathcal B \subseteq R$ is a complete intersection, defined by the coefficients
of $\vartheta_{51}$.
In \S\ref{section.dual.H} it will be seen that $\mathcal H$ is naturally isomorphic to its own
dual variety. The duality $S_5 \simeq S_5^*$ in (\ref{self-duality})
induces an isomorphism $\sigma: \P^5 \stackrel{\sim}{\longrightarrow} (\P^5)^*$.
Let $[F] \in \mathcal H \setminus \mathcal B$, with $T_{\mathcal H,[F]}$ the tangent space to
$\mathcal H$ at $[F]$. Then the point $\sigma^{-1}(T_{\mathcal H,[F]})$ coincides with
$[\mathcal E_\mathbb H(F)]$ (the value of the evectant at $F$). It turns out however,
that this point also belongs to $\mathcal H$. Thus we get a morphism
\[ \mathcal H \setminus \mathcal B \longrightarrow \mathcal H \setminus \mathcal B, \quad
[F] \longrightarrow [\mathcal E_\mathbb H(F)]. \]
This map is involutive, i.e., $\mathcal E_\mathbb H(\mathcal E_\mathbb H(F))$ equals $F$ up to a scalar.
Let $J = (\frac{\partial \, \mathbb H}{\partial a_0}, \dots,
\frac{\partial \, \mathbb H}{\partial a_5}) \subseteq R $
denote the Jacobian ideal of $\mathbb H$. In~\S\ref{section.jacobian}
we show that $J$ is a perfect ideal of height two, with an $SL_2$-equivariant
minimal resolution
\[ \begin{aligned}
0 \leftarrow R/J \leftarrow R & \leftarrow R(-17) \otimes S_5 \\
& \leftarrow R(-18) \otimes S_2 \oplus R(-22) \oplus R(-26) \leftarrow 0.
\end{aligned} \]
During the course of the proof we will see that $J$ naturally fits into
a three-parameter family of perfect ideals.
The results of~\S\ref{section.J} allow us to identify the morphisms in this resolution
up to three distinct possibilities, but no further.
In order to resolve this ambiguity it would suffice to
calculate the value of $\mathcal E_\mathbb H$ at $\mathcal F_Q$. A general formalism is developed in
\S\ref{section.evectants} to solve this problem. For any covariant
$\Phi$ of $d$-ics, we construct a sequence of covariants $\mathcal A_\bullet$ called its evectants; this
generalizes the classical construction from \S\ref{section.evectant}. Given two arbitrary
covariants $\Phi,\Psi$ with evectants $\mathcal A_\bullet,\mathcal B_\bullet$, we deduce formulae for
calculating the evectants of a general transvectant $(\Phi,\Psi)_r$.
This iterative scheme is then applied to formula~(\ref{defn.H}) to evaluate $\mathcal E_\mathbb H$.
Nearly all of \S\ref{section.evectants} can be read independently of the rest of the paper.
\label{end.summary}
\subsection{A note on computational procedures} Since I have used
machine computations in several parts of this paper, their role
and extent should be clarified. All the computations have been done in
{\sc Maple}. I have written routines to calculate the numbers $p(n,k,l)$ and
$\zeta_{m,q}$ appearing in formula (\ref{formula.CS}). I have also programmed
formula~(\ref{trans.formula}) for calculating transvectants; hence
identities such as~(\ref{expr.TA}) and (\ref{phi51.1}) are machine-computed.
I have also used {\sc Maple} for some routine calculation in linear
algebra, e.g., for evaluating Wronskian determinants and for solving systems of linear equations.
None of the results depend upon calculating Gr{\"o}bner bases in
any guise (e.g., minimal free resolutions).
\setcounter{footnote}{1}
On the whole, I have not succeeded in bypassing heavy calculations entirely,
and I very much doubt if this is at all possible. The Hermite invariant is a specific
algebro-geometric object which is not a member of any natural `family', hence it seems
unlikely that merely general considerations will enable us to prove much about it. Even so,
I believe that none of the calculations done here by a machine are beyond the ambit of a
patient and able human mathematician\footnote{Paul Gordan and George Salmon come to mind; for
instances, see~\cite{Gordan} or the tables at the end of~\cite{Salmon1}.}.
\section{The singular locus} \label{section.sing.H}
\subsection{} First we construct a natural desingularization of $\mathcal H$.
Let
\[ Y = \{ (\mathsf c,\mathsf z) \in C \times \P^2:
\text{the tangent to $C$ at $\mathsf c$ passes through $\mathsf z$} \}. \]
The second projection $Y \stackrel{\alpha}{\longrightarrow} \P^2$ is a double
cover ramified along $C$. Let
$\P \, T_{\P^2} \longrightarrow \P^2$ denote the projectivisation of the tangent
bundle of $\P^2$, so that the fibre over $\mathsf z \in \P^2$ can be identified
with the pencil of lines through $\mathsf z$. Define the $\P^2$-bundle
\[ \text{Sym}^2 \, (\P \, T_{\P^2})
\stackrel{\beta}{\longrightarrow} \P^2, \]
so that an element in $\beta^{-1}(\mathsf z)$ is an unordered pair of
(possibly coincident) lines $L_1,L_2$ through $\mathsf z$.
Consider the pullback square
\[ \diagram
{\mathcal Z \,} \rto \dto & \text{Sym}^2 \, (\P \, T_{\P^2}) \dto^\beta \\
Y \rto_\alpha & {\P^2 \, .}
\enddiagram \]
Define $\mathcal Z \stackrel{f}{\longrightarrow} \mathcal H$
by sending $(\mathsf c,\mathsf z) \times (L_1,L_2)$ to the divisor
\[ \mathsf c + L_1 \cap C + L_2 \cap C. \]
(Of course, $L_i \cap C$ are interpreted scheme-theoretically.)
By construction $f$ is a projective birational morphism which is a
desingularization of $\mathcal H$. We will use this map to detect the
singularities of $\mathcal H$.
Since $Y$ is a rational variety (in fact isomorphic to $\P^1 \times \P^1$),
so is $\mathcal Z$ and hence $\mathcal H$.
Henceforth we will write
$(\mathsf c,\mathsf z;L_1,L_2)$ for $(\mathsf c,\mathsf z) \times (L_1,L_2) \in \mathcal Z$.
\begin{Lemma} \sl
The morphism
$\mathcal Z \setminus f^{-1}(X_{(5)}) \stackrel{f}{\longrightarrow}
\mathcal H \setminus X_{(5)}$ is finite.
\end{Lemma}
\noindent {\sc Proof.}\; Since the morphism is projective, it suffices to show that it has
finite fibres (see~\cite[Lemma 14.8]{Harris}).
Let $(\mathsf c,\mathsf z;L_1,L_2) \in f^{-1}([F])$. There are finitely many
choices for $\mathsf c$. By hypothesis there is a point
$\mathsf a (\, \neq \mathsf c)$ appearing in $[F]$; hence for a given $\mathsf c$
there are only finitely many possibilities for $\mathsf z$ (because
$\overline{\mathsf z \, \mathsf a} \cap C$ must be contained in $[F]$). Then for a
given $\mathsf z$, there are only finitely many possibilities for the $L_i$.
\qed
\smallskip
This argument breaks down over $X_{(5)}$; in fact
$f^{-1}(X_{(5)}) \longrightarrow X_{(5)}$ is a $\P^1$-bundle.
\subsection{}
Define the forms
\[ \begin{array}{ll}
\bU_{(1)} = x_1^5 + x_2^5, & \bU_{(2)} = x_1 \, x_2 \, (x_1 -x_2) \, (x_1^2 + x_1 \, x_2 + x_2^2) \\
\bU_{(3)} = x_1^3 \, x_2 \, (x_1 + x_2), & \bU_{(4)} = x_1^3 \, x_2^2, \\
\bU_{(5)} = x_1^4 \, x_2, & \bU_{(6)} = x_1^5.
\end{array} \]
Let $\mathcal B \subseteq \mathcal H$ denote the union of the orbits of all the $\bU_{(i)}$.
We claim that $\mathcal B$ is closed. Indeed, by~\cite[\S2]{Aluffi-Faber} the closure of any orbit is a union of
orbits of forms of the type $x_1^a \, x_2^b$, and they are already included.
\begin{Theorem} \sl
The singular locus $\text{Sing}(\mathcal H)$ coincides with $\mathcal B$.
\label{theorem.sing} \end{Theorem}
The theorem will follow from the following proposition.
\begin{Proposition} \sl
\begin{enumerate}
\item
For $[F] \in \mathcal H$, the fibre $f^{-1}([F])$ consists of more than one point iff
$F$ lies in the orbit of one of the forms $\bU_{(i)}$ for $1 \le i \le 6$, $i \neq 5$.
\item
Assume $[F] \in \mathcal H \setminus \mathcal B$, and $f^{-1}([F]) = \{\mathsf w\}$. Then the morphism on tangent spaces
$T_{\mathcal Z,\mathsf w} \longrightarrow T_{\mathcal H,[F]}$ is injective.
\end{enumerate}
\end{Proposition}
Let us show the theorem assuming the proposition. If $[F]$ lies in the orbit of one of
$\bU_{(1)},\dots,\bU_{(4)}$, then the fibre $f^{-1}([F])$ is disconnected, hence
$[F]$ is not a normal point. Since $\bU_{(5)},\bU_{(6)}$ lie in the orbit closure of
$\bU_{(3)}$, we deduce that $\mathcal B \subseteq \text{Sing}(\mathcal H)$. If $[F] \in \mathcal H \setminus \mathcal B$, then
by~\cite[Theorem 14.9]{Harris} the map $f$ is a local isomorphism in a neighbourhood of $\mathsf w$,
hence $[F]$ is a nonsingular point. \qed
\subsection{} Let us prove part (1) of the proposition. Define
\[ \mathcal S = \{[F] \in \mathcal H: f^{-1}([F]) \; \text{consists of at least two points} \}. \]
Evidently $\bU_{(6)} \in \mathcal S$.
Assume that $[F] = 3 \mathsf c + \mathsf a_1 + \mathsf a_2$, where $\mathsf a_1,\mathsf a_2$ are (possibly coincident) points each
different from $\mathsf c$. Let $\mathsf z$ denote the intersection $\overline{\mathsf c \mathsf c} \cap
\overline{\mathsf a_1 \mathsf a_2}$, then
$(\mathsf c,\mathsf z; \overline{\mathsf c \, \mathsf c},\overline{\mathsf a_1 \, \mathsf a_2})$ and
$(\mathsf c,\mathsf c; \overline{\mathsf c \, \mathsf a_1},\overline{\mathsf c \, \mathsf a_2})$ both map to
$[F]$; this shows that $\bU_{(3)}, \bU_{(4)} \in \mathcal S$. It is equally clear that
$\bU_{(5)} \notin \mathcal S$.
If a point of the form $(\mathsf c,\mathsf c;L_1,L_2)$ belongs to $f^{-1}([F])$, then $[F]$ must have a
point of multiplicity $\ge 3$ at $\mathsf c$, which is already considered above.
Hence assume that $[F] \in \mathcal S \setminus X_{(3,1,1)}$, and
$(\mathsf c,\mathsf z;L_1,L_2),(\mathsf c',\mathsf z';L_1',L_2')$ are two distinct points in $f^{-1}([F])$.
Since $\mathsf c \neq \mathsf z$, we may write $\mathsf c = \phi([x_1]), \mathsf z = [x_1 x_2]$ after a
change of variables. Then $[F] = [\mathcal F_Q]$ for some $Q \in \P^2$ (see~\S\ref{section.FQ}).
If $q_0=0$, then both $q_1,q_2$ must be nonzero (otherwise $[\mathcal F_Q] \in X_{(3,1,1)}$).
But then $[\mathcal F_Q]$ is in the orbit of $A=x_1 \, x_2^2 \, (x_1+x_2) \, (x_1-x_2)$, and it is
clear from the geometry that $[A] \notin \mathcal S$.
Hence we may assume $q_0 =1$, and then
\[ F = x_1 \, (x_1 - \alpha \, x_2) \, (x_1 + \alpha \, x_2) \,
(x_1 - \beta \, x_2) \, (x_1 + \beta \, x_2) \]
for some $\alpha,\beta$, such that $\mathsf c' = \phi([x_1 - \alpha \, x_2])$.
By assumption $\mathsf z'$ is one of the diagonal intersection points (see~\S\ref{R=H}) coming from
the quartic form $G = x_1 \, (x_1 + \alpha \, x_2) \,
(x_1 - \beta \, x_2) \, (x_1 + \beta \, x_2)$. The quadratic form corresponding to
$\mathsf z'$ must divide $\mathbb T(G)$, and hence $x_1 - \alpha \, x_2$ must divide $\mathbb T(G)$.
By a direct calculation,
\begin{equation} \begin{aligned}
{} & \mathbb T(G)|_{x_1 := \alpha \, x_2} \\
= \; & \frac{1}{32} \, x_2^6 \, \alpha^3 \, (\alpha^2 + 3 \, \beta^2) \,
(\alpha^2 + 4 \, \alpha \, \beta - \beta^2) \,
(\alpha^2 - 4 \, \alpha \, \beta - \beta^2),
\end{aligned} \label{expr.TA} \end{equation}
which must vanish. Now $\alpha \neq 0$, since $[F] \notin X_{(3,1,1)}$.
Hence we have two cases
\[ \frac{q_0 \, q_2}{q_1^2} =
\frac{4 \, \alpha^2 \, \beta^2}{(\alpha^2 + \beta^2)^2} =
\begin{cases}
1/5 & \text{if $\alpha^2 \pm 4 \, \alpha \beta - \beta^2 =0$,} \\
-3 & \text{if $\alpha^2 + 3 \, \beta^2 =0$.}
\end{cases} \]
A form satisfying the first case is in the orbit of
$\mathcal F_{[1,5,5]} = x_1 (1,5,5 \,) \hspace{-1.6mm} ( \, x_1^2,x_2^2)^2$. By the transformation
$(x_1,x_2) \longrightarrow (x_1 +x_2,x_1-x_2)$ it can be brought into the
more manageable form
\begin{equation} \bU_{(1)} = x_1^5 + x_2^5.
\label{psi.1} \end{equation}
Similarly in the second case $\mathcal F_{[1,1,-3]}$ can be brought into the form
\begin{equation}
\bU_{(2)} = x_1 \, x_2 \, (x_1-x_2) \, (x_1^2 + x_1 \, x_2 + x_2^2).
\label{psi.2} \end{equation}
via $(x_1,x_2) \longrightarrow (x_1-x_2,x_1+x_2)$. We have shown that any form in
$\mathcal S \setminus X_{(3,1,1)}$ belongs to the orbit of either $\bU_{(1)}$ or $\bU_{(2)}$.
It remains to show that the latter two belong to $\mathcal S$, this
can be done by an explicit construction as follows:
Let $\omega = \exp(\frac{\pi \sqrt{-1}}{5})$, and $y = \omega^r \, x_2$.
Define points $\mathsf c = \phi([x_1-y]), \mathsf z = [(x_1+y) \, (x_1-y)]$,
and $L_i$ to be the line joining
$\phi([x_1-\omega^i \, y])$ and $\phi([\omega^i \, x_1-y])$
for $i =1,2$. This
gives a point of $f^{-1}([\bU_{(1)}])$ for every $1 \le r \le 5$.
Let $\nu = \exp(\frac{2 \pi \sqrt{-1}}{3})$, and $y = \nu^r \, x_2$.
Define points $\mathsf c = \phi([x_1-y]), \mathsf z = [(x_1-y) \, (x_1+y)]$.
Let $L_1$ be the line joining $\phi([x_1]),\phi([x_2])$, and
$L_2$ joining $\phi([x_1-\nu \, y])$ and $\phi([\nu \, x_1 - y])$.
This gives a point of $f^{-1}([\bU_{(2)}])$ for every $1 \le r \le 3$.
This completes the proof of part (1). \qed
\subsection{} We will prove part (2) by introducing a local parametrisation of the affine
version of $f$, and directly calculating the map on tangent spaces. Since
$[F] \notin X_{(3,1,1)}$, after a change of variables we may write $F =
x_1 \, (1,\xi,1 \,) \hspace{-1.6mm} ( \, x_1^2,x_2^2)^2$ for some $\xi \in \mathbf C$.
Let $\mathcal A = S_1 \times S_1 \times \mathbf C$, and define a morphism
from $\mathcal A$ to $\mathcal Z$ by sending $(l_1,l_2,\xi) \in \mathcal A$
to $([l_1^2],[l_1 \, l_2],L_1,L_2)$, where $L_1, \, L_2$ correspond to the solutions of
the equation $(1,\xi,1 \,) \hspace{-1.6mm} ( \, l_1^2,l_2^2)^2=0$. Since the morphism is smooth, for a local
parametrisation of $f$ we may use the map
\[ \hat f: \mathcal A \longrightarrow \text{Cone}(\mathcal H), \quad
(l_1,l_2,\xi) \longrightarrow l_1 \, (1,\xi,1 \,) \hspace{-1.6mm} ( \, l_1^2,l_2^2)^2. \]
The image of an arbitrary tangent vector $(m_1,m_2,\eta)$ via $d \hat f$
is given by the limit
\[ \begin{aligned}
{} & \tau(m_1,m_2,\eta) \\
= & \lim_{\epsilon \rightarrow 0} \;
\frac{1}{\epsilon} \, [ \,
\hat f(l_1 + \epsilon \, m_1,l_2 + \epsilon \, m_2,\xi + \epsilon \, \eta) -
\hat f(l_1,l_2,\xi) \, ].
\end{aligned} \]
Writing $\mathsf w = (x_1,x_2,\xi)$, the image of the map $T_{\mathcal A,\mathsf w} \longrightarrow T_{\text{Cone}(\mathcal H),F}$ is
spanned by the five vectors
\[ \begin{array}{ll}
\tau(x_1,0,0) = x_1 \, (5,3 \, \xi, 1 \,) \hspace{-1.6mm} ( \, x_1^2, x_2^2)^2, &
\tau(x_2,0,0) = x_2 \, (5,3 \, \xi, 1 \,) \hspace{-1.6mm} ( \, x_1^2, x_2^2)^2, \\
\tau(0,x_1,0) = 4 \, x_1^2 \, x_2 \, (\xi \, x_1^2 + x_2^2), &
\tau(0,x_2,0) = 4 \, x_1 \, x_2^2 \, (\xi \, x_1^2 + x_2^2), \\
\tau(0,0,1) = 2 \, x_1^3 \, x_2^2.
\end{array} \]
In order to verify that they are linearly independent, we calculate their Wronskian
\begin{equation} \begin{aligned}
{} & \left| \begin{array}{rrrrr}
600\, x_1 & 72\, \xi \, x_2 & 72\, \xi x_1\, & 24\, x_2 & 24\, x_1\\
120\, x_2 & 120\, x_1 & 72\, \xi\, x_2 & 72\, \xi \, x_1 & 120\, x_2 \\
96\, \xi\, x_2 & 96\, \xi \, x_1 & 48\, x_2 & 48\, x_1 & 0 \\
0 & 48\, \xi\, x_2 & 48\, \xi \, x_1 & 96\, x_2 & 96\, x_1 \\
0 & 24\, x_2 & 24\, x_1 & 0 & 0
\end{array} \right| \\
= & - \, 2^{18} \, 3^5 \, 5^2 \, x_1 \,
(6 \, \xi^2 -5, -5 \, \xi, 5 \,) \hspace{-1.6mm} ( \, x_1^2,x_2^2)^2.
\end{aligned} \label{formula.wr=E} \end{equation}
This is nonzero for any $\xi$, which proves part (2)
of the proposition. The proof of
Theorem~\ref{theorem.sing} is complete. \qed
\medskip
One can restate the theorem as follows:
$\mathcal F_Q$ is a singular point of $\mathcal H$, iff one of the expressions
$q_2, q_0 \, q_2 + 3 \, q_1^2,5 \, q_0 \, q_2 - q_1^2$ is zero.
\subsection{} For $i=1,2$, let $\Omega_{(i)}$ denote the orbit closure of $[\bU_{(i)}]$, and
let $\mathcal G_i \subseteq SL(V)$ denote the stabilizer subgroup of
$[\bU_{(i)}]$. By~\cite[\S0]{Aluffi-Faber}, we have a formula
\[ \deg \, \Omega_{(i)} = \frac{5.4.3}{|\mathcal G_i|}. \]
Since an element of $\mathcal G_i$ must permute the linear factors of $\bU_{(i)}$, it is
easy to determine all symmetries by mere inspection.
The group $\mathcal G_1$ is the dihedral group $D_5$ of order $10$, generated by
the transformations
\[ (x_1,x_2) \longrightarrow
\begin{cases} (x_2,x_1), \\ (x_1,\exp(\frac{2 \pi \sqrt{-1}}{5} ) \, x_2).
\end{cases} \]
Similarly $\mathcal G_2$ is isomorphic to $D_3$, generated by
\[ (x_1,x_2) \longrightarrow
\begin{cases} (x_2,x_1), \\ (\exp(\frac{2 \pi \sqrt{-1}}{3}) \, x_1,x_2). \end{cases} \]
Hence $\Omega_{(1)},\Omega_{(2)}$ are of degrees $6$ and $10$
respectively. The degree of $X_{(3,1,1)}$ is $9$, as
given by a formula due to Hilbert~\cite{Hilbert1}.
\subsection{}
Let $\mathfrak p_{(i)} \subseteq R$ denote the homogeneous ideal of
$\Omega_{(i)}$. The variety $\Omega_{(1)}$ is the closure of the
union of secant lines to $X_{(5)}$, and it is known
(as an instance of a more general result) that $\mathfrak p_{(1)}$ is a perfect ideal
of height two (see~\cite[Theorem~1.56]{Iarrobino-Kanev}). We briefly recapitulate the proof.
Given $F \in S_5$, define
\[ \alpha_F: S_2 \longrightarrow S_3, \quad G \longrightarrow (F,G)_2, \]
and let
\[ \alpha: S_2 \otimes R(-1) \longrightarrow S_3 \otimes R \]
denote the corresponding morphism of graded $R$-modules (\S\ref{section.map.Rmodules}).
\begin{Lemma} \sl
The map $\alpha_F$ is injective for a general $F$, moreover $\ker \alpha_F$ is nonzero
iff $[F] \in \Omega_{(1)}$.
\end{Lemma}
\noindent {\sc Proof.}\; It is easily verified from formula~(\ref{trans.formula}) that
$\ker \alpha_F = 0$ for $F = x_1^5 + x_2^5 + (x_1+x_2)^5$. Assume $G (\neq 0) \in \ker \alpha_F$,
then after a change of variables $G$ can be written as either $x_1^2$ or $x_1 \, x_2$.
In the former case $F = x_1^4 \, (c_1 \, x_1 + c_2 \, x_2)$ and in the latter case $F = c_1 \, x_1^5 +
c_2 \, x_2^5$. The `if' part is equally clear. \qed
\smallskip
By the Porteous formula (see~\cite[Ch.~II.4]{ACGH}) the scheme-theoretic degeneracy locus
$\{\text{rank} \, \alpha_F \le 2 \}$ has degree $6$ (it is the coefficient of $h^2$ in the Maclaurin
expansion of $(1+h)^{-3}$), and so does $\Omega_{(1)}$. Hence
the ideal of maximal minors of $\alpha$ coincides with $\mathfrak p_{(1)}$, and we get a
Hilbert-Burch resolution (see~\cite[\S20.4]{Ei})
\[ 0 \leftarrow R/\mathfrak p_{(1)} \leftarrow R \stackrel{\delta_0}{\leftarrow} R(-3) \otimes S_3
\stackrel{\delta_1}{\leftarrow} R(-4) \otimes S_2 \leftarrow 0. \]
Now consider the complex
\[ R \stackrel{\delta_0^\vee}{\rightarrow} R(3) \otimes S_3
\stackrel{\delta_1^\vee}{\rightarrow} R(4) \otimes S_2. \]
To describe the first map, let $\mathcal W_{(1)}$ denote the Wronskian of $\alpha_\mathbb F$,
i.e., the determinant of the $3 \times 3$ matrix of linear forms
\[ (i,j) \longrightarrow \frac{\partial^2 \, (\mathbb F,x_1^{3-i} \, x_2^{i-1})_2}{\partial x_1^{3-j} \, \partial x_2^{j-1}}, \quad
(1 \le i,j \le 3). \]
Now $\mathcal W_{(1)}$ is a covariant of degree-order $(3,3)$, and $\zeta_{3,3}=1$ for quintics,
hence it must coincide with $\vartheta_{33}$ up to a scalar. Thus $\mathfrak p_{(1)} = \mathfrak I(\vartheta_{33})$.
Up to a scalar, the map $\delta_1^\vee$ must be given by
$S_3 \longrightarrow S_2, G \longrightarrow (F,G)_3$. From $\delta_1^\vee \circ \delta_0^\vee=0$
we deduce the identity $(\vartheta_{33},\mathbb F)_3=0$.
\subsection{} Using similar ideas we will find a free resolution of $\mathfrak p_{(2)}$. It is sensible to look for
a $4 \times 5$ matrix of linear forms, since then by Porteous's formula
the degeneracy locus $\{ \text{rank} \, \le 3\}$ has expected degree $10$.
\begin{Proposition} \sl The ideal $\mathfrak p_{(2)}$ is perfect of height two.
\end{Proposition}
\noindent {\sc Proof.}\; Consider the map
\[ \beta_F: S_3 \longrightarrow S_4, \quad G \longrightarrow (F,G)_2, \]
and let $\mathcal W_{(2)}$ denote the corresponding $4 \times 4$ Wronskian determinant
\[ (i,j) \longrightarrow \frac{\partial^3 \, (\mathbb F,x_1^{4-i} \, x_2^{i-1})_2}
{\partial x_1^{4-j} \, \partial x_2^{j-1}}, \qquad
(1 \le i,j \le 4), \]
which is a covariant of degree-order $(4,4)$.
Let ${\mathfrak a} = \mathfrak I(\mathcal W_{(2)})$ denote the ideal of maximal minors; {\sl a priori} we know it
to be of height $\le 2$. If it were to have height one, then an invariant would have to divide $\mathcal W_{(2)}$,
which is impossible. Hence we get a free resolution
\[ 0 \leftarrow R/{\mathfrak a} \leftarrow R \leftarrow R(-4) \otimes S_4 \leftarrow R(-5) \otimes S_3 \leftarrow 0. \]
Now a direct calculation shows that
\[ \begin{aligned}
\mathcal W_{(2)}(\mathcal F_Q) & = \left| \begin{array}{rrrr}
24/5 \, q_1 \, x_1 & 12/5 \, q_2 \, x_2 & 12/5 \, q_2 \, x_1 & 0 \\
-2 \, q_1 \, x_2 & -2 \, q_1 \, x_1 & 2/5 \, q_2 \, x_2 &
2/5 \, q_2 \, x_1\\
8 \, q_0 \, x_1 & -4/5 \, q_1 \, x_2 & -4/5 \, q_1 \, x_1 &
-16/5 \, q_2 \, x_2 \\
6 \, q_0 \, x_2 & 6 \, q_0 \, x_1 & 18/5 \, q_1 \, x_2 & 18/5 \, q_1 \, x_1
\end{array} \right| \\
& = \frac{1152}{125} \, (q_0 \, q_2 + 3 \, q_1^2) \,
(5 \, q_0 \, q_2 + q_1^2, -2 \, q_1 \, q_2, 2 \, q_2^2 \,) \hspace{-1.6mm} ( \, x_1^2,x_2^2)^2.
\end{aligned} \]
Hence $\mathcal W_{(2)}$ vanishes on $\Omega_{(2)}$. Since the latter has degree $10$, the
scheme defined by ${\mathfrak a}$ coincides with $\Omega_{(2)}$ and ${\mathfrak p}_{(2)} =
{\mathfrak a}$. \qed
\smallskip
A basis for the space $\mathcal A_{4,4}$ is given by the two covariants $\vartheta_{22}^2,
\vartheta_{44}$, hence $\mathcal W_{(2)}$ must be their linear combination.
The actual coefficients can be easily found by specializing $F$ and
then solving a system of linear equations. This gives the relation
$\mathcal W_{(2)} = 1/5760 \, (7 \, \vartheta_{22}^2 - 10 \, \vartheta_{44})$.
As before, we have an identity $(\mathcal W_{(2)},\mathbb F)_3=0$.
\subsection{}
By a result of Weyman (see~\cite[Theorem 3]{Weyman}),
the ideal of $X_{(3,1,1)}$
(say $\mathfrak q$) is generated in degrees $\le 4$. If we specialize to
$F = x_1^3 \, x_2 \, (x_1+x_2)$ and search through
all covariants in degrees $\le 4$, then we find that only
$\vartheta_{40}$ and $2 \, \vartheta_{22}^2 + 15 \, \vartheta_{44}$
vanish on $F$, hence their coefficients must generate $\mathfrak q$. One sees
that $\mathfrak q$ is not perfect; indeed, it would have to arise as the ideal of
maximal minors of a map \[ R \otimes (S_0 \oplus S_4) \longrightarrow
\bigoplus\limits_{i \ge 0} \, R(i) \otimes
(S_{k_i} \oplus S_{k_i'} \oplus \dots) \]
such that the target module has rank $5$, the minors are of degree $4$ and the Porteous
degree is $9$. However no such integers can be found.
\subsection{} Let $I_\mathcal B \subseteq R$ denote the defining ideal of the singular locus $\mathcal B$.
\begin{Proposition} \sl The ideal $I_\mathcal B$ is a complete intersection generated by the
two coefficients of the covariant $\vartheta_{51}$.
\end{Proposition}
\noindent {\sc Proof.}\; The ideal ${\mathfrak e} = \mathfrak I(\vartheta_{51})$ is a complete intersection, since otherwise an
invariant would have to divide both coefficients of $\vartheta_{51}$. By a direct calculation,
\begin{equation} \vartheta_{51}(\mathcal F_Q) =
\frac{4}{625} \, q_2 \, (q_0 \, q_2 + 3 \, q_1^2) \, (5 \, q_0 \, q_2 - q_1^2) \, x_1,
\label{phi51.1} \end{equation}
hence $\vartheta_{51}(F)$ vanishes on $\mathcal B$. Since $\deg \mathcal B = 25$, we must have
${\mathfrak e} = I_\mathcal B$. \qed
\smallskip
Given a point $[F] \in \mathcal H \setminus \mathcal B$, the linear form $\vartheta_{51}$ `detects' the point of
tangency $\mathsf c$ in the configuration on page~\pageref{conic_diagram}.
Indeed this is visibly true of $\mathcal F_Q$, and since $\vartheta_{51}$ is a covariant, it is true generally.
\section{The dual variety} \label{section.dual.H}
Let $\sigma: \P S_d \stackrel{\sim}{\longrightarrow} \P S_d^*$ be the isomorphism
induced by the duality in (\ref{self-duality}); it identifies $[A] \in \P S_d$ with the
hyperplane $\{[B] \in \P S_d: (A,B)_d =0 \}$.
Let $\mathbb I$ denote a degree $m$ invariant of $d$-ics, defining a hypersurface
$\mathcal X \subseteq \P S_d$.
\begin{Proposition} \sl
Let $[F] \in \mathcal X$ be a nonsingular point, and let $T = T_{\mathcal X,[F]} \in \P S_d^*$
denote the tangent space to $\mathcal X$ at $[F]$. Then we have an equality
\[ [\mathcal E_\mathbb I(F)] = \sigma^{-1}(T). \]
\end{Proposition}
\noindent {\sc Proof.}\; Let $B = (b_0,\dots,b_d \,) \hspace{-1.6mm} ( \, x_1,x_2)^d$. The point
$[b_0,\dots,b_d]$ belongs to $T$ iff
\[ \sum\limits_{i=0}^d \, b_i \,
(\left. \frac{\partial \mathbb I}{\partial a_i}\right|_F) = 0. \]
This condition can be rewritten as $(\mathcal E_\mathbb I(F),B)_d =0$, hence the assertion. \qed
Now let $F = x_1 \, (1,\xi,1 \,) \hspace{-1.6mm} ( \, x_1^2,x_2^2)^2$. By the Proposition together with
Lemma~\ref{lemma.wr}, the evectant $\mathcal E_\mathbb H(F)$ is
given (up to scalar) by the Wronskian of a basis of $T_{\mathcal H,[F]}$. But
we have already calculated the latter in~(\ref{formula.wr=E}).
After the substitution
\[ (x_1,x_2,\xi) \longrightarrow (q_0^{1/5} \, x_1, \, q_2^{1/4} \, q_0^{-1/20} \, x_2,
\, q_1 \, q_0^{-1/2} \, q_2^{-1/2}) \]
we get the expression
\[ \mathcal E_\mathbb H(\mathcal F_Q) = \text{constant} \times \mathcal F_{Q'}, \]
where
\[ Q' = [\, q_0 \, q_2 - \frac{6}{5} \, q_1^2, q_1 \, q_2, -q_2^2 \, ]. \]
Since $\mathcal E_\mathbb H$ is a degree $17$ covariant, the `constant' must be a degree
$15$ polynomial in the $q_i$. Now $\mathcal E_\mathbb H(F)$ vanishes identically iff $[F] \in \mathcal B$,
so we must have
\begin{equation}
\mathcal E_\mathbb H(\mathcal F_Q) = \constant \, q_2^n \, (q_0 \, q_2 + 3 \, q_1^2)^{n'}
\, (5 \, q_0 \, q_2 - q_1^2)^{n''} \mathcal F_{Q'},
\label{EHI.exp1} \end{equation}
for some integers $n,n',n''$ such that $n + 2 \, n' + 2 \, n''=15$. Here (and subsequently)
$\constant$ stands for some {\sl nonzero} rational number which need not be precisely specified.
The indices $n,n'$ etc.~will be determined later in \S\ref{resolution.tau}.
Note the identity $(Q')' = [-q_2^3 \, q_0, -q_2^3 \, q_1, -q_2^4] = Q$.
We have proved the following:
\begin{Theorem} \sl
If $[F]$ is a nonsingular point in $\mathcal H$, then so is $[\mathcal E_\mathbb H(F)]$.
The assignment
\[ \mathcal H \setminus \mathcal B \longrightarrow \mathcal H \setminus \mathcal B, \quad
[F] \longrightarrow [\mathcal E_\mathbb H(F)] \]
is an involutive automorphism. In particular $\mathcal H$ is isomorphic to
its own dual variety.
\end{Theorem}
\section{The Jacobian ideal} \label{section.J}
Let $J = \mathfrak I(\mathcal E_\mathbb H(\mathbb F))$ denote the Jacobian ideal of $\mathbb H$.
\subsection{} Let
\begin{equation}
0 \leftarrow R/J \leftarrow R \leftarrow R(-17) \otimes S_5 \leftarrow E_1 \leftarrow E_2 \leftarrow \dots
\label{res1.J} \end{equation}
denote the equivariant minimal resolution of $J$, i.e.,
$E_i$ is the module of $i$-th syzygies.
Apply $\text{Hom}_R(-,R)$ to~(\ref{res1.J}) and consider the complex
\[ 0 \rightarrow R \stackrel{\epsilon_0}{\longrightarrow} R(17) \otimes S_5
\stackrel{\epsilon_1}{\longrightarrow} E_1^\vee \rightarrow \dots \]
Write $E_1^\vee$ as a direct sum
\[ \bigoplus\limits_{r \ge 1} \, R(17+r) \otimes M_r, \]
where each $M_r$ is a finite direct sum of irreducible $SL_2$-representations.
By construction $\epsilon_0(1)= \mathcal E_\mathbb H(\mathbb F)$.
Let $S_p \subseteq M_r$ denote a direct summand, and consider the
composite
\[ \theta: R(17) \otimes S_5 \longrightarrow R(17+r) \otimes M_r \longrightarrow
R(17+r) \otimes S_p. \]
It can be seen as a map $S_5 \longrightarrow S_p$ whose coefficients are degree $r$ forms
in the coefficients of $\mathbb F$. Hence, $\theta$ corresponds to a covariant $\Theta$
(determined up to a constant) of degree $r$ and order (say) $q$, defining
\[ S_5 \longrightarrow S_p, \quad G \longrightarrow (G,\Theta)_{\frac{1}{2} (5-p+q)}. \]
Altogether, the identity $\theta \circ \epsilon_0=0$ translates into
\[ (\mathcal E_\mathbb H(\mathbb F),\Theta)_{\frac{1}{2} (5-p+q)}=0. \]
\subsection{First syzygies of $J$} \label{firstsyzygies.J}
We will enumerate some of the first syzygies of $J$ by hand, and then
show {\sl a posteriori} that they are a complete list. Since a syzygy in a certain
degree produces non-minimal syzygies in higher degrees, at each stage we should
ensure that only `new' syzygies are included.
\begin{enumerate}
\item[(i)] If $\mathbb I$ is any invariant of $d$-ics, then
$(\mathcal E_\mathbb I(\mathbb F),\mathbb F)_{d-1} =0$ (see~Corollary~\ref{corollary.EPhi} below),
hence $S_2$ is a summand in $M_1$.
\item[(ii)]The space $\mathcal A_{5,5}$ is $2$-dimensional, and spanned by
$\vartheta_{33} \, \vartheta_{22}$ and $\vartheta_{40} \, \mathbb F$.
By construction
$\widetilde I = (\mathcal E_\mathbb H,\vartheta_{33} \, \vartheta_{22})_5$ is an
invariant of degree $22$ (possibly zero). Since $\zeta_{22,0}=1$, we must have
$\widetilde I = \alpha \, \vartheta_{40} \, \mathbb H$ for some
$\alpha \in \mathbf Q$. Define
\begin{equation} \mathcal U = \vartheta_{33} \, \vartheta_{22} - \alpha \,
\vartheta_{40} \, \mathbb F, \label{defn.U} \end{equation}
so that $(\mathcal E_\mathbb H,\mathcal U)_5=0$.
\noindent Claim: This syzygy cannot have arisen from the submodule
$S_2 \subseteq M_1$.
\noindent {\sc Proof.}\; Otherwise it would correspond to a nonzero morphism $S_2 \otimes R_4 \longrightarrow S_0$.
However $R_4 \simeq S_4(S_5)$ contains
no copies of $S_2$ (or equivalently, $\zeta_{4,2} =0$ for quintics), hence this is impossible.
\item[(iii)]By an analogous reasoning,
if $\Phi$ is any covariant of degree-order $(9,5)$, then
\[ (\mathcal E_\mathbb H,\Phi)_5 = \text{some degree $8$ invariant} \times \mathbb H. \]
Since $\mathcal A_{8,0}$ has $\{\vartheta_{40}^2, \vartheta_{80} \}$ as a basis,
this would produce a syzygy of the form
\begin{equation} (\mathcal E_\mathbb H,\Phi -
\beta \, \vartheta_{40}^2 \, \mathbb F - \gamma \, \vartheta_{80} \, \mathbb F)_5 =0 \quad
\text{for some $\beta,\gamma \in \mathbf Q$.}
\label{syz.bc} \end{equation}
However, we need to weed out those syzygies which come from earlier
degrees. Broadly speaking, we have three syzygies in degree $9$ which arise
in this way, amongst which two come from earlier degrees and one will be new.
The space $\mathcal A_{9,5}$ is $5$-dimensional with a basis
(see page~\pageref{basis.95})
\begin{equation} \begin{array}{rrrrr}
\vartheta_{51} \, \vartheta_{22}^2, &
\vartheta_{51} \, \vartheta_{44}, &
\vartheta_{40} \, \vartheta_{33} \, \vartheta_{22}, &
\vartheta_{40}^2 \, \mathbb F, & \vartheta_{80} \, \mathbb F.
\end{array} \label{A95basis} \end{equation}
The one-dimensional space $\mathcal A_{8,2}$ is spanned by $\vartheta_{82}$.
From part (i) we get the obvious identity $((\mathcal E_\mathbb H,\mathbb F)_4,\vartheta_{82})_2 =0$, which
can be rewritten as $(\mathcal E_\mathbb H,(\mathbb F,\vartheta_{82})_1)_5 = 0$. This is best seen
symbolically. Writing $\mathcal E = e_\mathbf x^5,\mathbb F = f_\mathbf x^5, \vartheta_{82} = t_\mathbf x^2$,
both compound transvectants evaluate to $(e \, f)^4 \, (e \, t) (f \, t)$.
Now $(\mathbb F,\vartheta_{82})_1$ is the following linear combination of
the basis in~(\ref{A95basis}):
\[ \qquad
-\frac{7}{10} \, \vartheta_{51} \, \vartheta_{22}^2
-\frac{1}{4} \, \vartheta_{51} \, \vartheta_{44}
+\frac{5}{12} \, \vartheta_{40} \, \vartheta_{33} \, \vartheta_{22}
-\frac{1}{20} \, \vartheta_{40}^2 \, \mathbb F - \frac{1}{4} \, \vartheta_{80} \, \mathbb F. \]
From (ii) we have the obvious syzygy $(\mathcal E_\mathbb H,\vartheta_{40} \, \mathcal U)_5 = 0$.
Let us define $\beta, \gamma \in \mathbf Q$ such that the covariant
\begin{equation}
\mathcal V = \vartheta_{51} \, \vartheta_{22}^2 - \beta \, \vartheta_{40}^2 \, \mathbb F -
\gamma \, \vartheta_{80}\, \mathbb F \label{defn.V} \end{equation}
satisfies $(\mathcal E_\mathbb H,\mathcal V)_5 =0$. It is immediate that $\mathcal V$ cannot be a linear
combination of $(\mathbb F, \vartheta_{82})_1$ and $\vartheta_{40} \, \mathcal U$, hence we have a
new syzygy.
\end{enumerate}
So far we have found three independent first syzygies of $J$ corresponding to
$S_2 \subseteq M_1, S_0 \subseteq M_5, S_0 \subseteq M_9$. The rational
numbers $\alpha,\beta,\gamma$ are uniquely determined by the identities
$(\mathcal E_\mathbb H,\mathcal U)_5 = (\mathcal E_\mathbb H, \mathcal V)_5 =0$, but we do not yet know their values.
\subsection{} We will now construct the morphism whose Hilbert-Burch
complex is expected
to give a resolution of $J$. Let us change our approach slightly,
and let $\tau = (\alpha,\beta,\gamma)$ denote an {\sl arbitrary} triplet in $\mathbf Q^3$.
For $F \in S_5$, define
\[ \begin{aligned}
\sigma_{\tau}(F): \; & S_2 \oplus S_0 \oplus S_0 \longrightarrow S_5, \\
& (A,c_1,c_2) \longrightarrow (A,F)_1 + c_1 \, \mathcal U + c_2 \, \mathcal V,
\end{aligned} \]
where $\mathcal U, \mathcal V$ are defined via formulae~(\ref{defn.U}),(\ref{defn.V}).
Let $\Gamma_\tau$ denote the Wronskian of $\sigma_{\tau}(\mathbb F)$, which is a covariant
of degree-order $(17,5)$. Let $\mathfrak b_\tau \subseteq R$ denote the ideal generated by the coefficients of $\Gamma_\tau$,
and $V_\tau = V(\mathfrak b_\tau) \subseteq \P^5$ the corresponding subvariety. One knows {\sl a priori} that each
of the components of $V_\tau$ is of codimension $\le 2$.
We claim that $(\Gamma_\tau,\mathbb F)_5 = \constant \, \mathbb H$ for all $\tau$. Indeed, the left hand side
is a degree $18$ invariant, hence a numerical multiple of $\mathbb H$. It remains to
check that it does not vanish identically, which is easily verified by specializing to
$x_1^5 + x_2^5 + (x_1+2 \, x_2)^5$. It follows that $(\mathbb H) \subseteq \mathfrak b_\tau$,
hence $V_\tau \subseteq \mathcal H$. Since the latter is irreducible, a codimension one component of $V_\tau$ would have to coincide with $\mathcal H$, forcing $\mathbb H$ to divide every coefficient of $\Gamma_\tau$; this is impossible, since the latter are of degree $17 < 18$. Thus $\mathfrak b_\tau$ must be of pure height
two. Hence the Eagon-Northcott complex (or what is the same, the Hilbert-Burch complex) of $\sigma_\tau$ is a
minimal resolution of $\mathfrak b_\tau$.
By a direct calculation,
\begin{equation} \Gamma_\tau(\mathcal F_Q) = - \frac{2^6 . 3^2 . 151 . 293}{5^{15}} \,
q_2^3 \, (q_0 \, q_2 + 3 \, q_1^2) \, (5 \, q_0 \, q_2 - q_1^2) \, K_\tau \, \mathcal F_{Q'},
\label{Gamma.exp} \end{equation}
where $K_\tau$ is the expression
\begin{equation} \begin{aligned}
{} & (75000 \, \gamma+28125) \, q_0^4 \, q_2^4+ \\
& (520000 \, \alpha+42000 \, \gamma-22500-960000 \, \beta) \, q_0^3 \, q_1^2 \, q_2^3+ \\
& (-1344000 \, \beta+292800 \, \gamma+872000 \, \alpha+6750) \, q_0^2 \, q_1^4 \, q_2^2+ \\
& (121200 \, \gamma-900-576000 \, \beta+408000 \, \alpha) \, q_0 \, q_1^6 \, q_2+ \\
& (12744 \, \gamma-69120 \, \beta+43200 \, \alpha+45) \, q_1^8.
\end{aligned} \label{Ktau.exp} \end{equation}
Since $\Gamma_\tau$ visibly vanishes on $\mathcal B$, we have $V_\tau \supseteq \mathcal B$.
Now we would like to impose the condition that $V_\tau = \mathcal B$. This will happen iff
$K_\tau$ is nonzero at every point of $\mathcal H \setminus \mathcal B$, i.e., iff
\begin{equation} K_\tau = \delta \, q_2^r \, (q_0 \, q_2 + 3 \, q_1^2)^s \,
(5 \, q_0 \, q_2 - q_1^2)^t \label{Ktau.eqns} \end{equation}
for some $\delta \in \mathbf Q$, and nonnegative integers $r,s,t$ satisfying
$r + 2s+2t =8$. It is easy to see that if we fix the choice of the triple
$(r,s,t)$, then (\ref{Ktau.eqns}) is an inhomogeneous system of
linear equations for the variables
$\alpha,\beta,\gamma,\delta$. I solved this system in {\sc Maple}, and found that
it admits a solution only in the following cases, the solution being unique in
every case.
\[ \begin{array}{ccc}
(r,s,t) & & (\alpha,\beta,\gamma,\delta)\\ \hline
(0,0,4) & & (0,0,0,1/45) \\
(0,2,2) & & (2/5,14/75,-2/5,-1/75) \\
(0,1,3) & & (1/6,2/45,-1/3,1/25).
\end{array} \]
Thus we have the following theorem.
\begin{Theorem} \sl
\begin{enumerate}
\item
For any $\tau \in \mathbf Q^3$, the ideal $\mathfrak b_\tau$ is perfect of height two with
minimal resolution
\[ \begin{aligned}
0 \leftarrow R/\mathfrak b_\tau \leftarrow R & \leftarrow R(-17) \otimes S_5 \\
& \leftarrow R(-18) \otimes S_2 \oplus R(-22) \oplus R(-26) \leftarrow 0.
\end{aligned} \]
\item
We have an inclusion of varieties $\mathcal B \subseteq V_\tau$, which is an
equality iff $\tau$ is one of the following triples:
\begin{equation}
(0,0,0), \quad (2/5,14/75,-2/5), \quad (1/6,2/45,-1/3).
\label{special.triples} \end{equation}
\end{enumerate}
\end{Theorem}
\subsection{} \label{section.jacobian}
Now let $\tau = (\alpha,\beta,\gamma)$ be the {\sl specific} triple for which
$(\mathcal E_\mathbb H,\mathcal U)_5 = (\mathcal E_\mathbb H,\mathcal V)_5 = 0$. We have shown that the complex
\[ R \stackrel{\epsilon_0}{\longrightarrow} R(17) \otimes S_5
\stackrel{\epsilon_1}{\longrightarrow} R(18) \otimes S_2 \oplus R(22) \oplus R(26) \]
is exact in the middle. (Indeed, its middle cohomology is
$\text{Ext}^1_R(R/\mathfrak b_\tau,R)$, which is zero since $\mathfrak b_\tau$ is perfect of
height $2$.) Hence up to scalar, $\Gamma_\tau$ is the unique covariant of degree-order
$(17,5)$ whose image by $\epsilon_1$ is zero. But $\mathcal E_\mathbb H$ also has
this property, hence $\Gamma_\tau = \text{nonzero constant} \times \mathcal E_\mathbb H$.
Since $V(J) = \mathcal B$, $\tau$ must be one of the special triples above, hence the following result:
\begin{Proposition} \sl
For one of the three triples from~(\ref{special.triples}), we have the equality
$\mathfrak b_\tau = J$ (the Jacobian ideal of $\mathbb H$). In particular $J$ is also
perfect.
\end{Proposition}
The value of $\tau$ will be found in~\S\ref{resolution.tau}.
\subsection{The Cayley method}
Initially I attempted to prove the perfection of $J$ by using the
Cayley method of calculating resultants (see~\cite[Ch.~2]{GKZ}). This attempt failed,
but the outcome was yet another perfect ideal supported on
$\mathcal B$. Since the details are similar to~\cite[\S 5]{ACA}, we will be
brief.
Since $\mathbb H$ is the resultant of $\mathbb F$ and $\vartheta_{33}$, it can be represented
as the determinant of a complex. For a fixed $F \in S_5$, consider the Koszul complex
\[ 0 \rightarrow \O_{\P^1}(-8) \rightarrow \O_{\P^1}(-3) \oplus \O_{\P^1}(-5)
\stackrel{u}{\rightarrow} \O_{\P^1} \rightarrow 0, \]
where $u$ is defined on the fibres as the map $(A,B) \rightarrow A \, \vartheta_{33}(F) + B \, F$.
Form the tensor product with $\O_{\P^1}(5)$, and consider the resulting hypercohomology spectral sequence.
This produces a morphism
\[ g_F: S_2 \oplus S_0 \oplus S_1 \longrightarrow S_5, \]
such that $\det(g_\mathbb F) = \mathbb H$. The component maps
$S_2 \rightarrow S_5, S_0 \rightarrow S_5$ are easily described: they are
$A \rightarrow A \, \vartheta_{33}(F), B \rightarrow B \, F$.
The third map $\mu_F: S_1 \rightarrow S_5$ (which is a $d_2$-differential in the spectral sequence)
is given via the {\sl Morley form}, described as follows: symbolically write $F = f_\mathbf x^5, \vartheta_{33} = c_\mathbf x^3$,
and define
\[ \mathcal M = (f \, c) \, [ \, f_\mathbf x \, f_\mathbf y^3 \, c_\mathbf y^2 +
c_\mathbf x \, f_\mathbf y^4 \, c_\mathbf y \, ]. \]
This defines a {\sl bivariate} covariant of $\mathbb F$, of orders $1$ and $5$ respectively in $\mathbf x,\mathbf y$.
If $A \in S_1$, then $\mu_F(A) = (A,\mathcal M)_1$.
(The transvectant is with respect to $\mathbf x$-variables, so the result is an order $5$ form in $\mathbf y$.)
We may instead decompose $\mathcal M$ into its Gordan series, and write
\[ \mu_F(A) = (A,(F,\vartheta_{33})_1)_1 +
\frac{1}{6} \, A \, (F,\vartheta_{33})_2. \]
Now consider the truncated morphism $h_F: S_2 \oplus S_1 \longrightarrow S_5$, and let $\Lambda$ denote the
Wronskian of $h_\mathbb F$. By construction $\Lambda$ is also a covariant of degree-order
$(17,5)$, moreover $(\Lambda,\mathbb F)_5 = \constant \, \mathbb H$. (Compare the argument in the previous section.)
By a direct calculation,
\[ \Lambda(\mathcal F_Q) =
- \frac{2^{16} \, 3^9}{5^{14}} \, q_2^3 \,
(q_0 \, q_2 + 3 \, q_1^2)^2 \, (5 \, q_0 \, q_2 - q_1^2)^5 \, x_1^5, \]
hence $\Lambda$ differs from $\mathcal E_\mathbb H$ (or rather from any of the $\Gamma_\tau$). However,
$\Lambda$ vanishes exactly over $\mathcal B$, hence by the usual argument we
get the following result:
\begin{Proposition} \sl
The ideal $\mathfrak I(\Lambda)$ is perfect of height two, with equivariant minimal resolution
\[ 0 \leftarrow R/{\mathfrak I(\Lambda)} \leftarrow R \leftarrow R(-17) \otimes S_5 \leftarrow
R(-20) \otimes S_2 \oplus R(-21) \otimes S_1 \leftarrow 0. \]
\qed \end{Proposition}
\section{Evectants} \label{section.evectants}
We could resolve the ambiguity about the correct value of $\tau$ (and hence about the maps in the
resolution of $J$), if we could only derive an expression for $\mathcal E_\mathbb H$.
It is certainly possible to compute the latter in {\sc Maple} by a brute-force
differentiation, but I have avoided this route in the belief that the general formalism developed
here will prove useful elsewhere.
In this section the construction of evectants will be generalized as follows:
given any covariant $\Phi$ of $d$-ics we will associate to it a sequence of covariants called the evectants of
$\Phi$. We will then deduce formulae for the evectants of $(\Phi,\Psi)_r$ in terms of those of $\Phi$ and $\Psi$.
Finally this machinery will be applied to formula~(\ref{defn.H}).
We will heavily use the symbolic method, however the final result of the calculation can be understood
(and used) without any reference to it. Additional variable-pairs $\mathbf y,\mathbf z$ etc.\ will be used as necessary, and then
$\Omega_{\mathbf y \mathbf z}$ etc.\ denote the corresponding Omega operators.
\subsection{Evectants of a covariant} Let
$\mathbb F = f_\mathbf x^d = \sum\limits_{i=0}^d \, \binom{d}{i} \, a_i \,
x_1^{d-i} \, x_2^i$
denote a generic binary $d$-ic, and let
\[ \mathcal E(\mathbf x) = \sum\limits_{i=0}^d \,
\frac{\partial}{\partial a_i} \,
x_1^i \, (-x_2)^{d-i} \, \]
denote the evectant operator.
Let $\Phi = \varphi_\mathbf x^n$ be a covariant of degree-order $(m,n)$ of $d$-ics. Define
\begin{equation} \Gamma = \frac{1}{m} \, [ \, \mathcal E(\mathbf x) \circ \varphi_\mathbf y^n \, ],
\label{eqn1.gamma} \end{equation}
which is a bihomogeneous form of orders $d,n$ in $\mathbf x,\mathbf y$ respectively, so that
\[ (\Gamma,\mathbb F)_d = \frac{1}{m} \,
\sum\limits_{i=0}^d \, a_i \, \frac{\partial \, \Phi(\mathbf y)}{\partial a_i} = \Phi(\mathbf y). \]
Expanding $\Gamma$ into its Gordan series (\S\ref{section.Gordanseries}), we may write
\begin{equation} \Gamma = \sum\limits_{i=0}^{\min(d,n)} \,
(\mathbf x \, \mathbf y)^i \, {\alpha_{(i)}}_{\mathbf x}^{d-i} \,
{\alpha_{(i)}}_{\mathbf y}^{n-i},
\label{eqn2.gamma} \end{equation}
where $\mathcal A_i = {\alpha_{(i)}}_{\mathbf x}^{d+n-2i}$ are a series of
covariants of $f_\mathbf x^d$.
Now apply $(-,f_\mathbf x^d)_d$ to each term in (\ref{eqn2.gamma}). Since
\[ ((\mathbf x \, \mathbf y)^i \, \alpha_{\mathbf x}^{d-i} \,
\alpha_{\mathbf y}^{n-i},f_\mathbf x^d)_d =
(\alpha \, f)^{d-i} \, \alpha_{\mathbf y}^{n-i} \, f_\mathbf y^i =
[ (\alpha_\mathbf x^{d+n-2i}, f_\mathbf x^d)_{d-i} \, ]_{\mathbf x:=\mathbf y}, \]
we deduce the identity
\begin{equation} \sum\limits_i \, (\mathcal A_i,\mathbb F)_{d-i} = \Phi.
\label{identity.Ai.F} \end{equation}
The covariants $\mathcal A_\bullet = \{\mathcal A_0,\mathcal A_1,\dots,\mathcal A_{\min(d,n)}\}$ will
be called the evectants of $\Phi$. By construction $\mathcal A_i$ is of
degree-order $(m-1,d+n-2i)$.
If $\Phi$ is an invariant, then $\mathcal A_0$ (the only nonzero
evectant) coincides with $\mathcal E_\Phi$ as defined in \S\ref{section.evectant}.
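For instance, if $\Phi = \mathbb F$ itself (so that $m=1$ and $n=d$), then
\[ \Gamma = \sum\limits_{i=0}^d \, \binom{d}{i} \, x_1^i \, (-x_2)^{d-i} \, y_1^{d-i} \, y_2^i = (\mathbf x \, \mathbf y)^d, \]
hence $\mathcal A_d = 1$ is the only nonzero evectant, and the identity~(\ref{identity.Ai.F}) reduces to $(1,\mathbb F)_0 = \mathbb F$.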
\begin{Lemma} \sl With notation as above,
\[ \mathcal A_i = \frac{(d+n-2i+1)!}{i! \, (d+n-i+1)! \,m}
\, \left\{ \Omega_{\mathbf x \, \mathbf y}^i \circ
[ \mathcal E(\mathbf x) \circ \Phi(\mathbf y)] \right\}_{\mathbf y := \mathbf x \, .} \]
\label{lemma.Ev1} \end{Lemma}
\noindent {\sc Proof.}\; Apply $\Omega_{\mathbf x \mathbf y}^\ell$ to each term in~(\ref{eqn2.gamma}),
and use Lemma~\ref{lemma.Ev2} below.
The terms with $\ell > i$ vanish because
$\Omega_{\mathbf x \mathbf y} \circ \alpha_{\mathbf x}^{d-i} \, \alpha_\mathbf y^{n-i} =0$.
Those with $\ell < i$ vanish after we set $\mathbf y:=\mathbf x$, this leaves only the term $\ell = i$. \qed
\subsection{The evectants of a transvectant}
Let $\Phi = \varphi_\mathbf x^n, \, \Psi = \psi_\mathbf x^{n'}$ denote two
covariants with degree-orders $(m,n),(m',n')$, and evectants
$\mathcal A_\bullet, \mathcal B_\bullet$ respectively. Their $r$-th transvectant
$\Theta = (\Phi,\Psi)_r$ is of degree-order $(m+m',n+n'-2r)$.
We would like to deduce formulae for the evectants $\mathcal C_\bullet$ of
$\Theta$ in terms of the data $\Phi,\Psi,\mathcal A_\bullet,\mathcal B_\bullet$. Let us write
\[ \Theta(\mathbf y) = \frac{(n-r)! \, (n'-r)!}{n! \, n'!} \,
\left \{ \Omega_{\mathbf y \mathbf z}^r \circ [ \Phi(\mathbf y) \, \Psi(\mathbf z) \, ]
\right \}_{\mathbf z:= \mathbf y \, } \]
(if we expand $\Omega_{\mathbf y \mathbf z}^r$ by the binomial theorem, then this reduces to the
definition in \S\ref{section.trans}), and then
\[ \mathcal C_s = \kappa \, \left \{ \Omega_{\mathbf x \mathbf y}^s \circ \underbrace{[
\, \mathcal E(\mathbf x) \circ \left \{ \Omega_{\mathbf y \mathbf z}^r \circ [ \Phi(\mathbf y) \, \Psi(\mathbf z) \, ]
\right \}_{\mathbf z:= \mathbf y } ]}_{\en{a}} \right\}_{\mathbf y:=\mathbf x \, ,} \]
where
\begin{equation}
\kappa = \frac{(n-r)! \, (n'-r)! \, (d+n+n'-2r-2s+1)!}
{n! \, n'! \, s! \, (d+n+n'-2r-s+1)! \, (m+m')} \, .
\label{eqn.c1} \end{equation}
It is understood that $r \le \min(n,n')$ and $s \le \min(d,n+n'-2r)$.
The operators $\mathcal E(\mathbf x)$ and $\Omega_{\mathbf y \mathbf z}$ commute, since they involve
disjoint sets of variables. Hence
\[ \en{a} = [ \, \Omega_{\mathbf y \mathbf z}^r \circ \underbrace{
\left\{ \, \mathcal E(\mathbf x) \circ [\, \Phi(\mathbf y) \, \Psi(\mathbf z) \, ] \,
\right\} }_{\en{b}} \; ]_{\mathbf z:=\mathbf y \, .} \]
By the product rule for differentiation,
\[ \en{b} = \underbrace{[ \, \mathcal E(\mathbf x) \circ \Phi(\mathbf y) \, ] \, \Psi(\mathbf z)}_{\en{b_1}} +
\underbrace{\Phi(\mathbf y) \, [ \, \mathcal E(\mathbf x) \circ \Psi(\mathbf z) \, ]}_{\en{b_2}}. \]
\subsection{}
Writing $\mathcal A_i = {\alpha_{(i)}}_{\mathbf x}^{d+n-2i}$,
\begin{equation} \en{b_1} = m \, \sum\limits_i \,
(\mathbf x \, \mathbf y)^i \, {\alpha_{(i)}}_{\mathbf x}^{d-i} \,
{\alpha_{(i)}}_{\mathbf y}^{n-i} \, \psi_\mathbf z^{n'}. \label{b.term1} \end{equation}
We have to apply $\Omega_{\mathbf y \mathbf z}^r$ to each term in~$\en{b_1}$,
and then set $\mathbf z:=\mathbf y$. The recipe is best seen combinatorially (also see~\cite[\S3.2.5]{Glenn}).
From each summand in~(\ref{b.term1}) we sequentially remove $r$ symbolic
factors involving $\mathbf y$, and pair them with similarly removed $r$ factors involving $\mathbf z$.
By pairing a factor of the type $\beta_\mathbf y$ with one of the type $\gamma_\mathbf z$, we get a new factor
$(\beta \, \gamma)$.
The $\mathbf z$-factors are all necessarily equal to $\psi_\mathbf z$, on the other hand we may suppose that
$k$ of the $\mathbf y$-factors are $(\mathbf x \, \mathbf y)$ and the rest $r-k$ are ${\alpha_{(i)}}_{\mathbf y}$. It is convenient
to see $(\mathbf x \, \mathbf y)$ as $h_\mathbf y$ with $(h_1,h_2) = (-x_2,x_1)$. Then the pairings produce
factors $(h \, \psi)^k = (-1)^k \, \psi_\mathbf x^k$ and $(\alpha_{(i)} \, \psi)^{r-k}$ respectively.
We think of the $r$ copies of $\Omega_{\mathbf y \mathbf z}$ operating one after the
other, so that the temporal sequence of removing the factors
needs to be taken into account. At any stage, we may remove
an $(\mathbf x \, \mathbf y),\psi_\mathbf z$ pair or an ${\alpha_{(i)}}_{\mathbf y},\psi_\mathbf z$ pair, hence
there are $r!/(k! \, (r-k)!)$ ways of choosing this sequence.
The $\psi_\mathbf z$ factors which have been removed can be sequentially ordered in
$\frac{n'!}{(n'-r)!}$ ways (regarding them as mutually distinguishable), with
a similar argument for other factors. This gives the expression
\[ \begin{aligned}
{} & [ \, \Omega_{\mathbf y \mathbf z}^r \circ \en{b_1} \,]_{\mathbf z:=\mathbf y} = \\
& m \,
\sum\limits_i \, \sum\limits_k \, \lambda(i,k;n,n') \;
\underbrace{(\mathbf x \, \mathbf y)^{i-k} \, (\alpha_{(i)} \, \psi)^{r-k} \,
{\alpha_{(i)}}_{\mathbf x}^{d-i} \,
{\alpha_{(i)}}_{\mathbf y}^{n-i-r+k} \, \psi_\mathbf x^k \; \psi_\mathbf y^{n'-r} \,}_{\en{c}},
\end{aligned} \]
where
\begin{equation}
\lambda(i,k;n,n') = (-1)^k \, \frac{r!}{k! \, (r-k)!} \,
\frac{i!}{(i-k)!} \, \frac{(n-i)!}{(n-i-r+k)!} \, \frac{n'!}{(n'-r)!}.
\label{eqn.c2} \end{equation}
The inner sum is quantified over
$\max(0,r-n+i) \le k \le \min(i,r)$, which is exactly the
possible range of removals. Our numerical assumptions imply that
the range is always nonempty.
The reader who dislikes the combinatorial argument may verify the formula
\[ \Omega_{\mathbf y \mathbf z} \circ \alpha_\mathbf y^p \, \beta_\mathbf z^q =
p \, q \, (\alpha \, \beta) \, \alpha_\mathbf y^{p-1} \, \beta_\mathbf z^{q-1} \]
by a direct calculation, and then proceed by induction.
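To spell out the direct calculation: writing $\alpha_\mathbf y = \alpha_1 \, y_1 + \alpha_2 \, y_2$,
$\beta_\mathbf z = \beta_1 \, z_1 + \beta_2 \, z_2$, and using the explicit form
$\Omega_{\mathbf y \mathbf z} = \frac{\partial^2}{\partial y_1 \, \partial z_2} -
\frac{\partial^2}{\partial y_2 \, \partial z_1}$ (cf.~the proof of Lemma~\ref{lemma.Ev2} below),
each mixed partial brings down one coefficient from each factor, so that
\[ \Omega_{\mathbf y \mathbf z} \circ \alpha_\mathbf y^p \, \beta_\mathbf z^q =
p \, q \, (\alpha_1 \, \beta_2 - \alpha_2 \, \beta_1) \,
\alpha_\mathbf y^{p-1} \, \beta_\mathbf z^{q-1} =
p \, q \, (\alpha \, \beta) \, \alpha_\mathbf y^{p-1} \, \beta_\mathbf z^{q-1} \, . \]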
\subsection{}
The next task is to apply $\Omega_{\mathbf x \mathbf y}^s$ to $\en{c}$, and
then set $\mathbf y:=\mathbf x$. We need a preliminary lemma which describes how the
operator $\Omega_{\mathbf x \mathbf y}$ can be
`cancelled' against a factor of $(\mathbf x \, \mathbf y)$.
\begin{Lemma} \sl For integers $p,q,\ell,i \ge 0$,
we have an equality
\[[\, \Omega_{\mathbf x \mathbf y}^\ell \circ
(\mathbf x \, \mathbf y)^i \, a_\mathbf x^p \, b_\mathbf y^q \, ]_{\mathbf y:=\mathbf x} =
\begin{cases}
\mu(p,q;\ell,i) \;
[\, \Omega_{\mathbf x \mathbf y}^{\ell-i} \circ a_\mathbf x^p \, b_\mathbf y^q \, ]_{\mathbf y:=\mathbf x}
& \text{if $\ell \ge i$}, \\
0 & \text{otherwise,} \end{cases} \]
where
\begin{equation} \mu(p,q;\ell,i) = \frac{\ell!}{(\ell-i)!}
\frac{(p+q-\ell+2i+1)!}{(p+q-\ell+i+1)!} \, .
\label{eqn.c3} \end{equation}
\label{lemma.Ev2} \end{Lemma}
\noindent {\sc Proof.}\; Let $\mathcal G$ denote an arbitrary bihomogeneous form of orders $p,q$ in $\mathbf x,\mathbf y$ respectively.
By straightforward differentiation,
\[ \begin{aligned}
{} & \Omega_{\mathbf x \mathbf y} \circ (\mathbf x \, \mathbf y) \, \mathcal G = 2 \, \mathcal G +
(x_1 \, \frac{\partial \mathcal G}{\partial x_1} +
x_2 \, \frac{\partial \mathcal G}{\partial x_2}) +
(y_1 \, \frac{\partial \mathcal G}{\partial y_1} +
y_2 \, \frac{\partial \mathcal G}{\partial y_2}) + \\
& (x_1 \, y_2 - x_2 \, y_1) \,
(\frac{\partial^2 \mathcal G}{\partial x_1 \, \partial y_2} -
\frac{\partial^2 \mathcal G}{\partial x_2 \, \partial y_1}) \\
= \; & (p+q+2) \, \mathcal G + (\mathbf x \, \mathbf y) \, \Omega_{\mathbf x \mathbf y} \circ \mathcal G.
\end{aligned} \]
Now proceed by induction on $\ell,i$, and observe that terms
involving $(\mathbf x \, \mathbf y)$ vanish once we set $\mathbf y:=\mathbf x$. \qed
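\smallskip
\noindent As a quick sanity check of~(\ref{eqn.c3}), take $\ell = i = 1$: the identity just
proved gives
$[\, \Omega_{\mathbf x \mathbf y} \circ (\mathbf x \, \mathbf y) \, a_\mathbf x^p \, b_\mathbf y^q \,]_{\mathbf y:=\mathbf x}
= (p+q+2) \, a_\mathbf x^p \, b_\mathbf x^q$, in agreement with
$\mu(p,q;1,1) = \frac{(p+q+2)!}{(p+q+1)!} = p+q+2$.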
\medskip
\noindent Hence
\[ \en{d} =
[ \, \Omega_{\mathbf x \mathbf y}^s \circ \en{c} \, ]_{\mathbf y:=\mathbf x} \]
vanishes if $s < i-k$. Assume $s \ge i-k$, then
\begin{equation} \begin{aligned}
\en{d} = & \, \mu(d-i+k,n+n'-2r-i+k \, ;s,i-k) \, \times \\
& (\alpha_{(i)} \, \psi)^{r-k} \,
\underbrace{ [\, \Omega_{\mathbf x \mathbf y}^{s-i+k} \circ
{\alpha_{(i)}}_{\mathbf x}^{d-i} \,
{\alpha_{(i)}}_{\mathbf y}^{n-i-r+k} \,
\psi_\mathbf x^k \; \psi_\mathbf y^{n'-r} \, ]_{\mathbf y:=\mathbf x}}_{\en{e}}.
\end{aligned} \label{eqn.de} \end{equation}
Now $\en{e}$ can be evaluated using the following lemma.
\begin{Lemma} \sl For integers $p_1,q_1,p_2,q_2,u \ge 0$,
we have an equality
\[ \begin{aligned}
{} & [ \, \Omega_{\mathbf x \mathbf y}^u \circ
a_\mathbf x^{p_1} \, a_\mathbf y^{q_1} \, b_\mathbf x^{p_2} \,
b_\mathbf y^{q_2} \, ]_{\mathbf y:=\mathbf x} \\
= \; & \nu(p_1,q_1,p_2,q_2;u) \times
(a \, b)^u \, a_\mathbf x^{p_1+q_1-u} \; b_\mathbf x^{\, p_2+q_2-u},
\end{aligned} \]
where $\nu$ is given by the sum
\[ \sum\limits_t \,
(-1)^{u-t} \, \frac{u!}{t! \, (u-t)!} \,
\frac{p_1!}{(p_1-t)!} \, \frac{q_1!}{(q_1-u+t)!} \,
\frac{p_2!}{(p_2-u+t)!} \, \frac{q_2!}{(q_2-t)!}, \]
quantified over
\[ \max(0,u-\min(q_1,p_2)) \le t \le
\min(p_1,q_2,u). \]
(The sum is understood to be zero if this range is
empty.)
\end{Lemma}
\noindent {\sc Proof.}\; This is essentially the same combinatorial argument as before.
Note however that since $(a \, a) =0$, we cannot
pair $a_\mathbf x$ with $a_\mathbf y$, and similarly for $b$.
Assume that we have removed respectively $t,u-t,u-t,t$ copies of
$a_\mathbf x,a_\mathbf y,b_\mathbf x,b_\mathbf y$. Then pairings of $a_\mathbf x,b_\mathbf y$ produce
$(a \, b)^t$, and those of $b_\mathbf x,a_\mathbf y$ produce
$(b \, a)^{u-t} = (-1)^{u-t} \, (a \, b)^{u-t}$.
The range of $t$ is exactly such that the removals are possible,
e.g., $u-t$ cannot exceed $q_1$ or $p_2$ etc. \qed
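\smallskip
\noindent For instance, if all four exponents are positive and $u=1$, then only $t=0,1$
contribute and the sum collapses to $\nu = p_1 \, q_2 - q_1 \, p_2$; differentiating
directly and setting $\mathbf y:=\mathbf x$ produces exactly this coefficient of
$(a \, b) \, a_\mathbf x^{p_1+q_1-1} \, b_\mathbf x^{\, p_2+q_2-1}$.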
\medskip
\noindent It follows that
\[ \begin{aligned}
(\alpha_{(i)} \, \psi)^{r-k} \, \en{e} =
& \, \nu(d-i,n-i-r+k,k,n'-r;s-i+k) \, \times \\
& \underbrace{(\alpha_{(i)} \, \psi)^{r-i+s} \,
{\alpha_{(i)}}_\mathbf x^{d+n-r-s-i} \, \psi_\mathbf x^{n'-r-s+i}}_{\en{f}},
\end{aligned} \]
and of course $\en{f} = (\mathcal A_i,\Psi)_{r-i+s}$. The calculation for $\en{b_2}$ is
essentially the same, hence we are done.
\begin{Theorem} \sl With notation as above,
\begin{equation}
\mathcal C_s = \sum\limits_{i=0}^{\min(d,n)} \, \xi_i \, (\mathcal A_i,\Psi)_{r-i+s} +
\sum\limits_{i=0}^{\min(d,n')} \, \eta_i \, (\mathcal B_i,\Phi)_{r-i+s},
\label{formula.Cs} \end{equation}
where
\[ \begin{aligned}
\xi_i = \kappa \, m \, \sum\limits_k \; \{ \,
& \lambda(i,k;n,n') \; \mu(d-i+k,n+n'-2r-i+k \, ;s,i-k) \, \times \\
& \nu(d-i,n-i-r+k,k,n'-r \, ;s-i+k) \, \}, \end{aligned} \]
and
\[ \begin{aligned}
\eta_i = (-1)^r \, \kappa \, m' \, \sum\limits_k \; \{ \,
& \lambda(i,k;n',n) \; \mu(d-i+k,n+n'-2r-i+k \, ;s,i-k) \, \times \\
& \nu(d-i,n'-i-r+k,k,n-r;s-i+k) \, \}. \end{aligned} \]
The sums are respectively quantified over
\[ \begin{aligned}
{} & \max(0,r-n+i,i-s) & \le k \le \min(i,r), \\
& \max(0,r-n'+i,i-s) & \le k \le \min(i,r).
\end{aligned} \]
\end{Theorem}
\subsection{} \label{resolution.tau}
Note the following classical proposition:
\begin{Proposition} \sl A degree $m$ covariant $\Phi$ is a ${\mathbf Q}$-linear combination of
compound transvectants
\[ (\dots ((\mathbb F,\mathbb F)_{r_1},\mathbb F)_{r_2},\dots,\mathbb F)_{r_{m-1}}. \]
\end{Proposition}
\noindent {\sc Proof.}\; This is usually proved using the symbolic method, but
it is easy to give an alternate proof. Write $\Phi =
\sum\limits_i \, (\mathcal A_i,\mathbb F)_{d-i}$, and use the inductive hypothesis to write
each $\mathcal A_i$ in terms of compound transvectants. \qed
\smallskip
The only nonzero evectant of $\Phi = \mathbb F$ is $\mathcal A_d = 1$. Starting from this,
in principle we can calculate the evectants of any covariant.
I have programmed formula~(\ref{formula.Cs}) in {\sc Maple}, so that
the calculations can be made seamlessly.
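For concreteness, the three combinatorial coefficients are straightforward to transcribe
into any computer algebra system. The following Python fragment is only an illustrative
sketch of~(\ref{eqn.c2}), (\ref{eqn.c3}) and the $\nu$-sum (it is not the {\sc Maple} code
referred to above); the index $r$, which the text treats as fixed, is passed as an explicit
argument, and all arguments are assumed to lie in the ranges stated in the text.
\begin{verbatim}
from math import factorial as f

def lam(i, k, n, nprime, r):
    # eqn.c2: (-1)^k r!/(k!(r-k)!) i!/(i-k)! (n-i)!/(n-i-r+k)! n'!/(n'-r)!
    return ((-1)**k * f(r) // (f(k) * f(r - k))
            * f(i) // f(i - k)
            * f(n - i) // f(n - i - r + k)
            * f(nprime) // f(nprime - r))

def mu(p, q, ell, i):
    # eqn.c3, assuming ell >= i
    return (f(ell) // f(ell - i)
            * f(p + q - ell + 2 * i + 1) // f(p + q - ell + i + 1))

def nu(p1, q1, p2, q2, u):
    # the nu-sum of the second lemma; e.g. nu(2, 1, 1, 1, 1) == 1,
    # matching p1*q2 - q1*p2 for u = 1
    total = 0
    for t in range(max(0, u - min(q1, p2)), min(p1, q2, u) + 1):
        total += ((-1)**(u - t) * f(u) // (f(t) * f(u - t))
                  * f(p1) // f(p1 - t) * f(q1) // f(q1 - u + t)
                  * f(p2) // f(p2 - u + t) * f(q2) // f(q2 - t))
    return total
\end{verbatim}
The weights $\xi_i$ and $\eta_i$ of~(\ref{formula.Cs}) are then the stated sums of products
of these three quantities, multiplied by $\kappa \, m$ and $(-1)^r \, \kappa \, m'$
respectively.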
\begin{Example} \rm
Let $d=5, \Phi = (\mathbb F,\mathbb F)_2, \Psi = (\mathbb F,\mathbb F)_4$ and
$\Theta = (\Phi,\Psi)_1$.
Now $\Phi,\Psi$ have only one nonzero evectant each, namely
$\mathcal A_3 = \mathbb F, \mathcal B_1 = \mathbb F$. Hence $\Theta$ has evectants
\[ \begin{array}{ll}
\mathcal C_0 = \frac{1}{4} \, \mathbb F \, \Phi, &
\mathcal C_1 = \frac{2}{11} \, (\mathbb F,\Phi)_1, \\
\mathcal C_2 = - \frac{1}{4} \, \mathbb F \, \Psi
- \frac{5}{18} \, (\mathbb F,\Phi)_2, &
\mathcal C_3 = \frac{2}{7} \, (\mathbb F,\Psi)_1
- \frac{10}{21} (\mathbb F,\Phi)_3, \\
\mathcal C_4 = \frac{3}{20} \, (\mathbb F,\Psi)_2
- \frac{17}{56} \, (\mathbb F,\Phi)_4, &
\mathcal C_5 = -\frac{2}{21} \, (\mathbb F,\Phi)_5.
\end{array} \]
In fact $\mathcal C_5$ vanishes identically, since quintics have no covariant of degree-order $(3,1)$.
\end{Example}
It is now routine to calculate the evectants of $\vartheta_{22}^7$ and $ \mathbb F \, \vartheta_{39}$, and
hence finally $\mathcal E_\mathbb H$. The result is
\begin{equation} \mathcal E_\mathbb H(\mathcal F_Q) =
- \frac{2^6}{3 \cdot 5^{14}} \;
q_2^3 \, (q_0 \, q_2 + 3 \, q_1^2)^2 \,
\, (5 \, q_0 \, q_2 - q_1^2)^4 \, \mathcal F_{Q'}.
\label{EH.expression} \end{equation}
\begin{Corollary} \sl We have an equality
$J = \mathfrak b_{(1/6,2/45,-1/3)}$.
\end{Corollary}
\noindent {\sc Proof.}\; Comparing (\ref{Gamma.exp}) and (\ref{Ktau.eqns}) with (\ref{EH.expression}),
we can read off the values $(r,s,t)=(0,1,3)$. \qed
\medskip
It is unnecessary to make six iterations in order to calculate the evectants of $\vartheta_{22}^7$.
Instead observe that
\[ 14 \, \Gamma = \mathcal E(\mathbf x) \circ \vartheta_{22}(\mathbf y)^7 = 7 \, \vartheta_{22}(\mathbf y)^6 \,
\underbrace{\mathcal E(\mathbf x) \circ \vartheta_{22}(\mathbf y)}_{(\ast)}, \]
where $(\ast) = (\mathbf x \, \mathbf y) f_\mathbf x^4 \, f_\mathbf y$. Now one can find the Gordan series of $\Gamma$ directly.
\subsection{} Let $\Phi$ be a covariant of degree-order $(m,n)$. The covariance property of $\Phi$ implies
that its coefficients satisfy certain differential equations; this forces some identities between its
evectants $\mathcal A_\bullet$. In this section we will make them explicit.
Let $U = (u_0,u_1,u_2 \,) \hspace{-1.6mm} ( \, x_1,x_2)^2$ denote an arbitrary quadratic form.
\begin{Proposition} \sl
We have an equality
\begin{equation}
(( \, [\mathcal E(\mathbf x) \circ \Phi(\mathbf y)],\mathbb F(\mathbf x) \, )_{d-1},
U(\mathbf x) \, )_2 =
\frac{n}{d} \, \{ \, (\Phi(\mathbf x),U(\mathbf x))_1 \}_{\mathbf x:=\mathbf y}.
\label{Phi.diffeq} \end{equation}
\end{Proposition}
By construction $\mathcal E(\mathbf x) \circ \Phi(\mathbf y)$ has orders $d,n$ in $\mathbf x,\mathbf y$. Its $(d-1)$-th transvectant
with $\mathbb F(\mathbf x)$ has $\mathbf x$-order $2$, and finally the second transvectant with $U(\mathbf x)$ has
no $\mathbf x$-variables remaining. Thus both sides are order $n$ forms in $\mathbf y$.
\begin{Corollary} \sl
If $\Phi$ is an invariant, then
$(\mathcal E_\Phi,\mathbb F)_{d-1} = 0$.
\label{corollary.EPhi} \end{Corollary}
\noindent {\sc Proof.}\;
The right hand side of (\ref{Phi.diffeq}) vanishes,
hence the second transvectant of an arbitrary quadratic form with
$(\mathcal E_\Phi,\mathbb F)_{d-1}$ is zero. This forces the latter to be zero. \qed
\smallskip
We sketch a proof of the proposition. Since both sides are linear in $U$, it suffices to check the identity for
each of the basis elements $\{x_1^2, x_1 \, x_2, x_2^2 \}$. After unravelling the transvectants,
we are reduced to the following differential equations known to be
satisfied by any covariant (see~\cite[\S1.2.12]{Glenn}):
\begin{equation} \begin{aligned}
{} & \sum\limits_{i=0}^{d-1} \, (d-i) \, a_{i+1} \,
\frac{\partial \Phi}{\partial a_i} =
x_1 \, \frac{\partial \Phi}{\partial x_2}, \qquad
\sum\limits_{i=1}^d i \, a_{i-1} \, \frac{\partial \Phi }{\partial a_i} =
x_2 \, \frac{\partial \Phi}{\partial x_1}, \\
& \sum\limits_{i=0}^d \, (d-2i) \, a_i \, \frac{\partial \Phi }{\partial a_i} =
x_1 \, \frac{\partial \Phi}{\partial x_1} -
x_2 \, \frac{\partial \Phi}{\partial x_2}.
\end{aligned} \label{cayley.equations}
\end{equation} \qed
Broadly speaking, these equations express the fact that
$\Phi$ is annihilated by the three generators
$\left(\begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array}\right),
\left(\begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array}\right),
\left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array}\right)$
of the Lie algebra ${\mathfrak {sl}}_2$.
\subsection{}
It turns out that one can remove the reference to $U$ from~(\ref{Phi.diffeq}) and rephrase
it as a set of three identities in the $\mathcal A_\bullet$. They will be of the form
\[
\sum\limits_i \, \omega_{i,q} \, (\mathcal A_i,\mathbb F)_{d-i-1+q} = 0, \qquad (q=0,1,2), \]
where $\omega_{i,q}$ are certain rational numbers.
The calculations are thematically similar to those
we have just seen, so the derivation will only be sketched.
Write $\mathcal E(\mathbf x) \circ \Phi(\mathbf y) = p_\mathbf x^d \, q_\mathbf y^n, \,
\mathbb F = f_\mathbf x^d, \, U = u_\mathbf x^2, \, \Phi = \varphi_\mathbf x^n$. Then the left and right hand
sides of (\ref{Phi.diffeq}) respectively equal
\[( \, \underbrace{(p \, f)^{d-1} \, p_\mathbf x \, f_\mathbf x \, q_\mathbf y^n}_{(\star)},
u_\mathbf x^2)_2, \quad
( \, \underbrace{\frac{n}{d} \, (\mathbf x \, \mathbf y) \, \varphi_\mathbf x \,
\varphi_\mathbf y^{n-1}}_{(\star \star)},u_\mathbf x^2)_2. \]
The second transvectant of $Z = (\star) - (\star \star)$ with
an arbitrary $U$ is zero, so $Z$ itself must be zero. Now substitute
the sum $m \, \sum\limits_i \, (\mathbf x \, \mathbf y)^i \, {\alpha_{(i)}}_{\mathbf x}^{d-i} \,
{\alpha_{(i)}}_{\mathbf y}^{n-i}$ for $\mathcal E(\mathbf x) \circ \Phi(\mathbf y)$, and expand $Z$ into
its Gordan series. It is of the form
\[ Z = \mathbb T_0 + (\mathbf x \, \mathbf y) \, \mathbb T_1 +
(\mathbf x \, \mathbf y)^2 \, \mathbb T_2, \]
where $\mathbb T_q$ are of orders $2-q,n-q$ in $\mathbf x,\mathbf y$.
We get the required identities by writing $\mathbb T_q|_{\mathbf y:=\mathbf x}=0$.
Although {\sl a priori} $\mathbb T_1$ involves $\Phi$, we can rewrite the
latter in terms of $\mathcal A_\bullet$ using (\ref{identity.Ai.F}).
In conclusion we have the following theorem:
\begin{Theorem} \sl With notation as above,
\begin{equation}
\sum\limits_{i=0}^{\min(d,n)} \,
\omega_{i,q} \, (\mathcal A_i,\mathbb F)_{d-i-1+q} = 0,
\qquad (q=0,1,2), \label{identities.A.}
\end{equation}
where
\[ \begin{array}{ll}
\omega_{i,0} = d-i, &
\omega_{i,1} = \frac{(d-i)(2i-n)}{d \, (n+2)}+
\frac{m \, i-n}{m \, d}, \\
\omega_{i,2} = i \, (n-i) \, (d-i+n+1).
\end{array} \]
\end{Theorem} \qed
These identities are collectively equivalent to (\ref{cayley.equations}),
but in contrast to the latter, each of them is individually invariant under a change of co{\"o}rdinates.
If a sequence of covariants $\{\mathcal A_i\}$ of degree-orders $(m,d+n-2i)$
is to appear as the sequence of evectants of some $\Phi$, it is necessary that they satisfy the
identities above. It would be of interest to find a set of necessary and sufficient conditions.
\bigskip
{\sc Acknowledgements.} {\small
This work was funded by the Natural Sciences and Engineering Research Council of Canada.
The following electronic libraries have been useful
for accessing classical references:
\begin{itemize}
\item The G\"ottinger DigitalisierungsZentrum ({\bf GDZ}),
\item Project Gutenberg ({\bf PG}),
\item The University of Michigan Historical Mathematics
Collection ({\bf UM}).
\end{itemize}}
\section{Introduction}
Local Lorentz Invariance, or more specifically the question of
whether this physical principle will be maintained or broken at
the Planck scale, has become a much debated topic both in the
theory and phenomenology of quantum gravity. On the
phenomenological side, constraints on violation of Lorentz
invariance (termed ``Lorentz violation'' below) are becoming ever
more stringent \cite{Amelino-Camelia:2004hm}, and more arguments
have recently appeared suggesting that Planck scale Lorentz
violation is incompatible with current observations, without (at
best) additional fine-tuning being introduced
\cite{Collins:2004bp,Collins:2006bw}. With this progress, it
becomes increasingly important for the theoretical side to provide
predictions, at least of a heuristic nature: in any particular
theoretical framework, what becomes of Lorentz invariance? Is it
possible to maintain it, or must it be broken at some scale?
It is well-known that waves travelling on lattices violate Lorentz
invariance, and this is often used as an argument against
fundamental discreteness. Some Loop Quantum Gravity (LQG) based
arguments have been made to support this expectation
\cite{Gambini:1998it,Sahlmann:2002qj,Sahlmann:2002qk}, and some
further discussion of the methods used is given in
\cite{Bojowald:2004bb}. However, there are conflicting arguments
for local Lorentz invariance in LQG \cite{Rovelli:2002vp}, and
also some \cite{Livine:2004xy} for the ``third way'' of Doubly
Special Relativity (DSR) \cite{Amelino-Camelia:2005ne} in which
the Planck length or Planck energy is introduced as an invariant
scale along with the speed of light, deforming the Lorentz
transformations (an idea that is criticised in
\cite{Sudarsky:2005ua}). If the spin-foam quantum gravity program
is to provide a path-integral formulation of LQG, then it might be
expected that the argument which prevails in the LQG program will
also prevail for these models, and \textit{vice-versa}.
Two interesting strategies have been proposed to evade the
conclusion that Lorentz violation follows from discreteness. One
is to find a form of discreteness that retains Lorentz symmetry at
the level of the continuum approximation. The causal set
\cite{SpacetimeAsCS,Henson:2006kf} offers such a discretisation of
spacetime, as argued in \cite{Dowker:2003hb} and proved in a
strong sense in \cite{Bombelli:2006}. This allows waves to travel
on a fixed background causal set without extra Lorentz violating
terms appearing in the dispersion relation \cite{Henson:2006}.
This special property of the causal set is often given as a
motivation for using these structures as the histories in a
quantum gravity sum-over-histories (SOH).
In spin-foam models, this Lorentz invariance of individual
histories has not been claimed, and so a different argument must
be employed. Since discreteness arises in standard quantum
theories without violating continuum symmetries, it has been
suggested that the same might be true for Lorentz symmetry in
quantum gravity. This line of reasoning has been expanded on in
the context of LQG \cite{Rovelli:2002vp}. In that article, an
analogy is drawn between the rotation group in standard quantum
mechanics, and the Lorentz group in quantum gravity. It is pointed
out that, although the components of the angular momentum of, say,
an electron, cannot be simultaneously measured, nevertheless any
one can be measured and the theory is still rotationally
invariant. It is claimed that the same reasoning will prevail in
the case of measurements made with respect to different frames in
loop quantum gravity.
If this evasion of Lorentz violation were to be successful in
sum-over-histories approaches such as spin-foams, we would be able
to say that, although the individual histories of our theory
(which are configurations in one Hilbert-space basis, given at all
times) do violate the symmetry, somehow it can still be true that
our measurements do not. The underlying reasoning is that the
results of measurements need not all directly correspond to
properties of the histories. But in standard quantum theory,
approximate, macroscopic measurements (measurements that we can
make without significantly altering the state of the system)
\textit{do} correspond to properties of the histories. The main
point of this article is that even these macroscopic properties
are put in danger if the histories are not Lorentz invariant.
In the angular momentum example, this danger does not exist, since
all components of the (approximate) angular momentum of a
macroscopic object can be seen as a property of the histories. In
other words, measurements of angular momentum components of, say,
a baseball, can in principle be made in the same basis, up to an
acceptable degree of accuracy. But there is an important
difference in the case of Lorentz symmetry, more fully explained
later on. If a history in a quantum gravity SOH was like a
lattice, then even \textit{macroscopic} quantities could fail to
be properties of the history in highly boosted frames. This
situation is qualitatively different from that of rotational
invariance. It is argued below that this difference will prevent
Lorentz invariance in a class of discrete models.
This article is intended to introduce a different way of looking
at the problem of Lorentz violation in quantum gravity, through
the lens of the sum-over-histories formalism for quantum
mechanics. As such it does not take the argument to any level
of technical sophistication, but only presents a framework that,
it is hoped, will facilitate further debate. Roughly, the
argument proceeds in the following stages:
1) The outcome of the measurement of a macroscopic quantity
corresponds to a property of some subset of the histories in the
SOH. It follows that, in our quantum gravity SOH, there must be
an approximate correspondence between some of the histories and
Lorentzian manifolds (or at least those properties of Lorentzian
manifolds that we expect to be measurable in principle). It is
further assumed that some of the histories should have Minkowski
space as an approximation.
2) A semi-classical state that ``tracks'' Minkowski space is
considered, along with quantities that are defined with respect to
some co-ordinate frame on this Minkowski space. Consider a
quantity that is macroscopic and measurable in this ``fiducial''
coordinate frame. For Lorentz invariance to hold, this quantity
must be macroscopic and measurable in all frames. Therefore there
must be histories, each of which has properties that correspond to
the outcomes of measurements of the macroscopic quantity in all
frames.
3) There is a condition on the histories of a quantum gravity SOH
(arguably true for lattices and similar discretisation schemes)
under which no one history can contain such properties in all
frames. It is shown that this leads to either Lorentz violation or
a lack of macroscopically observable properties.
4) Spin foam quantum gravity models are likely to satisfy this
condition if they are discrete at the Planck scale.
The argument relies on the correspondence between the outcomes of
measurements and the properties of histories in the SOH. This
view, suggested by Feynman in his original paper on the subject
\cite{Feynman:1948ur}, is not currently \textit{en vogue}, and so
a justification of the point is necessary. Because of this, the
first section below is a detour into sum-over-histories quantum
mechanics, which also serves to fix some notation. As this
viewpoint might be useful for other problems in quantum gravity, a
fuller treatment is given in the appendix. In subsequent
sections, we return to the specific case of discrete quantum
gravity, covering the other points mentioned above. An outline of
the conclusion is this: Lorentz violation at the level of the
individual histories of a theory, if it causes macroscopic
properties to be badly approximated in some frames, leads to
Lorentz violating predictions.
\section{Properties of histories, outcomes of measurements}
Must there be an approximate correspondence between some of the
histories in our quantum SOH, and the histories of the classical
theory that we wish to approximately recover? Specifically to
quantum gravity, we might ask: should some of our histories have
Lorentzian manifolds as approximations at some large scale? The
existence of such a discrete/continuum correspondence is supported
in some of the literature (\textit{e.g. } \cite{SpacetimeAsCS}, and
\cite{Bombelli:2004si} in which such a correspondence is sought
for spatial configurations rather than histories); however, it is
not an uncommon opinion that this will not be necessary (see \textit{e.g. }
\cite{Markopoulou:2002ru}). The properties of the continuum
manifold need only become evident, it is argued, in quantum sums
over many of the fundamental histories. This view is no doubt
motivated by thinking of properties like the momentum of a quantum
particle. This property is certainly not a property of any history
(if those histories are written in the position basis)
\footnote{A similar intuition might be noted, coming from the
expansion of path integrals for interacting quantum field theories
written in terms of Feynman diagrams. There, the histories appear
merely as abstract graphs, plus some extra information, that do
not correspond to properties that we eventually measure. But,
again, the intuition that histories do not correspond to
measurements does not extend to macroscopic properties.}. From
this, it is sometimes concluded that individual histories in the
SOH are of no physical significance.
Here it is argued that this reasoning fails to apply for
properties that are macroscopic in a suitable sense. The momentum
of an electron may not be a property of a history, but in a
semi-classical state, the momentum of a baseball (measured with
some acceptable degree of accuracy) is. On a suitably
coarse-grained scale, the momentum of the baseball can be read off
from a typical history in the same way as it would have been in
the classical theory. This not only holds in all standard quantum
theories, but is a necessary feature of any physically realistic
quantum theory, as explained in the appendix.
This SOH view of quantum mechanics is nothing but a re-casting of
the standard formalism; it is possible to describe our present,
successful theories in this language. The following arguments on
Lorentz violation apply only to theories that are compatible with
this standard framework, and so this is introduced as a first
condition for the main argument to hold.
\begin{condition}
\label{c:soh} The quantum gravity theory is compatible with SOH
formalism.
\end{condition}
\subsection{The sum over histories}
\label{s:SOH}
First, we recall some features of standard quantum mechanics. In
the paper in which he introduced the SOH formalism
\cite{Feynman:1948ur}, Feynman also gave some appealing and useful
ways of picturing it, allowing an easy connection between the
formalism and observations.
The theory has a history space $\Omega$, the elements of which are
the histories. A history gives a possible configuration of the
system in question (in some special Hilbert-space basis) at all
times under consideration. For example, in the case of the
Schr\"odinger particle they are continuous (but not necessarily
differentiable) paths $x(t)$ between some initial and final times.
A brief quote from Feynman shows how measurements are dealt with
in this formalism:
\begin{quote}
The probability that a particle will be found to have a path x(t)
lying somewhere within a region of space-time is the square of a
sum of contributions, one from each path in the region.
\cite{Feynman:1948ur}
\end{quote}
So, in measurement situations, our quantum theory associates
probabilities to properties of the histories. When we talk of a
property $X$ of a history, we can associate to it a set of
histories that have that property, $\Gamma(X) \subset \Omega$, and
a question ``Does the system have property $X$?''. For example,
the property may be that $a<x(t)<b$ for some particular $a$, $b$
and $t$. Answering the associated question requires a measurement,
in this case a standard measurement. More generally, we might ask
``did the particle's world-line pass through spacetime region
$R$?'' for some particular $R$, which amounts to a non-standard,
``continuous'' measurement.
If we observe a property $X$ we condition on the set of histories
$\Gamma(X)$ with that property, throwing away all the histories
not in $\Gamma(X)$ and carrying out the appropriate
renormalisation of the probabilities (the equivalent of the
``collapse of the wave function'' from this perspective). We
might want to carry out another measurement, finding that the
system has another property, $Y$, and we would then further
condition on the smaller set of histories $\Gamma(X) \cap
\Gamma(Y)$, and so on. We will call this subset of the history
space the \textit{measured set of histories} or \textit{measured
set}. See \cite{Sinha:1991cj} for an application of the formalism
to a familiar experiment.
From the quote, we can see that the individual histories in the
measured set do have at least some significant properties. The
properties of the system that we observe are those shared by all
the histories in the measured set. So far, there is no need to
talk about anything but properties of histories in order to
describe the observations\footnote{Note that we have not stepped
outside of the normal interpretation of quantum mechanics.
Physical properties can therefore be interpreted here in an
operationalist sense. However, if preferred, it is also possible
to apply a ``decoherent histories'' style interpretation (see \textit{e.g. }
\cite{Hartle:1992as}) without altering the argument.}. But all
these measurements were in the specially selected basis. So the
question becomes: how much of the observed data can be described
by measurements in this one basis?
As noted above, not all microscopic variables in a quantum system
are directly represented in the histories. But it presents no
difficulty to perform measurements of all \textit{macroscopic}
quantities in one basis, when they are suitably defined. Recall,
for example, the properties of a semi-classical, coherent state
for the Schr\"odinger quantum theory: the product of uncertainties
of position and momentum is minimised, and the quantum state, when
peaked on a certain position and momentum, ``tracks'' the
associated classical solution over time. An approximate
measurement of position (projecting onto a certain macroscopic
range, say) can be made without significantly disturbing the state
-- indeed, this might be taken as a definition of a macroscopic
measurement. By appeal to the classical theory, we can now see
that a time-sequence of such position measurements must be enough
to approximately determine the conjugate variable, the momentum.
Thus the (approximate) momentum of a macroscopic object can be a
property of the histories in the measured set, just as the
position is -- even though the ``typical path'' can be highly
fractal on smaller scales.
The argument that this is the case for standard theories, and
moreover must be the case for any realistic quantum theory with a
good semi-classical regime, is left for the appendix. Also
treated there, in section \ref{a:Feynman}, is a method sketched
by Feynman for indirectly finding the results of all microscopic
measurements in one basis. The conclusion is that \textit{each
individual history in the measured set must have all of the
properties that we observe}.
\subsection{Effective descriptions}
\label{s:eff}
Sometimes our fundamental histories are not directly described in
terms of the properties that we are familiar with at the
macroscopic level. But our observations may be formalised in terms
of these effective properties.
For example, if our system is a collection of molecules, we might
make a hydrodynamical approximation. But this continuum
description is of course not perfect. Some of the properties of
the hydrodynamical description do not have any physical
significance. An example is the property that a density
perturbation of amplitude $A$ and wavelength $\lambda$ exists in
some region of the fluid. This ceases to make sense in the
underlying history when $\lambda$ becomes close to the
intermolecular separation.
If an effective property $X$ does not have any corresponding
property in a fundamental history $h$ (\textit{i.e. } if it is impossible to
tell from $h$ whether or not $X$ has happened) then we must
consider $X$ false for that history, and so $h$ would not be
included in $\Gamma(X)$. Such properties are termed
``undecidable'' for that history (the converse being
``decidable'').
\section{Lorentz invariance}
\label{s:LI}
These considerations will now be applied to the problem of Lorentz
invariance in quantum gravity. But what would Lorentz symmetry
mean in this context?
Here our system is spacetime, and our histories, at least on an
effective level, are Lorentzian manifolds. It is simplest to
assume that there is a semi-classical state that tracks Minkowski
space, in order to talk about global Lorentz invariance. This
should not be a controversial assumption, as much effort has gone
into attempting to describe such states in the various quantum
gravity programs. To say that this state is Lorentz invariant
means that there should be no way to pick out a preferred
direction in spacetime by any measurement. This immediately tells
us that, if we expect Lorentz invariance, then we expect the
semi-classical behaviour to persist in all frames. But
semi-classical behaviour (and what it means to track a classical
solution) should be defined in terms of some class of observable
quantities that we expect to be able to measure.
\subsection{Macroscopic observables in quantum gravity}
Our measurements might be measurements of time and length in one
frame, made at scales appropriate to the semi-classical state, \textit{e.g. }
the informal, very approximate measurements of space and time that
we make every day. They could also include measurements of other
fields. Different observers might set out to measure similar
macroscopic quantities in other frames.
Already, the terms ``macroscopic'', ``semi-classical'' and
``tracking Minkowski'' have been used, but no exact definition has
been given. What exactly are these scales at which observables
can be considered macroscopic, in quantum gravity? No detailed
investigation shall be launched into here; it is only necessary to
stress the uncontroversial point that some familiar observables
must be considered to be in this class. It is trivial to observe
good approximations to classical predictions, for example, in
electromagnetic waves moving on an approximately flat background,
for certain values of frequency and amplitude. Our theory of
quantum gravity must be able to reproduce these results and others
like them, and so measurements of such waves must be in the class
of macroscopic measurements that can be made in that theory. And
in our semi-classical state that tracks Minkowski space, we can
add the extra expectation of Lorentz invariance. This has the
important consequence that any frame-dependent measurements
considered macroscopic with respect to one frame $F$ must be
considered macroscopic with respect to frames that are boosted
with respect to $F$.
As noted in appendix \ref{a:semi}, no such measurements should
alter the state of the system in any detectable way, and so they
should not affect each other -- there must come a point at which
the presence or absence of wave-function collapse ceases to be
physically significant, and this point comes after the
semi-classical scale of the given system is reached. By analogy,
we can define an approximate measurement of one component of the
angular momentum of a baseball, that does not significantly alter
the state; likewise, the measurement can be made in other frames
rotated with respect to the first, as many of them as we choose,
without causing detectable changes in the state. (Otherwise, to
repeat a point made in the appendix, the question of whether wave
function collapse occurred when we photographed a spinning ball,
or later when we developed the photograph, would become
physically significant. Such theories are physically
unacceptable.)
This is perhaps a source of the difference between the arguments
presented here, and those of \cite{Rovelli:2002vp}. There, the
non-commutativity of observables corresponding to measurements in
different Lorentz frames is emphasised. The arguments above
contrast with this. They basically assert that for semi-classical
states, there is a sense in which non-commutativity must become
practically insignificant for some class of approximate
measurements, in order to prevent macroscopic quantum interference
effects. This is the situation in standard theories, and there
seems to be no reason to change the requirement in the case of
quantum gravity.
Following the reasoning of the previous section, all of these
macroscopic observations must correspond to properties of the
underlying histories. Further investigation shows that only
certain theories can maintain Lorentz invariance under this
condition.
\subsection{A simple example}
\label{s:simple}
A light-cone lattice offers a simple discretisation of 2D
Minkowski space, with a fairly obvious discrete/continuum
correspondence. This example offers some interesting insights.
Let the histories of our theory be examples of such a lattice,
with a scalar field living on it. More specifically, the
structure is a directed graph on a set of elements $\{e(i,j)\}$
with $i,j\in {\hbox{\openface Z}}$. The graph edges run between the pairs
$\{e(i,j),e(i+1,j)\}$ and $\{e(i,j),e(i,j+1)\}$. To each element
is associated a real number $\phi(i,j)$ to represent the scalar
field. Different values of the field give rise to different
histories.
These histories are not defined in terms of a manifold at all.
They are purely ``abstract graphs". Also, the labels given to the
elements need not be considered to be part of the structure; they
can be reconstructed from the directed graph edges up to
``discrete translations'' (\textit{i.e. } $i \rightarrow i+ x$, $j
\rightarrow j + y$, $x,y\in {\hbox{\openface Z}}$)\footnote{More properly,
the histories should be considered to be equivalence classes of
the directed graphs under the above relabellings of the
elements.}. The only ``intrinsic" properties of an element are the
graph edges connected to it and value of the field there.
Despite this, a correspondence to 2D Minkowski space can be made
in an obvious way: the elements can be assigned to points in
Minkowski with $u=a i$ and $v=a j$, where $a$ is some length
(which might be imagined to be the Planck length), and $(u,v)$ are
the standard light cone co-ordinates in Minkowski space, in terms
of which the metric is $ds^2=2dudv$. This embedding defines the
manifold approximation: a scalar field $\Phi(u,v)$ on Minkowski
space can be considered as an approximation to $\phi(i,j)$ if a
frame can be found such that $\phi(i,j)=\Phi(ai,aj)$.
Realistically, we would only assume this to some degree of
approximation, and also require that the field $\Phi(u,v)$ was not
``quickly varying'' with respect to the distance $a$. It can
easily be seen that this correspondence principle is consistent
with the discrete translation invariance of the fundamental
histories.
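To make the construction completely concrete, here is a minimal computational sketch of a
single history together with a field that has a slowly varying plane wave as its continuum
approximation. It is purely illustrative: the finite patch size, the function names and the
numerical values are choices made for this example, not part of the model.
\begin{verbatim}
# One history of the toy model: a finite patch of the light-cone lattice
# carrying a real scalar field.  The labels (i, j) are bookkeeping only;
# the intrinsic data are the directed edges and the field values.
import math

def make_history(size, field):
    edges, phi = [], {}
    for i in range(size):
        for j in range(size):
            phi[(i, j)] = field(i, j)
            if i + 1 < size:
                edges.append(((i, j), (i + 1, j)))
            if j + 1 < size:
                edges.append(((i, j), (i, j + 1)))
    return edges, phi

# A lattice field approximated (via the embedding u = a*i, v = a*j) by the
# plane wave Phi(u, v) = sin(u / lam), slowly varying since lam >> a:
a, lam = 1.0, 50.0
edges, phi = make_history(200, lambda i, j: math.sin(a * i / lam))
\end{verbatim}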
Note that, as well as defining a field on the Minkowski space, the
fundamental history also defines a frame on the Minkowski space
(in which the $(u,v)$ co-ordinates are defined), which we will
call the ``lattice frame''. For any single fundamental history,
the lattice frame in the effective Minkowski description is fixed
relative to the field configuration.
How do we begin to talk about Lorentz invariance in this model?
Lorentz invariance only makes sense in the continuum, and so any
Lorentz transformations must be applied at the level of the
effective continuum description. It will be said that one
effective property $X$ is related to another $X'$ by a Lorentz
transformation $\Lambda$ if every field configuration $\Phi(u,v)$
satisfies property $X$ if and only if $\Lambda \Phi(u,v)$
satisfies $X'$.
The problem with this kind of discretisation is that it is ``not
equally good in all frames''. For a given fundamental history,
some properties that are \textit{decidable} (in the sense of
section \ref{s:eff}) are related by Lorentz transformations to
\textit{undecidable} properties. For example, a plane wave,
written $\Phi(u,v)=\sin(u/\lambda)$ in the lattice frame, can only
be an approximation to a discrete field when $\lambda>>a$.
Therefore the property ``there is a plane wave of amplitude $A$
and wavelength $\lambda>>a$ with respect to frame $F$, in region
$R$'' is only decidable for a particular fundamental history when
the frame $F$ is sufficiently near to the lattice frame (remember
that the lattice frame is fixed relative to the field
configuration for each history).
Crucially, there is nothing to stop these properties from being
semi-classical, macroscopic properties. Even on a very fine
lattice, an extreme enough boost will take a wave of arbitrarily
long wavelength to one with ``sub-lattice'' wavelength. See
figure \ref{f:regions} for a further example.
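The scale of the effect is easy to estimate. A wave of fixed wavelength $\lambda$ with
respect to a frame boosted by rapidity $\xi$ (in the appropriate direction) oscillates with
period $\lambda \, e^{-\xi}$ in the lattice-frame null coordinate $u$, which the lattice
samples only at $u = a i$. The following Python fragment is a toy illustration of the
resulting transition from decidable to undecidable; the particular numbers and the crude
Nyquist-style criterion are choices made here for illustration, not part of the model.
\begin{verbatim}
import math

a = 1.0        # lattice spacing (think of the Planck length)
lam = 1.0e20   # wavelength in the boosted frame, macroscopic in units of a

for xi in (0.0, 20.0, 40.0, 46.0, 50.0):
    points_per_period = lam * math.exp(-xi) / a
    status = "decidable" if points_per_period > 2.0 else "undecidable"
    print(f"rapidity {xi:5.1f}: {points_per_period:10.3e} "
          f"lattice points per period -> {status}")
\end{verbatim}
However small $a$ is made, a sufficiently large rapidity drives the number of lattice
points per period below one, at which point the corresponding property is undecidable for
that history.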
\begin{figure*} \centering
\resizebox{4.4in}{2.2in}{\includegraphics{regions2.eps}}
\caption{\small{The diagram represents two regions, $R_1$ and
$R_2$, in 2D Minkowski space, into which is embedded a light-cone
lattice as described in section \ref{s:simple}. $R_1$ is related
to $R_2$ by a boost. A scalar field defined on the lattice can be
approximated by a field on the Minkowski space. But it is hard to
imagine how all the properties of the field decidable in $R_1$
could also be decidable in $R_2$, as $R_2$ contains no embedded
lattice points. With sufficient boosts, similar situations can be
constructed with any finite lattice density and any size of
region, leading to the presence of macroscopic properties of the
history that are decidable in one frame but not another.
}\label{f:regions}} \end{figure*}
\subsection{Lorentz violation, from histories to observations}
This is one example from a class of such discretisation schemes.
A condition on such schemes is now presented that encapsulates
the problem.
\begin{condition}
\label{c:lv}
The set of histories of the theory that have Minkowski space
(plus matter) as an effective description is non-empty. Consider
some region $R_1$ in Minkowski and let $X_1$ be a macroscopic
effective property of that region, and also consider another
region $R_2$ related by a boost transformation (with some
sufficiently large boost factor) to $R_1$. Let $X_2$ be the
property of $R_2$ related to property $X_1$ of $R_1$ by that
boost. Then for some $X_1$, either $X_1$ or $X_2$ is
undecidable, for all histories. \end{condition}
In the lattice example above, this condition is no doubt
satisfied. We will now consider a hypothetical realistic model
in which the above condition is also satisfied. Does this lead
to Lorentz violation? It might be imagined that the ``lattice
frame'' could vary over the histories in the measured set,
somehow ameliorating the frame dependence of the individual
histories. This is now shown to be impossible: the above
condition leads to in-principle observable Lorentz violation.
What if we were to attempt to measure for both properties $X_1$
and $X_2$? We need the probability associated with the measured
set $\Gamma(X_1) \cap \Gamma(X_2)$. Unfortunately, either $X_1$
or $X_2$ is undecidable in every history. This means that the
set $\Gamma(X_1)$ is disjoint from the set $\Gamma(X_2)$, and so
$\Gamma(X_1) \cap \Gamma(X_2) = \emptyset$. This pair of
macroscopic properties is undecidable in all histories, and hence
always false; it is unphysical. If there are many easily
measurable pairs like $X_1$ and $X_2$, this is absurd. The only
solution is that only one of the properties can be considered a
physical, measurable quantity, and so, since one is related to
the other by a boost, Lorentz symmetry is violated. The
conclusion is that \textit{this kind of Lorentz violation in the
individual histories necessarily leads to Lorentz violation at
the level of macroscopic observables}.
In this argument, no scale has been put on the Lorentz violation
of the individual histories as expressed in condition \ref{c:lv},
hence the vagueness of the phrase ``with some sufficiently large
boost factor''. But we would expect that Planck scale
discreteness implies Lorentz violation at a related scale, as it
does in the case of the lattice. A quantification of the effect
would depend on the range of boosts over which a certain property
was well approximated in the individual histories. No attempt to
quantify this violation for any theory is made here.
In the specific semi-classical situation being described, some of
the symmetries of GR remain; general covariance has been broken to
the Poincar\'e invariance of Minkowski space. How then do we fix
the positions of regions $R_1$ and $R_2$ in all relevant
histories? The positions of the regions have no meaning \textit{a
priori}. They must be determined with reference to something else,
by the properties of other fields in the theory. Here it is
assumed that this can be done, the reason being that, if it could
not be, then the theory would have failed in any case. It must be
possible to identify in all relevant histories, for example, the
world-line of the planet earth. With this done, regions could be
identified with reference to it and the above conclusion drawn. In
the lattice example, the world-line of the planet earth would be
described by the configuration of the fields on the lattice, and
would necessarily be identifiable across all histories in the
measured set. The details of this scheme to fix the positions of
$R_1$ and $R_2$ would not impact the argument that no single
lattice-like history can accurately represent our two macroscopic
properties $X_1$ and $X_2$ (which, it should be noted, may be
defined in overlapping regions, and/or be supplemented with many
additional, similar properties).
\section{Consequences for discrete quantum gravity}
\label{s:spinfoams}
In order to reach the conclusion, we must assume condition
\ref{c:lv}, a condition arguably true for lattice-like discrete
structures. The question now becomes: which discrete structures
proposed in the various approaches to quantum gravity satisfy the
condition, and which evade it?
A discretisation scheme along the lines of a regular lattice
would satisfy condition \ref{c:lv}, but that is not a
particularly interesting case; this is not the basis of any
popular attempt to quantise gravity. In order to fully address
the issue in a given approach, a method of assigning continuum
approximations to the discrete underlying structure would have to
be specified: a map from the individual histories to Lorentzian
manifolds.
In causal set theory, the correspondence between the discrete
underlying structures and the continuum manifold was early on made
explicit, and these discrete structures have been shown to be
Lorentz invariant in a strong sense \cite{Bombelli:2006}, evading
condition \ref{c:lv}. In particular, in contrast to the light-cone
lattice, a scalar field dynamics on a fixed causal set background
can be defined which does not violate Lorentz symmetry
\cite{Henson:2006}.
The situation for spin-foam models is less clear, as here a
correspondence between continuum manifolds and the underlying
histories has not been made completely explicit. However, some
clues are available. An analogous situation has been considered
in loop quantum gravity, where discrete spatial configurations
must be approximated by continuous Euclidean manifolds. In
\cite{Bombelli:2004si}, a correspondence is developed for the
simple but suggestive case of unlabelled graphs.
A graph can be associated to a manifold by a random process.
First, a locally finite set of points is randomly selected from
the manifold by a Poisson process (the resulting set of points
being called a ``sprinkling''). To this sprinkling, a graph is
associated via a Voronoi procedure \cite{Bombelli:2004si}. A
graph is said to be approximated by a manifold if it could have
arisen in this way from a sprinkling of that manifold, with
relatively high probability. This in-principle correspondence can
be reversed to order to recover geometrical information (\textit{e.g. } on
area, volume and dimension) directly from a given graph. To allow
for quantum behaviour at short scales, the discrete/continuum
correspondence could be applied only on large scales, by using
some coarse-graining of the fundamental graph
\cite{Bombelli:2004si}.
In this way many desirable features are recovered, for example a
proportionality between the area of a surface of co-dimension 1,
and the number of edges crossing it (something desirable from the
standpoint of loop quantum gravity), and also a proportionality
between the number of vertices in a region and its volume. Thus
it is plausible that the scheme could be extended to spin
networks. Since these are nothing but spatial slices of
spin-foams, a similar scheme might be sought in that case, keeping
these necessary proportionalities.
However, a na\"ive application of such a scheme to spin-foams
could not be Lorentz invariant. It has been proven that no finite
valency graph can be associated to a sprinkling of Minkowski space
in a way that respects Lorentz symmetry \cite{Bombelli:2006}, and
each spin-foam contains such a graph. Thus there would always be a
preferred frame associated to each history like the lattice frame
of section \ref{s:simple}, and it is reasonable to assume that
condition \ref{c:lv} would be violated in this case. Without the
randomness of sprinkling, the correspondence might have to rely on
some more regular embedding of vertices into manifolds, something
even more likely to produce Lorentz violations, as happens for
regular lattices.
It could be argued that the correspondence principle will be more
subtle, not relying on attempts to embed vertices into manifolds.
But this would seem unlikely in the light of the successes of the
scheme for spatial configurations mentioned above. Moreover, it
is hard to imagine how to avoid the concept of embedding, at least
at a coarse-grained level, if the vertices of the discrete
structure are to correspond in any way to spacetime points or
``elementary regions''.
But there is another objection to this line of argument. The
Lorentz violation on a lattice is a result of the discreteness.
The finer the lattice, the higher the boosts at which it can
successfully represent macroscopic properties. But for any
lattice there will come a boost at which these properties cease to
be decidable. Condition \ref{c:lv} requires that there is a boost
factor sufficient to bring about this situation in all histories
that correspond to Minkowski. This clearly implies that there is
an upper bound on the boost necessary for each individual history.
In the case of spin-foams, this might be questioned. Could there
be no upper bound to how ``fine'' a spin-foam can become, in the
appropriate sense?
The form of the sum-over-triangulations in spin-foam quantum
gravity has not been fixed, and so only more speculation can be
offered on this point. An embedding scheme that put no upper bound
on the number of vertices per unit of 4-volume could possibly
evade condition \ref{c:lv} on these grounds. It has even been
suggested that a continuum limit may have to be taken to get
consistent results from spin-foam models \cite{Baez:2002aw}, in
which case the fundamental histories would not be discrete at all.
But these alternatives seem unlikely to be consistent with the
correspondence of continuum 3-volume and area with properties of
the slices of the underlying spin-foam. Only the establishment of
a full discrete/continuum correspondence principle could
unambiguously answer this question. Even so, it is interesting to
note that for such an evasion to be possible, Planck scale
discreteness would be sacrificed. The import of this observation
depends upon the value put on Planck scale discreteness as a
desirable aspect of a candidate quantum gravity theory
\footnote{see \textit{e.g. } \cite{Henson:2006kf} for the attitude towards
discreteness taken in the causal set program.}.
\subsection{Further discussion}
Apart from the possible applications of condition \ref{c:lv} to
spin-foams, there are other possible objections to the argument
laid out here that should be mentioned. For example, some
theories may not satisfy condition \ref{c:soh}; certain approaches
to quantum gravity might not have any SOH formulation in their
final forms. But this seems unlikely to affect the arguments on
spin-foams, especially when the goal of the program is expressed
as finding a path-integral theory for quantum gravity (as, for
example, in \cite{Oriti:2001qu}). Any theory that did not satisfy
condition \ref{c:soh} would differ considerably from standard
quantum mechanics; in that case many questions would arise
concerning the consistency of the theory and the interpretational
framework to be used.
Secondly, there is some lack of precision in the idea of
``effective descriptions'' here, but it would be difficult to
argue that a lattice-like structure does in fact have enough
structure to represent waves in all frames. Similarly, although
all the ``macroscopic properties'' of quantum gravity are not
described, they must include the well-observed macroscopic
properties that are present in today's successful theories.
In the argument, it was assumed that the quantum gravity theory
would have semi-classical solutions corresponding to Minkowski
space, and this was used in subsequent arguments. This might not
be possible in principle, and it is not a realistic scenario. But
it should be possible to generalise to a situation in which
Minkowski was only a good approximation in a region, without
altering the main points of the argument. The failure of
lattices, and similar structures, to represent macroscopic
properties at very high boosts, is not a subtle effect but a very
marked one, and so it is unlikely to be removed by small
corrections to the picture given above.
Other objections might be found in the practicality of finding and
measuring pairs of events such as $X_1$ and $X_2$, but one would
have to explain why ``an electromagnetic wave with given frequency
and amplitude, travelling through a region $R_1$'' is not a good
property to use for $X_1$, if it is accepted that properties of
this form are measured for in tests of Lorentz invariance. The
biggest problem is that there is no concrete prediction for the
scales at which Lorentz violation would be seen, and this leaves
the door open to Lorentz violation at unmeasurably high boosts.
Thus the argument presented here should be seen as only one step
in the ongoing discussion of Lorentz violation.
\section{Conclusion}
The argument set out above works from the sum-over-histories view
of quantum mechanics. Firstly, it is argued that all macroscopic
observations correspond, in a fairly direct way, to properties of
histories in the measured set of histories. If a semi-classical
state is invariant under some symmetry, then the class of
observables that can be considered to be macroscopic will also be
invariant under this symmetry. In a semi-classical state of
quantum gravity that tracks Minkowski space, we expect to be able
to make macroscopic measurements in any frame. Above, a condition
has been set down under which this is impossible, as there are not
enough properties in the underlying histories to represent these
observations. Some discussion has been given of the relevance to
the spin-foam and causal set quantum gravity programs.
One purpose of the article was to call attention to the necessity
of an in-principle correspondence between fundamental histories in
a quantum gravity theory, and Lorentzian manifolds which
approximate to them. In the case of spin-foams, even establishing
some broad features of this correspondence would enable some
conclusions to be drawn from the arguments above, and would
doubtless be useful elsewhere.
Finally, no discussion of doubly special relativity has been made
here, and it is possible that this allows some compromise between
this line of reasoning and those put forward elsewhere, for
example in \cite{Livine:2004xy}. In conclusion, the arguments
given may put the spin-foam program in the happy situation of
predicting effects that are almost within reach of observation, or
conversely, the principle of Lorentz invariance could be used to
decide amongst various spin-foam models. But these applications
are dependent on further developments in the ongoing discussion of
Lorentz violation in quantum gravity.
The author is grateful to Jeremy Butterfield, Adrian Kent, Fay
Dowker, Rafael Sorkin and Sumati Surya for correspondence and
discussions of this work, and to the organisers and participants
in the Loops '05 conference, many of whom contributed to the
debate on this issue. The author was supported by DARPA grant
F49620-02-C-0010R at UCSD, where some work on the article was
carried out.
\section{Introduction}
Probing physics beyond the Standard Model\ requires high precision experiments,
preferably on quantities whose standard model values are suppressed.
A well known example of this type of observable
is the $ \rho $ parameter, whose value is very close to one due to
the $ \su2_R $ transformation properties of the Standard Model\
scalar doublet.
A particularly interesting set of processes for which the Standard Model\
contribution is very much suppressed
consists of those for which CP is
violated. In the Standard Model\ amplitudes for CP violating processes
are associated with the phase
of the Kobayashi-Maskawa matrix and are extremely small
\cite{Quinn93,Peccei93}. In contrast,
many kinds of new physics generate comparatively
large amounts of CP violation \cite{Gunion92}.
CP violating observables are therefore very good candidates in which to
look for new physics effects.
In this talk I will consider the possibility of observing CP
violating processes
within the gauge-boson and scalar sector of the Standard Model \cite{Ma93};
part of this work was done in collaboration with J. Gunion and
B. Grzadkowky; a generalization is under investigation.
With only Standard Model\ interactions these effects are negligible, but this
need not be the case in general. The environment in which I will
study these processes is the photon-photon collider.
Though such a machine will probably be constructed using
back-scattered laser radiation in an $ e^+ e^- $ collider \cite{Ginzburg83},
in this talk I will consider, for clarity and brevity, an
ideal monochromatic
$ \gamma \gamma $ collider in which both photons can be given any
desired polarization. As I will show, even in this utopian
situation there are great difficulties in observing a clear signal
for some of the processes considered.
The approach which I will follow in this talk is to parametrize the
effects of new physics in a model and process independent way by
using an effective lagrangian \cite{Wudka94}.
This approach, by its very nature, preserves all the successes of
the Standard Model\ while incorporating new physics in a consistent manner.
I will not describe the formalism in detail here but refer the
reader to \cite{Wudka94}.
Briefly, what is required is to construct all dimension
six operators containing Standard Model\ fields and respecting the symmetries
of the Standard Model, which also violate CP. The effective lagrangian consists
of the Standard Model\ lagrangian plus a linear combination of these operators
with undetermined coefficients. The value of these coefficients
cannot be determined
without further knowledge of the physics underlying the Standard Model;
nonetheless these couplings can be estimated using consistency
conditions. These estimates are not numerically accurate; nonetheless
they do provide reliable order of magnitude values, which is what is
needed in order to determine the sensitivity of a given
experiment to the scale of new physics.
In this talk I will consider the processes
\begin{equation}
\gamma \gamma \rightarrow
W^+ W^- , \ Z Z , \ H H , \ H . \end{equation}
Some comment on the fermion anti-fermion
final state will be made at the end.
I will assume a ``light'' Higgs \begin{equation} m_H \ll 3~\hbox{TeV} .
\end{equation}
For the processes of interest the relevant dimension six operators are
\cite{Buchmuller86}
\begin{eqnarray}
{\cal O}_{ \varphi \tilde W } &=& \left( \varphi^\dagger \varphi \right)
W_{ \mu \nu }^I \tilde W_{ \mu \nu }^I \\
{\cal O}_{ \varphi \tilde B } &=& \left( \varphi^\dagger \varphi \right)
B_{ \mu \nu } \tilde B_{ \mu \nu } \\
{\cal O}_{ B \tilde W } &=& \left( \varphi^\dagger \tau^I \varphi \right)
B_{ \mu \nu } \tilde W_{ \mu \nu }^I \\
{\cal O}_{ \tilde W } &=& \epsilon_{ I J K } W_{ \mu \nu }^I
W_{ \nu \lambda }^J \tilde W_{ \lambda \mu }^K
\end{eqnarray}
So that the lagrangian becomes
\begin{eqnarray}
&& \!\!\!\!\!\!\!\!\!\ {\cal L} = {\cal L}_{SM} + \inv{ \Lambda^2 }
\Bigl[
g g' \; \alpha_{ B \tilde W } {\cal O}_{ B \tilde W }
+ g^3 \alpha_{ \tilde W } {\cal O}_{ \tilde W } \nonumber
\\ && \quad + g^2 \alpha_{ \varphi \tilde W } {\cal O}_{ \varphi \tilde W }
+ g' {}^2 \alpha_{ \varphi \tilde B } {\cal O}_{ \varphi \tilde B }
\Bigr] .
\end{eqnarray}
The scale $ \Lambda $ determines the limit of applicability of this
parametrization of heavy physics effects: all processes studied using
$ {\cal L} $ must have energies below $ \Lambda $. Indeed,
the assumption that heavy physics
effects are summarized by a series of effective local operators
can only be true if the energy scale of interest is significantly
smaller than the scale of the heavy physics.
Moreover if we are studying processes whose energies
are such that the underlying physics is apparent, we would
not bother to study their radiative effects in order to re-discover
it. These remarks, though
obvious, are often ignored in the literature.
The coefficients
$ \alpha_i $ will be chosen so that $ \Lambda $ corresponds to the
scale where the heavy physics effects are observed directly.
To estimate them suppose
first that the heavy physics is weakly coupled; in this case
one can verify that
all the operators $ {\cal O} $
are generated by loops in the underlying theory. We then expect
\begin{equation} | \alpha_i | \sim{ 1 \over 16 \pi^2 } . \end{equation}
If the underlying theory is strongly interacting the argument
required to estimate the coefficients $ \alpha_i $ is the
same as the one used in the so-called ``naive dimensional analysis''
\cite{Georgi84}.
The $ \alpha_i $ are in fact running
coupling constants defined by matching conditions at the scale $ \Lambda $,
at which the underlying physics becomes apparent. Then consistency
requires that a change in the renormalization mass $ \mu
\rightarrow c \mu $ with $ c \sim O ( 1 ) $ should not change the
order of magnitude of the $ \alpha_i $. This gives $ | \alpha_{
\tilde W} | \sim 1/ 16
\pi^2 $ and $ | \alpha_{ \varphi \tilde B , \varphi \tilde W ,
B \tilde W } | \sim 1 $.
For a strongly
coupled theory, however, the Higgs mass is expected to receive
large -- $ O(\Lambda)$ -- corrections so that this scenario is
in general inconsistent with the above assumption that the Higgs
is light. The exception occurs when this mass is
protected by a symmetry (such as
supersymmetry). In this case, however, the low energy spectrum of the
models is invariably richer than that of the Standard Model.
I will therefore assume that a light Higgs is not viable for a natural
strongly coupled heavy theory. In such a situation a different
parametrization, the so called chiral representation,
of the effective lagrangian is required and will not be considered
here due to time limitations. Because of this I will adopt the
estimates $ | \alpha_i | \sim 1 / 16 \pi^2 $.
\section{Results}
I will then consider an ideal photon collider where the
photons have definite momenta and perfect polarizations.
As mentioned above,
I will not consider the realistic situation where the
photons to be considered are produced by back-scattered laser light.
This is done due to time limitations, and also to avoid complications
which, though quantitatively very important, obscure to a certain
degree the basic problems one has to deal with when trying to uncover
new physics using the processes considered in this talk.
The photons' center of mass momenta are $ k_{ 1 , 2 } = {1\over2} \sqrt{s}
( 1 , \pm 1 , 0 , 0 ) $ with polarizations $ \epsilon_{ 1 , 2 }
= \inv{\sqrt{2} } ( 0 , 0 , 1 , \pm \eta ) $, $ | \eta | = 1 $.
In all calculations I will choose $ \eta $ so as to suppress (or in the
optimal case to eliminate) the Standard Model\ contributions.
When the final particles have the same mass
the final particle momenta are
$ {1\over2} \sqrt{s} ( 1 , \pm \beta \cos \theta , \pm \beta \sin \theta , 0 )
$ where $ \beta $ is the velocity of the final particles
and $ \theta $ is the center of mass scattering angle.
With these preliminaries I turn now to the various reactions.
\subsection{$ \gamma \gamma \rightarrow Z Z . $}
To the order we
are working in the effective lagrangian, there are no contributions.
The leading terms for this reaction come from dimension eight operators
\cite{Baur93}, and will not be considered further in this talk.
\subsection{$ \gamma \gamma \rightarrow W^+ W^- . $}
For this process the only contributing operators are $ {\cal O}_{ \tilde W } $
and $ {\cal O}_{ B \tilde W } $; if both final $W$ vector bosons are
longitudinal only the second operator contributes. The relevant
diagrams are
\setbox1=\vbox to 2 truein{\epsfxsize=2 truein\epsfbox[0 0 576 792]{f1.ps}}
\centerline{\box1}
\vskip -50pt
\noindent where the solid dot denotes an $ {\cal O}_{ B \tilde W } $ insertion.
Assuming $ \sqrt{s} \gg m_W $ allows for the use of the equivalence theorem,
so that the $W$ vector bosons can be replaced by the corresponding
Goldstone particles. For longitudinally
polarized $W$ bosons at these energies in the final state
$ {\cal O}_{ \tilde W } $ does not contribute, so the final result will
depend on $ \alpha_{ B \tilde W } $ only.
Choosing $ \epsilon_{ 1 , 2 } = \inv{ \sqrt{2 } }
( 0 , 0 , 1 , \pm i ) $ ({\it i.e.}\ $ \eta = i $) yields the amplitude
\begin{equation} {\cal A}( \gamma \gamma \rightarrow W^+ W^- ) =
{ \alpha \beta_W \over 2 \pi } { t \over \Lambda^2 }
{ 1 \over 1 - \hbox{m}_W^2 / t } ; \end{equation}
where $ \beta_W = 16 \pi^2 \alpha_{ B \tilde W } \sim 1 $.
Note that this amplitude is real. The total cross section corresponding
to this amplitude is \begin{equation}
\sigma ( \gamma \gamma \rightarrow W^+ W^- ) = { \alpha^2
\beta_W^2 s \over 192 \pi^3 \Lambda^4 } \left[ 1 + O \left(
{ \hbox{m}_W^2 \over s } \right) \right] . \end{equation}
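For completeness, this cross section follows from the amplitude through the
standard two-body formula $ d \sigma / d t = | {\cal A} |^2 / ( 16 \pi s^2 ) $,
dropping the $ \hbox{m}_W^2 / t $ correction which is negligible at large $s$:
\begin{equation}
\sigma = \inv{ 16 \pi s^2 } \int_{ - s }^{ 0 }
{ \alpha^2 \beta_W^2 \over 4 \pi^2 } { t^2 \over \Lambda^4 } \; d t =
{ \alpha^2 \beta_W^2 s \over 192 \pi^3 \Lambda^4 } .
\end{equation}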
In the limit of large center of mass energy the Standard Model\ contribution
vanishes; the amplitude for longitudinally polarized $W$
vector bosons in the final state is \cite{HVeltman93}
\begin{equation}
{\cal A}_{ S M } ( \gamma \gamma \rightarrow W^+ W^- ) =
- { 8 \pi i \alpha ( 1 - \beta^2 ) \over 1 - \beta^2 \cos^2 \theta } ;
\end{equation}
where $
\beta = \sqrt{ 1 - 4 \hbox{m}_W
^2 / s } $ (note that this amplitude is purely imaginary, so it
will not interfere with the $ {\cal O}_{ B \tilde W } $ contribution). The
corresponding cross section is
\begin{eqnarray}
\sigma_{ S M } &=& { 2 \pi \alpha^2 \over s } ( 1 - \beta^2 ) \times
\nonumber \\
&& \left[ 1 + { 1 - \beta^2 \over 2 \beta } \ln \left( { 1 + \beta
\over 1 - \beta } \right) \right] ; \end{eqnarray} which
indeed vanishes as $ \beta \rightarrow 1 $:
$ \sigma_{ SM} \simeq 2 \pi ( 2 \alpha \hbox{m}_W /s )^2 $
as $ s \rightarrow \infty $.
I will consider the observability of this process from three points of view
\begin{description}
\item[i)] {} Require first
$ \sigma \lowti{ new } \gg \sigma_{ S M } $; this implies
$ s^3/ (1536 \pi^4 \hbox{m}_W^2 ) \gg \Lambda^4 $. Since we also want
$ \Lambda > \sqrt{s} $ (else the new physics can be probed
directly), this implies $ \sqrt{s} > 31 \hbox{TeV} $; for an accelerator
of this energy, scales of order $ 32 \hbox{TeV} $ can be probed
(see the numerical check after this list).
This result is obtained by assuming
that $ \beta_W = 1 $.
\item[ii)] {} Require $ N \lowti{new } > \sqrt{ N_{ S M } +
N \lowti{ new } } $, where
$ N \lowti{ new } $ is the number of events generated by the
new physics and $ N_{ S M } $ Standard Model\ events. Using again
$ \Lambda^2 > s $ and $ \beta_W = 1 $ this condition is equivalent
to the requirement that the luminosity for the machine is greater
than $ 2.4 \times 10^5 $/fb.
\item[iii)] Require that the forward backward asymmetry be greater
than $ 0.1 $ and that there be more than 10 Standard Model\ events.
This is equivalent to a luminosity above $ 2 \times 10^5 $/fb.
\end{description}
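The threshold quoted in item {\bf i)} follows from simple arithmetic; the
short numerical sketch below (an illustration only, with $ \beta_W = 1 $ and
$ \hbox{m}_W \simeq 80\,$GeV as assumed above) makes the estimate easy to
reproduce.
\begin{verbatim}
# Check of condition (i): sigma_new >> sigma_SM together with Lambda > sqrt(s),
# assuming beta_W = 1 (illustrative sketch; energies in TeV).
import math

m_W = 0.0804                                   # TeV
s_min = 1536 * math.pi**4 * m_W**2             # from s^3/(1536 pi^4 m_W^2) > s^2
print("sqrt(s) >", math.sqrt(s_min), "TeV")    # ~ 31 TeV

Lambda_max = (s_min**3 / (1536 * math.pi**4 * m_W**2)) ** 0.25
print("Lambda up to ~", Lambda_max, "TeV")     # ~ 31-32 TeV
\end{verbatim}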
This clearly illustrates the enormous problems one has to deal with:
absurdly large luminosities have to be invoked in order to
detect a signal.
This problem can be traced back to the estimate
$ \beta_W = 1 $. One might be tempted to relax this condition and
assume, for example, $ \beta_W \sim 16 \pi^2 $ in which case the
required luminosities drop to $ \sim 10 $/fb. Unfortunately such large
values for the coefficients are inconsistent with the whole
approach. In other words, there is no consistent way of generating
such large coefficients from the underlying dynamics without
radically altering the Standard Model\ itself (for example,
one would then expect $ \rho - 1 = O ( 1 ) $).
\subsection{ $ \gamma \gamma \rightarrow H H .$}
The contributing effective
operators to this process are $ {\cal O}_{ \varphi \tilde W ,
\varphi \tilde B , B \tilde W } $ appearing in the diagrams
\setbox2=\vbox to 2 truein {\epsfxsize=2 truein\epsfbox[0 0 576 792]{f2.ps}}
\centerline{\box2}
\vskip -50pt
\noindent
where the heavy dot denotes an effective operator insertion.
The Standard Model\ contributions come from loops such as
\setbox3=\vbox to 2 truein{\epsfxsize=2 truein\epsfbox[0 0 576 792]{f3.ps}}
\centerline{\box3}
\vskip -50pt
\noindent whose evaluation is straightforward.
For simplicity I will consider here
only the Standard Model\ contribution generated by a heavy top loop;
this corresponds to the effective operator \cite{Steeger87}
\begin{equation} {\cal O} \lowti{heavy \ top} = { 35 \over 54 }
{ \alpha G_F \over \sqrt{8} \; \pi }
\left( {1\over2} H^2 \right) \left( {1\over4} F_{ \mu \nu } ^2 \right)
\end{equation}
where $G_F $ is the Fermi constant.
Note that this operator vanishes for the
choice of polarizations
\begin{equation} \epsilon_{ 1 , 2 } = \inv{ \sqrt{ 2 } } ( 0 , 0 , 1 , \pm 1 )
;
\qquad ( \eta = 1 ) \end{equation}
this will be true for all
the Standard Model\ contributions provided the KM mixing terms are ignored (which
I will do in the following due to the smallness of these effects).
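The reason can be seen directly: with this choice
$ \epsilon_1 \cdot \epsilon_2 = 0 $ and $ k_i \cdot \epsilon_j = 0 $, so that
the CP-even structure generated by couplings of the type
$ H^2 F_{ \mu \nu } F^{ \mu \nu } $,
\begin{equation}
( k_1 \cdot k_2 ) ( \epsilon_1 \cdot \epsilon_2 ) -
( k_1 \cdot \epsilon_2 ) ( k_2 \cdot \epsilon_1 ) ,
\end{equation}
vanishes, while the CP-odd structure
$ \epsilon^{ \mu \nu \rho \sigma } k_{ 1 \mu } \epsilon_{ 1 \nu }
k_{ 2 \rho } \epsilon_{ 2 \sigma } $ generated by the operators
$ {\cal O} $ above does not.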
The cross section for this process and for the above choice of polarizations
is \begin{eqnarray} &&
\!\!\! \sigma( \gamma \gamma \rightarrow H H ) =
{ \alpha^2 \beta_H^2 s \over 256 \pi^3 \Lambda^4 } ; \\
&&
\!\!\!\beta_H = 16 \pi^2 \left( \alpha_{ \varphi \tilde W } +
\alpha_{ \varphi \tilde B } - \alpha_{ B \tilde W } \right) .
\end{eqnarray} Note that $ \beta_H \sim 1 $.
Since the Standard Model\ contribution vanishes due to the choice of polarization
vectors, the observability of this process is rate dominated.
To estimate the observability of this process I require that 10
events be generated in one year. This corresponds to \begin{equation}
\Lambda \,{\raise-3pt\hbox{$\sim$}}\!\!\!\!\!{\raise2pt\hbox{$<$}}\, 0.2 \times \sqrt{ \ell \over 100/ \hbox{fb} } \; \hbox{TeV}
\end{equation} where $ \ell $ denotes the luminosity.
To determine the content of this result recall that, since we are assuming
that the heavy physics is not directly observed, $ \Lambda
> \sqrt{s} > 2 m_H $. Assuming, for example, $ m_H = 250 \hbox{GeV} $ requires
$ \ell > 170 /$fb. On the other hand if $ \Lambda = 1 \hbox{TeV} $ then
$ \ell > 220 /$fb.
For this process the required luminosities are not absurdly
large as in the previous case, but they are still large, requiring
many years' integrated luminosity to observe even a marginal signal.
As before this can be traced to
the consistent estimates of the coefficients in the Lagrangian, if
I had (incorrectly)
taken $ \beta_H \sim 16 \pi^2 $ the required luminosity would
drop by two orders of magnitude.
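The scaling of the bound on $ \Lambda $ with the luminosity can be reproduced
with a few lines; the sketch below is only indicative, since it sets
$ \beta_H = 1 $ and pushes $ s $ to its maximum allowed value $ \Lambda^2 $.
\begin{verbatim}
# Order-of-magnitude sketch of the 10-event requirement for gamma gamma -> H H.
# Illustrative assumptions: beta_H = 1 and s taken at its maximum value Lambda^2.
import math

alpha  = 1.0 / 137.0
fb_inv = 3.894e11                  # 1/fb expressed in GeV^2

def lambda_max_GeV(lum_fb, beta_H=1.0):
    # 10 events: alpha^2 beta_H^2 lum / (256 pi^3 Lambda^2) = 10, with s = Lambda^2
    lum = lum_fb * fb_inv
    return math.sqrt(alpha**2 * beta_H**2 * lum / (2560.0 * math.pi**3))

print(lambda_max_GeV(100.0))       # ~ 160 GeV, i.e. the ~0.2 TeV scale quoted above
\end{verbatim}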
\subsection{Higgs production.}
The same effective operators considered
above can be used to study the production of single Higgs bosons in
photon colliders. The graph is simply
\setbox4=\vbox to 1.5 truein{\epsfxsize=2 truein\epsfbox[0 0 576 792]{f4.ps}}
\centerline{\box4}
\vskip -50pt
\noindent where the heavy dot denotes an effective operator
vertex.
Note that there is no Standard Model\ contribution (at tree level),
and that the one loop contributions can be eliminated by the above
choice of polarization vectors. The cross section is then \begin{equation}
\sigma( \gamma \gamma \rightarrow H ) = { \alpha^2 \beta_H^2
m_H \over 32 \sqrt{2} \; \pi G_F \Lambda^4 } \delta( \sqrt{s } - m_H )
\end{equation} so that, integrating this over the width of the
Higgs and taking \begin{equation} \Gamma_H = { 3 g^2 m_H^3 \over
128 \pi m_W^2 } , \end{equation}
corresponding to a mass above the $WW$ threshold, yields
\begin{eqnarray} \bar \sigma &=&
\int_{ m_H - \Gamma_H/2 }^{ m_H + \Gamma_H/2 }
\sigma( \gamma \gamma \rightarrow H ) { d \sqrt{s} \over \Gamma_H }
\nonumber \\
&=& { 256 \pi^2 \beta_H^2 \over 3 } \left( { m_W s_W \over \Lambda } \right)^4
\inv { m_H ^2 } \end{eqnarray} so that,
\begin{equation} \bar \sigma \ell = \left( { \ell \over 100 / \hbox{fb} }
\right) \inv{ m_H^2} \left( { 16.2 \over \Lambda } \right)^4 \end{equation}
where $ \Lambda $ and $ m_H $ are measured in \hbox{TeV}.
Taking, for example,
$ m_H = 1 \hbox{TeV} $ and $ \ell = 10 / $fb then values of $ \Lambda $ below
$ 5 \hbox{TeV} $ generate more than ten events.
This is a non-trivial result: a $ 1 \hbox{TeV} $ accelerator can probe scales
five times its energy using this reaction; for lower Higgs mass or higher
luminosities the sensitivity to $ \Lambda $ (as a multiple of
$ \sqrt{s} $) improves.
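As an illustration of this sensitivity, the width-integrated event count can be
tabulated directly from the expression above (a short sketch; the grid of
$ \Lambda $ values is arbitrary, and $ \Lambda $, $ m_H $ are in \hbox{TeV}).
\begin{verbatim}
# Event counts for gamma gamma -> H from the width-integrated cross section above.
def n_events(Lambda_TeV, mH_TeV, lum_fb):
    return (lum_fb / 100.0) * (16.2 / Lambda_TeV) ** 4 / mH_TeV ** 2

for Lam in (3.0, 5.0, 7.0):                # m_H = 1 TeV, 10/fb
    print(Lam, "TeV :", n_events(Lam, 1.0, 10.0), "events")
\end{verbatim}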
\subsection{$ f \bar f $ final state}
For the process $ \gamma \gamma \rightarrow f \bar f $ there is
a forbidding zoo of operators that are CP violating and should be included in
the effective lagrangian describing such processes.
For example,
denoting a left handed fermion doublet by $F$ and a right handed singlet by
$f$,
\begin{eqnarray}
&& i \left( \varphi^\dagger D_\mu \varphi \right)
\left( \bar F \gamma^\mu F \right) - \hbox{ h.c.} ; \nonumber \\ &&
i \left( \varphi^\dagger \sigma_I D_\mu \varphi \right)
\left( \bar F \sigma_I \gamma^\mu F \right) - \hbox{ h.c.} ; \nonumber \\
&& \left( \varphi^\dagger \varphi \right) \left( \bar F f \varphi -
\hbox{ h.c.} \right) ; \nonumber \\ &&
\left( \bar F \sigma^{ \mu \nu } f \right) \varphi B_{ \mu \nu } ;
\end{eqnarray}
\noindent etc. The analysis of such contributions is under way.
\section{Conclusions}
\begin{itemize}
\item A consistent application of the effective lagrangian
method gives, for most processes considered in this talk,
unobservable rates, even in the optimal situation where the
Standard Model\ contribution vanishes. This can be traced back to the
consistent estimation of the coefficients of the effective operators.
\item An ad-hoc over-estimation of the coefficients of the
operators can give very nice predictions which might be claimed
to be observable in near future colliders. These results are,
however, completely unreliable, being based on an inconsistent model.
\item The best final state here considered is that of single
Higgs production for which the accelerator becomes a very respectable
probe into the physics underlying the Standard Model. If there is no Higgs,
or if its mass lies beyond $ 4 \pi v \simeq 3 \hbox{TeV} $, then new interactions
can be expected at this energy. This scenario was not explored in this
talk.
\item The experimentally benign case of a two fermion final
state is currently being studied.
\end{itemize}
\bigskip
The author gratefully acknowledges the help of J. Gunion and B. Grzadkowski.
\section{introduction}
The diluted Ising antiferromagnet Fe$_{x}$Zn$_{1-x}$F$_{2}$ in an
external magnetic field has proven to be a good model system for the
random-exchange Ising model (REIM) ($H\approx$ 0) and the random field
(RFIM) Ising model ($H>$ 0) \cite{one,two}. In the original
derivation of the equivalence between the RFIM and a diluted Ising
antiferromagnet in a uniform applied field, weak dilution and small
values of $H/J$ were assumed \cite{three} ($J$ is the magnitude of the
exchange interaction). In the Fe$_{x}$Zn$_{1-x}$F$_{2}$ system, weak
dilution implies Fe concentrations well above the percolation
threshold \cite{four} $x_{p}$=0.25 and the most convincing
experimental results on the RFIM critical behaviour have been obtained on
samples with \cite{one,two} $x\geq$0.46. On the other hand,
interesting dynamic properties may become observable in the limit of
strong dilution. RFIM systems have been argued to attain extremely
long relaxation times at temperatures near $T_{c}$ \cite{five} and for
large values of $H/J$, where the ordered phase is destroyed, it has
been argued that a glassy phase will appear, even without exchange
frustration being present in the system \cite{six}. Experimental
results on Fe$_{x}$Zn$_{1-x}$F$_{2}$ samples of concentration at or
slightly above $x_{p}$ have revealed some dynamic properties similar
to those of conventional spin glasses \cite{seven,eight,nine,ten}. In
a recent paper \cite{eleven} it was found that the percolation
threshold sample Fe$_{0.25}$Zn$_{0.75}$F$_{2}$ exhibits magnetic
ageing, a typical spin glass feature, whereas the slowing down of the
dynamics followed an Arrhenius law, i.e. it did not support the
existence of a finite temperature spin glass phase transition.
Results using neutron scattering \cite{twelve} and Faraday rotation
technique \cite{thirteen} have established random-field induced spin
glass like dynamic behaviour in Fe$_{0.31}$Zn$_{0.69}$F$_{2}$. Recent
magnetization experiments revealed that a similar behaviour occurs at
intense applied fields in samples of Fe$_{x}$Zn$_{1-x}$F$_{2}$, with $x$
= 0.56 and 0.60 \cite{fourteen}. In this paper we discuss experimental
results from dc-magnetisation and ac-susceptibility measurements on
the same Fe$_{0.31}$Zn$_{0.69}$F$_{2}$ system of earlier neutron and
Faraday rotation measurements. In zero applied
field, a slowing down of the dynamics occurs at low temperatures that
obeys a pure Arrhenius law and some slowing down is also observable
near the antiferromagnetic transition temperature. In applied
dc-fields, additional slow dynamical processes are introduced near
$T_{N}$ by the random fields. A comprehensive static and dynamic
phase diagram in the $H-T$ plane is deduced that, in parts, adequately
compares with an earlier published phase diagram on the same compound
\cite{thirteen}.
\section{experimental}
A high quality single crystal \cite{twelve,thirteen} of
Fe$_{0.31}$Zn$_{0.69}$F$_{2}$ in the form of a parallelepiped with its
longest axis aligned with the crystalline $c$-axis was used as a
sample. The frequency dependence of the ac-susceptibility in zero
applied dc-field was studied in a Cryogenic Ltd. S600X SQUID-magnetometer.
A commercial Lake Shore 7225 ac-susceptometer was employed for the
ac-susceptibility measurements in a superposed dc magnetic field and
the temperature dependence of the magnetisation in different applied
dc-fields was measured in a Quantum Design MPMS5 SQUID-magnetometer.
The magnetic field was in all experiments applied parallel to the
$c$-axis of the sample.
\section{Results and Discussion}
Fig. 1 shows the temperature dependence of both components of the
ac-susceptibility, (a) $\chi'$($\omega,T$) and (b)
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg1x.eps,width=7.0 cm}}}
\caption{
\hbox {Temperature dependence of the ac-susceptibility at}
different frequencies as indicated in the figures. The probing ac field is 1
Oe. (a) $\chi'(\omega)$ and (b) $\chi''(\omega)$.
}
\label{fig1}
\end{figure}
$\chi''$($\omega,T$). The different frequencies range from 0.051 to 51
Hz, as indicated in the figures. The transition from a paramagnetic
Curie-Weiss behaviour at high temperatures to long range
antiferromagnetic order is signaled by the cusp in $\chi'$($\omega,T$)
at about 20 K. A small bump in $\chi''$($\omega,T$) is observed at
about the same temperature. Below 15 K the ac-susceptibility becomes
frequency dependent. The out-of-phase component increases and a
frequency dependent maximum that shifts towards lower temperatures
with decreasing frequency is observed below $T\approx$ 5 K. The
frequency dependence of $\chi'$($\omega,T$) and $\chi''$($\omega,T$)
at low temperatures shows some resemblance with the behaviour of an
ordinary spin glass. However, earlier neutron scattering measurements
indicated that antiferromagnetic long range order (AF LRO) is established below $T_{N}$ $\approx$
19.8 K in this system \cite {twelve}, provided the sample is submitted to a
slow cooling process. To investigate the nature of the slowing down of the
dynamics
at low temperatures, a comparison is made with the behaviour observed in
ordinary spin glasses. A 3d spin glass exhibits conventional
critical slowing down of the dynamics \cite{fifteen} according to:
\begin{equation}
{\frac{\tau }{\tau_{0} } } = \left({\frac{T_{f}-T_{g} }{T_{g} } }
\right)^{-z\nu},
\label{conv}
\end{equation}
where $\tau_{0}$ is the microscopic spin flip time of the order
$10^{-13}$-$10^{-14}$ s, $T_{g}$ the spin glass temperature and $z\nu$
a dynamical critical exponent. Defining the inflection point in
$\chi''$($\omega,T$) as a measure of the freezing temperature $T_{f}$
for a relaxation
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg2x.eps,width=6.5cm}}}
\caption{
\hbox {The best fit of the relaxation times to activated} dynamics:
log$t$ vs. $T_{f}^{-1}$, implying a pure Arrhenius behaviour of the
slowing down of the dynamics.
}
\label{fig2}
\end{figure}
time ($\tau$) corresponding to the observation time,
$t\approx$1/$\omega$, of the ac-susceptibility measurement, the
derived data may be employed for dynamic scaling analyses. The data
do not fit conventional critical slowing down according to eq. 1 with
physically
plausible values of the parameters. Activated dynamics could
govern the dynamics while still yielding a finite phase transition
temperature. The slowing down of the relaxation times should then
obey:
\begin{equation}
ln\left({\frac{\tau }{\tau_{0} } }\right) = {\frac{1}{T_{f} } }
\left({\frac{T_{f}-T_{g} }{T_{g} } }
\right)^{-\psi\nu},
\label{activated}
\end{equation}
where $\psi\nu$ is a critical exponent \cite{fh}. The derived data
fit eq. 2 with $T_{g}\approx 0$, which implies that the slowing
down is rather described by a generalized Arrhenius law:
\begin{equation}
log\left({\frac{\tau }{\tau_{0} } }\right) \propto T_{f}^{-x}.
\label{arrhenius}
\end{equation}
Fig. 2 shows the best fit to this expression yielding x=1 and
$\tau_{0}$=10$^{-14}$ s for 0.051 $ \leq \omega/2\pi$(Hz) $\leq$
1000.
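In practice, the parameters of eq. 3 with x=1 follow from a linear fit of
log$\,t$ against $T_{f}^{-1}$; a minimal sketch of this step is given below
(the $(T_{f},\tau)$ pairs are illustrative placeholders, not the measured
values).
\begin{verbatim}
# Sketch of the Arrhenius analysis of Fig. 2: a straight-line fit of
# log10(tau) versus 1/T_f yields log10(tau_0) and the activation energy.
# The (T_f, tau) pairs below are illustrative placeholders only.
import numpy as np

T_f = np.array([3.6, 3.9, 4.2, 4.5, 4.8])         # K
tau = np.array([3.1, 0.94, 0.31, 0.094, 0.031])   # s

slope, intercept = np.polyfit(1.0 / T_f, np.log10(tau), 1)
print("log10(tau_0) =", intercept)
print("E_a / k_B    =", slope * np.log(10.0), "K")
\end{verbatim}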
The observed frequency dependent ac-susceptibility shows striking
similarities with the behaviour of alleged reentrant antiferromagnets.
In such a system there is a transition from a paramagnetic phase to an
antiferromagnetic phase and spin glass behaviour is observed at low
temperatures. The reentrant Ising antiferromagnet
Fe$_{0.35}$Mn$_{0.65}$TiO$_{3}$ displays features similar to those of this
system, e.g. the low temperature slowing down of the dynamics is
found to obey a pure Arrhenius behaviour \cite{reanti}.
Furthermore,
the more diluted system Fe$_{0.25}$Zn$_{0.75}$F$_{2}$ (on the
percolation threshold) does not display long range antiferromagnetic
order but it exhibits a slowing down of the relaxation times that
follows a pure Arrhenius law \cite {eleven} with a similar value of
$\tau_{0}$ as
here derived for Fe$_{0.31}$Zn$_{0.69}$F$_{2}$.
In Fig. 3 (a) $\chi'$($\omega, T, H$) and (b) $\chi''$($\omega,
T, H$) are plotted for $\omega/2\pi$=125 Hz in different superposed dc
magnetic fields $H\leq$ 2 T. At these rather low fields, the maximum
in $\chi'$($\omega, T, H$) near $T_{N}$($H$) gets rounded and is
pushed towards lower temperature with increasing magnitude of
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg3x.eps,width=8.0cm}}}
\caption{
\hbox {(a) $\chi'(\omega, T, H)$ and (b) $\chi''(\omega, T, H)$ at}
$\omega/2\pi$=125 Hz vs. $T$ in different applied dc-fields: 0, 0.25,
0.5, 0.75, 1, 1.25, 1.5, 1.75 and 2 T. The inset in (b) shows
$\chi''(\omega, T, H)$ for dc-fields 0.25, 0.375, 0.5, 0.625 T. The
amplitude of the ac-field is 10 Oe.
}
\label{fig3}
\end{figure}
the
magnetic field. The corresponding bump in the out-of-phase component
in zero dc-field, increases in magnitude and sharpens for increasing
dc-fields up to 1 T (the inset of Fig. 3 (b) displays fields up to
0.625 T). A measure of the phase transition temperature
$T_{N}$($H$) is given by the position of the maximum in the derivative
d($\chi'$$T$)/d$T$ \cite {Fisher relation}. For fields $H \leq$
1.5 T, $T_{N}$($H$) is pushed to lower temperatures with increasing
field strength following a REIM to RFIM crossover scaling, as described in
ref. 13. At higher fields the maximum is washed out which signals that
the antiferromagnetic phase is destroyed. The destruction of the
antiferromagnetic phase by strong random fields in Fe$_{x}$Zn$_{1-x}$F$_{2}$
was observed by earlier Faraday rotation \cite {thirteen} and neutron
scattering
\cite {twelve} measurements in the same system ($x$ = 0.31), and by recent
magnetization \cite {fourteen} and dynamic susceptibility studies
\cite {sixteen} in less diluted samples ($x$ = 0.42, 0.56 and 0.60).
Glassy dynamics is found in the upper portion of the $H-T$ phase diagram
of Fe$_{x}$Zn$_{1-x}$F$_{2}$, at least within the interval $0.31 \leq x
\leq 0.60$.
In increasing applied dc-fields the out-of-phase component is enhanced
in a rather narrow but widening region near the antiferromagnetic
phase transition due to the introduction of random fields that create new
slow dynamical processes in the system. The increase of
$\chi''$($\omega, T, H$) at lower temperatures, corresponding to the
processes causing the slowing down of the dynamics already in zero
field, remains observable also when the field is increased. This latter feature
cannot be entirely attributed
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg4x.eps,width=7.0cm}}}
\caption{
\hbox {(a) $\chi'(\omega, T,
H)$ and (b) $\chi''(\omega, T, H)$ vs.
$T$}
in an applied dc-field $H$=1.5 T and at different frequencies
$\omega/2\pi$=15, 125 and 1000 Hz. The
amplitude of the ac-field is 10 Oe.
}
\label{fig4}
\end{figure}
to random fields. For larger
fields these low temperature processes and the processes caused by the
random fields start to overlap, and at the highest dc-fields they even
become indistinguishable. In Fig. 4 both components of the
ac susceptibility are plotted, in an applied dc-field $H$=1.5 T,
for $\omega/2\pi$=15, 125 and 1000 Hz. Note that the temperature
of the maximum in $\chi'$($\omega, T, H$), at $T_{N} (H)$, shifts to
lower temperatures as the frequency decreases. By way of contrast,
no shift in the peak temperature is observable as a function of the
frequency in dynamic susceptibility measurements performed
in Fe$_{0.46}$Zn$_{0.54}$F$_{2}$ \cite {test}
and Fe$_{0.42}$Zn$_{0.58}$F$_{2}$ \cite {sixteen}, within the field
limits of the weak RFIM problem in each case. The frequency dependent
behaviour of $T_{N} (H)$ is a feature associated with the effects of strong
random fields in samples of Fe$_{x}$Zn$_{1-x}$F$_{2}$,
particularly with $x$ close to $x_p$.
In Fig. 5 (a) and (b) $\chi'$($\omega, T, H$) and $\chi''$($\omega, T,
H$) are plotted for $\omega/2\pi$=125 Hz in different superposed dc
magnetic fields $H\geq$2 T. The maximum in the in-phase component is
flattened, the susceptibility is strongly suppressed and the onset of the
out-of-phase susceptibility is shifted towards lower temperatures as
the dc-field is increased. No sign of a transition to an antiferromagnetic
phase is observed.
Fig. 6 shows the temperature dependence of the field cooled (FC),
$M_{FC}$($T$)/$H$, and zero field cooled (ZFC), $M_{ZFC}$($T$)/$H$,
susceptibility \cite{seventeen} at three different applied
magnetic
fields. Below a temperature $T_{ir}$ the magnetisation becomes
irreversible. $T_{ir}$ decreases with increasing
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg5x.eps,width=7.0cm}}}
\caption{
\hbox {(a) $\chi'(\omega, T, H)$ and (b) $\chi''(\omega, T, H)$ at}
$\omega/2\pi$=125 Hz vs. $T$ in different applied dc-fields: 2, 2.5,
3, 3.5, 4, 4.5 and 5 T. The
amplitude of the ac-field is 10 Oe.
}
\label{fig5}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg6x.eps,width=7.0cm}}}
\caption{
\hbox {Temperature dependence of the dc-susceptibility} for
zero-field-cooled (ZFC) and field-cooled (FC) procedures in different
fields, as indicated in the figure.
}
\label{fig6}
\end{figure}
field. The
irreversibility point is associated with an observation time mainly
governed by the heating rate of the ZFC experiment which in our
experiment corresponds to about 100 s.
In Fig. 7 an $H-T$ magnetic phase diagram is shown,
in which some of the
above discussed experimental characteristics are summarised. The open circles
represent $T_{N}$($H$), the solid circles $T_{ir}$($H$), diamonds
the spin freezing temperatures $T_{f}$($H$) for $\omega
/2\pi$=125 Hz and open triangles label $T_{f}$($H$=0) for different
frequencies. The onset of $\chi''$($\omega, T, H$) at frequencies
$\omega/2\pi$=15, 125 and 1000 Hz is shown as solid triangles, solid
squares and open squares respectively. These are measures that
mirror the observation time dependence of $T_{ir}$.
\begin{figure}
\centerline{\hbox{\epsfig{figure=figg7x.eps,width=7.0 cm}}}
\caption{
\hbox {$H-T$ diagram of the Fe$_{0.31}$Zn$_{0.69}$F$_{2}$ }system.
$T_{N}$($H$) open circles, $T_{ir}$($H$) solid circles, onset of
$\chi''(\omega, T, H)$ for $\omega/2\pi$=15 (solid triangles), 125
(solid squares) and 1000 Hz (open squares),
$T_{f}$($H$, $\omega/2\pi$=125 Hz) diamonds and $T_{f}$($H$=0)
triangles for different frequencies $\omega/2\pi$ (from left to
right): 0.051, 0.17, 0.51, 1.7, 5.1, 17, 51, 125 and 1000 Hz.
}
\label{fig7}
\end{figure}
In diluted Ising antiferromagnets, $T_{N}$ is expected to decrease with
increasing
magnetic fields as:
\begin{equation}
{\epsilon \propto H
^{2/\phi} \qquad \textnormal{;}\qquad
\epsilon=\left({\frac{T_{N}(H)-T_{N}(0)+bH^{2}
}{T_{N}(0) } }
\right)
}
\label{less}
\end{equation}
where $\phi$ is a crossover exponent and $bH^2$ a small mean field
correction. For low fields, $H\leq$1.5 T, we find $\phi\approx$1.4
using $b$=0 for $T_{N}$($H$) as indicated by the solid line in Fig.
7. For higher fields, $H\geq$1.5 T, a reversal of the curvature of
$T_{ir}$($H$) occurs. The dashed line corresponds to a functional
behaviour according to eq. 4 with an exponent $\phi\approx$ 3.4. A
largely equivalent phase diagram has earlier been established for the
same system utilising Faraday rotation measurements \cite{thirteen}.
One significant difference is that $T_{ir}$($0$) $\approx
T_{N}$($0$) in ref. \cite{thirteen}, whereas we find a clear separation
between these two temperatures, as is also observed in other dilute
antiferromagnets \cite{voffe}. The field dependence of $T_{N}$($H$)
is equivalent to those of the more concentrated
Fe$_{0.46}$Zn$_{0.54}$F$_{2}$ and Fe$_{0.72}$Zn$_{0.28}$F$_{2}$
where the scaling behaviour of eq. 4 gives $\phi \approx$1.4 for fields
up to 2 T and 10 T respectively \cite{eighteen}. The new features of
the phase diagram in Fig. 7 as compared to the one of ref.
\cite{thirteen} are the observation
time dependent spin freezing
temperatures at low temperature and the observation time dependence of
$T_{ir}$($H$) demonstrated by the shifts of the $T_{ir}$($H$) contours
towards higher temperatures when decreasing the observation time.
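The crossover exponent entering eq. 4 is extracted in the same spirit, from
the slope of log$\,\epsilon$ versus log$\,H$ (with $b$=0 the slope is
$2/\phi$); a minimal sketch, with placeholder $T_{N}$($H$) values rather than
the measured ones, reads:
\begin{verbatim}
# Sketch of the crossover analysis of eq. 4 with b = 0: the magnitude of the
# reduced shift of T_N scales as H^(2/phi).  Placeholder T_N(H) values only.
import numpy as np

T_N0 = 19.8                                       # K, zero-field transition
H    = np.array([0.5, 0.75, 1.0, 1.25, 1.5])      # T
T_N  = np.array([19.5, 19.2, 18.9, 18.5, 18.1])   # K

eps = (T_N0 - T_N) / T_N0                         # magnitude of the shift
slope, _ = np.polyfit(np.log(H), np.log(eps), 1)
print("phi =", 2.0 / slope)
\end{verbatim}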
A possible mechanism for the spin freezing at low temperatures may be a
weak frustration present in a third nearest neighbour interaction of this
compound. Results of numerical simulations \cite {nineteen} indicate that a
small frustrated bond plays no role in the REIM properties of
Fe$_{x}$Zn$_{1-x}$F$_{2}$ under weak dilution. However, it has a dramatic
influence on the antiferromagnetic and spin glass order parameters close
to the percolation threshold.
\section{conclusions}
Dynamic and static magnetic properties of the diluted antiferromagnet
Fe$_{0.31}$Zn$_{0.69}$F$_{2}$ have been studied. The dynamic
susceptibility in zero dc-field shows similarities to a reentrant
Ising antiferromagnet with a slowing down of the dynamics at low
temperatures best described by a pure Arrhenius law. Hence, there is
no transition to a spin glass phase at low temperatures.
The field dependence of the antiferromagnetic
transition temperature follows the predicted scaling behaviour for a
random field system, in accord with earlier experimental
findings\cite{thirteen,eighteen}. The onset of $\chi''(\omega, T, H)$ occurs
above the antiferromagnetic phase transition, even in zero
applied magnetic field. $\chi''(\omega, T, H)$ shows a frequency dependent
behaviour that mirrors the observation time dependence of the FC-ZFC
irreversibility line. The dynamics of the diluted antiferromagnet
Fe$_{0.31}$Zn$_{0.69}$F$_{2}$ has been shown to involve not only
random field induced slow dynamics near $T_{N}$($H$), but additional
slow dynamics originating from the strong dilution appears at low
temperatures.
\section{acknowledgments}
Financial support from the Swedish Natural Science Research Council
(NFR) is acknowledged. One of the authors (FCM) acknowledges the support
from CNPq and FINEP (Brazilian agencies).
\begin {references}
\bibitem{one} V. Jaccarino and A. R. King, Physica A {\bf 163}, 291
(1990).
\bibitem{two} D. P. Belanger, Phase Trans. {\bf 11}, 53 (1988); D. P.
Belanger, in: {\it Spin Glass and Random Fields} edited by A. P. Young
(World Scientific, Singapore, 1997).
\bibitem{three} S. Fishman and A. Aharony, J. Phys. C {\bf 12}, L729
(1979).
\bibitem{four} M. F. Sykes and J. W. Essam, Phys. Rev. {\bf 133}, A310
(1964).
\bibitem{five} A. J. Bray, J. Phys. C {\bf 16}, 5875 (1983).
\bibitem{six} J. R. L. de Almeida and R. Bruinsma, Phys. Rev. B {\bf 37},
7267 (1987).
\bibitem{seven} F. C. Montenegro, S. M.Rezende and M. D. Coutinho-Filho,
J. Appl. Phys. {\bf 63}, 3755 (1988); ibid. Europhys. Lett. {\bf 8}, 383
(1989).
\bibitem{eight} S. M. Rezende, F. C. Montenegro, M. D.
Coutinho-Filho, C. C. Becerra and A. Paduan-Filho, J. Phys. (Paris)
Colloq. {\bf 49}, c8-1267 (1988).
\bibitem{nine} F. C. Montenegro, U. A. Leitao, M. D. Coutinho-Filho
and S. M. Rezende, J. Appl. Phys. {\bf 67}, 5243 (1990).
\bibitem{ten} S. M. Rezende, F. C. Montenegro, U. A. Leitao and M. D.
Coutinho-Filho, in {\it New Trends in Magnetism} edited by M. D.
Coutinho-Filho and S. M. Rezende (World Scientific, Singapore, 1989),
p. 44.
\bibitem{eleven} K. Jonason, C. Djurberg, P. Nordblad and D. P.
Belanger, Phys. Rev. B {\bf 56}, 5404 (1997).
\bibitem{twelve} D. P. Belanger, Wm. E. Murray Jr, F. C. Montenegro, A.
R. King, V. Jaccarino and R. W. Erwin, Phys. Rev. B {\bf 44}, 2161
(1991).
\bibitem{thirteen} F. C. Montenegro, A. R. King, V. Jaccarino, S-J.
Han and D. P. Belanger, Phys. Rev. B {\bf 44}, 2155 (1991).
\bibitem{fourteen} F. C. Montenegro, K. A. Lima, M. S. Torikachvili,
A. H. Lacerda, J. Magn. Magn. Mater. {\bf 177-181}, 145 (1998); ibid.
in {\it Magnetism Magnetic Materials and their Applications} edited by
F. P. Missel (Trans Tech Publications Ltd, Switzerland, 1999), p. 371.
\bibitem{fifteen} P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys.
{\bf 49}, 435 (1977).
\bibitem{fh} D. S. Fisher and D. A. Huse, Phys. Rev. B {\bf 38}, 373
(1988); {\bf 38}, 386 (1988).
\bibitem{reanti} K. Jonason, P. Nordblad and A. Ito,
unpublished.
\bibitem{Fisher relation} M.E. Fisher, Phil. Mag. {\bf 7}, 1731 (1962).
\bibitem{sixteen} A. Rosales-Rivera, J. M. Ferreira and F. C. Montenegro,
unpublished.
\bibitem{test} A. R. King, J. A. Mydosh and V. Jaccarino, Phys. Rev. Lett.
{\bf 56}, 2525 (1986).
\bibitem{seventeen} Susceptibility is here defined and calculated as
$M/H$.
\bibitem{voffe} P. Nordblad, J. Mattsson, W. Kleemann, J. Magn. Magn.
Mater. {\bf 140-144}, 1553 (1995).
\bibitem{eighteen} A. R. King, V. Jaccarino, D. P. Belanger and S. M.
Rezende, Phys. Rev. B {\bf 32}, 503 (1985).
\bibitem{nineteen} E. P. Raposo, M. D. Coutinho-Filho and
F. C. Montenegro, Europhys. Lett. {\bf 29}, 507 (1995);
ibid. J. Magn. Magn. Mater. {\bf 154}, L155 (1996).
\end{references}
\end{multicols}
\end{document}
\section{Introduction}
Red supergiants and red giants are the most luminous stars in, respectively,
star forming or old passive galaxies. Being cool, they are the dominant
sources of near-IR light. In highly
reddened starburst galaxies, the near-IR light from red supergiants
is sometimes the only direct information available on the stellar populations.
Models for their spectra are thus important, even though they
are also particularly difficult to construct (rich molecular line spectrum,
extended atmospheres). If they are
successful in reproducing empirical spectra, it will be legitimate
to use them instead of observed spectral libraries in future analyses
of galaxies.
Recently, Levesque et al. (2005) have shown that up-to-date models
compare well with optical observations of red supergiants,
and that this success helps in explaining the
location of observed red supergiants in the HR-diagram.
At near-IR wavelengths (1--2.5\,$\mu$m), the most prominent molecular
features are those of CO and CN. Their sensitivity to surface gravity
and effective temperature (\mbox{$T_{\mbox{\rm \tiny eff}}$} ) has been the basis of the 8-colour
classification system of White \& Wing (1978), although they are
also sensitive to other parameters (Tsuji 1976, McWilliam \& Lambert 1984,
McGregor 1987,
Bessell et al. 1991, Origlia et al. 1997). Strong CN bands are
predicted for low gravity stars with temperatures around 4500\,K, and
are indeed observed in local red supergiants (Lan\c{c}on \& Wood 2000) and
in extragalactic objects such as the bright star clusters of M\,82
(Lan\c{c}on et al. 2003). However, it had not been verified until now whether
models are capable of reproducing the strengths of various CN and CO bands
throughout the near-IR range simultaneously. Nor whether they can match
optical and near-IR properties together.
An important aspect not accounted for in recent collections of model spectra
for red supergiants is internal mixing.
Standard stellar evolution predicts non-solar
surface abundance ratios due to convective dredge-up in the red supergiant
phase (Iben 1966, Maeder 1981).
Early observations had pointed out the inadequacy of
solar abundance ratios in individual cases (e.g. $\alpha$ Ori, Beer et
al. 1972). More recently, both theory and observations showed
that main sequence rotation or other processes are capable of mixing
CNO-cycle products into the atmosphere even before the red supergiant phase
is reached (Maeder \& Meynet 2001, Trundle \& Lennon 2005).
In red supergiants, He and $^{14}$N surface abundances
are typically enhanced while $^{16}$O and $^{12}$C abundances are reduced.
Modified abundances of C,N and O alter the relative strengths of the
predominant molecules.
In this paper, we present recent \mbox{\tt PHOENIX}\ models specifically computed
to address some of the above points.
The emphasis is on the effects of
non-solar abundance ratios of C,N and O; a more complete study of other
parameters (in particular micro-turbulent velocities) has been started and
will be reported in a forthcoming paper.
The model assumptions are described in
Sect.\,\ref{models.sec} and the predicted colours and molecular features
in Sect.\,\ref{trends.sec}. In Sect.\,\ref{twocolour.sec} and
\ref{spectra.sec}, the models are compared with spectroscopic
data covering wavelengths from 1 to 2.4\,$\mu$m or,
for a subsample, from 0.51 to 2.4\,$\mu$m. Giants of class III,
luminous giants of class II and supergiants of class I are discussed
successively. The discussion in Sect.\,\ref{discussion.sec} focuses on
fundamental parameter determinations from spectra, including the effects
of mass and surface abundances on \mbox{$T_{\mbox{\rm \tiny eff}}$} . A brief conclusion summarizes
the results.
\section{\mbox{\tt PHOENIX}\ models with solar and modified abundances}
\label{models.sec}
\subsection{Summary of model ingredients}
\begin{figure}
\includegraphics[clip=,width=0.49\textwidth]{5824f1.ps}
\caption[]{Typical differences between \mbox{\tt PHOENIX}\
spectra obtained with an initial
wavelength sampling step of 2\,\AA\ (grey) and 0.1\,\AA\ (black). Both spectra
have been smoothed by convolution with a gaussian with FWHM=15\,\AA. The
models shown have T$_{\rm eff}$=4000\,K, log($g$)=1, M=1\,M$_{\odot}$, but
differences are important for any of the calculated models. Only the
high resolution calculations match the data.}
\label{effetRes.fig}
\end{figure}
The model atmospheres and synthetic spectra were computed with \mbox{\tt PHOENIX}\
version 13.11.00B.
The model setup is identical
to that of Ku\v{c}inskas et al. (2005). We recall only the most relevant
details here. The models are computed in spherical symmetry.
A mixing length to pressure scale height ratio of 2.0 is used for
all models.
Dust is allowed to
form in the atmospheres but is assumed to immediately rain out of the
photospheric layers; therefore, no dust opacities are used in the models
shown here. This is an important assumption for cool models with large
extensions. In addition, all models presented here have large
enough gravities to not produce a radiatively driven wind and,
therefore, winds are not included.
The model spectra were computed specifically for comparison with data
that has a spectral resolving power of order 1000, i.e. $\Delta \lambda \simeq
10$\,\AA\ at 1\,$\mu$m. {\em Nevertheless, we emphasize
that the model spectra must be computed at high spectral resolution
before smoothing, in order to sample individual absorption line
profiles properly and to obtain low resolution spectra that
resemble observations}\ (Fig.\,\ref{effetRes.fig}).
We used a wavelength sampling step of 0.1\,\AA\ throughout.
Using only half these points produces negligible changes
at $\lambda>8000\,\AA$ (0.1\,\% rms), and small variations in the shapes
of the strongest optical bands at $\lambda<8000\,\AA$ (2\,\% rms).
The small sampling step used here is an important change with respect
to previous collections of \mbox{\tt PHOENIX}\ spectra, which were computed with
an initial wavelength sampling step of 2\,\AA\ (e.g. Ku\v{c}inskas et al., 2005,
and models included in the library of Martins et al., 2005).
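For reference, the smoothing applied to the high resolution output is a plain
convolution with a Gaussian of FWHM\,=\,15\,\AA; a minimal sketch of this step
(assuming the constant 0.1\,\AA\ sampling used here, with illustrative array
names) is:
\begin{verbatim}
# Minimal sketch of the smoothing step: convolve a high-resolution model
# spectrum, sampled every 0.1 A, with a Gaussian of FWHM = 15 A.
import numpy as np

def smooth(flux, fwhm_aa=15.0, step_aa=0.1):
    sigma = fwhm_aa / (2.0 * np.sqrt(2.0 * np.log(2.0))) / step_aa   # pixels
    x = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(flux, kernel / kernel.sum(), mode="same")
\end{verbatim}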
The models discussed cover effective temperatures ranging from 2900
to 5900\,K and gravities in the set log($g$)=\{$-1$,$-0.5$,0,1,2\}
(cm.s$^{-2}$).
The micro-turbulent velocity is set to
a constant value of $v_{\rm mic}$\,=\,2\,km\,s$^{-1}$,
except in a few exploratory models.
Values of 2 to 3\,km\,s$^{-1}$
are typical for red giant stars (Smith \& Lambert 1985).
A more extensive grid of models covering higher values of this
parameter is in the process of being calculated.
Two stellar masses are considered\,: 1\,M$_{\odot}$ and 15\,M$_{\odot}$.
Models at 9\,M$_{\odot}$ were also computed, but the differences with
the 15\,M$_{\odot}$ ones are negligible. For M=1\,M$_{\odot}$, many of the
calculations at log($g$)=$-1$ did not converge (radiation pressure incompatible
with the assumption of no winds),
and this mass-gravity combination is therefore excluded from
the discussion.
We also restrict the discussion to optical and near-IR wavelengths, with
a focus on wavelengths between 0.81 and 2.4\,$\mu$m.
\subsection{Abundances}
The reference set of models assumes solar abundances, based on the review
of Grevesse \& Noels (1993). The values most relevant to our study are
summarized in Col.\,2 of Tab.\,\ref{abundances.tab}. A subset of
models with solar-scaled abundances but $\log(Z/Z_{\odot})=-0.3$ was also
computed but will only be discussed briefly in Sect.\,\ref{Teffscales_models.sec}.
The second set of models has the same metallicity Z=0.02 as the reference
set, but modified abundances of $^4$He, $^{12}$C, $^{14}$N, $^{16}$O
(Col.\,5 of Tab.\,\ref{abundances.tab}).
In the following, the adopted modified
abundances will be referred to as ``RSG-specific abundances".
The RSG-specific abundances were selected by
inspection of the evolutionary tracks
of Schaller et al. (1992; their case of standard mass loss)
for stars with initial masses above 7\,M$_{\odot}$, at evolutionary
timesteps with effective temperatures below 4500\,K. The values selected are
representative of the final red supergiant stages of a star of
initial mass 20\,M$_{\odot}$ (\mbox{$T_{\mbox{\rm \tiny eff}}$}$\simeq$3550\,K).
Stars of lower initial masses would
have RSG abundances closer to the main sequence values, while
the tracks at 25\,M$_{\odot}$ reach significantly larger modifications
(tracks above 25\,M$_{\odot}$ do not extend to the low effective temperatures
of red supergiants).
Note that the initial mass fractions of Schaller et al. (1992) are
not exactly the same as assumed in our reference set (mainly because
of their larger He abundance), but that these differences are small
compared to those that distinguish red supergiants from zero age main sequence
stars.
\begin{table*}
\begin{center}
\caption[]{Surface abundances (by mass)}
\label{abundances.tab}
\begin{tabular}{c|lll|lll}
& Adopted & Geneva & Padova & Adopted & Geneva & Padova \\
& reference & 1992-1994 & 1993 & RSG-specific & 1994 & 1993 \\
Element & set & ZAMS & ZAMS & set & RSG & RSG \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline
$^1$H & {\bf 0.703} & 0.680 & 0.700 & {\bf 0.580} & 0.55 & 0.63 \\
$^4$He & {\bf 0.280} & 0.300 & 0.280 & {\bf 0.400} & 0.43 & 0.35 \\
$^{12}$C & {\bf 0.0030} & 0.0044 & 0.0049 & {\bf 0.0022 } & 0.0020 & 0.0032 \\
$^{14}$N & {\bf 0.00092} & 0.0014 & 0.0012 & {\bf 0.0065} & 0.0080 & 0.0051 \\
$^{16}$O & {\bf 0.0083} & 0.0106 & 0.0106 & {\bf 0.0076} & 0.0068 & 0.0084 \\
\hline
\end{tabular}
\end{center}
{\em Notes:}\
{\bf Col. 2:} Abundances adopted in our reference set of solar metallicity
models.
{\bf Col. 3:} For comparison, main sequence abundances of Schaller et al.
(1992), also used by Meynet et al. (1994). Metal abundance ratios based on
Anders \& Grevesse (1989).
{\bf Col. 4:} Main sequence abundances of Bressan et al. (1993). Metal
abundance ratios based on Grevesse (1991).
{\bf Col. 5:} Abundances adopted in our RSG-specific set of models (see text).
{\bf Col. 6:} Final RSG abundances of Meynet et al. (1994)
for 20\,M$_{\odot}$ stars.
{\bf Col. 7:} Final RSG abundances of Bressan et al. (1993) for
20\,M$_{\odot}$ stars.
\end{table*}
In Tab.\,\ref{abundances.tab}, the adopted abundances are compared
to other values in the literature. The tracks of Meynet et al. (1994)
assume larger mass loss rates than those of Schaller et al. (1992).
More ingredients distinguish the models of Bressan et al. (1993) from those
of the Geneva group. Nevertheless, predicted surface abundance alterations
are of a comparable amplitude. Evolutionary tracks for stars with initial
rotation predict that comparable $^{14}$N enhancements are reached
already by the end of the main sequence (Meynet \& Maeder 2000, 2003), and
are further increased during the RSG phase. The achieved abundance ratios
depend strongly on initial rotation velocity, on initial mass and
on mass loss prescriptions.
One major difference between rotating and non-rotating models is
the length of time a star spends as a red supergiant
with modified abundances in one and the other case
(see also Maeder \& Meynet 2001).
Mixing also occurs along the red
giant branch and asymptotic giant branch
for low and intermediate mass stars.
The RSG-specific abundances adopted here are more extreme than those
obtained through 1st dredge-up on the RGB (e.g. Iben 1964,
Charbonnel et al. 1996, Girardi et al. 2000).
In particular, the RSG-specific enrichment in He and $^{14}$N
and the drop in H and $^{12}$C are larger than expected from 1st dredge-up.
More mixing may however occur through
second dredge-up on the early asymptotic giant branch (for stars with
initial masses above about 4\,M$_{\odot}$, e.g. Becker \& Iben 1979,
Boothroyd \& Sackmann 1999), and through ``non-standard"
extra mixing for low mass stars that have evolved on the red giant branch past
the RGB-bump (Charbonnel 1994, Charbonnel \& do Nascimento 1998).
Both these processes affect relatively luminous giant stars. The second
one seems to be less efficient at the quasi-solar metallicities we consider
than for population II stars. Therefore, we expect our RSG-specific
abundances {\em not} to be appropriate for most solar neighbourhood giants of
class III, while they might be relevant to some giants of class II.
We note that future calculations with modified surface abundances will
include mixing-induced enhancements in the $^{13}$C abundance, since
$^{13}$CO is a clear feature in near-IR spectra of cool stars.
The effects of recent changes in measurements of the solar abundance
ratios (Asplund et al. 2005) will also be investigated.
\subsection{Spectra in numerical form}
The model spectra for M=1\,M$_{\odot}$ with solar abundances,
for M=15\,M$_{\odot}$ with solar abundances, and
for M=15\,M$_{\odot}$ with RSG-specific abundances,
are made available in FITS format through
CDS.
Because the quality assessments made in this paper are restricted
to resolutions of order $10^3$ in the near-IR (and a few hundred
at wavelengths below 0.81\,$\mu$m), the spectra made available
are smoothed with a Gaussian to a full width at half
maximum of 2\,\AA. The initial models, calculated with a
wavelength step of 0.1\,\AA, can be requested from A.L. or P.H.H.
\section{Trends in the models}
\label{trends.sec}
\begin{figure*}
\includegraphics[clip=,width=\textwidth]{5824f2.ps}
\caption[]{Effects of gravity, temperature and surface abundances on
\mbox{\tt PHOENIX}\ model spectra. In each figure, the upper spectrum has
$\mbox{$T_{\mbox{\rm \tiny eff}}$} = 4000$\,K and the lower one $\mbox{$T_{\mbox{\rm \tiny eff}}$} = 3300$\,K. For easier
comparison, fluxes have been normalized to values comparable to
those in the upper left diagram. The figures on the left are at log($g$)=2,
those on the right at log($g$)=-0.5. The upper figures
are for solar abundances, the lower ones for RSG-specific abundances.
The effect of mass (1\,M$_{\odot}$ vs. 15\,M$_{\odot}$) is too small
to be easily identified on this type of figure.}
\label{modeltrends.fig}
\end{figure*}
Spectra illustrating the effects of mass, gravity and surface abundances
are provided in Fig.\,\ref{modeltrends.fig}. In this
section, we will discuss quantitative trends using selected
colours and molecular band indices. The indices
are measured for each spectrum using:
(i) the standard J, H, K filter passbands of Bessell \& Brett (1988);
(ii) narrow and intermediate-band filters as described in
Tab.\,\ref{indexdef.tab}.
All narrow and intermediate filter passbands are approximated with rectangles
of identical central wavelength and width as the filters in the original
references (as already done by Bessell et al. 1989).
A model Vega spectrum provides
the zero points in all passbands.
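Concretely, each index is obtained by integrating the spectrum through the
rectangular passbands of Tab.\,\ref{indexdef.tab} and converting the flux
ratio into magnitudes, the model Vega spectrum providing the zero point. A
minimal sketch of such a measurement (illustrative function names, wavelengths
in $\mu$m, widths in \AA) is:
\begin{verbatim}
# Sketch of an index measurement, e.g. CO(2.3) = -2.5 log10(F236/F220) + cst,
# using rectangular passbands and a model Vega spectrum for the zero point.
import numpy as np

def band_flux(wave_um, flux, center_um, width_aa):
    half = 0.5 * width_aa * 1.0e-4                        # Angstrom -> micron
    m = (wave_um > center_um - half) & (wave_um < center_um + half)
    return np.trapz(flux[m], wave_um[m]) / (2.0 * half)   # mean flux in band

def co23(wave, flux, wave_vega, flux_vega):
    star = band_flux(wave, flux, 2.36, 800.) / band_flux(wave, flux, 2.20, 1100.)
    vega = band_flux(wave_vega, flux_vega, 2.36, 800.) \
           / band_flux(wave_vega, flux_vega, 2.20, 1100.)
    return -2.5 * np.log10(star / vega)
\end{verbatim}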
\begin{table}
\caption[]{Filter and index definitions}
\label{indexdef.tab}
\begin{tabular}{llll}
Filter & Center & Width & Notes \\
& ($\mu$m)& (\AA) & \\ \hline
104 & 1.0395 & 50 & quasi-continuum (1) \\
108 & 1.0800 & 60 & CN (1) \\
110 & 1.1000 & 50 & quasi-continuum near CN (1) \\
220 & 2.2000 & 1100 & quasi-continuum (2) \\
236 & 2.3600 & 800 & 1st overtone CO (2) \\
COH & 1.6222 & 80 & 2nd overtone CO \\
COHc1 & 1.6160 & 30 & absorption-minimum near CO \\
COHc2 & 1.6285 & 30 & absorption-minimum near CO \\
\hline
Index & \multicolumn{3}{l}{Definition (3)} \\ \hline
104-220 & \multicolumn{3}{l}{$-2.5\log$(104/220)$+$cst} \\
CO(2.3) & \multicolumn{3}{l}{$-2.5\log$(236/220)$+$cst} \\
CO(1.6) & \multicolumn{3}{l}{$-2.5\log$[2\,COH/(COHc1+COHc2)]$+$cst} \\
CN(1.1) & \multicolumn{3}{l}{$-2.5\log$(110/108)$+$cst} \\
\hline
\end{tabular}
\\
{\em Notes :}\ (1) Adapted from the 8-colour system of R.F.\,Wing
(White \& Wing 1978). (2) Adapted from Frogel et al. (1978).
(3) cst stands for a constant that gives the index the value 0 for
a model spectrum of Vega.
\end{table}
\subsection{Colours}
\begin{figure*}
\centerline{
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f3.ps}
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f4.ps}
}
\caption[]{Temperature sensitive near-IR colours
(solar abundances, 15\,M$_{\odot}$).
In the right panel, the effect of extinction on cool stellar spectra
is shown for A$_V$=1 (using the
extinction law of Cardelli et al. 1989, with R$_V$=3.1).}
\label{Teff_colours.fig}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f5.ps}
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f6.ps}
}
\caption[]{Gravity sensitivity of H-K (solar abundances,
15\,M$_{\odot}$). Extinction vector as in Fig.\,\ref{Teff_colours.fig}.}
\label{logg_colours.fig}
\end{figure*}
As shown in Fig.\,\ref{Teff_colours.fig}, colours that combine flux measurements
around 1.04\,$\mu$m, in the J band and in the K band are good
indicators of \mbox{$T_{\mbox{\rm \tiny eff}}$}\ in theory, as their sensitivity to surface gravity is low.
Above 3400\,K, the spread in log($g$) corresponds to a full spread in
\mbox{$T_{\mbox{\rm \tiny eff}}$}\ of about 200\,K for the 15\,M$_{\odot}$ models (left panel).
For 1\,M$_{\odot}$ models, the corresponding spread
is much smaller: about 60\,K, centered on a line very close to the
models at 15\,M$_{\odot}$ and log($g$)=0.
At the lowest temperatures, contamination of the pseudo-continuum in
the K band with H$_2$O absorption leads to reduced fluxes in low gravity stars.
Unfortunately, in the two-colour plots useful for observers, the
extinction vectors run almost exactly parallel to the temperature
sequence (right panel)\,: more resolved spectral information
is necessary to estimate an effective temperature from near-IR data.
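The direction of this reddening vector follows directly from the infrared
segment of the Cardelli et al. (1989) law; the short sketch below uses
$A_\lambda/A_V = (0.574 - 0.527/R_V)\,x^{1.61}$ with $x = 1/\lambda$ in
$\mu$m$^{-1}$ (valid for $0.3 \le x \le 1.1$), and adopts 1.25 and 1.65\,$\mu$m
as representative J and H wavelengths.
\begin{verbatim}
# Reddening vector for A_V = 1 from the Cardelli et al. (1989) infrared law.
R_V = 3.1

def A_over_AV(lam_um):
    return (0.574 - 0.527 / R_V) * (1.0 / lam_um) ** 1.61

A104, AJ, AH, AK = (A_over_AV(l) for l in (1.04, 1.25, 1.65, 2.20))
print("E(104-220)/A_V =", A104 - AK)
print("E(J-H)/A_V     =", AJ - AH)
print("E(H-K)/A_V     =", AH - AK)
\end{verbatim}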
Figure\,\ref{logg_colours.fig} illustrates the gravity dependence
of colours involving H band fluxes.
At high gravities, the minimum of the opacity of H$^-$ around 1.6\,$\mu$m
produces a distinct hump in the H band spectra, with correspondingly
blue H-K and red J-H colours. At low gravities, molecular absorption due
mainly to CO and CN erases this continuum opacity feature.
Such an effect was already mentioned by Bessell et al. (1991), though
their interpretation probably underestimated the r\^ole of CN as compared
to CO. The observations of Lan\c{c}on et al. (2007) and those described
in Ku\v{c}inskas et al. (2005) provide a convincing
validation of the JHK colours of the new models.
The effect of mass on the H band flux is insignificant at log($g$)$>$0.
For lower gravities, H-K increases by up to only 0.02 magnitudes when
going from 15 to 1\,M$_{\odot}$, at \mbox{$T_{\mbox{\rm \tiny eff}}$} $>$4000\,K.
Switching from solar-scaled to RSG-specific abundances has the following
(small) effects on the above colours. All colours tend to become bluer.
Colour differences in H-K and J-H remain smaller than 0.04\,mag (and
are $<$0.02\,mag for most stellar parameters). The colour index 104$-$220
(defined in Tab.\,\ref{indexdef.tab}) is reduced by up to 0.08\,mag.
The bolometric corrections to the K band,
BC(K), are essentially unchanged (the \mbox{\tt PHOENIX}\ values agree with
those of Levesque et al. 2005 to within a few percent between
3400 and 4300\,K). Effects this small would be difficult
to exploit on real stellar spectra.
\subsection{Molecular indices}
\subsubsection{CO}
CO is a long known indicator of luminosity class (Baldwin et al. 1973). It is
sensitive to gravity and effective temperature, but also to
metallicity and micro-turbulence. As indicated previously,
a constant micro-turbulent velocity of 2\,km\,s$^{-1}$
is used in this paper except in a few models.
Large micro-turbulent velocities deepen the 1st overtone band of CO more
than the 2nd overtone band, because line saturation is more important in
the former than in the latter (e.g. Origlia et al. 1993, 1997
and refs. therein).
\begin{figure*}
\centerline{
\includegraphics[clip=,angle=270,width=0.55\textwidth]{5824f7.ps}
\hspace{-2.0cm}
\includegraphics[clip=,angle=270,width=0.55\textwidth]{5824f8.ps}
}
\caption[]{Measurements of the strength of the 1st overtone CO
band at 2.29\,$\mu$m and of the 2nd overtone CO band at 1.62\,$\mu$m
in the model spectra (15\,M$_{\odot}$).
Symbols: temperature sequences at the indicated gravities,
for RSG-specific abundances. Lines: corresponding sequences
for solar-scaled abundances
(black and light-coloured lines alternate, and
the dashed line has log($g$)=0).
Note that at a given CO\,(2.3), CO\,(1.6)
tends to be weaker in low gravity stars.
}
\label{Teff_CO.fig}
\end{figure*}
On the left panel of Fig.\,\ref{Teff_CO.fig}, the changes of the
1st overtone CO band at 2.29\,$\mu$m with gravity, temperature and
surface abundances are
shown. As commonly found, CO increases with decreasing temperatures
and gravities. The CO strength progressively levels off when log($g$)
takes negative values (i.e. the further dependence on $g$ is negligible).
Contamination by H$_2$O at high log($g$)
produces a drop of the CO index below 3200\,K. Switching from solar
to RSG-specific abundances reduces the CO strength generally by small amounts.
The effect is maximum around 4500\,K in low gravity models\,: RSG-specific
models with log($g$)=$-1$ at 4500\,K
have the same CO index as solar abundance models
with log($g$)=2, or alternatively with log($g$)=$-1$ but 4800\,K.
The effects of log($g$), \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and abundances on the apparent strength of
the 2nd overtone CO band at 1.62\,$\mu$m
are similar, with two notable exceptions.
First, the effects of changes in the abundance ratios are smaller than
at 2.29\,$\mu$m. Second, low gravity saturation is reached earlier. The
result is summarized in the right panel of Fig.\,\ref{Teff_CO.fig}.
In particular, {\em low gravity stars tend to have weaker CO bands around
1.6\,$\mu$m than high gravity ones, at a given strength of the
2.29\,$\mu$m band.}
Contamination of the H-band fluxes by CN absorption contributes to
producing this trend, as already hinted at by Wing \& Spinrad 1970.
Moving down from 15\,M$_{\odot}$ to 1\,M$_{\odot}$ has a negligible effect
on the near-IR CO bands for log($g$)$>$0. At lower gravities, the 1\,M$_{\odot}$
CO bands are weaker than the 15\,M$_{\odot}$ bands (the effect is stronger
for the 2.3\,$\mu$m band than for the 1.6\,$\mu$m bands).
\subsubsection{CN}
\begin{figure*}
\centerline{
\includegraphics[clip=,angle=270,width=0.55\textwidth]{5824f9.ps}
\hspace{-2.0cm}
\includegraphics[clip=,angle=270,width=0.55\textwidth]{5824f10.ps}
}
\caption[]{Measurements of the strength of the CN band at 1.1\,$\mu$m in
the model spectra (15\,M$_{\odot}$).
Symbols and lines are as in Fig.\,\ref{Teff_CO.fig}. The main effect of
a decrease of mass (15$\rightarrow$1\,M$_{\odot}$)
is a shift of the models at log($g$)$\leq$0
to the left by up to 0.04\,mag in the right hand panel.
}
\label{Teff_CN.fig}
\end{figure*}
CN displays prominent near-IR absorption bands that have been studied
extensively in the context of carbon star models (e.g. Loidl et al. 2001).
The CN bands are prominent in red supergiants as well (White \& Wing, 1978).
While in carbon stars essentially all surface oxygen is locked into CO,
CN coexists with other oxides in the atmospheres of red supergiants.
The behaviour of CN bands with varying model parameters is complex, as shown in
Fig.\,\ref{Teff_CN.fig}. Bessell et al. (1989) describe the decrease
of the CN 1.1\,$\mu$m band strength
with decreasing effective temperature below 3800\,K,
as well as its gravity dependence (stronger CN for lower gravities).
Our more extended temperature range shows that the maximum CN strength
is reached between 4200 and 4800\,K. Both the location of the maximum
and its actual strength depend on surface gravity, and on the chemical
composition of the atmosphere. {\em CN bands are strongly enhanced
in models with RSG-specific abundance ratios.}
The effect of mass is small.
CN is also enhanced when larger
micro-turbulent velocities are assumed (Tsuji 1976). In empirical samples,
spectra with strong CN absorption bands compared to their CO bands are
candidates for modified surface abundances.
\subsubsection{Other molecular bands longwards of 1\,$\mu$m.}
\begin{description}
\item[H$_2$O.] H$_2$O appears very abruptly in the models below
a gravity-dependent threshold temperature: 3600\,K at log($g$)=1,
3100\,K at log($g$)=$-1$ (based on measurements in the K-band wings of
the H$_2$O band centered near 1.9\,$\mu$m). Below this threshold \mbox{$T_{\mbox{\rm \tiny eff}}$},
higher gravities lead to stronger spectral bands. Varying the mass between
1 and 15\,M$_{\odot}$, or switching to RSG-specific
abundances, has only small effects on the H$_2$O bands.
\item[TiO.] Near-IR TiO bands around 1\,$\mu$m (TiO $\delta$, $\Delta\nu=-1$)
and 1.25\,$\mu$m (TiO $\phi$, $\Delta \nu = -1$)
appear progressively in the models below a gravity-dependent temperature:
$\sim$3600\,K at log($g$)=1, $\sim$3400\,K at log($g$)=$-$2
(based on visual inspection of the spectra in that region).
Other near-IR TiO bands longwards of 1\,$\mu$m,
such as the $\phi, \Delta \nu=0$ band near
1.12\,$\mu$m are hidden in CN absorption.
Again, varying the mass between
1 and 15\,M$_{\odot}$, or switching to RSG-specific
abundances, has only small effects.
We note that the next version of \mbox{\tt PHOENIX}\ calculations will include an
update of the TiO partition function and of the electron $f$-values of the
TiO bands for the AMES TiO line list (Schwenke 1998), which appears to
improve spectral synthesis results for M dwarfs (Allard et al., in preparation).
\item[VO.] The 1.05\,$\mu$m VO band (VO A$-$X $\Delta v=0$)
is significant in the model spectra
only at \mbox{$T_{\mbox{\rm \tiny eff}}$}$\leq$3200\,K for log($g$)$\leq$1.
The effect of mass or abundances is small.
\end{description}
\section{Models versus data in two-colour plots}
\label{twocolour.sec}
\begin{figure*}
\centerline{
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f11.ps}
\includegraphics[clip=,angle=270,width=0.48\textwidth]{5824f12.ps}
}
\caption[]{Two-colour plots with observational and calculated data.
The thin lines are \mbox{$T_{\mbox{\rm \tiny eff}}$}\ sequences
for solar abundances, the thick lines are for RSG-specific abundances.
Solid lines are at log($g$)=2, dashed lines at log($g$)=$-1$. The dotted
line follows models along an illustrative red giant branch at solar
metallicity ([\mbox{$T_{\mbox{\rm \tiny eff}}$} (K),log($g$)]=[4200,2], [3800,1], [3400,0]). Symbols
are measurements on {\em dereddened} versions of
the spectra of Lan\c{c}on et al. (2007) (see Sect.\,\ref{spectra.sec}),
and dots are O-rich Miras from Lan\c{c}on \& Wood (2000) (not dereddened).
The reddening vectors are as in Fig.\,\ref{Teff_colours.fig}.
}
\label{data_colours.fig}
\end{figure*}
In this and the following sections, we compare the models with
the data collected by Lan\c{c}on \& Wood (2000) and Lan\c{c}on et al. (2007).
Both sets provide spectra at a spectral resolving power
of order 1000 between 0.97\,$\mu$m (sometimes 0.81\,$\mu$m) and 2.4\,$\mu$m.
The first set adds low resolution extensions through the optical
range down to 5100\,\AA\ for a few of the stars.
The merged sample contains luminous stars of spectral types
G3I to M5I, G3II to M3II, and G5III to M5III, as well as asymptotic giant
branch variables for comparison.
As shown in Fig.\,\ref{data_colours.fig}, the agreement between models and
data is excellent in near-IR two-colour plots
once extinction has been accounted for (see Sect.\,\ref{spectra.sec}).
Note that in this and the following figures the lines join models
at constant log($g$), while the data should follow real red giant or
red supergiant branches. Evolutionary tracks predict that the
warmer giants have log($g$)$\geq$2 while the cool ones reach
log($g$)$\simeq$0. Red supergiants of various luminosity classes
are expected to have log($g$)$\leq$0.5.
Figure\,\ref{data_indices_colours.fig} combines measurements
of the first and second overtone CO bands and the 1.1\,$\mu$m CN band
with J-K. Agreement between solar metallicity models and empirical
data is good for {\em giant} stars.
The strongest offset is noted for the second overtone CO
bands in the H window, which tend to be too strong in
the cool giant star models. The figures also suggest that
modeled first overtone CO bands might be slightly too weak at
low $\mbox{$T_{\mbox{\rm \tiny eff}}$}$. Because extinction affects J-K,
two-index diagrams with a negligible sensitivity to reddening are
presented in Fig.\,\ref{data_indices.fig}. The same conclusions hold.
The CO line list data are from
Goorvitch \& Chackerian (1994\,a,b), and are known to
work very well in the case of M dwarfs. It is therefore unlikely
that the CO line list is the cause of the CO band
discrepancies.
We note that $^{13}$CO contributes to the measured strength of the
first overtone CO bands, and that changes in the $^{13}$C abundances
induced by stellar evolution may be responsible for some systematic effects.
Slightly larger micro-turbulent velocities could improve the ratio of the
first to the second overtone CO band strengths, but would also affect
the CN bands.
The outlier giant star near J-K=0.7 with weak band strengths is
the only Population II star of the observed sample (HD\,218732). Eye
inspection of its spectrum immediately reveals a metal-poor atmosphere.
By contrast, the other giant stars appear to form a reasonably
homogeneous sample in terms of metallicity.
While the trends with gravity present in the predicted molecular bands
agree qualitatively with those observed, the molecular bands of only
a fraction of the observed red {\em supergiants} can be reproduced
quantitatively. Models with RSG-specific abundances are favoured
for a significant number of these objects, which show stronger CN
bands than the solar metallicity models can produce. However, the CN and
CO measurements show that despite the improvement achieved with the
adopted changes in surface abundances,
the model parameters explored here are not
able to account for the whole range of molecular band strengths observed.
Models with larger micro-turbulent velocities reach further into
some of the areas occupied by real red supergiants,
justifying the ongoing extension
of the model grid. Alternatively, some red supergiants in
nature may require even higher $^{14}$N abundances than tested here
or effective gravities lower than log($g$)=$-1$.
Comparisons between models and data on a star-by-star
basis are provided in the following section.
\begin{figure*}
\centerline{
\hspace*{-0.06\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f13.ps}
\hspace{-0.082\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f14.ps}
\hspace{-0.082\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f15.ps}
}
\caption[]{Plots of molecular indices vs. colours with dereddened data.
Symbols and lines
are as in Fig.\,\ref{data_colours.fig}.
}
\label{data_indices_colours.fig}
\end{figure*}
\begin{figure*}
\centerline{
\hspace*{-0.06\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f16.ps}
\hspace{-0.082\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f17.ps}
\hspace{-0.082\textwidth}
\includegraphics[clip=,angle=270,width=0.4\textwidth]{5824f18.ps}
}
\caption[]{Molecular index plots with dereddened data. Symbols and lines
are as in Fig.\,\ref{data_colours.fig}.
}
\label{data_indices.fig}
\end{figure*}
\section{Direct comparison between observed and theoretical spectra}
\label{spectra.sec}
\subsection{Method}
\label{method.sec}
The comparison between models and data is based on the computation
of reduced $\chi^2$ differences. The theoretical spectra are
smoothed to the resolution of the data, using gaussian broadening
functions with adequate widths (note that for the optical spectra, whose
resolution was seeing-dependent, we adopt a single smoothing value
which in some cases underestimates the actual $\Delta\lambda$).
They are then resampled to the wavelength step of the data, i.e. 5\,\AA.
A window function is used to eliminate the spectral intervals most
strongly affected by telluric absorption, around 1.15, 1.4 and 1.9\,$\mu$m.
The rms noise of the data is modelled as being proportional to the
square root of the signal. Numerical values given below
assume an average signal-to-noise ratio of 50. This simple noise model is
a reasonable representation of the typical high frequency noise of the
data. The additional uncertainties due to flux calibration errors are not
explicitly accounted for. They lead mainly to uncertainties in the estimated
extinction values. A further discussion of the effects of the
weighting of the data is provided in Sect.\,\ref{discussion.mass.sec}.
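Schematically, and up to normalisation constants that do not affect which
model minimises it, the adopted merit function is
\[
\chi^2 \;=\; \frac{1}{N}\,\sum_{i} w_i
\left[\frac{F_i^{\rm obs}-F_i^{\rm mod}}{\sigma_i}\right]^2 ,
\qquad \sigma_i \propto \sqrt{F_i^{\rm obs}}\, ,
\]
where $w_i$ is the (0 or 1) window function that masks the telluric regions,
$F_i^{\rm mod}$ is the smoothed, resampled and reddened model flux, and the
proportionality constant in $\sigma_i$ is set by the assumed average
signal-to-noise ratio of 50.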
A mass of 1\,M$_{\odot}$ is assumed for giants and bright giants, a
mass of 15\,M$_{\odot}$ for supergiants (see Sect.\,\ref{discussion.sec}).
For each empirical spectrum, the adopted algorithm loops
through all model temperatures
and gravities (separately for the two sets of abundances).
At each of these points, it also loops through an adequate
set of extinctions (using the extinction law of Cardelli et al. 1989
with $R_V=3.1$), and minimizes the $\chi^2$ with respect to the extinction
parameter $A_V$. The step in $A_V$ is 0.05\,mag.
A $\chi^2$ map is produced in \mbox{$T_{\mbox{\rm \tiny eff}}$}$-$log($g$) space,
and the 9 best fits (of the \mbox{$T_{\mbox{\rm \tiny eff}}$}$-$log($g$)-A$_V$ space) are plotted
for inspection. The $\chi^2$ value of the 9th best fit is typically
higher than for the 1st best fit by 10\,\% when the best fits are good, and
by only a few percent when they are poor. Typical uncertainties on
the derived parameters in the case of {\em good} fits
are $\pm$\,100\,K in \mbox{$T_{\mbox{\rm \tiny eff}}$}, and $\pm$\,0.2 in A$_V$.
For these good fits, gravity is usually determined to better than one step
within our set of models (log($g$)=$-1,-0.5,0,1,2$).
Preliminary models with micro-turbulent velocities
larger than 2\,km\,s$^{-1}$ were
tested only for supergiant star spectra and, in this paper,
they are only discussed in cases where the initial fits were poor.
The method is robust with respect to errors in the positions of the individual
molecular lines that jointly define the pseudo-continuum of the
spectra at the resolution of interest here. Such errors were noted as
a difficulty in the measurement of individual metal lines by Vanhollebeke
et al. 2006. In order to verify this, we added random noise to
the data at the level of a few percent, i.e. enough to completely alter the
apparent positions of the blended CN lines between 2 and 2.3\,$\mu$m.
Differences in the derived parameters were well within the
uncertainties stated above.
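For orientation, the structure of this grid search can be summarised by the
following schematic code. It is a simplified sketch rather than the code
actually used: the model spectra are assumed to have been smoothed and
resampled onto the observed wavelength grid beforehand, the reddening uses only
the infrared branch of the Cardelli et al. (1989) law, and a free overall flux
normalisation (standing in for the unknown angular diameters) is solved for
analytically, which is an assumption made here purely for illustration.
\begin{verbatim}
import numpy as np

def alambda_over_av(lam_um, r_v=3.1):
    # IR branch of the Cardelli et al. (1989) law (roughly 0.9-3.3 microns).
    return (0.574 - 0.527 / r_v) * (1.0 / lam_um) ** 1.61

def fit_one_spectrum(wave_um, f_obs, models, window, snr=50.0,
                     av_grid=np.arange(0.0, 3.01, 0.05)):
    # models: dict mapping (teff, logg) -> model flux on the same grid as f_obs.
    # window: 0/1 mask rejecting the telluric regions near 1.15, 1.4, 1.9 microns.
    # Noise model: rms ~ sqrt(signal), scaled so that S/N ~ snr at the mean flux.
    sigma = np.sqrt(f_obs * f_obs.mean()) / snr
    best_chi2, best_par = np.inf, None
    for (teff, logg), f_mod in models.items():
        for av in av_grid:
            red = f_mod * 10.0 ** (-0.4 * av * alambda_over_av(wave_um))
            # Optimal free scaling of the model (linear least squares).
            scale = (np.sum(window * f_obs * red / sigma**2) /
                     np.sum(window * red**2 / sigma**2))
            chi2 = (np.sum(window * ((f_obs - scale * red) / sigma) ** 2) /
                    window.sum())
            if chi2 < best_chi2:
                best_chi2, best_par = chi2, (teff, logg, av)
    return best_par, best_chi2
\end{verbatim}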
\subsection{Giant stars}
The data sample contains 35 stars of class III with spectral types G5 to M5
(after elimination of one star with a particularly uncertain luminosity
classification and of one known metal-poor star with obviously weaker spectral features).
Their shortest wavelength is 0.97\,$\mu$m, except for 5 spectra
with an optical extension.
\begin{figure}
\includegraphics[clip=,width=0.49\textwidth]{5824f19.ps}
\caption[]{Fit to a late type giant star spectrum (HD\,145480, M5III).
The data are shown as dotted lines, the model as a solid line. The
window function for the $\chi^2$ fit is also shown.
Model parameters are 3400\,K, log($g$)=0, A$_V$=0.4, $\chi^2=1.7$.
Such a fit quality is typical for all the giants without available
optical spectra.}
\label{goodfit_M5III.fig}
\end{figure}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f20.ps}
\caption[]{Best fits to four giant star spectra
that extend to optical wavelengths.
Data are shown as dotted lines, best fit models as solid lines.
The window function used to reject the noisiest spectral regions is also shown.
From top to bottom: K3.5III star BS\,4432 with model
[\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [4100\,K, 2, 0.55\,mag, 2.3];
M0III star BS\,4371 with [3900\,K, 1, 0.2\,mag, 4.5];
M2III star BS\,5301 with [3800\,K, 1, 0.55\,mag, 5.2];
M5III star BS\,4267 with [3300\,K, 1, -0.3\,mag, 13.9].
}
\label{fits_4VKgiants.fig}
\end{figure}
Good fits are obtained for essentially all the near-IR spectra
with the solar metallicity models. An example is given
in Fig.\,\ref{goodfit_M5III.fig}. Among the satisfactory fits, there
is a tendency for $\chi^2$ to increase systematically
from about 1 for types $\leq$\,K4
($\chi^2$=0.6-1.4, depending on the actual S/N of individual spectra)
to about 2 (1.5-2.5) for the M stars. This trend is due to the wealth of
weak lines and deeper molecular bandheads at low temperatures, which, among
other effects, makes the $\chi^2$ more sensitive
to residual wavelength calibration errors and to imperfections in the model line lists.
The $\chi^2$ for combined optical+near-IR
spectra takes values of 3 to 7 for satisfactory fits (considering the sensitivity
of the $\chi^2$ to the smoothing parameter at optical wavelengths and
to flux calibration errors over such an extended wavelength range).
Examples are shown in Fig.\,\ref{fits_4VKgiants.fig}.
The best fit to the spectrum of the coolest giant,
the M5III star BS\,4267 (=\,HD\,94705) requires
marginally negative extinction. Considering the flux calibration uncertainties
and the choice of one particular extinction law, such a result is
not alarming. Models cooler by only 100\,K, or with log($g$) higher
by 0.5, would result in a positive value of the estimated A$_V$.
The most obvious shortcoming of the models for giants this cool is
an overprediction of the TiO absorption bands near 1 and 1.25\,$\mu$m.
The results of the fitting procedure can be summarized as follows:
Temperatures range from 5300\,K for type G5, to 3300\,K for type M5.
As expected from stellar evolution tracks, the highest
available gravity (log($g$)=2) is selected for giants
earlier than K7 (with one exception),
then progressively more spectra are assigned
log($g$)=1 and later 0. A$_V$ values are spread between 0 and 1
(with 4 cases of marginally negative values). No correlation is found
between A$_V$ and \mbox{$T_{\mbox{\rm \tiny eff}}$} .
\medskip
Adopting the models with RSG-specific abundances rather than solar ones
leads to poorer fits in all but one case. The values of the reduced $\chi^2$
increase by 0.5 units on average. While the distribution of estimated
effective temperatures for the sample is relatively flat with the assumption
of solar abundances, it becomes bimodal with RSG-specific abundances\,:
temperatures between 4500 and 5000\,K are systematically avoided, because
the CN bands of these models are too strong at the surface gravities of
giant stars (cf. Fig.\,\ref{Teff_CN.fig}).
This result was expected from Figs.\,\ref{data_indices.fig} and
\ref{data_indices_colours.fig}.
It is satisfactory as our set of RSG-specific abundances
is not designed to match abundances in giant stars.
\subsection{Bright giants}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f21.ps}
\caption[]{Best fits to four bright giant star spectra (class II)
that extend down to 0.81\,$\mu$m. Figure set-up is as in
Fig.\,\ref{fits_4VKgiants.fig}.
From top to bottom: G8II star HD\,150416 with
[\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [5000\,K, 1, 0.1\,mag, 0.65];
K2II star BD-29\,2374 with [4500\,K, 2, -0.15\,mag, 0.65];
K5II star HD\,168815 with [4100\,K, 1, 1.3\,mag, 1.8];
M0.5IIb star HD\,132933 with [4000\,K, 2, 0.4\,mag, 1.4].
Among the above, we classify the K5II fit as satisfactory, the others
as very good.
}
\label{fits_4TKbrights.fig}
\end{figure}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f22.ps}
\caption[]{Best fits to two bright giant star spectra for which the
CN band at 1.1\,$\mu$m is particularly poorly reproduced, and of the
coolest class II star of the sample.
Figure set-up is as in Fig.\,\ref{fits_4VKgiants.fig}.
From top to bottom: M0Ib/II star HD\,145384 with
[\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [3400\,K, -0.5, -0.15\,mag, 3.2];
M0.5II star HD\,142676 with [3900\,K, 2, 0.0\,mag, 4.0];
M3II star HD\,153961 with
[3500\,K, 0, 1.0\,mag, 3.3].
}
\label{fits_3IKbrights.fig}
\end{figure}
The sample contains 29 bright giants of class Ib/II or II.
Spectral types range from G3 to M3. None of the
corresponding spectra extend through optical wavelengths, but 11 extend
down to 8100\,\AA. Their properties in terms of colours and molecular
indices are spread between those of giants and supergiants.
On average, they display slightly stronger bands of CO
and significantly stronger bands of CN than giants of class III, at
a given (dereddened) colour.
The solar metallicity model fits to all the spectra
are satisfactory, two thirds of them being very good
(Fig.\,\ref{fits_4TKbrights.fig}).
The models clearly contain all the molecular bands required.
Marginally negative values of A$_V$ are obtained in four cases, which
is again not unexpected considering flux calibration uncertainties.
The most common shortcomings found when the fits are not perfect
are the following: \\
--- There is a tendency for the models to show stronger CO bands at
2.29\,$\mu$m and weaker CN bands at 0.93 and 1.1\,$\mu$m than observed.
This problem is only detectable clearly
when the data extend down to 0.81\,$\mu$m.\\
--- For stars with spectral types around K5II whose spectra extend down
to 0.81\,$\mu$m, the models struggle to reproduce the energy distribution
around 1\,$\mu$m, where it peaks between deep CN bands. This difficulty
is certainly related to the strength of the CN bands at those temperatures
(see Fig.\,\ref{Teff_CN.fig}). \\
--- In two cases (spectral type M0Ib/II and M0.5II), the model CN bands
are too weak while CO is reproduced well (Fig.\,\ref{fits_3IKbrights.fig}).
The \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and log($g$) scales obtained for the bright giants have a
scatter similar to those found for giants and supergiants and are located
between the two, as expected. Bright giants with spectral types earlier
than K4 (included) are assigned log($g$)=2 (with one exception\,:
HD\,170457, G8Ib/II), and values of log($g$) for types K5-M3 are
scattered between 1 and -0.5. No correlation is found between A$_V$ and
\mbox{$T_{\mbox{\rm \tiny eff}}$} .
\medskip
When moving from solar abundances to RSG-specific abundances,
the $\chi^2$ test indicates that the fits are degraded in a majority
of cases (typically by 0.5 $\chi^2$ units, as for the sample of
class III stars). However, a significantly improved $\chi^2$ is obtained
for 4 stars, and the $\chi^2$ changes are insignificant in 7 cases. \\
The improvements correspond to four stars of type K2 to M0
(out of the 11 bright giants available in this range),
with estimated \mbox{$T_{\mbox{\rm \tiny eff}}$}\ of 4300 to 3400\,K.
Eye inspection of the corresponding four spectra shows that the decrease
in $\chi^2$ corresponds to a better fit to the CN bands, which were not
deep enough (by small amounts) in the solar metallicity models.
In some cases, the improved $\chi^2$ was associated with a decrease in \mbox{$T_{\mbox{\rm \tiny eff}}$}\
by 100\,K or an increase of log($g$) by one bin size, but the sample is
too small to establish significant trends.\\
Degraded fits are frequently associated with excessive strengths of the
model CN bands when the RSG-specific abundances are used.
The \mbox{$T_{\mbox{\rm \tiny eff}}$} -distribution obtained with the RSG-specific abundances still
shows a zone of avoidance between 4500 and 5000\,K, but the effect is
not as obvious as in the case of class III stars. Although small number
statistics affect this result, we note that all class II spectra with
estimated \mbox{$T_{\mbox{\rm \tiny eff}}$}\ in that range have poorer fits with the RSG-specific abundances
than with solar ones. As expected, models with the adopted RSG-specific
abundances do not apply to the majority of class II stars but they
do allow us to identify candidate objects that may have suffered
more than standard dredge-up.
\subsection{Supergiant stars}
\label{supergiants.sec}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f23.ps}
\caption[]{Good and reasonable best-fits
to warm red supergiant spectra (class I).
Figure set-up is as in
Fig.\,\ref{fits_4VKgiants.fig}.
Abundances are solar.
From top to bottom: G2Ib star HD\,182296 with
[\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [5000\,K, 0, 0.95\,mag, 1.3];
G8Iab star HD\,206834 with [4900\,K, 1, 0.3\,mag, 0.85];
G5Iab star HD\,170234 with [4500\,K, 0, 2.15\,mag, 1.8];
K3Iab star HD\,187238 with [4100\,K, 0, 1.65\,mag, 2.2].
We have not counted the K3Iab case as a good fit, because the model
CO bands around 1.7\,$\mu$m are too strong. Note that the spectral types
of the second and third stars are likely to be incorrect.
}
\label{fits_goodsuper.fig}
\end{figure}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f24.ps}
\caption[]{A selection of marginally acceptable best-fits to red supergiant
spectra. Figure set-up is as in
Fig.\,\ref{fits_goodsuper.fig}.
From top to bottom: G5Iab star HD\,165782 with
[\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [4900\,K, $-1$, 2.4\,mag, 2.8];
M1Iab star HD\,98817 with [3700\,K, $-1$, 1.8\,mag, 7.8];
M2Iab star BS\,3364 (=\,HD\,72268) with [3500\,K, $-0.5$, 1.0\,mag, 12.4];
M4.5I star V774\,Sgr with [3200\,K, $-0.5$, 1.7\,mag, 22.7].
}
\label{fits_medsuper.fig}
\end{figure}
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f25.ps}
\caption[]{A selection of poor best-fits to red supergiant
spectra. Figure set-up is as in
Fig.\,\ref{fits_goodsuper.fig}.
From top to bottom: G5Ia star HD\,155603
(classified K0\,0-Ia by Keenan and McNeil, 1989)
with [\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$, $\chi^2$] = [4300\,K, $-0.5$, 1.8\,mag, 8.6];
M0Ia star Trumpler\,1-27 with [4200\,K, $-1$, 4.4\,mag, 13.4];
M3.5I star IRC\,$-$20427 with [4000\,K, $-1$, 5.35\,mag, 15.2];
M4-5Iab star CL\,Car with [3300\,K, $2$, 1.65\,mag, 21.6].
The fits being of poor quality, the derived parameters are not
reliable and are given here only for the sake of completeness.
}
\label{fits_poorsuper.fig}
\end{figure}
The data sample contains 37 spectra of stars of class I, Ia, Iab or Ib
(after removal of one particularly odd case that is probably misclassified
and of one spectrum with poor correction for telluric absorption). Spectral
types range from G2 to M5. The stars with the latest spectral types
($\geq$\,M2) are all known or suspected variables (as are the vast majority
of late type supergiants in nature). 9 spectra, all with spectral type M,
extend through the optical range; note that the optical and near-IR spectra
of individual objects
were taken within less than three weeks of each other. 8 spectra extend
down to 0.81\,$\mu$m.
Good fits to the red supergiant spectra with solar metallicity models
are obtained for 13 of the 37 spectra, all of which are of
spectral type G2-G8. 16 of the remaining spectra find a model representation
that is still reasonable (though often significantly poorer than the fits
we called satisfactory within class II above).
These are spread over the whole range of
spectral types and include some of the data that extend through optical
wavelengths. In general, stars of luminosity class Ib are easier
to fit than those of class I, Ia or Iab, and all class Ib stars of
our sample can be matched well or reasonably well.
Finally, we classify 7 of the red supergiant fits as poor. Five of these
correspond to variable stars with spectral types later than M3.5 (class
I, Ia or Iab); the other two are of spectral types M0Ia and G5Ia.
Figures \ref{fits_goodsuper.fig}, \ref{fits_medsuper.fig}
and \ref{fits_poorsuper.fig}
illustrate some of the good, intermediate and poor model
fits.
The main shortcomings of the models when the fits are
of {\em intermediate} quality are the following\,: \\
--- A relatively common feature is a shortage of flux in the models
around 1\,$\mu$m, as seen in two spectra of Fig.\,\ref{fits_medsuper.fig}.
This problem was already mentioned for a few bright giants of class II,
as a property that is associated with strong CN bands and can be identified
only when the observed spectra extend far enough to short wavelengths. \\
--- Even when the 1st overtone CO bands longward of 2.29\,$\mu$m are reproduced
reasonably well, the relative strengths of the 2nd overtone CO bands in the
H window are sometimes incorrect, the transitions at longer H window wavelengths
(1.65-1.75\,$\mu$m) being too strong in the models compared to the data
(last spectrum of
Fig.\,\ref{fits_goodsuper.fig} and 2nd and 3rd spectrum of
Fig.\,\ref{fits_medsuper.fig}). \\
--- In the coolest models, bands of TiO appear (in particular near 1.25\,$\mu$m)
that are not seen in the data.
\medskip
{\em Poor} fits are obtained for the coolest stars
(e.g. bottom spectrum of Fig.\,\ref{fits_poorsuper.fig})
and for stars with extreme CN bands
(e.g. top three spectra of Fig.\,\ref{fits_poorsuper.fig}).
We recall that the coolest stars are also variable
and that discrepancies
are to be expected in a comparison with static models.
When the CN bands are strong, the derived temperatures are a compromise
between the necessity to reproduce the energy distributions and the CO
bands at 2.29\,$\mu$m
(which pulls towards low temperatures), and the need to maximize CN depths
(which pulls towards 4100\,K, cf. Fig.\,\ref{Teff_CN.fig}). When optical
spectra are taken into account, the relative weight of the CN
bands is reduced compared to CO, optical features and the energy
distribution. Conversely, when only wavelengths between 0.97
and 2.4\,$\mu$m are available, the r\^ole of the CN bands is large.
This explains why in Fig.\,\ref{fits_poorsuper.fig} such a large
difference in \mbox{$T_{\mbox{\rm \tiny eff}}$}\ is obtained between the M3.5Ia star (no optical
spectrum, best fit \mbox{$T_{\mbox{\rm \tiny eff}}$} =\,4000\,K)
and the M4-5I star (optical spectrum available, best fit \mbox{$T_{\mbox{\rm \tiny eff}}$} =\,3300\,K).
The temperatures of the M0Ia and M3.5I stars
of that figure are most probably overestimated. For a similar reason,
the temperature of the G5Ia star at the top of the figure may be
underestimated (compare with the G5Iab star in Fig.\,\ref{fits_medsuper.fig}).
A typical problem with the best fit models for the spectra with
very strong CN is the relative strength of the various CO bands.
The G5Ia star HD\,155603 (Fig.\,\ref{fits_poorsuper.fig}) provides the
most extreme example. It has the strongest 2.29\,$\mu$m CO band of our
whole supergiant sample and among the strongest CN bands as well,
but in the H window CO is almost absent.
None of the current models with
$v_{\rm mic}$\,=\,2\,km\,s$^{-1}$
reproduces this combination. Models with larger micro-turbulent
velocities improve the representation of these extreme spectra.
Water bands are another cause of disagreement between models and data.
The near-IR bands of H$_2$O and CN overlap in wavelength
to such a degree that confusion
can occur (and has occurred in the past, cf. Wing \& Spinrad 1970). The
shapes of the 1.1\,$\mu$m bands of H$_2$O and CN are subtly different
(cf. Fig. 5 of Lan\c{c}on \& Wood 2000). The bands observed in red
supergiants correspond closely to CN, although contamination with
H$_2$O is possible. The H$_2$O band around 1.9\,$\mu$m,
which is very deep and broad in Miras, is inconspicuous in red supergiants.
It may be present at a low level in the coolest ones observed,
such as CL\,Car (Fig.\,\ref{fits_poorsuper.fig}), which are semi-regular
long period variables.
The clearest H$_2$O signature
in the observed red supergiant spectra is a sharp bandhead at 1.32\,$\mu$m,
although the detection of this feature requires good corrections for
telluric absorption bands. Based on this signature, the 7
coolest supergiants of our sample contain H$_2$O (all these are variable).
The models however either do not
show this bandhead (low $g$) or, when they do (high $g$),
also display a 1.9\,$\mu$m band
that is much wider and deeper than observed.
Finally, the semi-regular variables V774\,Sgr, EV Car and CL Car
(Figs.\,\ref{fits_medsuper.fig} and \ref{fits_poorsuper.fig})
have a clear VO absorption band at 1.05\,$\mu$m and small or nonexistent
absorption bands at 1\,$\mu$m
and 1.25\,$\mu$m,
two properties that are not matched by the models.
\medskip
\begin{figure}
\includegraphics[clip=,width=0.48\textwidth]{5824f26.ps}
\caption[]{{\em Top:}\ K4Ib star HD 185622a and best fit model with
RSG-specific abundances. {\em Middle:}\ Residuals of the fit shown above
(data$-$model). {\em Bottom:}\ Residuals of the best fit with solar
metallicity models ($\chi^2$ 1.22 times larger than in the RSG-specific case).
Note the CN residuals below 1\,$\mu$m, around 1.1\,$\mu$m and in the
slope between 1.45 and 1.75\,$\mu$m. These are typical and systematic
shortcomings of the solar metallicity models in the cases where
RSG-specific models provide a better fit.}
\label{residuals.fig}
\end{figure}
When moving from the models with solar abundances to the RSG-specific
abundances, the $\chi^2$ test indicates that about a third of the fits
are improved, another third are degraded, and the quality of the final
third of the fits is essentially unchanged.
The deteriorations, when present, are not severe. In most cases, it seems that
abundance values intermediate between the adopted solar and RSG-specific
sets would provide optimal fits, which is not surprising considering
that evolutionary tracks for red supergiants cover a range of abundances.
Eye inspection shows that quite a few stars with equally
good fits with both model sets also fall in this category.
The improvements obtained with RSG-specific abundances for a fraction
of the red supergiants are significant,
although they clearly do not resolve all the difficulties.
They are associated with a better representation
of the observed CN bands and sometimes also with a better match to the CO bands
around 1.6\,$\mu$m (see also Sect.\,\ref{weights.sec}).
One may distinguish two subcategories
of improvements. On one hand, some of the stars that already
had reasonable model counterparts with solar abundances
have better, often good, fits with RSG-specific abundances.
These are mainly stars of type G and K. An example is given in
Fig.\,\ref{residuals.fig}.
On the other hand, the improvements concern stars that had poor fits
with solar abundances, and for which the RSG-specific abundances
lead to somewhat better but still unacceptable fits. These are the
same 7 stars as mentioned earlier. The models cannot simultaneously reproduce
their CO bands (1.6 and 2.29\,$\mu$m), their CN bands and
their energy distribution. More extended model grids are
needed to characterize these objects. Problems related to H$_2$O, TiO and VO,
when present, remain essentially unchanged.
The explored changes in surface abundances induce changes
in the best-fit parameters for the sample of observed stars with
maximal amplitudes of $\pm$200\,K. For the sample as a whole,
there is no strong correlation between the change in \mbox{$T_{\mbox{\rm \tiny eff}}$}\
and the actual value of \mbox{$T_{\mbox{\rm \tiny eff}}$},
which is not surprising considering that many fits are imperfect
and that the behaviour expected from theory is complex
(see Sect.\,\ref{Teffscales_models.sec}).
The \mbox{$T_{\mbox{\rm \tiny eff}}$}\ distribution of the red supergiant sample
obtained under the assumption of RSG-specific abundances
shows no anomaly. Scrutiny of the 2D distribution of estimated
parameters in the log($g$)--\mbox{$T_{\mbox{\rm \tiny eff}}$}\ plane suggests that a narrow zone
extending diagonally from [log($g$)=0,\mbox{$T_{\mbox{\rm \tiny eff}}$} =4000\,K] to
[log($g$)=1,\mbox{$T_{\mbox{\rm \tiny eff}}$} =5000\,K] (with no extension to lower gravities)
might nevertheless be underpopulated.
The statistical significance of this gap is low because of small
sample numbers. Its presence would favour a general
picture in which RSG-specific abundances are only relevant to red
supergiants with large initial masses or to late stages
of red supergiant evolution.
\subsection{Effects of the weighting of various parts of the spectra}
\label{weights.sec}
\begin{figure}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f27.ps}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f28.ps}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f29.ps}
\caption[]{{\em Top:}\ Strength of the 2.3\,$\mu$m CO band of the
best fitting solar metallicity
models versus strength of this band in the dereddened observed
spectra (cf. Tab.\ref{indexdef.tab}).
The dotted line highlights the one-to-one relation.
{\em Middle:}\ Same figure for the adopted measure of the 1.6\,$\mu$m CO band.
{\em Bottom:}\ Same figure for the adopted measure of the 1.1\,$\mu$m CN band.
}
\label{fitquality.fig}
\end{figure}
Because there are {\em systematic} differences between the best fit models
and the observed spectra, the best fit model parameters depend on
the weights given to the various spectral features in the fitting
procedure. Our standard method weights the data based on a
reasonable simplified model for the high frequency noise in the data.
This adopted weight is inversely proportional to the square root of
the signal, i.e. spectral regions with large fluxes contribute more to
the $\chi^2$ than regions with small fluxes. Since the spectra of cool
stars peak around 1\,$\mu$m (in the flux density units adopted in this paper),
molecular bands near this wavelength are important in the fit.
In practice, this weighting makes CN bands relatively important and
CO bands (around 1.6\,$\mu$m and 2.3\,$\mu$m) comparatively unimportant.
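In the notation of the noise model of Sect.\,\ref{method.sec}, this weighting
corresponds to $\sigma_i \propto \sqrt{F_i^{\rm obs}}$, so that each pixel
contributes
\[
\chi^2_i \;\propto\;
\frac{\left(F_i^{\rm obs}-F_i^{\rm mod}\right)^2}{F_i^{\rm obs}}\, ,
\]
which increases with flux as long as the residuals scale roughly with the
local flux level; a constant signal-to-noise weighting
($\sigma_i \propto F_i^{\rm obs}$) would instead spread the weight more
evenly across the spectrum.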
If the noise was indeed gaussian, uncorrelated between pixels, and
exactly of the amplitude assumed, then our procedure would select the
models with the largest likelihoods. This is not the case (flux calibration
errors, wavelength-dependent gains and contributions of the read-out noise,
etc.), and therefore our weighting is in some ways non-optimal. We may
choose various alternative methods (see Decin et al. 2004 for an
interesting discussion of comparable issues). First, we may decide to fit
measurements of the depths of one or several features rather than spectral
segments. Unfortunately, the selection of one or the other feature remains
somewhat arbitrary. Second, we may decide to focus on either the optical or the
near-IR spectral range. This circumvents the difficulty
of reproducing the global energy distribution (possible uncertainties
in the relative flux calibration of the optical and near-IR data, uncertainties
in the adopted extinction law, etc.). Third, we may keep the whole spectrum
but explore the effects of alternative weightings. We briefly
summarize below the main trends found while investigating these three options.
\medskip
\begin{figure}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f30.ps}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f31.ps}
\caption[]{Same as the top and bottom panels of Fig.\,\ref{fitquality.fig}
but for supergiants only, and using models with RSG-specific abundances.}
\label{fitquality_set6.fig}
\end{figure}
Figures \ref{fitquality.fig} and \ref{fitquality_set6.fig}
show how three important near-IR molecular features in the models compare
with their observed counterparts, when our standard
weighting procedure is applied to select the best fit.
Note that the corresponding figures for H-K, J-H, J-K or 104-220 (not shown)
are very well behaved, which only shows that the adopted extinction law
is capable of dealing with the actual extinction (and with
flux calibration errors) rather well.
As expected, systematic drifts away from the perfect match are smaller
for the 1.6\,$\mu$m CO features than for the 2.3\,$\mu$m CO band, the
latter being located in a region of low flux (small contribution to
the $\chi^2$). The best-fit models have systematically deeper 2.3\,$\mu$m CO
bands than the data for warm stars (types G and K), but systematically
too shallow 2.3\,$\mu$m CO bands for the coolest stars (type M).
By changing the weights in the fitting procedure (e.g. by assuming
a constant signal-to-noise ratio), the 2.3\,$\mu$m bands can be reproduced
better, but at the cost of a loss of the fit quality at shorter
near-IR wavelengths.
The CN bands are reproduced well for giant stars. But they are too
shallow in the best fit models for some of the bright giants
and for the supergiants.
Here, changing the fitting weights has a small effect compared to
more fundamental model parameters such as abundances, gravities or
micro-turbulence. RSG-specific abundances
move the bulk of the red supergiants into a satisfactory location
(Fig.\,\ref{fitquality_set6.fig}). With RSG-specific abundances,
the fits to CO bands around 1.6\,$\mu$m are not fundamentally improved or
degraded on average, while the first overtone CO bands (2.3\,$\mu$m)
of the best fits become shallower, i.e. too shallow.
By assigning CO more weight in the fits, it is
possible to reduce this discrepancy while still observing the global
improvement for CN. But with the current grid of models, no
fully satisfactory solution can be found for any weighting scheme.
\medskip
The weights given to various spectral ranges affect the estimated
stellar parameters. Examples have already been given in
Sect.\,\ref{supergiants.sec}, and further discussions can be found below.
\section{Discussion}
\label{discussion.sec}
Providing estimates of fundamental stellar parameters is a major
application of theoretical spectra. Our discussion focuses
on the determination of \mbox{$T_{\mbox{\rm \tiny eff}}$}\ from near-IR spectra using the
new \mbox{\tt PHOENIX}\ models.
\subsection{Stellar mass}
\label{discussion.mass.sec}
We have mentioned in Sect.\,\ref{trends.sec} that the effects of mass
on colours and molecular indices, at a given \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and log($g$), are small.
The comparison between the
best fit parameters obtained assuming 1\,M$_{\odot}$ and 15\,M$_{\odot}$
nevertheless reveals a trend worth highlighting : {\em for stars with
high surface gravities (log($g$)=2), the temperatures obtained
with 1\,M$_{\odot}$ models are systematically lower by $\sim$100\,K
than those obtained with 15\,M$_{\odot}$ models.} This is particularly
relevant to giants of class III, but also to the warmer of the
bright giants of class II.
Unfortunately, we found no systematic differences between the
$\chi^2$ values obtained with one or the other assumption on mass.
Thus, {\em it is not currently possible to determine mass using spectral fits}
such as those performed in this paper.
Mass has to be fixed a priori by other means.
For luminous giants and supergiants, i.e. stars with low gravities,
we found no systematic effect of mass on best-fit \mbox{$T_{\mbox{\rm \tiny eff}}$}\ or log($g$).
The differences in \mbox{$T_{\mbox{\rm \tiny eff}}$}\ between the two assumptions are scattered
around 0 with typical values of $\pm$100\,K (more in cases where
even the best fits are not satisfactory). We note a correlation between
the difference in \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and the difference in log($g$)\,: when a change
in the assumed mass leads to a rise in the best-fit \mbox{$T_{\mbox{\rm \tiny eff}}$}, it generally
also produces a rise of the best-fit value of log($g$).
\subsection{\mbox{$T_{\mbox{\rm \tiny eff}}$} -systematics related to surface abundances: model
predictions}
\label{Teffscales_models.sec}
\begin{figure}
\includegraphics[clip=,width=0.49\textwidth]{5824f32.ps}
\includegraphics[clip=,width=0.49\textwidth]{5824f33.ps}
\caption[]{Effects of the surface abundances on estimates of \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and
log($g$). The input \mbox{$T_{\mbox{\rm \tiny eff}}$}\ and log($g$)
refer to solar metallicity models. The output parameters refer to the values obtained
when fitting the solar metallicity spectra with models with RSG-specific
abundances, using the procedure described in Sect.\,\ref{method.sec}.
{\em Solid:}\ Input log($g$)=$-1$. {\em Dashed:}\ Input log($g$)=1.}
\label{modelsystematics.fig}
\end{figure}
The effects of surface abundance ratios on \mbox{$T_{\mbox{\rm \tiny eff}}$}\ estimates
(and on derived gravities) are of larger amplitude
than those of mass, and we therefore describe them in more detail.
They can be studied by
fitting a sample of solar metallicity models
with models with RSG-specific abundances, using the procedure described
for fits to observational data.
{\em The results depend on the wavelength range adopted in the analysis.}\
They are illustrated in Fig.\,\ref{modelsystematics.fig}.
The amplitude of the effect is of several hundred Kelvin. If we
call $\delta \mbox{$T_{\mbox{\rm \tiny eff}}$}$ the difference between the input and output temperatures
(output minus input), we find no simple linear correlation between
$\delta \mbox{$T_{\mbox{\rm \tiny eff}}$}$ and \mbox{$T_{\mbox{\rm \tiny eff}}$}.
The figure based on near-IR data (0.97--2.4\,$\mu$m, with the window
and weight functions
of Sect.\,\ref{method.sec}) is tightly related to the behaviour of the
near-IR CN bands. Output \mbox{$T_{\mbox{\rm \tiny eff}}$}\ values around 4400--4900\,K (depending
on gravity) are avoided because
the CN bands of those RSG-specific models are too strong.
A similar effect is present when optical wavelengths are used
(0.51-0.97\,$\mu$m), but it is
combined in the low-\mbox{$T_{\mbox{\rm \tiny eff}}$}\ range with a variety of effects due to oxides.
The difference between the optical and near-IR temperatures is
largest between 3500 and 4200\,K, where the fluxes below 0.75\,$\mu$m
transit rapidly from being almost nil to being large.
Eye inspection of the fits shows that the best
fit models sometimes deviate wildly from the ``data'' in the range {\em not}
included in the fitting procedure, while over the range actually used the fits
are dangerously good. When optical and near-IR spectra are
used jointly, compensations occur and the correct Teff is recovered (to
within $\pm 100$\,K) below 4200\,K. Positive offsets of up to 400\,K however
persist above this temperature for all gravities.
The output log($g$) equals the input log($g$) below about 4300\,K
when using near-IR data, and
is higher by one log($g$)-sampling step at higher temperatures. When
using optical data, the behaviour depends more strongly on the actual
value of the input log($g$). For high gravities, $\delta$log($g$) is positive
(one sampling step) at $\mbox{$T_{\mbox{\rm \tiny eff}}$} > 3600\,K$. For low gravities,
$\delta$log($g$) is zero at the lowest and highest temperatures, but
peaks at a value of $+2$ around 4200\,K.
We note that corresponding plots can be produced for the
``extinction''-correction A$_V$ (which accounts for colour changes in the
analysed wavelength ranges reasonably well). The qualitative
aspects of the graphs for $\delta$A$_V$ are similar to those of $\delta \mbox{$T_{\mbox{\rm \tiny eff}}$}$,
with a maximal amplitude of $\pm$0.6 magnitudes.
\medskip
For comparison, we have performed a limited exploration
of the effects of metallicity (with solar scaled abundance ratios) on the
derived temperatures. Models at log$(Z/Z_{\odot})=-0.3$ were
computed for log($g$)=1 and $-0.5$, and best fits to these were
obtained using solar metallicity models. Plots
similar to those in Fig.\,\ref{modelsystematics.fig} were constructed.
The effects of the change in $Z$ on the derived \mbox{$T_{\mbox{\rm \tiny eff}}$}\ are notably {\em smaller}
than those just described for modified abundance ratios.
When using optical wavelengths,
the trend expected from the well known metallicity-temperature degeneracy is
found (lower temperatures are required at lower metallicity to produce similar
optical band depths). The offset varies between 100\,K (low \mbox{$T_{\mbox{\rm \tiny eff}}$} ) and 200\,K
(high \mbox{$T_{\mbox{\rm \tiny eff}}$} ). At near-IR
wavelengths, the correct temperatures are recovered unchanged except for a
few deviations of $\pm 100$\,K. In both wavelength ranges, however,
gravities higher than input are derived (by one gravity bin).
For complementary discussions on metallicity effects, we refer to
Ku\v{c}inskas et al. (2006) and Levesque et al. (2006).
\subsection{\mbox{$T_{\mbox{\rm \tiny eff}}$}\ estimates for real stars}
\label{Teffscales_data.sec}
\begin{figure}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f34.ps}
\caption[]{Effective temperatures derived from fits to
near-IR spectra ($\lambda>0.97\,\mu$m),
compared with trends in the literature. RSG-specific abundances are
used for class I stars, solar abundances for classes II and III. Solid
lines: temperature scale for giants, from van Belle et al. (1999)
for \mbox{$T_{\mbox{\rm \tiny eff}}$}$<$5000\,K and from Schmidt-Kaler (1982) for \mbox{$T_{\mbox{\rm \tiny eff}}$}$>$5000\,K.
Dotted line: temperature scale for supergiant stars from
Levesque et al. (2005). Default spectral types from the SIMBAD
database (operated at CDS, Strasbourg, France) are used for this figure.}
\label{Teffscales_IR.fig}
\end{figure}
\begin{figure}
\includegraphics[clip=,angle=270,width=0.49\textwidth]{5824f35.ps}
\caption[]{Same as Fig.\,\ref{Teffscales_IR.fig}, but
using only spectral types from Keenan \& McNeil (1989)
or Buscombe (1998).
Using solar abundances moves the G6 supergiant in this figure down 200\,K,
the K0 supergiant up 100\,K, and the M supergiants above 3600\,K up by
100 to 200\,K.}
\label{Teffscales_goodspt.fig}
\end{figure}
Figure\,\ref{Teffscales_IR.fig} compares the effective temperatures derived
in this paper from near-IR spectra with temperature scales from the literature.
For giants, the plotted reference scale
(below 5000\,K) is based on angular diameter measurements (van
Belle et al. 1999).
The number of red supergiants with angular diameter measurements is
small. For supergiants, we therefore show the scale recently obtained
from fits of \mbox{\tt MARCS}\ model atmosphere spectra to optical spectra by
Levesque et al. (2005). The agreement is good, but the scatter is large.
As a sanity check, we may restrict our data sample to
stars with {\em optical} spectra and discard near-IR wavelengths
($\lambda > 1\,\mu$m) in the fitting procedure.
This provides temperatures that are {\em a priori} more
directly related to spectral types.
In addition, we keep only stars with
MK spectral types from Keenan \& McNeil (1989) or Buscombe (1998),
and with small variability (according to the Simbad database
information). {\em Using solar abundance models}
for direct comparison with the
results of Levesque et al. (2005), we find that 8 of the 9 stars in
the subsample have estimated temperatures within less than 50\,K of
the reference relations. Most of the stars in the subsample are supergiants
and all are of type K5 or later. Thus, in this range of parameters,
there is no indication of a systematic
difference between the temperatures derived from optical spectra
using the new \mbox{\tt PHOENIX}\ models or the \mbox{\tt MARCS}\ models of
Levesque et al. (2005).
To illustrate what fraction of the scatter in Fig.\,\ref{Teffscales_IR.fig}
may be due to spectral classification errors,
Fig.\,\ref{Teffscales_goodspt.fig} reproduces the graph using only
MK spectral types from Keenan \& McNeil (1989) or Buscombe (1998),
when available.
A considerable scatter remains. Some of it is due to the real scatter
in the properties of the stars (surface abundances, gravity, unknown
variability).
For supergiants in particular, and especially at low temperatures,
the scatter also reflects the large
intrinsic uncertainties associated with the relatively poor quality of
the model fits. We expect the spread to shrink once
models with a wider range of parameters (surface abundances, micro-turbulent
velocities) have been computed.
We have also examined the diagrams of estimated \mbox{$T_{\mbox{\rm \tiny eff}}$}\ vs. spectral type
obtained when any available optical data is included in the fitting
procedure. They are similar to those described above. Individual
stars are moved by up to 200\,K, but no systematic trend can be clearly
identified (because the stars that move most are also those for which the
fits are poorest). Despite the added difficulty of fitting a broader
wavelength range, the final dispersion is not significantly enhanced.
\section{Conclusions}
We have presented two grids of \mbox{\tt PHOENIX}\ models for the spectra of
luminous cool stars, one at solar metallicity, the other with RSG-specific
surface abundances. We have described the properties of these models
and compared them with observations, with a focus on the molecular
features found in near-IR spectra at resolution
$\lambda/\Delta \lambda \simeq 1000$. At these
wavelengths, red giants and supergiants dominate the integrated light of
stellar populations.
Our main conclusions are the following.
\begin{itemize}
\item Models must be computed with a wavelength sampling step of about
0.1\,\AA\ in order to reproduce the low resolution near-IR spectra
adequately.
\item The solar metallicity models provide a very good representation of
empirical spectra of giants of class III and of a large fraction of the
luminous giants of class II. As expected, RSG-specific abundances are found
inadequate for the modelling of the bulk of the giant stars (they are
rejected because they provide poorer fits and lead to a
zone of avoidance in the derived \mbox{$T_{\mbox{\rm \tiny eff}}$} -distribution).
RSG-specific abundances are favoured for
some class II giants, which may have suffered mixing in excess of standard
first dredge-up.
\item Red supergiant spectra of spectral types G and K, and of
luminosity class Ib (sometimes also Iab) can be reproduced reasonably well.
Serious disagreements remain in the case of very luminous (Ia and some Iab)
and of very cool supergiants (type M).
RSG-specific abundances tend to improve the fits to strong
CN bands, although the global effect on the fit quality is not as
spectacular as one might have hoped. However,
changing the surface abundance ratios has a significant
impact on the derived effective temperatures (the
effect is larger than that found when moving from $0.5\,Z_{\odot}$ to
$Z_{\odot}$). Therefore,
it will remain necessary to account for this effect of stellar evolution
in future model grids.
\item While it is relatively easy
to produce good fits to either the J,
{\em or} the H, {\em or} the K band spectra of luminous cool stars,
it remains more difficult to reproduce all their optical and near-IR
molecular bands simultaneously. As a result, estimated stellar parameters
(\mbox{$T_{\mbox{\rm \tiny eff}}$}, log($g$), A$_V$) depend on the spectral range of the analysis.
The effects of changes in the surface abundances on these parameters
also depend on the wavelengths under study.
\item The \mbox{$T_{\mbox{\rm \tiny eff}}$}\ scales derived from the comparison of a collection
of near-IR stellar spectra ($1-2.4\,\mu$m)
with models are generally consistent with previous scales,
albeit with considerable scatter. For cool red supergiants, the
current uncertainties on individual estimated \mbox{$T_{\mbox{\rm \tiny eff}}$}\ values frequently
exceed $\pm 100$\,K.
\end{itemize}
About 20\,\% of the analysed red supergiant spectra have such strong CN
bands that they call for models with high micro-turbulent velocities,
and/or even more surface nitrogen than we
have considered, and/or for gravities lower than log($g$)=$-1$.
The coolest of these are variable, and variability may contribute
to the building of an extended atmosphere with low effective gravities.
Large micro-turbulent velocities have been derived for a
number of red supergiants in the past, and our first calculations confirm
that increasing this parameter will help to reproduce the spectra
of type Ia supergiants. In particular, a better agreement with observations
is expected for the ratio between the first and second overtone
CO band strengths. A grid of models is currently being
calculated. Somewhat higher nitrogen abundances than we have explored
are expected to exist in nature, for instance when stellar
rotation increases internal mixing. Because
low resolution near-IR spectra of red supergiants are
relatively easy to acquire, their comparison with models at the
specific abundances predicted by stellar tracks with rotation will
provide interesting tests of stellar evolution theory.
Considering stars
with lower gravities is a more challenging modelling task,
as they will develop strong winds. In addition, the winds may
be loaded with dust. Since winds are
a well known empirical property of many red supergiants, the development
of models that include winds is a necessity of the future.
\begin{acknowledgements}
PHH was supported in part by the P\^ole Scientifique de Mod\'elisation
Num\'erique at ENS-Lyon and by Universit\'e Louis Pasteur at Strasbourg
Observatory. Some of the calculations presented here were
performed
at the H\"ochstleistungs Rechenzentrum Nord (HLRN), and at the
National Energy
Research Supercomputer Center (NERSC), supported by the U.S. DOE, and
at the
computer clusters of the Hamburger Sternwarte, supported by the DFG
and the
State of Hamburg. We thank all these institutions for a generous
allocation of computer time.
We thank C. Charbonnel for insightful discussions of
aspects of this work.
This research has made use of the SIMBAD database and the VIZIER service,
operated at CDS, Strasbourg, France. It uses data (in preparation
for publication) acquired using the NASA Infrared Telescope
Facility, Hawaii, USA, and the 2.3m Telescope of the Australian National
University, Siding Spring, Australia.
\end{acknowledgements}
\section{Introduction}
Since its beginning, the COVID-19 pandemic has been accompanied by an `infodemic', in the course of which vast amounts of misinformation, hate speech, rumors and conspiracy theories are being spread, in particular through social and online media \cite[19]{depoux_pandemic_2020,the_lancet_infectious_diseases_covid-19_2020}. Due to their oftentimes racist or antisemitic content, many of the conspiracy theories contribute to an increase in discrimination and even violence against the targeted groups
\citep{bundesverband_rias_ev_antisemitismus_2020,gover_anti-asian_2020,meisner_sundenbocke_2021}.
The unprecedented role of digital technologies and social media in the spread of conspiracy theories and hate speech presents a key difference from previous pandemics: respective narratives are being shared on video platforms, social networks, and messenger services, and with them racism, antisemitism, and calls for violence, which sometimes translate into violent attacks in the real world.\footnote{The assaults in Christchurch, Halle, or Hanau were impelled by racist and antisemitic conspiracy narratives disseminated via different online platforms, and the killers used online platforms to stage their killings in live streams \citep{musyla_christchurch_halle_hanau_2020}.}
The sheer volume and the rapidly evolving online dissemination of antisemitism and conspiracy theory content make data-driven algorithmic approaches indispensable \citep{marcellino_detecting_2021}.
Researchers, anti-discrimination or fact checking organizations require technical support in identifying corresponding comments or articles on a large scale. Current publicly available services for automated detection of related phenomena such as toxic language, however, do not adequately cover antisemitism, in particular when it is communicated using codes and metaphors \citep{steffen_toxicity_2022}.
In order to improve underlying machine learning models, comprehensive labeled data from online and social media are required.
Existing datasets related to conspiracy theories or antisemitism are typically generated by filtering texts using explicit keywords such as `5G', `Bill Gates' or `jew*'. However, such approaches introduce a keyword bias to the generated corpora that makes it difficult to detect lesser-known or new conspiracy narratives or to identify intentionally obfuscated or coded terms, the latter being increasingly used e.g. to evade regulation by platform operators. Especially antisemitic content is often conveyed in an encoded way, using metaphors and codes that work without explicit reference to Jews or Israel \citep{zannettou_quantitative_2020,becker_decoding_2021-1}. In addition, newer forms of antisemitism are on the rise \citep{schmalenberger_tertiary_2022,schneider_querdenken-demo_2020} which are not sufficiently covered by standard working definitions, and difficult to discover through commonly used keywords.
This can also be observed in the context of the COVID-19 pandemic, when some anti-restriction protesters compare themselves with victims of the Shoah or equate the mandatory use of face masks with the obligation for Jewish citizens to wear the `Yellow Star' in Nazi-Germany \citep{schneider_querdenken-demo_2020}.
In this paper, we draw on extensive research to develop an annotation guide for antisemitism and conspiracy theories in online content in the context of the COVID-19 pandemic. Regarding antisemitism, we focus on encoded forms of antisemitism and post-Holocaust antisemitism. We develop our annotation scheme as an interdisciplinary team to ensure a comprehensive conceptual approach. We provide real-world examples with our working definition to allow for its further development and its adaptation to other cultural, historical, and linguistic contexts and additional data sources. Furthermore, we use our working definition to annotate a German-language dataset \textit{TelCovACT} consisting of 3,663 Telegram messages posted between March 11, 2020 and December 19, 2021 and thus promote research in a less studied language. We chose Telegram because of its popularity among opponents of the government's measures to combat the coronavirus and the frequent spread of conspiracy theories and antisemitic statements \citep{european_commission_directorate_general_for_justice_and_consumers_rise_2021,winter_uberdosis_2021}. The dataset is made available to foster further research, especially on automated detection of antisemitic and conspiracy-theory
content.
\section{Related Work}
Our literature review focuses on studies in which large amounts of data have been collected and analyzed, typically in conjunction with annotation efforts, in order to provide an overview of existing datasets and associated definitions and categorization schemes.
\paragraph{Conspiracy theories in social media}
Some recent works provide openly accessible annotated datasets and use them, often as part of challenges, to develop models for automated classification \citep{alam_fighting_2021,golbeck_fake_2018}. In this context, conspiracy theories tend to be considered rather a subcategory and are often used synonymously with rumors or misinformation \citep{serrano_nlp-based_2020}, so that existing datasets and annotation schemes are not based on a common theoretical foundation. In a systematic literature review of recent research on conspiracy theories, only around a third of considered works provided a definition of the term, thus analyzing “conspiracy theories online without explicitly defining the main object of their research” \citep{mahl_conspiracy_2022}.
A frequently applied approach is to address specific known conspiracy theories and gather data by searching for selected keywords
\citep{marcellino_detecting_2021,memon_characterizing_2020,moffitt_hunting_2021,serrano_nlp-based_2020} without discussing the labeling process in detail \citep{gerts_thought_2021,pogorelov_fakenews_2020} or by resorting to examples in order to provide a definition \citep{moffitt_hunting_2021}. In most cases, a few thousand records are labeled manually; \citet{marcellino_detecting_2021}, however, use a much larger basis of 150,000 texts by refining the combination of search terms that refer to four well-known conspiracy theories but the process is not fully clear.
Classification models trained on such keyword-based datasets yield moderate to high accuracy and typically employ language models such as BERT. Some of these works make their datasets and codebooks openly available.
In addition to the rather pragmatic approaches, some works provide a solid theoretical foundation of the subject, discussing the relations among concepts such as conspiracy theories, rumors or misinformation \citep{kou_conspiracy_2017,samory_government_2018,wood_propagating_2018}. The different approaches to defining conspiracy theories turn out to share many common conceptual elements, in particular the assumption of a ``secret plot between powerful people or organizations'' \citep{mahl_conspiracy_2022} that work deliberately for their own sake and against the common good \citep{uscinski_why_2020}. Based on an extensive literature review of definitions and categorizations of conspiracy theories, \citet{samory_government_2018} deduce that the majority of relevant research ``relies on agents, actions, and goals as key elements in defining conspiracy theories or conspiracy beliefs,'' making paradigmatic examples dispensable. Along these lines, \citet{kou_conspiracy_2017} provide an operational definition of a conspiracy theory about public health crises containing the following three criteria: 1) the theory includes an explanation of the causality behind an event, 2) the explanation refers to primary actors (individuals or organizations, `the Other') whose actions are being kept secret from the public, 3) the actions have a malicious purpose, harming the greater good in favor of the actor’s own agenda. We incorporate these findings into our definition of conspiracy theories.
\paragraph{Conspiracy theories and antisemitism}
We could not identify any publicly available datasets connecting conspiracy theories and antisemitism, and it seems that antisemitism has only recently attracted the attention of research on conspiracy theories in social media. This is supported by \citet{mahl_conspiracy_2022} who found that only 2.1\% of the recent empirical studies addressing single conspiracy narratives focus on antisemitic narratives.
Yet, at the same time, antisemitic stereotypes play an important role in current conspiracy theory discourses surrounding the COVID-19 pandemic. An alarming prevalence of antisemitism, both among religious conspiracies showing an age-old religious superstition and within deep state conspiracies, has been found on Twitter, together with a prevailing Nazi-Germany rhetoric in numerous German tweets debating coronavirus health measures \citep{media_diversity_institute_antisemitism_2021}. In an analysis of different social media platforms, \citet{cohen_antisemitism_2021} find that Jews are the second most targeted group in toxic posts.
The pandemic has led to new antisemitic conspiracy theories \citep{cohen_antisemitism_2021}, while recycling old stereotypes. The narrative of ``Jews ruling international financial, political and media institutions'' is identified as most dominant antisemitic conspiracy theory element across different European countries and social media platforms \cite[9]{european_commission_directorate_general_for_justice_and_consumers_rise_2021}. Similar findings are supported by an analysis of the YouTube presence of three leading British conspiracy theory spreaders with direct connections to the far right: the ``West as a whole is portrayed as dominated by a ruthless and bloodthirsty elite, whose members are often referred to using racially charged terms such as `Zionists', `Rothschilds', or `Rothschild Zionists''' \cite[97]{allington_antisemitic_2021}.
A recent comprehensive report examined the links between COVID-19 anti-vaccination conspiracy theories and antisemitism in Twitter and Facebook in seven European countries between March and August 2021 \citep{media_diversity_institute_antisemitism_2021}. Their key findings include that (1) anti-vaxxers typically perceive themselves as victims and resort to Holocaust comparisons and the self-labeling as `the new Jews'; (2) references to and variations of established antisemitic conspiracy theories such as `The Great Reset' and `New World Order' play a significant role; and (3) antisemitic codes such as `globalists' are frequently used throughout Europe. This showcases why an awareness of antisemitic codes, the structure of antisemitic argumentations, and specific forms such as post-Holocaust antisemitism is relevant for a classification of circulating antisemitic conspiracy content.
\paragraph{Antisemitic online content}
Almost all scientific studies known to us that tackle large-scale annotation of texts with respect to antisemitism utilize the working definition by the International Holocaust Remembrance Alliance (IHRA) as the main basis for their coding schemes \citep{becker_decoding_2021,chandra_subverting_2021,guhl_online_2020,jikeli_annotating_2019,jikeli_toward_2022,schwarz-friesel_judenhass_2019}.\footnote{\citet{chandra_subverting_2021} combine the IHRA definition with Brustein’s categorization of antisemitism into political, economic, religious, and racial antisemitism.} As shown by \citet{jikeli_annotating_2019}, using an English-language corpus containing the word (stems) `Jew*'
or `Israel', the IHRA definition is well suited in such a setting to generate a gold standard corpus for antisemitic content. However, many parts of the IHRA definition need further elaboration and refinement in order to serve as an annotation basis for automatic detection systems, as argued by \citet{jikeli_toward_2022} who extend their previous work to build an annotation scheme with many examples based on a close reading of the definition, with a clarification of ``grey zones'' and extensive literature on antisemitic stereotypes. Similarly, the project `Decoding Antisemitism' states to use the IHRA definition as a conceptual framework but extend it with further categories \citep{becker_decoding_2021-1} in order to annotate comments of major media outlets.
We found only few studies that address algorithmic detection of antisemitic online content. We identified the paper by Warner and Hirschberg (\citeyear{warner-hirschberg-2012-detecting}) as the earliest work in this regard, who used a so-called template-based strategy to extract features from text and then trained an SVM, obtaining an F1 score of $\sim\! 0.6$ on a custom dataset consisting of 9,000 paragraphs. \citet{guhl_online_2020} used a commercial software to train a classifier for online content from the imageboard 4chan. The model achieved an F1-score of $\sim \! 0.76$ on a small and rather specific dataset. \citet{ozalp_antisemitism_2020} trained a supervised machine learning model using 853 manually annotated tweets to detect online “antagonistic content related to Jewish identity” in tweets containing certain keywords by UK-based users. \citet{de_smedt_online_2021} created a machine learning based system for scoring English and German language texts regarding the level of antisemitic toxicity. The model is based on a self-developed lexicon consisting of over 2,000 relevant words and phrases containing Nazi-Germany rhetoric, dehumanizing adjectives and verbs inciting to violence, far-right terminology, alt-right neologisms, coded language, and revived conspiracy theories. In \citet{chandra_subverting_2021}, a multimodal deep learning classification model is trained on text and images, with an F1-score of 0.71 for Twitter and even 0.9 on Gab.
Large-scale analyses of antisemitism based on models guided by annotated datasets, including all previously mentioned, use corpora created with keyword filters. These are often related to Jewishness or Judaism (e.g. `jew*', `hebrew') \citep{chandra_subverting_2021,jikeli_annotating_2019,jikeli_toward_2022,ozalp_antisemitism_2020}, or the state Israel, or reflect known and sometimes platform-specific antisemitic slurs (e.g. `kike', `ZioNazi', `(((jew))))' \citep{zannettou_quantitative_2020}, or are associated with antisemitic stereotypes and narratives (e.g. `happy merchant', `6 million') \citep{guhl_online_2020}. The authors partly reflect the limitation induced by restricting to such keywords. \citet{ozalp_antisemitism_2020}, for instance, underline that ``much antisemitic hate speech comes in the form of conspiracy theories (or allusions to such theories) and image-based hate speech—such as memes—that would not be captured by these keywords''. \citet{jikeli_toward_2022} justify such a restriction with the otherwise low percentage of positive examples in the annotated dataset that is not affordable under highly limited time budgets. It is, however, noteworthy that despite such a restriction on the content of the respective corpora, the annotation process is described by some as difficult \citep{jikeli_toward_2022,ozalp_antisemitism_2020}
The studies presented so far are limited to English-language content. German-language texts are covered by Monika-Schwarz Friesel's large-scale empirical study on antisemitic online content based on a large variety of text corpora and the project Decoding Antisemitism that analyzes comments to articles in mainstream media outlets in English, German and French \citep{becker_decoding_2021-1}.
Except for \citet{chandra_subverting_2021,jikeli_toward_2022}, the manually annotated datasets are not announced as publicly available (on request).
\section{Elaboration of adequate working definitions}
Antisemitism and conspiracy theories are inherently complex phenomena that can be difficult to annotate, especially when expressed as short texts in messenger services or social media platforms \citep{ozalp_antisemitism_2020}. Thus, careful elaboration of underlying theories is necessary for reliable annotation of datasets \citep{ross_measuring_2016}.
\subsection{Antisemitism}
As our basis for a working definition of antisemitism, we turn to the working definition of the International Holocaust Remembrance Alliance (IHRA):
\begin{quote}
Antisemitism is a certain perception of Jews, which may be expressed as hatred toward Jews. Rhetorical and physical manifestations of antisemitism are directed toward Jewish or non-Jewish individuals and/or their property, toward Jewish community institutions and religious facilities. \citep{international_holocaust_remembrance_alliance_working_2016}
\end{quote}
This definition has been recognized and implemented by numerous countries, cities, governmental and non-governmental institutions in various political and social fields. It has also shown to be a viable ground to manually annotate corpora that carry an explicit reference to Jews, Israel or antisemitic stereotypes \citep{jikeli_annotating_2019,jikeli_toward_2022}.
As other researchers have pointed out, one of the strengths of the definition is that it covers most contemporary manifestations of antisemitism offering descriptive examples. However, the definition needs to be interpreted in context and its examples need to be concretized in order to use it as guidance for annotation \citep{jikeli_toward_2022}. For example, the definition does not (explicitly) address comparisons of current political measures of democratic governments to contain the pandemic with Nazi crimes against Jews.
Likewise, antisemitic narratives that address non-Jews are briefly mentioned in the definition but not further elaborated \citep{jikeli_annotating_2019}. Conspiracy theories regarding vaccinations are a particularly vivid example that combine these manifestations of antisemitism \citep{media_diversity_institute_antisemitism_2021}. We thus propose the following extensions and concretizations of the presented definition:
\begin{enumerate}
\item Post-Holocaust antisemitism (PHA) that refers to the instrumentalization of the victims of the Holocaust for a political agenda.
\item Linguistic encodings of antisemitic statements that do not mention Jews or the State of Israel.
\end{enumerate}
Our conceptualization aims at being both close enough to empirical data to be sufficiently context-sensitive for contemporary manifestations of antisemitic conspiracy narratives and at the same time abstract and generic enough to be adaptable to future research.
\paragraph{Post-Holocaust antisemitism}
The term post-Holocaust antisemitism (PHA) was coined by \citet{marin_post-holocaust_1980} to describe ``antisemitism without antisemites''. Corresponding narratives explicitly name Jews as part of argumentation strategies which instrumentalize the victims of the Holocaust for a political agenda and at the same time shift the perpetrator-victim coordinates by undertaking relativizing Holocaust comparisons. According to \citet{salzborn_verschworungsmythen_2021}, the instrumentalization is an essential component of this form of antisemitism. It fulfills a dual function: With regard to the past, it historically relativizes the Holocaust and infamously instrumentalizes the antisemitic policy of extermination. With regard to the present, it allows conspiracy theorists to demonize democratically legitimized rulings and measures by describing themselves as victims of a dictatorial state.
In times of the COVID-19 pandemic, we encounter forms of PHA in comparisons or equations of the state measures to combat the pandemic with the National Socialist persecution of Jews. In Germany, some participants of demonstrations against restriction policies wear a yellow star with the imprint `ungeimpft' (unvaccinated) and thus symbolically compare themselves with Jews who under the National Socialist regime were forced to wear one. Leaflets and placards reading `Impfung macht frei' (vaccination sets you free), a reference to the slogan `Arbeit macht frei' (work sets you free) at the entrance to Auschwitz and other Nazi concentration camps, were distributed or shown at protests \citep{belghaus_impfgegner_2020}. While this form of antisemitism first evolved in Germany (and Austria), where it has been analyzed as attempts of rejecting the guilt for the Shoah, these narratives seem to have undergone a process of transnationalization in recent years, especially in the context of protests against COVID-19 measures \citep{media_diversity_institute_antisemitism_2021}.
\paragraph{Encoded antisemitism}
While the anonymity of online platforms on the one hand presents a fertile ground for explicit antisemitic hate speech, antisemitism is also often expressed via encoded, implicit manifestations. Findings from the ongoing research project Decoding Antisemitism which analyzes comments on German, French and British mainstream social media platforms indicate that “users use a variety of coded forms to communicate their antisemitic attributions” \cite[7]{becker_decoding_2021-1}, including semiotic markers such as icons or emoticons, abbreviations, word plays, allusions, and metaphors \cite[7]{becker_decoding_2021-1}.
Expressing antisemitic beliefs in an encoded, implicit form allows users to avoid social ostracism, the deletion of content from social media platforms, or even criminal consequences.
Despite political and ideological differences, antisemitic discourses show a great uniformity and homogeneity regarding the stereotypes and codes used in them, highlighting their relevance for the transmission of antisemitism \citep{schwarz-friesel_judenhass_2019}. Encoded manifestations of antisemitism, via codes or metaphors, can thus also be assumed to play an important role for the online dissemination of antisemitism which is why it is of central importance to include them in annotation guidelines for antisemitic content. This also applies to manifestations of antisemitism in the context of COVID-19 conspiracy narratives: While on the one hand, the pandemic was framed by some conspiracy theorists as a smokescreen which was used by Zionists, the Rothschild family or George Soros to expand their power, other conspiracy narratives were even more popular: These narratives did not explicitly mention Jews as initiators of the pandemic and the resulting global crisis, but instead turned against Bill Gates, the `New World Order' or generally against `the' (economic or political) elites \citep{ajc_berlin_ramer_institute_antisemitische_2021,finkelstein_antisemitic_2020,european_commission_directorate_general_for_justice_and_consumers_rise_2021}. While the former narratives attribute special political and/or economic power to Jewish persons or groups, the latter operate encoded and get along completely without naming Jews.
\subsection{Conspiracy theories}
The term conspiracy theory was first coined by Popper, who argued that social sciences should not fall into the trap of providing simple explanations for unintended events, which he termed as `conspiracy theory of society' \cite[306]{popper_offene_2003}. According to Popper, unlike scientific explanations, conspiracy theories provide simple answers for complex social and political events.
Some conspiracy theories even refer to scientific studies as well as academic experts to support their arguments. At the same time, a defining feature of conspiracy theories is their ``self-sealing quality'', meaning that they ``are extremely resistant to correction, certainly through direct denials of counterspeech by government officials.'' \citep{sunstein_conspiracy_2009}.
\paragraph{Working definition}
The concept of conspiracy theories is often used synonymously with similar forms of deceptive content such as disinformation (intentional dissemination of incorrect information) or misinformation (unintentional dissemination of incorrect information), rumors (unverified information), or fake news (fabricated news or a label used for delegitimizing news media) \citep{mahl_conspiracy_2022}.
While conspiracy theories partially overlap with these concepts (e.g. a conspiracy theory might contain misinformation), they do have their own unique characteristics as attempts to create an alternative interpretation of events \citep{mahl_conspiracy_2022}: Conspiracy theories formulate the strong belief that a secret group of people, who have the evil goal of taking over an institution, a country or even the entire world, intentionally cause complex, and in most cases unsolved, events and phenomena \citep{butter_nichts_2018}. The exact intention (of a power-takeover) does not always have to be explicitly articulated; what is important, however, is the existence of a harmful intention and that the respective goal is of significant relevance to the public.\footnote{This does not include, in particular, theories without a specific harmful intent, such as the alleged existence of aliens covered up by governments.} A conspiracy theory can thus be considered an effort to explain some event or practice by reference to the machinations of powerful people, who have managed to conceal their role \citep{sunstein_conspiracy_2009}. Such a narrative is based on a simple dualism between good and evil which leaves no space for unintentional, unforeseeable things or mistakes to happen. Thus, a conspiracy theory needs actors (e.g. corrupt elites) who supposedly pursue a concrete malicious goal (e.g. control the population) using a strategy (e.g. by inserting a microchip via vaccinations) (see also \citet{samory_government_2018}).
The nature of social media and messenger services entails that more complex narratives are often incompletely rendered, especially when the counterpart can be assumed to be (partially) knowledgeable \citep{sadler_fragmented_2021,ernst_extreme_2017}. Accordingly, we believe that it is useful to annotate which of the components (actors, goal, strategy) actually appear in a given text. This will allow for post-annotation categorization of conspiracy theories in terms of completeness and fragmentation.
\paragraph{Conspiracy theories and the COVID-19 pandemic}
Studies show that particularly times of crises such as pandemics are prone to the emergence and spread of conspiracy theories \citep{heller_rumors_2015,kitta_vaccinations_2012,starbird_rumors_2014}. Since the threat posed by a disease is not directly tangible, pandemics often foster a range of conspiracy theories \citep{hepfer_verschworungstheorien_2015}. As an effect, the identified `culprits' can be named concretely and become tangible, which seemingly helps to structure an overwhelming situation. As was the case with other pandemics, the interpretations circulating on social networks lead to fatal mis- and disinformation about the origin and routes of infection or measures against the COVID-19 disease \citep{smallman_whom_2015}. In a similar way, a growing body of literature has observed how the outbreak of the pandemic not only led to a circulation of conspiracy theories but also how such theories led to the catalyzation or emergence of transnational movements such as QAnon and the so called Querdenken movement \citep{bodner_covid-19_2020}.
\paragraph{Compatibility with antisemitism}
There is a high degree of compatibility between antisemitism and conspiracy theories that is largely due to the strong structural ties between these two phenomena. \citet[49-50]{haury_antisemitismus_2019} elaborates the following fundamental principles characterizing modern antisemitism which are also central for our understanding of conspiracy theories:
\begin{itemize}
\item A specific form of personification, which attributes subjectless societal processes to the conscious, intentional, and malevolent actions of individuals; this inevitably induces the construction of an omnipotent enemy who has secretly taken over crucial points of control.
\item A Manichean worldview which radically divides the world into good and evil, a dualism based on an ontological construction of group identities. In this process, the enemies are represented as a foreign, external community with an immutable `nature' and characteristics. The usually nationally or ethnically constructed in-group is typically imagined as inherently good, naturally rooted, and free of internal conflicts or contradictions.
\item The enemy group is imagined as a corrosive and subversive threat for the in-group, potentially destroying its identity as well as its societal and political structures. Expulsion and extermination of the enemy group are seen as not only legitimate measures but as a last resort in face of the omnipotent, conspiratorial enemies who allegedly aim at destroying the collective.
\end{itemize}
Conspiracy theories are thus an ideal medium for the dissemination of antisemitic tropes, images and narratives.
Looking at the narratives of protesters against COVID-19 countermeasures in Germany that reference conspiracy theories, it is possible to identify facets of all the outlined characteristics \citep{lelle_struktureller_2020}:
Conspiracy theorists accusing individuals like Bill Gates, the (meanwhile former) chancellor Angela Merkel or virologist Christian Drosten of having a stake in and making profit out of the pandemic are examples of personification. Furthermore, the named persons are often accused of being part of a global conspiracy. The protesters perceive themselves as an `awakened' group which is fighting evil and spreading truth to disclose the lies of the global, malignant group of conspirators. Even though it must be noted that not all speakers at the demonstrations promote the idea of a nationally or ethnically defined community, most of them do emphasize the notion of community, unity and `naturalness'. As a consequence they contribute to a homogenization of the group of protestors. Furthermore, the frequent participation of Nazis and so-called `Reichsbürger' at the protests contributes to the spread of ethnic and nationalist ideologies in the movement.
Finally, the increasing aggression, violent fantasies on posters and in chat groups, as well as the first violent attacks attributed to the spectrum of this movement point to its partial radicalization \citep{cemas_2021}. In some cases, this process is accompanied by a shift from structurally antisemitic attitudes to explicit and violent hatred of Jews \citep{rose_pandemic_hate_2021}.
\section{Corpus and annotation scheme}
We pre-selected all public channels identified by a research project to have a central role for mobilization against COVID-19 measures in the early phase of the pandemic \citep{forschungsinstitut_gesellschaftlicher_zusammenhalt_factsheet_2020} with more than 1,000 followers. In addition, we retrieved a Twitter dataset using keywords explicitly related to `Querdenken' from time periods around key demonstrations in 2020 and 2021 and identified all Telegram channels linked from it. The channels from both sources were then manually ranked for relevance to the research task using a random sample of 100 messages per channel. From the initial 215 channels, 133 were considered as particularly relevant.
We restricted to messages sent between March 11, 2020, the day COVID-19 was declared a pandemic by the WHO, and December 19, 2021. Very short messages or such with high similarity to other texts were excluded. Further details can be found in the datasheet documenting the dataset \citep{bischoff_etal_2022}.
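The exclusion of very short and near-duplicate messages can be implemented in
several ways; the sketch below (Python) illustrates one simple possibility
based on TF-IDF cosine similarity. It is an illustrative assumption rather than
a description of the exact procedure we used; the minimum length and the
similarity threshold are placeholder values.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_messages(messages, min_chars=25, sim_threshold=0.9):
    """Drop very short messages and near-duplicates (illustrative sketch)."""
    kept = [m for m in messages if len(m) >= min_chars]
    if not kept:
        return kept
    tfidf = TfidfVectorizer().fit_transform(kept)
    sims = cosine_similarity(tfidf)   # dense n x n matrix; fine for ~10^3 texts
    selected, result = [], []
    for i, msg in enumerate(kept):
        if all(sims[i, j] < sim_threshold for j in selected):
            selected.append(i)
            result.append(msg)
    return result
\end{verbatim}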
\subsection{Annotation scheme}
Our annotation scheme is based on the working definitions of antisemitism and conspiracy theories presented above.
As \textbf{two main categories}, we use the labels \textbf{`antisemitism'} and \textbf{`conspiracy theory'} to indicate that a message includes the respective content. For each main label, we provide \textbf{sub-labels} to annotate the \textbf{content} or \textbf{narrative structure} and the \textbf{stance} of a message if it was classified as antisemitism and/or conspiracy theory.
The provided \textbf{sub-labels for content} reflect our working definition of antisemitism, including \textbf{`encoded antisemitism', `post-Holocaust antisemitism', and `other forms of antisemitism'} to cover examples which would neither fit our definition of encoded nor post-Holocaust antisemitism. Annotators were encouraged to select only one content-related sub-label for antisemitism. Examples are provided in Table \ref{tab:1}.
For \textbf{conspiracy theory}, we used the narrative-structure-related sub-labels \textbf{`actor'}, \textbf{`strategy'}, \textbf{`goal'}, and \textbf{`reference'}, out of which the annotators could select all applicable labels. A good example for illustration is the first row in Table \ref{tab:1}, with `the satanic zionists' being the actor, `to kill billions of people' the goal, and `riots and fake pandemic' constituting the strategy. Regarding stance, we provided the sub-labels \textbf{`critical', `affirmative', `neutral or unclear'} for \textbf{antisemitism}, and \textbf{`authenticating', `directive', `rhetorical question', `disbelief'}, and \textbf{`neutral or uncertain'} for \textbf{conspiracy theory}, the latter based on an adapted form of the Rumor Interaction Analysis System (RIAS) scheme \citep{wood_propagating_2018}.
We additionally labeled \textbf{`pandemic reference'} and \textbf{`GDR (German Democratic Republic) reference'} to indicate respective content. We furthermore used three technical labels: \textbf{`memorize task'} to mark a message for later consideration, e.g. because it was regarded as paradigmatic for a certain label; \textbf{`task unsuitable'} for messages annotators regarded generally unsuitable, e.g. because they contained sensitive information difficult to anonymize, were not German-language, or too short and thus incomprehensible; \textbf{`review required'} if annotators were uncertain how to classify a task and therefore wanted review by another annotator. Our detailed annotation guide is available at \citep{steffen_annotation_2022}.
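For reference, the scheme can be rendered in machine-readable form as follows
(Python; the nesting of the dictionary is an illustrative choice, the label
names follow the guide):
\begin{verbatim}
ANNOTATION_SCHEME = {
    "antisemitism": {
        "content": ["encoded antisemitism",
                    "post-Holocaust antisemitism",
                    "other forms of antisemitism"],
        "stance": ["critical", "affirmative", "neutral or unclear"],
    },
    "conspiracy theory": {
        "narrative structure": ["actor", "strategy", "goal", "reference"],
        "stance": ["authenticating", "directive", "rhetorical question",
                   "disbelief", "neutral or uncertain"],
    },
    "additional": ["pandemic reference", "GDR reference"],
    "technical": ["memorize task", "task unsuitable", "review required"],
}
\end{verbatim}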
\begin{table*}[]
\small
\begin{tabular}{p{0.07\textwidth} | p{0.85\textwidth} }
encoded & The long-term plan cooked up by the satanic zionists to kill billions of people is blowing up after the failure of their riots and fake pandemic. \\
\hline
PHA & The MASK now becomes the Yellow Star of the unvaccinated!
A year ago it was a \#conspiracy theory that \#unvaccinated are marked separately. Today \#Lindner demands a \#mask obligation for all who are not \#vaccinated. Is the \#mask really becoming the new \#Jewish star? \\
\hline
other & What has the Jew done to us? All ``vaccines'' are gene poison injections and come from Jewish corporations!
\end{tabular}
\caption{Example texts communicating different forms of antisemitism.}
\label{tab:1}
\end{table*}
\subsection{Final dataset}
Our dataset \textit{TelCovACT} consists of 3,663 records, approximately 14\% of which are labeled as antisemitic and 36\% as communicating conspiracy theories. At least one conspiracy-theory (antisemitism) related message was identified in 101 (85) of 133 channels.
For almost all texts containing antisemitic content, the stance was classified as affirmative (94\%). Almost 60\% were labeled as encoded\footnote{Note that multiple labels were possible.}, making it the most frequent sub-form of antisemitism in the corpus. For conspiracy theories, belief was the most frequent stance (95\%), followed by `authenticating' (24\%). The narratives most often included a strategy (72\%) and an actor (64\%). It should be noted that conspiracy theories were mostly communicated in a fragmented way, with only 26\% containing all of actor, strategy and goal, while 13\% communicated the respective content using a reference only, such as `\#QAnon'.
More than 72\% of all texts labeled as antisemitic also contained conspiracy theory content, while the majority of conspiracy theory messages (71\%) were not labeled as antisemitic. This does not mean that the majority of the conspiracy theories were not antisemitic; our annotation scheme requires more than a mention of an antisemitic conspiracy theory to be labeled as such. Considering the level of fragmentation in the communication of conspiracy theoretical content, with 13\% communicating via a reference only, it is not surprising that antisemitism is expressed in a minority of conspiracy theory messages. It is worth noting that the distribution of sub-forms of antisemitism differs significantly (chi-squared test with p-value $<0.01$) depending on the presence of conspiracy theory content: In the group of texts communicating both, the PHA label is given to less than 9\%, followed by `other' with 27\%, and encoded antisemitism being the leading label with a frequency of 72\%. However, if no conspiracy content was present, other forms of antisemitism are found most frequently (56\%), followed by PHA in 25\% of all cases, and only 23\% labeled as encoded.
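The difference in the distribution of antisemitism sub-labels reported above
can be tested with a standard chi-squared test of independence on the
corresponding contingency table; a minimal sketch is given below. The counts
are illustrative placeholders (not the exact corpus numbers), and since
multiple sub-labels per message were possible, the cells are not strictly
disjoint, so the test should be read as an approximation.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

# Rows: antisemitism sub-label (encoded, PHA, other);
# columns: with / without conspiracy theory content.
# Counts below are illustrative placeholders.
table = np.array([
    [266, 33],   # encoded
    [ 33, 36],   # post-Holocaust antisemitism
    [100, 80],   # other
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
\end{verbatim}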
We computed chi-square test scores in order to find words that are most significant for the two categories. As expected, in texts communicating conspiracy theories we find in particular references to well-known theories such as New World Order, the Great Reset, deep state or `plandemie' (referring to a planned pandemic), actors such as Bill Gates, freemasons, Soros or Clinton, and words indicating strategies or goals such as lie, execute or dictatorship. The most significant words that are positively correlated with texts communicating antisemitism include references to Jewish identity such as jew or jewish as well as frequent codes such as illuminati, Soros, Rothschild, freemason or satanic.
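A sketch of this type of computation, using scikit-learn's chi-square feature
selection, is shown below; it assumes the messages and binary labels are
available as Python lists, and the vectorizer settings are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

def top_words(texts, labels, k=20):
    """Return the k words most associated with the positive class."""
    labels = np.asarray(labels)
    vec = CountVectorizer(lowercase=True, min_df=5)
    X = vec.fit_transform(texts)
    scores, _ = chi2(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    # keep only words that are over-represented in the positive class
    pos_rate = np.asarray(X[labels == 1].mean(axis=0)).ravel()
    neg_rate = np.asarray(X[labels == 0].mean(axis=0)).ravel()
    order = [i for i in np.argsort(scores)[::-1] if pos_rate[i] > neg_rate[i]]
    return vocab[order[:k]]
\end{verbatim}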
\subsection{Data ethics and privacy}
Our approach to handling data ethics, privacy and protection follows best practices as documented in \citep{rivers_ethical_2014}. This includes exclusively collecting publicly available data and preventing data from being used to identify authors: even though Telegram's Terms of Service state that user names and ids cannot be linked to a user's phone number, the only personal data collected by Telegram, we chose to additionally anonymize the dataset by replacing user names, user ids, and links to them with USER. Furthermore, we decided to provide our annotated dataset only on personal request, for research purposes consistent with our ethical standards, thereby preventing potential misuse.
We also note that the annotator team comprised nine individuals with diverse socio-demographic backgrounds, working in various disciplines (five in political science or sociology and four in data science) with different levels of academic training.
\section{Annotation process and evaluation}
The annotators reflected on their annotation experiences and discussed examples of conflicting annotations in a workshop to gain insights into factors which had affected their annotation decisions.
During the workshop, almost all discussed conflicts could be resolved. After the joint workshop, 445\footnote{Of the 500 texts originally selected at random, 55 were marked as unsuitable and thus excluded from the evaluation.} texts were labeled by two annotators per message and used to compute the inter-annotator reliability. Our results indicate solid agreement among annotators, with Cohen’s kappa being 0.7 for conspiracy theory and 0.84 for antisemitism.
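Agreement values of this kind can be computed per category from the doubly
annotated messages; the following sketch uses scikit-learn, with the two label
arrays shown as placeholders.
\begin{verbatim}
from sklearn.metrics import cohen_kappa_score

# Placeholder binary decisions (1 = label assigned) of two annotators
# on the same sequence of messages, e.g. for the label `antisemitism'.
annotator_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
\end{verbatim}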
The main insights from the workshop (and resulting modifications to the initial annotation scheme) are the following:
\paragraph{Positively biased corpus}
The annotated dataset was generated exclusively from channels known to spread conspiracy theories and antisemitic content, and all annotators were aware of this `positive bias'. This contextual knowledge influenced how some interpreted a message.
Furthermore, the dataset mainly consisted of messages affirming antisemitism or conspiracy theories, making the selection of stance labels appear rather obsolete. However, for more heterogeneous data sources, the differentiation by stance is generally a useful additional information that can be utilized for training classifiers \citep{marcellino_detecting_2021}.
\paragraph{Antisemitism}
Annotators expressed discomfort with classifying a text as `no antisemitism’, a label that was provided in our initial version of the annotation scheme, arguing that this could be interpreted as confirming a given text to be antisemitism-free. Furthermore they explained that sometimes a text itself could not be classified as antisemitic following the provided definition, but nonetheless would contain certain antisemitic undertones. For the final annotation scheme, we thus removed the choices `no’ and `uncertain’ for both main categories and instead added a label `review required' for cases requiring exchange with others.
It was discussed how to classify texts that do not match our definition of antisemitism but include references to antisemitic conspiracy theories such as QAnon. We had only provided a `reference' label for the conspiracy theory category, but not for antisemitism. Annotators discussed whether these messages should be classified as `encoded antisemitism’; however, it was argued that the appearance of a single term or code is not sufficient, even when it is often used as an antisemitic code, making it difficult to introduce such a label for antisemitism.
Annotators perceived the sub-label `other’ as potentially trivializing because it invokes the impression of being used for `secondary’ forms of antisemitism, while being too coarse-grained as it subsumes a variety of antisemitic content. Since our aim was to focus on post-Holocaust antisemitism and encoded antisemitism as previously underexposed manifestations of antisemitism in existing annotation tasks, we nonetheless consider the use of the sub-label `other’ as adequate. Depending on the research focus, however, a differentiation of it should be considered, e.g. using existing annotation schemes as in \citet{jikeli_toward_2022}.
Other discussions revolved around messages mentioning Israeli politics. While the content itself mostly could not be classified as antisemitic, some annotators saw the mere focus on Israel in the context of our channel selection as a clear indication of antisemitic bias. While such consideration of focus and agenda-setting is common for qualitative approaches like critical discourse analysis, we doubt that it can be transferred to the training of classifiers that typically work on the single-message level and have no knowledge of the `overall tendency' of a channel (in fact, finding channels communicating a certain amount of respective content is a plausible application scenario of a classifier). Another example of such a controversy around the consideration of context was a message which described COVID-19 prevention measures as systematic discrimination and fascist. For some annotators, the use of the term `Faschismus' in a German-language COVID-19 context indicated a clear relativization of the Shoah. They argued that in a German context, the term fascism is widely used synonymously with the German national socialist regime, and thus interpreted this message as a manifestation of post-Holocaust antisemitism. Other annotators doubted this interpretation, arguing that the term fascism potentially describes different kinds of phenomena.
\paragraph{Conspiracy}
The provided definition with its division into the elements actor, strategy, and goal was overall perceived as helpful and comparatively easy to apply. At the same time, annotators stated they sometimes found it difficult to clearly separate strategy and goal. Furthermore, if they could not identify a goal, they were more hesitant to label a text as conspiracy theory. Moreover, several messages were observed in which actors were only implicitly mentioned, e.g. as `they' or `our enemies', which is why we consider it an important feature of our annotation scheme to include implicit mentions.
Some texts were classified as conspiracy theory even though neither actor nor strategy nor a goal could be identified. This applies for example to texts describing the great majority of society ignorant of the conspiracy, e.g. by referring to them as `Schlafschafe’ (sheeples), or calling for an awakening of the masses. It was suggested to include the element `target’ or `victim’ of a conspiracy to our definition to include these kinds of texts. Other examples missing the defined triple were texts suggesting that `the truth’ was generally disguised. It was argued that both types of messages should be labeled as conspiracy theory.
Additionally, some annotators decided to apply the sub-label `reference’ if they interpreted a message as conspiracy theory but perceived it as too implicit and fragmented to apply the actual definition. In various cases, it became evident that background knowledge had influenced the decision, for example if a message contained links to platforms known to be disseminating conspiracy theory content.
Many texts turned out to contain fake news, dis- or misinformation. For the sake of feasibility, however, we had deliberately decided against providing respective labels, since this would require thorough fact checking, in some cases even scientific expert knowledge. Nevertheless, such text fragments were perceived as important discursive elements of conspiracy theories by some annotators who chose to label them as `reference' as a workaround.
These decisions contributed to a partly inconsistent application of this sub-label.
\section{Discussion and future work}
The fraction of post-Holocaust antisemitism (PHA) in our dataset was lower than expected from qualitative analyses of Twitter data and public protests.
We assume that the lack of regulation and content moderation on Telegram allows users to voice their antisemitic views more directly, while PHA occurs more frequently in regulated contexts.
The high proportion of encoded antisemitism, in particular in connection with conspiracy theory content, confirms that the antisemitic codes of a `global elite' in control of global political and economic processes can easily be adapted for the expression of belief in conspiracy theories. The large association of the two phenomena underlines the importance of approaching antisemitism in large online corpora without the restriction to keywords explicitly referring to Jews, Jewishness or Jewish collectives or institutions.
It is also worth noting that classification models trained on a dataset like this with high overall toxicity are more likely to actually learn aspects specific to antisemitism or conspiracy theory and not to related but differing phenomena such as offensive language or hate speech.
With respect to the annotation process, we found that a major factor for different annotators' assessments was the handling of the context of a message or the consideration of related concepts such as misinformation. In this context, it is worth noting that some research explicitly includes contextual information such as images or external links into the annotation decision \citep{jikeli_toward_2022}, which is particularly helpful when labeling short texts.
However, this places additional demands on the training of classification models. An alternative might be to treat threads or sequences of texts as entities instead of single messages.
It also became clear that for annotators with a stronger affiliation to qualitative disciplines, it feels unfamiliar, not to say problematic, to be asked to make a binary yes/no decision when interpreting a text. On the other hand, the group discussion showed that it is possible to reach a shared understanding and interpretation based on the predefined categories provided in our annotation guide in most of the cases; however, direct exchange between annotators needs to be ensured in the labeling process.
We find it important to make these differences and difficulties transparent, because we consider them relevant for other interdisciplinary research as well. After all, the divergences demonstrate the complexity of annotating human-written artifacts, a task which inevitably reduces complex social phenomena to a simplified classification. With this, it will hardly ever be possible to dissolve all conflicts emerging among annotators.
These conflicts could also be made productive to foster explicit and careful choices of how to resolve annotator disagreements: As \citet{gordon_jury_2022} have pointed out, the question whose voices are being heard when providing data for machine learning algorithms is often still left implicit and typically resolved by a majority vote. We think that especially for complex social phenomena, this process should gain more attention in future research – last but not least because power relations and discrimination affect people differently and are thus received with more or less sensitivity by them.
\small
\section{Introduction}
The fundamental limit to the accelerating $E$-field in an SRF cavity
is the ability of the superconductor to resist penetration of the associated
magnetic field $H$ (or equivalently $B$). SRF cavities are routinely run
at peak magnetic fields above the maximum field $H_{c1}$
sustainable in equilibrium; there is a metastable regime at higher fields
due to an energy barrier at the surface~\cite{bean64}. \ensuremath{H_{\mathrm{sh}}}\ marks the
stability threshold of the Meissner state. In Fig.~\ref{fig:HshVsKappa}
we show results from linear stability analysis~\cite{transtrum11}, valid near
$T_c$, for \ensuremath{H_{\mathrm{sh}}}\ as a function of the Ginzburg-Landau parameter $\kappa$,
the ratio $\lambda/\xi$ of the London penetration depth $\lambda$ to the
coherence length $\xi$. Niobium has $\kappa \approx 1.5$, most of the promising
new materials have large $\kappa$. At lower temperatures, one must move to
more sophisticated Eliashberg theories~\cite{catelani08}, for which \ensuremath{H_{\mathrm{sh}}}\ is known
analytically for large $\kappa$; numerical studies at lower $\kappa$ are
in progress~\cite{catelaniUnpublished}. Broadly speaking, the results so far
for isotropic materials appear similar to those of Ginzburg-Landau.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{hshVsKappa.pdf}
\caption{From ref.~\cite{transtrum11}, showing a numerical estimate of \ensuremath{H_{\mathrm{sh}}}\ in
Ginzburg-Landau theory over many orders of magnitude of $\kappa$ (black solid
line), along with a large-$\kappa$ expansion (red dashed line), and a Pad\'e
approximation for small $\kappa$ (blue dotted-dashed line).}
\label{fig:HshVsKappa}
\end{figure}
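The large-$\kappa$ expansion shown as the red dashed line in
Fig.~\ref{fig:HshVsKappa} is straightforward to evaluate numerically. The
sketch below uses the well-known $\kappa\to\infty$ limit
$\ensuremath{H_{\mathrm{sh}}}/H_c \to \sqrt{5}/3 \approx 0.745$; the subleading
coefficient is a placeholder and should be taken from
ref.~\cite{transtrum11}. The expansion is not reliable at $\kappa$ of order
one, where the Pad\'e approximation of the figure applies.
\begin{verbatim}
import numpy as np

SQRT5_OVER_3 = np.sqrt(5.0) / 3.0   # ~0.745, the kappa -> infinity limit

def hsh_over_hc_large_kappa(kappa, c1=0.545):
    """Large-kappa expansion H_sh/H_c ~ sqrt(5)/3 + c1 / sqrt(kappa).

    c1 is a placeholder; the actual subleading coefficient should be
    taken from the expansion quoted in the reference above.
    """
    kappa = np.asarray(kappa, dtype=float)
    return SQRT5_OVER_3 + c1 / np.sqrt(kappa)

for k in (10.0, 20.0, 40.0):
    print(f"kappa = {k:5.1f}:  H_sh/H_c ~ {hsh_over_hc_large_kappa(k):.3f}")
\end{verbatim}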
This manuscript will briefly summarize theoretical work on \ensuremath{H_{\mathrm{sh}}}\ (the
threshold of vortex penetration and hence the quench field).
First, we discuss the effect of materials anisotropy
on \ensuremath{H_{\mathrm{sh}}}~\cite{liarte16}. Second, we discuss theoretical
estimates of the effect of disorder~\cite{liarte17}, and
preliminary unpublished simulations of
the effects of surface roughness and materials inhomogeneity.
Third, we discuss key practical implications of theoretically calculated point
defect energies, interactions, relaxation times, and mobilities in the promising
new cavity material Nb$_3$Sn. Finally, some magnetic flux is trapped in
cavities during the cooldown phase, and the response of these flux lines
to the oscillating external fields appears to be the dominant source
of dissipation in modern cavities. We
model potentially important effects of multiple weak-pinning centers on this
dissipation due to trapped flux.
\section{The effect of materials anisotropy on the maximum field}
\label{sec:Anisotropy}
Some of the promising new materials are layered, with strongly anisotropic
superconducting properties (MgB$_2$ and the pnictides, for example, but not
Nb$_3$Sn or NbN). Fig.~\ref{fig:anisotropicVortex} illustrates an anisotropic vortex
(magnetized region blue, vortex core red) penetrating into the surface
of a superconductor (grey). The anisotropy here is characteristic of MgB$_2$
at low temperatures, except that the vortex core is expanded by a factor
of 30 to make it visible.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{anisotropicVortex.pdf}
\caption{From ref.~\cite{liarte17}, showing vortex (blue disk) and vortex core (red disk)
of zero-temperature MgB$_2$ in the $ac$ plane, with the external magnetic field parallel
to the normal of the plane of the figure. We have drawn the core region about 30 times
larger with respect to the penetration depth, so that the core becomes discernible.}
\label{fig:anisotropicVortex}
\end{figure}
Near $T_c$, we find in ref.~\cite{liarte16} that a simple coordinate change
and rescaling maps the anisotropic system onto the isotropic case
(Fig.~\ref{fig:HshVsKappa} above, as studied in ref.~\cite{transtrum11}).
We find, near $T_c$ where
Ginzburg-Landau theory is valid, that \ensuremath{H_{\mathrm{sh}}}\ is nearly
isotropic for large~$\kappa$ materials (Fig.~\ref{fig:materialsAnisotropy}).
At lower temperatures, different heuristic estimates of the effects of
anisotropy on \ensuremath{H_{\mathrm{sh}}}\ yield conflicting results. Further work at lower
temperatures could provide valuable insight into the possible role of
controlled surface orientation for cavities grown from these new materials.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{materialsAnisotropy.pdf}
\caption{From ref.~\cite{liarte16}, showing the phase diagram of anisotropic
superconductors in terms of mass anisotropy and GL parameters.
The shaded blue and orange regions correspond to regions where the
superheating field anisotropy can be approximated by $\gamma^{1/2}$ and
$1$, respectively, within 10\% of accuracy. Note that
the superheating field of MgB$_2$ is nearly isotropic near $T = T_c$.
}
\label{fig:materialsAnisotropy}
\end{figure}
\section{Disorder-mediated flux entry and materials anisotropy}
Defect regions and inhomogeneity of superconductor properties can
weaken the performance of SRF cavities. In ref.~\cite{liarte16} we used
simple estimates based on Bean and Livingston's energy barrier
arguments~\cite{bean64}, to estimate the effects of disorder in lowering
\ensuremath{H_{\mathrm{sh}}}\ by providing flaws that lower the barrier to vortex penetration.
Here we use these calculations to shed light on the relationship between
tin-depleted regions, low critical temperature profiles, defect sizes, and
quench fields.
Consider an external magnetic field $B$, parallel to the
surface of a semi-infinite superconductor occupying the half-space
$x>0$. If $B$ is larger than the lower critical field $B_{c1}$ (and
smaller than $B_{c2}$), the vortex lattice phase is thermodynamically
favored. However, if the field is not large enough, a newborn vortex
line near the superconductor surface will have to overcome an energy
barrier to penetrate towards the bulk of the
material. In a clean system this barrier is typically surmounted by the simultaneous
entry of an entire array of vortices, whose interactions lower one another's
barriers. Disorder, in contrast, will lead to a localized region allowing
one vortex entry at a time. Bean and Livingston provided simple analytical
calculations for the energy barrier felt by one vortex line; we extended
their calculation to estimate the dirt needed to reduce this barrier to
zero at a quench field $H_q < \ensuremath{H_{\mathrm{sh}}}$.
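As an indication of the type of estimate involved, the following is a minimal numerical sketch of the textbook Bean--Livingston barrier~\cite{bean64} in the London approximation; the prefactors are schematic (lengths in units of $\lambda$, energies in units of the line-energy scale, reduced field $h$ in the corresponding units), the parameter values are illustrative, and the flaw-induced suppression discussed above is not included.
\begin{verbatim}
# Schematic Bean-Livingston barrier for a straight vortex at depth x below a flat
# surface, in the London approximation (Gibbs energy relative to the vortex-free
# state).  Lengths are in units of lambda; all prefactors are schematic and the
# numbers below are illustrative only.
import numpy as np
from scipy.special import k0

xi = 0.05                                  # reduced coherence length xi/lambda
x = np.linspace(xi, 12.0, 6000)            # vortex depth, cut off at the core size

def barrier(h):
    g = (np.log(1.0 / xi) - k0(2.0 * x)) + h * (np.exp(-x) - 1.0)
    return max(g.max() - g[0], 0.0)        # barrier measured from the surface value

for h in (2.0, 5.0, 10.0, 20.0):
    print(f"reduced field {h:5.1f}  ->  entry barrier {barrier(h):6.3f}")
# The barrier shrinks with increasing field and disappears for h of order 1/xi;
# a localized flaw that suppresses superconductivity locally reduces it further,
# which is the effect estimated in the text.
\end{verbatim}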
The new materials have larger $\kappa$, and in particular smaller vortex
core sizes $\xi$; naively one would expect vortex penetration when flaws
of size $\xi$ arise. Are these new materials far more sensitive to dirt
than niobium? Reassuringly, Fig.~\ref{fig:reliabilityDirt} shows that the low values
of the coherence length do not make these new materials substantially more
susceptible to disorder-induced vortex penetration~\cite{liarte17}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{reliabilityDirt.pdf}
\caption{From ref.~\cite{liarte17}, showing the reliability of vortex nucleation,
in a simple model of Gaussian random disorder, for three
candidate superconductors. Solid curves are for a 3D semicircular vortex
barrier model; dashed curves are for 2D pancake vortex nucleation in a
2D superconducting layer.}
\label{fig:reliabilityDirt}
\end{figure}
We can use our model to estimate the suppressed superconducting
transition temperature $T_c^{\min}$ and the flaw depth $D_c$ needed
to allow vortex penetration, as a function of $H_q$ (or, in Tesla, $B_q$)
(Fig.~\ref{fig:quenchFieldPlot}). For Nb$_3$Sn we find that a flaw of size
$D_c\sim 100$\,nm with $T_c^{\min} \sim 12$\,K can allow vortex nucleation
and quenches at fields of $\sim 77$\,mT,
consistent with experimental results~\cite{hallIPAC17a}.
\begin{figure}[!htb]
\centering
(a)
\vspace{-0.21cm}
\includegraphics[width=0.9\linewidth]{TcVsx.pdf}
(b)
\vspace{-1cm}
\includegraphics[width=0.9\linewidth]{quenchFieldVsTc.pdf}
\caption{(a) Critical temperature profile that allows nucleation of vortices
in Nb$_3$Sn cavities at a field of $\sim 77$mT. (b) Suppressed
superconducting transition temperature $T_c^{\min}$ (black), and flaw depth
$D_c$ (red), as a function of the quench field.}
\label{fig:quenchFieldPlot}
\end{figure}
\section{Time-dependent Ginzburg-Landau simulations of rough surfaces
and disorder}
To quantify the dependence of \ensuremath{H_{\mathrm{sh}}}\ on surface roughness and disorder,
we have developed a time-dependent Ginzburg-Landau simulation.
Fig.~\ref{fig:surfaceRoughness} shows the density $|\psi|^2$ of superconducting
electrons at a field just above \ensuremath{H_{\mathrm{sh}}} (top left), showing the entry of several vortices
for a 2D system with an irregular surface. On the bottom left, we show the
corresponding supercurrent ${\mathbf{j}}$; on the top right we show the magnetic field
$H$ (perpendicular to the plane of the simulation), and on the bottom right we
show the effect of surface roughness on $|\psi(\theta)|^2$ around the
perimeter. Our initial results quantify how inward-curving regions in the
plane perpendicular to the applied field on the perimeter can act as vortex
nucleation sites in this geometry. An open question is what effect
curvature and surface roughness have when oriented parallel to the applied
field.
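To make the structure of such a calculation concrete, the following is a minimal, gauge-free sketch of TDGL relaxation on a masked 2D grid. It is not the code behind Fig.~\ref{fig:surfaceRoughness}: the actual simulations also evolve the vector potential and use the proper superconductor-vacuum boundary conditions, while here the domain shape, grid and time step are purely illustrative assumptions.
\begin{verbatim}
# Minimal gauge-free TDGL relaxation,  d psi/dt = psi - |psi|^2 psi + laplacian(psi),
# on a 2D grid whose superconducting region is defined by a rough mask.
import numpy as np

N, dx, dt = 128, 0.5, 0.05
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# rough edge: the sample occupies y > bump(x)
bump = 5.0 + 1.5 * np.sin(10.0 * np.pi * x / x[-1])
mask = Y > bump[:, None]

psi = np.where(mask, 1.0, 0.0).astype(complex)

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(2000):
    psi += dt * (psi - np.abs(psi)**2 * psi + laplacian(psi)) * mask
    # crude Dirichlet-like condition (psi = 0) outside the rough edge

print("mean |psi|^2 inside the sample:", float(np.mean(np.abs(psi[mask])**2)))
\end{verbatim}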
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{surfaceRoughness.png}
\caption{Spatial dependence of the density of superconducting electrons
(top left), supercurrent (bottom left), and the induced magnetic field (top right). On
the bottom right, we show the variation of the order parameter around the perimeter
of the superconductor.}
\label{fig:surfaceRoughness}
\end{figure}
The effect of roughness in Fig.~\ref{fig:surfaceRoughness} is to lower \ensuremath{H_{\mathrm{sh}}}\ by
a few percent. By systematically varying the details of the roughness parameters,
we can use this tool to identify at what scale roughness will have significant impact on
vortex nucleation. SRF cavity roughness can be smoothed to varying degrees.
Our TDGL environment can be used to find dangerous regimes or configurations
that can have serious consequences for cavity performance.
We can also use this tool as a way to explore vortex dynamics and the
effects of pinning sites on trapped residual magnetic flux. Pinning sites originate
from inhomogeneities in the material, such as grain boundaries or spatial
inhomogeneities in the alloy stoichiometry. By incorporating this information
into our TDGL environment we can try to better understand the mechanisms
driving residual resistance for typical cavities.
\section{DFT calculations}
Nb$_3$Sn cavities are created by depositing tin vapor on the surface of a
niobium cavity, which reacts with the niobium to form an irregular surface layer of the
compound. Of interest are regions of ``tin depleted'' Nb$_3$Sn, known
to have a lower superconducting transition temperature than the surrounding Nb$_3$Sn.
These regions may be the nucleation centers responsible for quenches observed well
below \ensuremath{H_{\mathrm{sh}}}\ expected for perfect Nb$_3$Sn~\cite{hallIPAC17a}.
Density functional theory (DFT) can be used to study layer growth, tin depletion,
and other features of Nb$_3$Sn layers at the single-particle level. This information,
combined with experimental data and accounting for the effects of grain boundaries and
strain, makes it possible to build a multiscale model of layer growth.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\linewidth]{antisiteDisorder.png}
\caption{Illustration of antisite disorder. We estimate that on the order of 1\% of
lattice sites are affected by antisite defects ``frozen in'' from the high coating temperature.
This would make them by far the most common point defect in Nb$_3$Sn layers.}
\label{fig:antisiteDisorder}
\end{figure}
Our initial work uses in-house DFT software to calculate defect formation and interaction
energies, impurity energies, and energy barriers in Nb$_3$Sn. We have found that
antisite disorder (Figure~\ref{fig:antisiteDisorder}), rather than impurities or vacancies,
likely sets the electron mean free path in Nb$_3$Sn and may also be responsible for
collective weak pinning. We have also found that under certain conditions during
growth, it is energetically favorable for Nb$_3$Sn to form at tin-depleted stoichiometry,
while during annealing existing Nb$_3$Sn near the surface or grain boundaries can
become tin-depleted by diffusion (Figure~\ref{fig:tinDepletion}). Either or both of these
tin depletion mechanisms may result in quench nucleation centers; by understanding
them we can for the first time make informed modifications to the coating process in an
attempt to limit tin depletion and produce better cavities.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{tinDepletion.png}
\caption{Experimental data showing tin depletion. At left is a tin density map of a layer
cross section showing regions of significant (7-8\%) tin depletion, in this case mostly
deep in the layer relative to the RF penetration depth (dashed line). At right are close ups
showing slight (1-2\%) tin depletion right at the surface of the layer. Images by Thomas
Proslier at Argonne National Lab, received via personal communication with Daniel Hall.}
\label{fig:tinDepletion}
\end{figure}
\section{Dynamics of trapped vortices; potential role of weak pinning}
When the field is high enough for penetration of new vortices, one expects a
cascade of vortices leading to a quench. Vortices trapped during the cooling
process, while not immediately fatal, do act as sources of residual resistance.
Experiments show that the non-BCS surface resistance is proportional to
the trapped flux, both for nitrogen-doped Nb cavities~\cite{gonnella16}
and for Nb$_3$Sn~\cite{hallIPAC17b}. This suggests that trapped vortices may be
a dominant contribution to the quality factor of the cavity.
Previous studies of the residual resistance due to a trapped flux
line~\cite{gurevich13} focused on the Bardeen-Stephen viscous
dissipation~\cite{bardeen65} of a free line pinned at a distance
below the surface, as the external field drags the line through an
otherwise uniform superconducting medium. Experimental measurements
in nitrogen-doped Nb cavities showed good agreement with this theory,
except that the distance to the pinning center was presumed to change
linearly with the mean-free path~\cite{gonnella16} as it changes due to
nitrogen doping. Since nitrogen (or other contaminant gases~\cite{koufalis17a,koufalis17b})
should act as weak pinning centers (with many impurities per coherence
length cubed), we have been modeling the role of weak pinning in
vortex dissipation.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{yxzT34.pdf}
\caption{From ref.~\cite{hallIPAC17b}, showing vortex line solutions at several
times, using measured parameters for Nb$_3$Sn.}
\label{fig:weakPinningSolution}
\end{figure}
A line defect pulled through a disordered medium is one of the classical
depinning transitions~\cite{fisher98}. The disorder acts as
a random potential, and macroscopically there is a threshold force per
unit length $f_{pin}$ needed to depin the line and start motion
(Fig.~\ref{fig:weakPinningSolution}). This depinning
transition is preceded by avalanches of all sizes (local regions of
vortex motion) and followed by fluctuations on all scales (jerky motion
of the vortex line in space and time). For our initial estimates, we
have ignored these fluctuations, using a `mean-field' model where our
superconducting vortex line has a threshold supercurrent
$j_d \propto f_{pin}^{2/3}$ for motion. We presume also that the
energy dissipated is $f_{pin}$ times the area swept out by the vortex
as the external surface field pulls it to and fro (Fig.~\ref{fig:pinningEnvironment}).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{pinningEnvironment.pdf}
\caption{Illustration of a vortex line (red curve) subject to an external rf magnetic field
and the collective action of many pinning centers.}
\label{fig:pinningEnvironment}
\end{figure}
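To illustrate the qualitative difference between pinning-dominated (hysteretic) and viscous losses invoked here, the following toy sketch drives a single stick--slip element, a crude stand-in for the pinned vortex segment of Fig.~\ref{fig:pinningEnvironment}; the stiffness, pinning force and drive amplitudes are illustrative, and this is not the vortex-line calculation itself.
\begin{verbatim}
# Toy stick-slip (dry-friction) element: a point coupled by a spring of stiffness k
# to a support oscillating as X(t) = X0 sin(t); the point slips only when the spring
# force exceeds the pinning force f_pin, and each slip dissipates f_pin * |du|.
import numpy as np

def dissipation_per_cycle(X0, f_pin=1.0, k=1.0, n_steps=20000):
    u, W = 0.0, 0.0
    for Xi in X0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_steps)):
        force = k * (Xi - u)
        if abs(force) > f_pin:                 # slip until the force relaxes to threshold
            du = np.sign(force) * (abs(force) - f_pin) / k
            u += du
            W += f_pin * abs(du)               # dissipated energy = f_pin * swept length
    return W

for X0 in (2.0, 4.0, 8.0, 16.0):
    print(f"drive amplitude {X0:5.1f} -> dissipation per cycle "
          f"{dissipation_per_cycle(X0):7.2f}")
# Above threshold the hysteretic loss grows linearly with the drive amplitude,
# whereas a purely viscous element would dissipate quadratically in the amplitude.
\end{verbatim}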
The residual resistance measured in Nb$_3$Sn cavities shows a linear dependence
on the peak RF field (Fig.~\ref{fig:sensitivity}, \cite{hallIPAC17b}). The scaling
properties of the terms included in the earlier work~\cite{gurevich13}
all predict no dependence on the strength of the external oscillating field.
Our theory including weak pinning but ignoring the viscous dissipation
produces a dissipation that is linear in this external field. Our estimates,
however, suggest that our theory should be valid at MHz frequencies, but
at the operating GHz frequencies the viscous term must be important for
the energy dissipation. Our preliminary calculations suggest that incorporating
both can provide a reasonable explanation of the experimental data, but
we still do not obtain quantitative agreement.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{sensitivity.pdf}
\caption{From ref.~\cite{hallIPAC17b}, showing the sensitivity of residual
resistance to trapped magnetic flux, as a function of the peak rf field.}
\label{fig:sensitivity}
\end{figure}
\section{Conclusion}
The collaboration between scientists inside and outside traditional
accelerator physics made possible by the Center for Bright Beams
has been immensely fruitful. This proceedings illustrates the richness
of the science at the intersection of accelerator experimentalists
working on SRF cavities with condensed-matter physicists with interests
in continuum field theories and {\em ab-initio} electronic structure
calculations of materials properties. (One must also note the important
contributions of experimental condensed matter physicists in the collaboration.)
Current SRF cavities are pushing fundamental limits of superconductors,
and are a source of fascinating challenges for theoretical condensed-matter
physics. Conversely, we find that theoretical calculations are remarkably
fruitful in guiding and interpreting experimental findings.
\section{Acknowledgment}
We thank Alex Gurevich for useful conversations.
\section{Introduction}
The continuum description of a physical system requires defining a suitable,
coarse-grained order parameter $h(\mathbf{x},t)$ and building a free energy ${\cal F}$ if the system is at equilibrium,
or a partial differential equation (PDE)
obeyed by $h$ if the system is out of equilibrium.
In some cases, the PDE itself can be derived from a free energy. This is surely the case for
a system relaxing towards equilibrium (think of a phase separation process~\cite{Bray}), but it may
also be true for pattern forming systems, in which case ${\cal F}$ is a
{\it pseudo} free energy (also called a Lyapunov functional~\cite{HH}).
Typically, ${\cal F}$ is made up of a potential part $\tilde U(h)$, which
is the energy density for a homogeneous state, plus a part which accounts
for the energy cost of the inhomogeneities of the order parameter.
The simplest way to weight spatial variations of $h(\bm{x},t)$ is to consider a term proportional to
$(\nabla h)^2$. This surface tension term appears in completely different contexts,
from magnetism to surface physics. In the former case,
the misalignment of spins produces an energy cost which is
proportional to the squared gradient of the magnetization~\cite{Langer}.
In the latter case, if the energy of a surface of local height $h(\bm{x})$ is
proportional to the total extension of the surface, we simply get $S=
\int d{\bm x}\sqrt{1+(\nabla h)^2}\simeq
S_0 +\frac{1}{2} \int d{\bm x} (\nabla h)^2$,
where $S_0=\int d{\bm x}$ is the area of the system.
If surface tension combines with a double well potential $\tilde U(h)$, which accounts
for the existence of two macroscopic stable states, ${\cal F}={\cal F}\ss{GL}$ is called the
Ginzburg-Landau free energy and it plays a central role in the theory of phase transitions and phase ordering.
In one dimension, a simple description of energetics and dynamics can be given in terms of kinks~\cite{Nepo}.
A kink $\hk(x)$ is
the simplest non-homogeneous state which interpolates between the two minima of the potential,
$\pm h\ss{m}$,
and it has two main features: it is a monotonic function, and it is localized,
i.e. its derivative is exponentially small except in a finite-size
region. The explicit expression of a kink for a specific potential, see Eq.~(\ref{eq_kink}),
$\hk(x)=h\ss{m}\tanh(h\ss{m}x/\sqrt{2})$, makes both properties obvious.
The reason why kinks play a major role is the possibility of describing $h(x,t)$ as a sequence of kinks and,
ultimately, of describing the continuum dynamics in terms
of an effective dynamics of kinks, which act as fictitious, interacting particles.
In simple terms, kinks attract each other with a force that decreases exponentially with their separation:
the attractive force implies instability and coarsening;
the exponential dependence on distance implies that coarsening is logarithmically slow.
In spite of the widespread importance of ${\cal F}\ss{GL}$,
we should not come to the wrong conclusion that its form is universal.
This caveat is particularly appropriate if bending rigidity is important:
soft matter and biophysics, dealing with membranes~\cite{membranes} and filaments~\cite{filaments},
are a relevant example.
This fact, the relevance of bending rigidity with respect to surface tension,
is not purely phenomenological. On the contrary, it has recently been
derived rigorously from a hydrodynamic model~\cite{Thomas_PRE}.
According to this model, the energy cost of inhomogeneities is proportional
(in a one-dimensional model) to the squared second spatial derivative of $h$, $(h_{xx})^2$,
rather than to the squared first derivative, $(h_x)^2$.
This modification is of paramount importance, because kinks are no longer
monotonic functions, and this will be seen to change their dynamics drastically,
which turns out to be frozen.
The goal of our manuscript is twofold:
first, we extend ${\cal F}\ss{GL}$ to a free energy which depends on surface tension,
bending and possibly higher-order terms.
Second, we reconsider the problem of passing from a continuous formulation
of the dynamics to a discrete description in terms of kinks,
proposing a new approach. A detailed numerical comparison
with continuum dynamics reveals that standard approaches, where the
order parameter profile is written as a superposition of kinks, fail to
reproduce the exact dynamics quantitatively. Our new approach, instead,
is quantitative.
The paper is organized as follows. In Section II we define the various continuous models
and in Section III we give a simple derivation of known results.
In Section IV we propose a new derivation of kink dynamics and compare numerically
different approaches.
In Section V we discuss the stability of steady states and in Section VI we
summarize the results.
\section{Continuous models}
As explained in the Introduction, a good starting point to introduce the dissipative dynamics of interest for us here is
the Ginzburg-Landau free energy.
For a scalar order parameter in a one-dimensional system,
\be
\tilde{\cal F}\ss{GL}=\int dx \left( \frac{K_1}{2} h_x^2 + \tilde U(h)\right),
\label{F_gl}
\ee
where $\tilde U(h)$ is an arbitrary symmetric double well potential,
with two equivalent minima for $h=\pm h\ss{m}$, which are the
ground states of the full free energy.
If $\tilde U(h)=U_0 U(h)$, rescaling space we obtain
\be
\tilde{\cal F}\ss{GL}
=\sqrt{K_1 U_0} \int dx \left(\frac{1}{2} h_x^2 + U(h)\right) \equiv
e_0 {\cal F}\ss{GL}.
\ee
In the following the energy scale $e_0$ will be set equal to one, while
we don't rescale $h\ss{m}$ to one for pedagogical reasons.
Furthermore, for definiteness, in this Section we consider a standard
quartic potential, $U(h)=-h\ss{m}^2h^2/2+h^4/4$.
The free energy ${\cal F}\ss{GL}$ is the starting point to study the dissipative dynamics
when the system is relaxing towards equilibrium.
When studying dynamics the existence of conservation laws is of primary importance and
two main universality classes exist, depending on whether the order
parameter, $h(x,t)$, is conserved or not.
In the two cases we obtain, respectively, the Cahn-Hilliard (CH) and the
Time Dependent Ginzburg Landau (TDGL) equation,
\bea
\p_t h(x,t) &=& -\frac{\delta {\cal F}\ss{GL}}{\delta h}= h_{xx} + h^2\ss{m}h -h^3 \quad \mbox{TDGL}\label{TDGL},\\
\p_t h(x,t) &=& \p_{xx}\frac{\delta {\cal F}\ss{GL}}{\delta h} = -\p_{xx}(h_{xx} + h^2\ss{m}h -h^3) \quad \mbox{CH}\quad \label{CH}.
\eea
In both cases, it is straightforward to show that
\be
\frac{d {\cal F}\ss{GL}}{dt} =\int dx \frac{\delta {\cal F}}{\delta h}\frac{\partial h}{\partial t} \le 0 .
\ee
Equation~(\ref{TDGL}) typically describes phase separation in a magnet,
because in this case relaxation dynamics does not conserve magnetization.
Equation~(\ref{CH}) can instead describe phase separation in a binary alloy, where
matter
is conserved.
Here we will focus on a so-called {\it symmetric} quench, where the average value of
the order parameter is zero.
In the above two cases, TDGL and CH equations, the overall picture of dynamics is well known.
The solution $h=0$, corresponding to the disordered
or homogeneous phase, is linearly unstable, as easily seen by a stability analysis.
In fact, if $h(x,t)=\varepsilon e^{\sigma t}e^{iqx}$, to first order in $\varepsilon$ we find
\be
\sigma(q) = \left\{
\begin{array}{cc}
h\ss{m}^2-q^2 & \qquad \mbox{TDGL}\\
h\ss{m}^2 q^2-q^4 & \qquad \mbox{CH}
\end{array}
\label{sigma}
\right. ,
\ee
so that the homogeneous solution is linearly unstable for small $q$.
Because of such instability, small regions of the two phases $h=\pm h\ss{m}$ appear,
separated by kinks.
A kink is a steady solution of TDGL/CH equations which connects the two minima
of the potential $U(h)$ for $x\to\pm\infty$. For the standard quartic potential,
such solution has the simple form
\be
h(x)=\pm h\ss{k}(x)\equiv \pm h\ss{m}\tanh\left(\frac{h\ss{m}}{\sqrt{2}} x\right).
\label{eq_kink}
\ee
More generally, TDGL/CH equations have periodic solutions of arbitrarily large
wavelength which can be thought of as superpositions of
kinks $(h\ss{k}(x))$ and antikinks $(-h\ss{k}(x))$.
These kinks feel an attractive interaction, and
when a kink and an antikink meet they annihilate,
therefore leading to an increasing average distance between the remaining ones (coarsening process).
In an infinite system this process lasts forever, but in one dimension it is logarithmically slow.
The above picture is well known and goes back to works by Langer~\cite{Langer} and Kawasaki and
Ohta~\cite{Kawasaki_Ohta}.
The main idea is to write $h(x,t)$ as a suitable superposition of positive and negative kinks,
getting a set of discrete equations for their positions $x_n(t)$.
This approach will be discussed in the next Section.
First, we need to show how this picture should be modified if the surface tension term
($h_x^2$) in the GL free energy is replaced by a bending term ($h_{xx}^2$).
If bending rigidity dominates
over surface tension, the Ginzburg-Landau free energy should be written
\be
{\cal F}\ss{GL4}=\int dx \left[ \frac{1}{2} h_{xx}^2 + U(h) \right] ,
\label{F_gl4}
\ee
and Eqs.~(\ref{TDGL},\ref{CH}) are modified as follows,
\bea
\p_t h(x,t) = -h_{xxxx} + h\ss{m}^2 h - h^3 \quad\mbox{TDGL4}, \label{TDGL4}\\
\p_t h(x,t) = -\p_{xx}\left( -h_{xxxx} + h\ss{m}^2 h - h^3\right) \quad\mbox{CH4}, \label{CH4}
\eea
where the label `4' highlights the replacement of a second spatial derivative with a fourth
spatial derivative.
In turn, the linear spectra (\ref{sigma}) are replaced by
\be
\sigma(q) = \left\{
\begin{array}{cc}
h\ss{m}^2-q^4 & \qquad \mbox{TDGL4}\\
h\ss{m}^2 q^2-q^6 & \qquad \mbox{CH4}
\end{array}
\right. ,
\ee
showing that the homogeneous state is still unstable for large wavelength fluctuations.
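As a simple illustration of these spectra, maximizing the growth rate over $q$ gives the fastest-growing wavevector $q^*=h\ss{m}/\sqrt{2}$ for CH and $q^*=(h\ss{m}^2/3)^{1/4}$ for CH4 (while for TDGL and TDGL4 the maximum sits at $q=0$); in the conserved models the instability therefore selects a finite length scale $2\pi/q^*$ out of which the first kinks form.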
In spite of these similarities, the study of steady states is not as straightforward as for TDGL/CH, where it essentially boils down
to solving the problem of a particle of coordinate $h$ in the potential $V(h)=-U(h)$.
Steady states are now determined by the time independent equation
\be
-h_{xxxx} -U'(h)=0.
\label{eq_kink4}
\ee
The fourth-order derivative introduces new classes of kinks, because fixing the conditions
$h(x\to\pm\infty)=\pm h\ss{m}$ is no longer sufficient to uniquely determine a solution.
According to Ref.~\cite{Peletier_Troy} kinks can be labeled by their number of zeros, i.e. the number of
points where the kink profile vanishes (Eq.~(\ref{eq_kink}) shows that for TDGL/CH kinks
this number is equal to one).
The asymptotic behavior, i.e. the limiting behavior of $h\ss{k}(x)$ for large $|x|$,
is determined by the linearization of Eq.~(\ref{eq_kink4}) around $h=h\ss{m}$,
\be
h_{xxxx} = -U''(h\ss{m}) (h-h\ss{m}).
\ee
It is easily found that $h(x)=h\ss{m}+R(x)$, where the tail $R(x)$ is given by
\be
R(x) = A\cos\left(\kappa x+\alpha\right)\exp\left(-\kappa x\right),
\label{R4}
\ee
where $\kappa=(U''(h\ss{m}))^{1/4}/\sqrt{2}$, while
the amplitude $A$ and the phase $\alpha$ are undetermined within the linear theory.
The exact shape of kinks for the TDGL and TDGL4 models is plotted in Fig.~\ref{fig_kinks},
where for TDGL4 we restrict ourselves to the kink with only one zero.
A similar picture, oscillating kinks and kinks with more zeros, emerges in other
PDEs, e.g. the convective Cahn-Hilliard equation~\cite{Zaks}.
In both cases there is no evidence of such multihump kinks during the dynamics,
which leads us to assume they are dynamically irrelevant.
Therefore, in the next Section we study kink dynamics assuming kinks which
cross the horizontal axis only once.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{kink_profiles.eps}
\end{center}
\caption{
Plot of kinks appearing in TDGL/CH (dashed line) and in TDGL4/CH4 (full line).
In the latter case, the tail continues to oscillate around $\pm h\ss{m}$, but its exponential decay
makes only the first two oscillations visible.
}
\label{fig_kinks}
\end{figure}
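For readers who wish to reproduce the oscillating tail, the following is a minimal numerical sketch (domain size, resolution and run time are illustrative, and the stiff linear term is treated implicitly in Fourier space) that relaxes Eq.~(\ref{TDGL4}) with $h\ss{m}=1$ from a kink--antikink initial condition and checks that the deviation from the bulk value changes sign, as expected from Eq.~(\ref{R4}).
\begin{verbatim}
# Relax the TDGL4 dynamics  h_t = -h_xxxx + h - h^3  (h_m = 1) towards a
# kink-antikink steady state and inspect the oscillating kink tail, Eq. (R4).
import numpy as np

L, N, dt, T = 40.0, 512, 0.01, 50.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = 1.0 - k**4                                  # Fourier symbol of h - h_xxxx

h = np.tanh(x + 10.0) - np.tanh(x - 10.0) - 1.0   # kink at x=-10, antikink at x=+10

for _ in range(int(T / dt)):
    nonlin = np.fft.fft(-h**3)
    hk = (np.fft.fft(h) + dt * nonlin) / (1.0 - dt * lin)   # semi-implicit Euler step
    h = np.fft.ifft(hk).real

# Right tail of the kink sitting at x = -10: deviation from the bulk value +1
tail = h[(x > -8.0) & (x < -2.0)] - 1.0
print("tail oscillates (changes sign):", bool(np.any(tail > 0) and np.any(tail < 0)))
\end{verbatim}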
\section{Kink dynamics made simple}
\label{sec_skd}
The following, semiquantitative treatment of a profile simply consisting of the superposition of
a negative and a positive kink allows one to grasp the relation between the kink tail $R(x)$ and
the kink interaction.
In order to get a result as general as possible, we consider an energy functional
which is the sum of a symmetric double well potential (as before) plus
arbitrary quadratic terms,
whose only constraint is to satisfy the symmetry $x\to -x$.
Its most general form is
\be
{\cal F}=\int dx \left[U(h)-\frac{1}{2}\sum_{i=0}^M (-1)^i a_{2i}(\partial^i_x h)^2\right] ,
\label{F_ggl}
\ee
where $a_{2i}$ are constants and the notation $(\partial^i_x h)$ means the $i-$th order spatial derivative
of $h$.
We have also introduced the factor $\frac{1}{2}(-1)^i$ so as to get rid of it when evaluating
the functional derivative, according to the relations
\be
\begin{split}
\frac{\delta {\cal F}}{\delta h}
= &
U'(h) - \sum_{j=1}^M a_{2j}\partial_{x}^{2j}h ,\\
\equiv & U'(h) - {\cal L}[h].
\end{split} \ee
The model we are going to analyze is a nonconserved, purely dissipative model,
where dynamics is driven by ${\cal F}$ according to the relation
$\p_t h = -(\delta {\cal F}/\delta h)$, i.e.,
\be
\p_t h = {\cal L}[h] - U'(h).
\label{TDGL2-4}
\ee
If $\hk(x)$ is the kink profile centred at $x=0$, the two-kink approximation amounts to writing
\be
h(x,t)=\hk(x+x_0(t)) - \hk(x-x_0(t)) -h\ss{m},
\label{2ka}
\ee
where the kinks are centred at $\pm x_0(t)$ and the constant term must be added
in order to get the correct values in the different regions (for an $N-$kinks approximation,
the constant term is more complicated, see Eq.~(2.8) of Ref.~\cite{Kawasaki_Ohta} and Eq.~(\ref{app_multikink})
here below).
Using Eq.~(\ref{2ka}) it is easy to evaluate $\p_t h$,
\be
\p_t h(x,t) = \dot x_0 \big(h'\ss{k}(x+x_0) + h'\ss{k}(x-x_0)\big),
\ee
and its spatial integration,
\be
\int_{-\infty}^{+\infty} dx \p_t h(x,t) = 4h\ss{m}\dot x_0.
\label{x0p}
\ee
As for the RHS of Eq.~(\ref{TDGL2-4}),
while we simply have
\be
{\cal L}[h]={\cal L}[\hk(x+x_0)]-{\cal L}[\hk(x-x_0)],
\label{L_hk}
\ee
the evaluation of $U'(h)$ is a bit more involved. As soon as $|x|\gg a$, $a$ being
the size of the core of the kink,
$\hk(x)\simeq \pm [ h\ss{m} + R(|x|)]$, for $x\gtrless 0$ respectively.
Therefore, we can approximate Eq.~(\ref{2ka}) as follows
\be
h(x,t) \simeq \left\{
\begin{array}{cr}
\hk(x+x_0) +R(-x+x_0) & \quad\mbox{for } x<0,\\
-\hk(x-x_0) +R(x+x_0) & \quad\mbox{for } x>0,
\end{array}
\right.
\ee
and write, in the two cases,
\be
U'(h) \simeq \left\{
\begin{array}{c}
U'(\hk(x+x_0)) +U''(\hk(x+x_0))R(-x+x_0) \qquad\mbox{for } x<0 \\
-U'(\hk(x-x_0)) +U''(\hk(x-x_0))R(x+x_0) \qquad\mbox{for } x>0
\end{array}
\right. ,
\ee
so that
\begin{equation} \begin{split}
& \int_{-\infty}^{+\infty} dx U'(h) =
\int_{-\infty}^{+\infty} dx\left[ U'(\hk(x+x_0)) - U'(\hk(x-x_0)) \right] \\
&+ \int_{-\infty}^0 dx \left[ U'(\hk(x-x_0)) + U''(\hk(x+x_0))R(-x+x_0) \right] \\
&+ \int_0^{+\infty} dx \left[ -U'(\hk(x+x_0)) + U''(\hk(x-x_0))R(x+x_0) \right] .
\end{split}
\label{int_U}
\end{equation}
In the previous expression, a simple change of variable in the
second line integral, $x\to -x$, shows it is equal to the third line integral.
We can now match the spatial integration of the two sides of Eq.~(\ref{TDGL2-4}).
Using Eqs.~(\ref{x0p},\ref{L_hk},\ref{int_U}), we obtain
\be\begin{split}
4h\ss{m}\dot x_0 =& \int_{-\infty}^{+\infty} dx \big( {\cal L}[h] - U'(h)\big) \\
=& \int_{-\infty}^{+\infty} dx \Big( {\cal L}[\hk(x+ x_0)] -U'(\hk(x+ x_0)) \Big)\\
-& \int_{-\infty}^{+\infty} dx \Big( {\cal L}[\hk(x- x_0)] -U'(\hk(x- x_0)) \Big) \\
+& 2\int_{0}^{+\infty} dx \Big( U'(\hk(x+x_0)) - U''(\hk(x-x_0))R(x+x_0) \Big).
\end{split} \ee
Since the integrands in the second and third line vanish, we finally get
\be \begin{split}
\dot x_0 =&
\frac{1}{2h\ss{m}}\int_0^\infty dx\Big( U'(\hk(x+x_0)) -U''(\hk(x-x_0))R(x+x_0)\Big)\\
\simeq & \frac{1}{2h\ss{m}}\int_0^\infty dx\Big( U'(h\ss{m}+R(x+x_0)) -U''(\hk(x-x_0))R(x+x_0)\Big)\\
=& \frac{1}{2h\ss{m}}\int_0^\infty dx \Big(U''(h\ss{m})-U''(\hk(x-x_0))\Big) R(x+x_0) + {\cal O}(R^2).
\end{split} \ee
The quantity within large brackets in the final integral is exponentially small when $|x-x_0|\gg a$,
so we can approximate the integral as the integrand value for $x=x_0$ times
the extension over which the function in square brackets is non vanishing, i.e. $a$.
Finally, we can write
\be
\dot x_0 \simeq \frac{a}{2h\ss{m}}[U''(h\ss{m})-U''(0)] R(\ell),
\label{s2kd}
\ee
with $\ell=2x_0$.
In conclusion, the speed of the right kink is simply proportional to $R(\ell)$, where $\ell$ is its
distance from the left kink (the quantity in square brackets being positive,
since $U''(h\ss{m})>0$ and $U''(0)<0$).
This result means that
a kink exerts a force on its right neighbour at distance $\ell$, a force which is proportional to $R(\ell)$, where $R(x)$ is
the difference between the kink profile and its limiting value for large, positive $x$,
$R(x) = \hk(x) -h\ss{m}$. For the standard TDGL
equation, $\hk(x)=h\ss{m}\tanh(\frac{h\ss{m}}{\sqrt{2}}x)$ and $R(x)=R_2(x)$, with
\be
R_2(x) = -2h\ss{m}\exp(-\frac{h\ss{m}}{\sqrt{2}}x),
\label{R2}
\ee
while for TDGL4, $R(x)=R_4(x)\equiv A\cos\left(\kappa x+\alpha\right)\exp\left(-\kappa x\right)$,
see Eq.~(\ref{R4}).
We can assume that Eq.~(\ref{s2kd}) may generalize to any sequence of
kinks located at $x_n(t)$ (with $x_{n+1}>x_n$),
\be
\dot x_n = \frac{1}{\hk'(0)}\left( U''(h\ss{m})-U''(0)\right) [R(x_n-x_{n-1}) - R(x_{n+1}-x_n)],
\label{snkd}
\ee
where the size $a$ of the kink core has been evaluated as $a=2h\ss{m}/\hk'(0)$.
The above equation should be supplemented by
the constraint that two neighbouring kinks annihilate when they overlap
(see details on numerical schemes in Appendix~\ref{app_num}).
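As an indication of how this reduced description can be used in practice (and independently of the schemes detailed in Appendix~\ref{app_num}), the following minimal sketch integrates Eq.~(\ref{snkd}) for the quartic potential with $h\ss{m}=1$, using the TDGL tail $R_2$ of Eq.~(\ref{R2}); the time step, the core size used as annihilation threshold and the initial positions are illustrative.
\begin{verbatim}
# Minimal sketch of the reduced kink dynamics, Eq. (snkd), for the quartic potential
# with h_m = 1: the prefactor (U''(h_m)-U''(0))/h_k'(0) equals 3*sqrt(2), and the
# tail is R_2 of Eq. (R2).  Neighbouring kinks annihilate in pairs when closer than
# the (illustrative) core size a.
import numpy as np

def R(ell):                                     # TDGL tail, Eq. (R2), h_m = 1
    return -2.0 * np.exp(-np.sqrt(2.0) * ell)

prefactor = 3.0 * np.sqrt(2.0)
a, dt, t, t_end = 1.0, 0.01, 0.0, 2000.0
x = np.array([0.0, 4.0, 9.0, 16.0, 25.0, 36.0])  # alternating kinks and antikinks

while t < t_end and len(x) >= 2:
    gaps = np.diff(x)
    from_left = np.concatenate(([0.0], R(gaps)))   # R(x_n - x_{n-1}); outermost kinks
    from_right = np.concatenate((R(gaps), [0.0]))  # R(x_{n+1} - x_n); have one neighbour
    x = x + dt * prefactor * (from_left - from_right)
    t += dt
    gaps = np.diff(x)
    if gaps.size and gaps.min() < a:               # annihilate the closest pair
        i = int(np.argmin(gaps))
        x = np.delete(x, [i, i + 1])

print(f"kinks surviving at t = {t:.0f}:", np.round(x, 3))
\end{verbatim}
Successive annihilations occur on exponentially separated time scales, which is the logarithmically slow coarsening mentioned in the Introduction.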
As a matter of fact, such kink dynamics can be derived using
a superposition of $N$ kinks,
\be
\begin{split}
h(x,t) &= (-1)^n \hk(x-x_n(t)) \\
&+ \sum_{k<n} (-1)^k [\hk(x-x_k(t)) - h\ss{m}]\\
&+\sum_{k>n} (-1)^k [\hk(x-x_k(t)) + h\ss{m}].
\label{app_multikink}
\end{split}
\ee
This approach was initially used by Kawasaki and Ohta~\cite{Kawasaki_Ohta} to study TDGL and CH equations.
In the next Section we are going to propose a novel approach and to compare both with numerical
integration of the full continuum equations.
\section{Improved kink dynamics}
\label{sec_kd}
We now provide a more general approach to kink dynamics:
we don't assume explicitly a specific ``multikink'' approximation, as, e.g., Eq.~(\ref{app_multikink}),
and we consider the general energy functional given in Eq.~(\ref{F_ggl}).
We don't claim our approach is rigorously founded: its validity (and usefulness)
are rather supported by the final comparison with numerics.
\subsection{Nonconserved case}
The nonconserved case corresponds to the dynamics
\begin{equation}
\partial_{t}h= -\frac{\delta {\cal F}}{\delta h} = \sum_{i}a_{2i}\partial_{x}^{2i}h-U^{\prime}(h).
\label{nc}
\end{equation}
In Fig.~\ref{schematic} we show the schematic of the system. It has been drawn for TDGL4/CH4 kinks, but notations are
generally valid. More precisely, $x_n$ means the position of $n-$th kink and
$x_{n\pm\frac{1}{2}}$ the points halfway between kinks $n$ and $(n\pm 1)$.
For ease of notation, $x_{n\pm\frac{1}{2}}$ is replaced by $n\pm\frac{1}{2}$ in integrals' extrema
and $h(x_{n\pm\frac{1}{2}})$ is replaced by $h_{n\pm\frac{1}{2}}$.
\begin{figure}
\begin{center}
\includegraphics[height=2cm]{kink_schematic.eps}
\end{center}
\caption{
(Color online)
Schematic of studied system with relevant notations.
}
\label{schematic}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{kink_tdgl.eps}
\end{center}
\caption{
(Color online)
Exact dynamics and analytical approximations of the motion of two kinks for the TDGL model.
Black squares: exact dynamics (integration of Eq.~(\ref{TDGL})). Red full line: our model, Eq.~(\ref{v_n-tdgl}),
and Ei and Ohta's model. Blue dashed line: Kawasaki and Ohta's model.
}
\label{fig_tdgl}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{kink_tdgl4.eps}
\end{center}
\caption{
(Color online)
Exact dynamics and analytical approximations of the motion of four kinks for the TDGL4 model.
Black squares: exact dynamics (integration of Eq.~(\ref{TDGL4})). Red full line: our model, Eq.~(\ref{v_n-tdgl4}).
Blue dashed line: Kawasaki and Ohta's model.
}
\label{fig_tdgl4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{kink_ch.eps}
\end{center}
\caption{
(Color online)
Exact dynamics and analytical approximations of the motion of four kinks for the CH model.
Black squares: exact dynamics (integration of Eq.~(\ref{CH})). Red full line: our model, Eq.~(\ref{v_n-ch}).
Blue dotted line: our model, Eq.~(\ref{v_n-ch_simple}).
Green dashed line: Kawasaki and Ohta's model.
}
\label{fig_ch}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{kink_ch4.eps}
\end{center}
\caption{
(Color online)
Exact dynamics and analytical approximations of the motion of four kinks for the CH4 model.
Black squares: exact dynamics (integration of Eq.~(\ref{CH4})). Red full line: our model, Eq.~(\ref{v_n-ch4}).
Blue dotted line: our model, Eq.~(\ref{v_n-ch4_simple}).
Green dashed line: Kawasaki and Ohta's model.
}
\label{fig_ch4}
\end{figure}
We assume that, apart from the annihilation process, which occurs when
$\ell_n = x_{n+1}-x_n \approx a$,
kinks retain their profile when moving. So, for $x$ around $x_n$
the previous equation can be rewritten as
\begin{equation}
-\dot{x}_{n}\partial_{x}h=\sum_{i}a_{2i}\partial_{x}^{2i}h-U^{\prime}(h).
\end{equation}
We then multiply both terms by $\partial_{x}h$ and integrate between $x_{n-\frac{1}{2}}$ and $x_{n+\frac{1}{2}}$ :
\begin{equation}
-\dot{x}_{n}\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~(\partial_{x}h)^2=\sum_{i}a_{2i}\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~\partial_{x}h\partial_{x}^{2i}h-U(h_{n+\frac{1}{2}})+U(h_{n-\frac{1}{2}}).\nonumber
\end{equation}
Direct integration and integration by parts give
\begin{equation}\label{kdnc}
\dot{x}_{n}=\frac{1}{\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~(\partial_{x}h)^2}
\left[\sum_{i}a_{2i}\left(\frac{(-1)^{i}}{2}[(\partial_{x}^{i}h)^2]_{n-\frac{1}{2}}^{n+\frac{1}{2}}
+\sum_{k=1}^{k<\frac{i}{2}}[\partial_{x}^{2k}h\partial_{x}^{2i-2k}h]_{n-\frac{1}{2}}^{n+\frac{1}{2}}\right)
+U(h_{n+\frac{1}{2}})-U(h_{n-\frac{1}{2}})
\right].
\end{equation}
We stress that the above result derives from a single assumption, $\p_t h\simeq -\dot{x}_{n}\partial_{x}h$ for
$x$ close to $x_n$.
Equation (\ref{kdnc}) can be further elaborated because in the region halfway between $x_n$ and $x_{n+1}$
we can expand $h(x,t)$ around the asymptotic values $\pm h\ss{m}$,
\be
h(x,t) \simeq \pm \left[ h\ss{m} + R(x-x_n)+R(x_{n+1}-x)\right] ,
\label{h_expansion}
\ee
where $+/-$ applies for a positive/negative $n-$th kink.
Using this notation, we finally get
\begin{equation}
\begin{split}
\dot{x}_{n}=&\frac{1}{\int_{-\infty}^{+\infty}\mathrm{d}x~(\partial_{x}h\ss{k})^2}
\left\{\sum_{i}a_{2i}\left[(1+(-1)^{i})\left(R^{(i)}\left(\frac{\l_{n}}{2}\right)^{2}-R^{(i)}\left(\frac{\l_{n-1}}{2}\right)^{2}\right)\right.\right.\\
&\left.\left.-4\sum_{k=1}^{k<\frac{i}{2}}\left(R^{(2k)}\left(\frac{\l_{n}}{2}\right)R^{(2i-2k)}\left(\frac{\l_{n}}{2}\right)-R^{(2k)}\left(\frac{\l_{n-1}}{2}\right)R^{(2i-2k)}\left(\frac{\l_{n-1}}{2}\right)\right)\right]+2U''(h\ss{m})\left(R^{2}\left(\frac{\l_{n}}{2}\right)-R^{2}\left(\frac{\l_{n-1}}{2}\right)\right)\right\} ,
\end{split}
\label{kdnc2}
\end{equation}
where, in the denominator of Eq.~(\ref{kdnc}), we made the approximation
\be
\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~(\partial_{x}h)^2 \simeq
\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~(\partial_{x}\hk)^2 \simeq
\int_{-\infty}^{+\infty}\mathrm{d}x~(\partial_{x}\hk)^2 ,
\ee
i.e.,
we have assumed that close to $x_n$ the kink profile is similar to the static profile $\hk(x)$ and
we have extended the extrema of the integral to $\pm\infty$,
because $\partial_{x}\hk$ is concentrated around $x_n$.
Therefore, in the general case of an equation with several terms $a_{2i}\ne 0$ the expression of the
speed of a kink is fairly complicated. One remark is in order:
the different contributions to the RHS of Eq.~(\ref{kdnc2}) are not proportional to $R(\l_n)$ and $R(\l_{n-1})$, as appearing
in the simple approach given in the previous Section.
This point is better clarified by focusing on two explicit cases.
$\bullet$ For TDGL, the only nonvanishing term in the summation (\ref{nc}) is $a_2=1$, so Eq.~(\ref{kdnc2})
strongly simplifies to
\be
\dot x_n = \frac{2U''(h\ss{m})}{\int_{-\infty}^{+\infty}\mathrm{d}x~(\partial_{x}\hk )^2}
\left[ R^2\left(\frac{\l_n}{2}\right) - R^2\left(\frac{\l_{n-1}}{2}\right) \right] ,
\quad \mbox{[NEW approach]}
\label{newtdgl2}
\ee
which must be compared with Eq.~(\ref{snkd}), rewritten here for convenience:
\be
\dot x_n = \frac{1}{\hk'(0)}\left( U''(h\ss{m})-U''(0)\right) [R(\ell_{n-1}) - R(\ell_n)],
\quad \mbox{[KO approach]}
\tag{\ref{snkd}}
\ee
where KO stands for Kawasaki and Otha~\cite{Kawasaki_Ohta}.
In the specific TDGL case $R(\l)$ is a simple exponential,
so that
\be
R(\l)= \mbox{constant}\,\times\, R^2\left( \frac{\l}{2}\right) .
\ee
In conclusion, the new approach (\ref{newtdgl2}) and the old approach (\ref{snkd})
differ only in the prefactor. Let us work out the two prefactors for
the explicit expression $U(h)=-\frac{h^2}{2}+\frac{h^4}{4}$.
Using Eq.~(\ref{eq_kink}) for the kink profile and Eq.~(\ref{R2}) for its tail (both with $h\ss{m}=1$),
we find
\bea
\label{v_n-tdgl}
\dot{x}_{n}=& 12\sqrt{2}\left[\exp\left(-\sqrt{2}\l_{n}\right)-\exp\left(-\sqrt{2}\l_{n-1}\right)\right],
\quad \mbox{[NEW approach]} \\
\dot{x}_{n}=& \phantom{1}6\sqrt{2}\left[\exp\left(-\sqrt{2}\l_{n}\right)-\exp\left(-\sqrt{2}\l_{n-1}\right)\right].
\quad \mbox{[KO approach]}
\label{v_n-old}
\eea
Equation~(\ref{v_n-tdgl}) agrees with Ei and Ohta~\cite{Ei_Ohta} and with Carr and Pego~\cite{Carr_Pego}.
These authors use a perturbative approach where the small parameter is
the extension of the domain wall defining the kink, but while Carr and Pego rely on the existence
of a Lyapunov functional, Ei and Ohta do not.
Instead, Eq.~(\ref{v_n-old}) agrees with Kawasaki and Ohta~\cite{Kawasaki_Ohta},
whose approach has been exemplified in Sec.~\ref{sec_skd}.
In Fig.~\ref{fig_tdgl} we compare the old (dashed line) and new (full line) approaches
with exact kink dynamics (squares), showing that the new approach is quantitatively correct.
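The factor of two between the two prefactors can also be checked directly on a single kink--antikink pair: the sketch below (initial separation, collision threshold and step size are illustrative) integrates the pair dynamics implied by Eqs.~(\ref{v_n-tdgl}) and (\ref{v_n-old}), for which the gap closes as $\dot\ell=-2c\,e^{-\sqrt{2}\ell}$ with $c$ the corresponding prefactor.
\begin{verbatim}
# Closure of a single kink-antikink pair under d(ell)/dt = -2 c exp(-sqrt(2) ell),
# with c the prefactor of Eq. (v_n-tdgl) (12 sqrt(2)) or of Eq. (v_n-old) (6 sqrt(2)).
import numpy as np

def closure_time(c, ell0=6.0, ell_min=1.0, dt=0.01):
    ell, t = ell0, 0.0
    while ell > ell_min:
        ell -= dt * 2.0 * c * np.exp(-np.sqrt(2.0) * ell)
        t += dt
    return t

t_new = closure_time(12.0 * np.sqrt(2.0))   # Eq. (v_n-tdgl)
t_old = closure_time(6.0 * np.sqrt(2.0))    # Eq. (v_n-old), Kawasaki-Ohta prefactor
print(f"closure times: new = {t_new:.1f}, KO = {t_old:.1f}, ratio = {t_old / t_new:.2f}")
\end{verbatim}
The two reduced models therefore trace the same trajectory up to a rescaling of time by a factor of two, which is why only one of them can match the exact TDGL dynamics.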
$\bullet$
For the TDGL4 equation, the two approaches give substantially different results, as we are going to show.
In Eq.~(\ref{kdnc}) we now have only the term $i=2$, with $a_4=-1$, and
\be
\dot x_n = \frac{2}{\int_{-\infty}^{+\infty}\mathrm{d}x~(\partial_{x}\hk )^2}
\left\{ -\left[ \left( R''\left( \frac{\l_n}{2}\right) \right)^2 -
\left( R''\left( \frac{\l_{n-1}}{2}\right) \right)^2 \right]
+ U''(h\ss{m}) \left[ R^2\left(\frac{\l_n}{2}\right) - R^2\left(\frac{\l_{n-1}}{2}\right) \right] \right\}
\label{kdtdgl4}
\ee
Now, see Eq.~(\ref{R4}), $R(\l)=A\cos(\kappa\l+\alpha)\exp(-\kappa\l)$, so that (even up to a constant)
\be
R(\l)\ne R^2\left( \frac{\l}{2}\right) \quad \mbox{and} \quad R(\l)\ne R''^2\left( \frac{\l}{2}\right).
\ee
If we use the correct expression for $R(\l)$ we obtain
\begin{equation}
\dot{x}_{n}=\frac{2U^{\prime\prime}_{m}A^{2}}{\int_{-\infty}^{+\infty}\mathrm{d}x~(\partial_{x}\hk )^2}
\left[\cos\left(\kappa \l_{n}+2\alpha\right)\exp\left(-\kappa \l_{n}\right)-\cos\left(\kappa \l_{n-1}
+2\alpha\right)\exp\left(-\kappa \l_{n-1}\right)\right] .
\label{v_n-tdgl4}
\end{equation}
In Fig.~\ref{fig_tdgl4}, we compare the full numerical solution of the continuum TDGL4 model (squares) with our results (Eq.~(\ref{v_n-tdgl4}), full line)
and with results obtained with the multikink approximation (dashed line).
Our new approach to kink dynamics reproduces the full numerical solution quantitatively very well.
In addition, the results from the multikink ansatz approach
cannot be corrected by a simple rescaling of time, as was possible in the case of TDGL.
\subsection{Conserved case}
Similarly to the nonconserved case, we are going to consider the general model
\begin{equation}
\partial_{t}h=-\partial_{xx}\left(\sum_{i}a_{2i}\partial_{x}^{2i}h-U^{\prime}(h)\right) ,
\label{c}
\end{equation}
which requires more involved mathematics, whose details are partly given in Appendix~\ref{app_cal}.
Here we provide the final result,
\begin{equation}
\dot{x}_{n}=\frac{1}{4 h\ss{m}^{2}\l_{n}\l_{n-1}-A_{n}(\l_{n}+\l_{n-1})}
\left\{\l_{n-1}\left[\dot{x}_{n+1}A_{n+1}+f(\l_{n+1}, \l_{n-1})\right]
+\l_{n}\left[\dot{x}_{n-1}A_{n-1}+f(\l_{n}, \l_{n-2})\right]\right\}
\label{eq_app}
\end{equation}
where
\be
A_{n}=\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}\mathrm{d}x~\partial_{x}h\int_{n-\frac{1}{2}}^{x}\mathrm{d}x^{\prime}~(h-h_{n-\frac{1}{2}})
\label{eq_An}
\ee
and
\be
\begin{split}
f(x,y)=\sum_{i}a_{2i} &
\left[ (1+(-1)^{i})\left(R^{(i)}\left(\frac{x}{2}\right)^{2}-R^{(i)}\left(\frac{y}{2}\right)^{2}\right) \right. \\
& \left. -4\sum_{k=1}^{k<\frac{i}{2}}\left(R^{(2k)}\left(\frac{x}{2}\right)R^{(2i-2k)}\left(\frac{x}{2}\right)
-R^{(2k)}\left(\frac{y}{2}\right)R^{(2i-2k)}\left(\frac{y}{2}\right)\right)\right]+2U^{\prime\prime}_{m}\left(R^{2}\left(\frac{x}{2}\right)-R^{2}\left(\frac{y}{2}\right)\right),
\end{split}
\label{eq_fxy}
\ee
which reduces to
\begin{equation}
\begin{split}
\dot{x}_{n}=& \frac{1}{4 h\ss{m}^{2}\l_{n}\l_{n-1}-2\sqrt{2}(\l_{n}+\l_{n-1})}
\left\{\l_{n-1}\left[2\sqrt{2}\dot{x}_{n+1}+8U^{\prime\prime}_{m}\left(\exp(-\sqrt{2}\l_{n+1})-
\exp(-\sqrt{2}\l_{n-1})\right)\right]\right.\\
&\left. +\l_{n}\left[2\sqrt{2}\dot{x}_{n-1}+8U^{\prime\prime}_{m}\left(\exp(-\sqrt{2}\l_{n})-
\exp(-\sqrt{2}\l_{n-2})\right)\right]\right\} \qquad \mbox{[CH]}
\end{split}
\label{v_n-ch}
\end{equation}
for the CH equation, and to
\begin{equation}
\begin{split}
\dot{x}_{n}=&
\frac{1}{4 h\ss{m}^{2}\l_{n}\l_{n-1}-A(\l_{n}+\l_{n-1})} \times \qquad\qquad\qquad\qquad \mbox{[CH4]}\\
&\left\{\l_{n-1}\left[\dot{x}_{n+1}A+2U^{\prime\prime}_{m}A^{2}
\left(\cos\left(\kappa \l_{n+1}+2\alpha\right)\exp\left(-\kappa \l_{n+1}\right)
-\cos\left(\kappa \l_{n-1}+2\alpha\right)\exp\left(-\kappa \l_{n-1}\right)\right)\right]\right.\\
&\left.+\l_{n}\left[\dot{x}_{n-1}A+2U^{\prime\prime}_{m}A^{2}
\left(\cos\left(\kappa \l_{n}+2\alpha\right)\exp\left(-\kappa \l_{n}\right)
-\cos\left(\kappa \l_{n-2}+2\alpha\right)\exp\left(-\kappa \l_{n-2}\right)\right)\right]\right\},
\end{split}
\label{v_n-ch4}
\end{equation}
for the CH4 equation, with $A=\int_{-\infty}^{+\infty} dx (h\ss{m}^2-\hk^2)$.
The previous two equations are rather involved and the expressions for the kink speeds $\dot x_n$
are coupled, see the terms proportional to $\dot x_{n\pm 1}$ on the Right Hand Side.
Since the terms proportional to $\dot x_{n\pm 1}$ on the Right Hand Side of Eq.~(\ref{v_n-ch4})
are smaller than the term $\dot x_n$ on the Left Hand Side by a factor $\sim 1/\ell_n$ for large $\ell_n$,
we may neglect them when $\ell_n\gg a$.
Analogously, in the denominators we can neglect the terms linear in $\ell$ with respect to the terms
quadratic in $\ell$. Finally, we obtain
a simplified version of Eqs.~(\ref{v_n-ch},\ref{v_n-ch4}):
\begin{equation}
\begin{split}
\dot{x}_{n}=& \frac{1}{4 h\ss{m}^{2}\l_{n}\l_{n-1}}
\left[8\l_{n-1}U^{\prime\prime}_{m}\left(\exp(-\sqrt{2}\l_{n+1})-
\exp(-\sqrt{2}\l_{n-1})\right)\right.\\
&\left. +8\l_{n}U^{\prime\prime}_{m}\left(\exp(-\sqrt{2}\l_{n})-
\exp(-\sqrt{2}\l_{n-2})\right)\right] \qquad \mbox{[CH simplified]}
\end{split}
\label{v_n-ch_simple}
\end{equation}
and
\begin{equation}
\begin{split}
\dot{x}_{n}=&
\frac{1}{4 h\ss{m}^{2}\l_{n}\l_{n-1}} \times \qquad\qquad\qquad\qquad \mbox{[CH4 simplified]}\\
&\left\{\l_{n-1}\left[2U^{\prime\prime}_{m}A^{2}
\left(\cos\left(\kappa \l_{n+1}+2\alpha\right)\exp\left(-\kappa \l_{n+1}\right)
-\cos\left(\kappa \l_{n-1}+2\alpha\right)\exp\left(-\kappa \l_{n-1}\right)\right)\right]\right.\\
&\left.+\l_{n}\left[2U^{\prime\prime}_{m}A^{2}
\left(\cos\left(\kappa \l_{n}+2\alpha\right)\exp\left(-\kappa \l_{n}\right)
-\cos\left(\kappa \l_{n-2}+2\alpha\right)\exp\left(-\kappa \l_{n-2}\right)\right)\right]\right\}.
\end{split}
\label{v_n-ch4_simple}
\end{equation}
In Figure~\ref{fig_ch}, we compare the different approaches and the numerical solution of the continuum CH equation,
while in Fig.~\ref{fig_ch4} we do the same for the CH4 equation.
In both cases, exact numerical results are given by squares, our full analytical expressions
Eqs.~(\ref{v_n-ch},\ref{v_n-ch4}) are given by solid lines,
our simplified expressions Eqs.~(\ref{v_n-ch_simple},\ref{v_n-ch4_simple}) are given by dotted lines,
and the analytical expressions using multikink approximations are given by dashed lines.
The two figures clearly show that our full expressions (\ref{v_n-ch},\ref{v_n-ch4})
correctly reproduce the numerics of the continuum model in both cases.
The simplified model provides a reasonable result, but it is quantitatively inaccurate,
proving that the subdominant terms $\sim 1/\ell_n$ are relevant for the interkink distances $\ell_n$
used in the simulations of Fig.~\ref{fig_ch} and Fig.~\ref{fig_ch4}.
However, these subdominant terms should become negligible for larger
interkink distances $\ell_n$.
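To indicate how the coupled expressions are handled in practice, the following minimal sketch assembles Eq.~(\ref{v_n-ch}) (with $h\ss{m}=1$, so $U''(h\ss{m})=2$ and $A=2\sqrt{2}$) as a linear system for the velocities $\dot x_n$ of kinks placed on a periodic domain and solves it; the kink positions and the domain size are illustrative.
\begin{verbatim}
# Assemble Eq. (v_n-ch) as a linear system M xdot = b for the kink velocities and
# solve it, for kinks on a ring (periodic indexing), with h_m = 1 so that
# U''(h_m) = 2 and A = 2*sqrt(2).
import numpy as np

Lbox = 40.0
x = np.array([0.0, 6.0, 15.0, 26.0])     # alternating kinks on a ring of length Lbox
N = len(x)
ell = np.roll(x, -1) - x
ell[-1] += Lbox                          # gap between the last kink and the first one

A, Upp = 2.0 * np.sqrt(2.0), 2.0
E = np.exp(-np.sqrt(2.0) * ell)          # exp(-sqrt(2) ell_n)

M = np.zeros((N, N))
b = np.zeros(N)
for n in range(N):
    lm1, ln = ell[n - 1], ell[n]         # ell_{n-1} and ell_n (periodic indexing)
    M[n, n] = 4.0 * ln * lm1 - A * (ln + lm1)
    M[n, (n + 1) % N] -= A * lm1         # coupling to xdot_{n+1}
    M[n, (n - 1) % N] -= A * ln          # coupling to xdot_{n-1}
    b[n] = 8.0 * Upp * (lm1 * (E[(n + 1) % N] - E[n - 1])
                        + ln * (E[n] - E[(n - 2) % N]))

xdot = np.linalg.solve(M, b)
print("kink velocities:", xdot)
\end{verbatim}
For well-separated kinks the off-diagonal couplings are subdominant, which is the content of the simplified expressions above.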
\section{Stability of steady states}
In Sec.~\ref{sec_skd} we have shown that
TDGL-kinks feel an attractive interaction
while TDGL4-kinks feel an oscillating interaction, even though in both cases $R(x)$ vanishes exponentially at large $x$.
This fact implies two important differences:
(i)~all TDGL steady configurations are uniform, $x_{n+1}-x_n=\ell$, while TDGL4 ones may even be disordered;
(ii)~all TDGL steady states are linearly unstable, while TDGL4 steady states may be stable or unstable.
Let us prove these statements.
We can rewrite Eq.~(\ref{snkd}) by absorbing the positive prefactor on the RHS into $t$,
\be
\dot x_n = R(x_n-x_{n-1}) - R(x_{n+1}-x_n),
\label{skdnc}
\ee
whose time independent solution is $R(\ell_n)=R(\ell_{n-1})~\forall n$, i.e. $R(\ell_n)=r$, with $\ell_n=x_{n+1}-x_n$.
For the standard TDGL model $R(x)$ is a monotonic function, so the equation $R(\l_n)=r$ has at most one solution. In practice, every
uniform configuration $\ell_n=\ell$ is stationary. Instead, for the TDGL4 model, the equation
\be
R(\ell_n) \equiv A \cos(\kappa\l_n +\alpha) \exp(-\kappa\l_n) = r
\ee
has a number of solutions which increases when decreasing $|r|$, up to an infinite number of solutions for $r=0$.
As for the stability of a steady state, let us first focus on uniform configurations, i.e. all $\l_n=\l$.
In order to study the linear stability of this configuration we need to perturb it,
\be
\ell_n(t)=\ell +\epsilon_n(t),
\ee
and determine the temporal evolution of the perturbations $\epsilon_n(t)\ll \ell$. Using Eq.~(\ref{skdnc}) we get
\bea
\label{nc_gen_stability}
\dot \epsilon_n &=& 2R(\ell_n) -R(\ell_{n+1})-R(\ell_{n-1}) \\
&=& R'(\ell)\big( 2\epsilon_n-\epsilon_{n+1}-\epsilon_{n-1}\big),
\eea
whose single harmonic solution is $\epsilon_n(t)=\exp(\omega t +iqn)$, with
\be
\omega(q) = 4R'(\ell) \sin^2\left(\frac{q}{2}\right).
\ee
We have stability (instability) if $R'(\ell) <0 (>0)$.
Since $R_2(x)$ is an increasing function, see Eq.~(\ref{R2}),
any uniform configuration is unstable for TDGL.
This result leads to a perpetual coarsening dynamics~\cite{Langer}.
Instead, since $R_4(x)$ oscillates, its derivative also oscillates and, as
$\ell$ varies, we obtain stable steady states if $R'(\l)<0$
and unstable steady states if $R'(\l)>0$.
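As a concrete illustration, the sketch below evaluates $R_4'(\ell)$ for the tail of Eq.~(\ref{R4}) (with illustrative values of $A>0$ and $\alpha$, and with $\kappa$ evaluated for $h\ss{m}=1$) and lists the bands of uniform spacings $\ell$ for which the kink lattice is linearly stable.
\begin{verbatim}
# Linear stability of uniform TDGL4 kink lattices from the sign of R4'(ell),
# with R4(ell) = A cos(kappa ell + alpha) exp(-kappa ell), Eq. (R4): a uniform
# spacing ell is stable when R4'(ell) < 0.  A and alpha are illustrative (A > 0);
# kappa = (U''(h_m))^(1/4)/sqrt(2) is evaluated for h_m = 1.
import numpy as np

A, alpha = 1.0, 0.3
kappa = 2.0 ** 0.25 / np.sqrt(2.0)
ell = np.linspace(2.0, 30.0, 20000)

phase = kappa * ell + alpha
R4p = -A * kappa * (np.cos(phase) + np.sin(phase)) * np.exp(-kappa * ell)
stable = R4p < 0.0

print("stable bands of the uniform spacing ell (approximate):")
start = ell[0] if stable[0] else None
for i in range(1, len(ell)):
    if stable[i] and not stable[i - 1]:
        start = ell[i]                      # a stable band begins
    elif stable[i - 1] and not stable[i] and start is not None:
        print(f"  {start:6.2f} -- {ell[i - 1]:6.2f}")
        start = None
# Stable and unstable bands of width pi/kappa alternate, so arbitrarily many
# distinct stable spacings (and hence frozen configurations) exist.
\end{verbatim}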
In the general case of a nonuniform steady state,
\be
\l_n(t) = \l^*_n+\epsilon_n(t) \qquad \mbox{with} \quad R(\l^*_n)=r,
\ee
Eq.~(\ref{nc_gen_stability}), which is still valid, gives
\be
\dot \epsilon_n = 2R'(\l^*_n)\epsilon_n -R'(\l^*_{n-1})\epsilon_{n-1} -R'(\l^*_{n+1})\epsilon_{n+1}.
\ee
The linear character of the equations allows us to write $\epsilon_n(t) =e^{\sigma t} A_n$, getting
\be
2R'(\l^*_n)A_n -R'(\l^*_{n-1})A_{n-1} -R'(\l^*_{n+1})A_{n+1} = \sigma A_n
\label{e:An}
\ee
but the $n$-dependence of $R'(\ell_n^*)$ prevents
the diagonalization with Fourier modes
($A_n \ne e^{iqn}$).
Multiplying Eq.~(\ref{e:An}) by $R'(\ell_n^*)A_n^\dagger$, summing over all $n$, and after some simple recombinations of the l.h.s., we obtain
\begin{eqnarray}
\sum_{n=1}^N |R'(\ell_{n+1}^*)A_{n+1}-R'(\ell_n^*)A_n|^2= \sigma \sum_{n=1}^N R'(\ell_n^*)|A_n|^2 ,
\label{e:quad_form}
\end{eqnarray}
which shows that eigenvalues $\sigma$ are real. Furthermore,
if all quantities $R'(\ell_n^*)$ have the same sign, $\sigma$ has the sign of $R'(\ell_n^*)$.
In particular,
any steady-state kink configuration with $R'(\ell_n^*)<0$ for all $n$ is stable.
As a consequence, $R'(\ell_n^*)<0$ for all $n$ is a sufficient condition
for stability, and there is an infinite number of stable configurations
in which the system can be trapped and stuck during the dynamics.
If the quantities $R'(\ell_n^*)$ exhibit both positive and negative signs, Eq.~(\ref{e:quad_form}) does not allow us to draw conclusions.
However, in the simple cases of a period-2 configuration, $\l^*_n=\l^*_{n+2}$,
or a period-3 configuration, $\l^*_n=\l^*_{n+3}$,
we can prove that $R'(\ell_n^*)<0$ is also a necessary condition for stability.
Let's show it explicitly for the period-2 configuration. If
\be
\l^*_{2n} = \l\ss{s2} \qquad
\l^*_{2n+1} = \l\ss{s1},
\ee
we obtain two coupled equations which are solved assuming
\be
A_{2n} = c_2 e^{i2nq} \qquad
A_{2n+1} = c_1 e^{i(2n+1)q} .
\ee
The resulting eigenvalue equation is
\be
\sigma^2 -2 \left( R'(\l\ss{s1}^*) + R'(\l\ss{s2}^*) \right)\sigma +
4 R'(\l\ss{s1}^*) R'(\l\ss{s2}^*) \sin^2 q =0.
\ee
We have stability if both eigenvalues are negative, i.e.
\be
\mbox{stability} \quad \Leftrightarrow \quad
R'(\l\ss{s1}^*)<0 \;\; \mbox{and} \;\; R'(\l\ss{s2}^*)<0 .
\ee
\section{Summary and discussion}
Our paper studies kink dynamics deriving from a generalized Ginzburg-Landau free energy,
see Eq.~(\ref{F_ggl}).
The potential part of the free energy, $U(h)$, is the classical, symmetric double well potential,
typical of a bistable system. The ``kinetic" part of the free energy
is the sum of squares of order parameter derivatives of general order.
The main motivation to study such a free energy is that there are systems whose ``kinetic''
free energy is not given by surface tension, proportional to $(h_x^2)$, but rather by
bending energy, which is proportional to $(h_{xx}^2)$. Since the two terms are not mutually
exclusive, it is quite reasonable to consider the free energy
\be
{\cal F} = \int dx \left[ U(h) + \frac{K_1}{2} (\p_x h)^2 + \frac{K_2}{2} (\p^2_x h)^2 \right] .
\ee
Then, we have generalized the previous expression to Eq.~(\ref{F_ggl}). However, even though our treatment
is valid in full generality, we have focused on two cases, $K_1=1,K_2=0$ and $K_1=0,K_2=1$, i.e. on pure
surface tension systems (to check existing results) and on pure bending systems (a novel system
of specific biophysical interest~\cite{Thomas_PRE}).
Once ${\cal F}$ is given, we may derive a generalized Ginzburg-Landau equation, see Eq.~(\ref{nc}),
or a generalized Cahn-Hilliard equation, see Eq.~(\ref{c}).
The standard approach to derive an effective kink dynamics is to assume a specific form of $h(x,t)$ as
a suitable superposition of kinks, $\hk(x-x_n(t))$, located at $x_n(t)$. This method has proved to be
fruitful, because it has made it possible to explain the coarsening dynamics of TDGL/CH
models~\cite{Kawasaki_Ohta,Kawakatsu_Munakata,Nagai_Kawasaki_I,Nagai_Kawasaki_II,Nagai_Kawasaki_III},
to determine coarsening exponents, to study the effect
of a symmetry breaking term~\cite{PP_kinks}, and the effect of thermal noise.
However, the ability of the multikink approximation to reproduce quantitatively
the exact dynamics of the continuum model was already
questioned by Ei and Ohta~\cite{Ei_Ohta} for the TDGL model.
The failure of this goal is even more transparent when considering the bending energy,
i.e. the TDGL4/CH4 models.
In Figures~\ref{fig_tdgl}-\ref{fig_ch4} we make a detailed comparison of exact results (squares, derived from the direct
integration of the equation) with the standard multikink approximation (dashed lines)
and with our new results (full lines). The conclusion is that the new approach gives a reliable, discrete
description of the exact, continuous dynamics: see how full lines follow squares in all Figs.~\ref{fig_tdgl}-\ref{fig_ch4}.
We can still ask why we should derive an approximate kink dynamics if we have the full exact dynamics
of the order parameter $h(x,t)$. There are several good reasons:
(i)~an analytical approach to nonlinear full dynamics is hard if not impossible;
(ii)~kink dynamics is easy to understand and analytical methods are feasible;
(iii)~numerical simulation of kink dynamics is far faster than the simulation of the full PDE.
In addition to being numerically reliable, some of our kink models (TDGL4/CH4) have the
advantage of showing an oscillating tail $R(x) = \hk(x) - h\ss{m}$. This oscillation implies two
important features. Firstly, an oscillating tail means an oscillating force between kinks, as opposed to the
classical TDGL/CH models. Therefore, the long-time dynamical scenario is not a coarsening scenario,
but freezing into one of the many stable states~\cite{Thomas_PRE}. This can give rise to a consistency problem
when we use the approximation $\l_n\gg a$ to derive kink dynamics. However, the approximation is expected to give
reasonable results even for kinks which are not so far apart, and the comparison with exact numerics supports this claim.
Secondly, an oscillating tail $R(x)$ is at the origin of a quantitative discrepancy between classical multikink
approaches and our approach. Using numerical simulations, we have shown that our approach provides much better quantitative results.
For example, classical results for TDGL4 provide an interkink force
proportional to $R(\l)$ while a force $F(\l)\approx R^2(\l/2)$ appears to be more appropriate.
If it were $R(\l)\simeq \exp(-\kappa\l)$, the two approaches would be equivalent, apart from a rescaling of time.
Instead, if $R(\l)\simeq \cos(\kappa\l +\alpha)\exp(-\kappa\l)$ the two approaches are definitely different.
In this paper we have focused on the derivation of kink dynamics and on the quantitative comparison with
the exact dynamics of the PDE. The kink models for TDGL4 and CH4 are also considered in Ref.~\cite{Thomas_long}
where we use them specifically for the long-time dynamics of the deterministic model and for the
dynamics of the stochastic models at all times. In fact, once we have proven (here) their quantitative reliability,
we can use them with confidence whenever the direct numerical integration of PDEs would be
too demanding in terms of CPU time. This is certainly the case if we require to go to very long times
or if we need to add stochastic noise to the equations.
Our evaluation of the simulation times for the PDE ($t\ss{PDE}$) and for the kink model ($t\ss{k}$)
allows us to conclude that we gain four orders of magnitude, $t\ss{PDE}/t\ss{k}\approx 10^4$.
\acknowledgments
We wish to thank Xavier Lamy for useful
insights about the stability analysis of kink arrays.
We also acknowledge support from
Biolub Grant No. ANR-12-BS04-0008.
\section{Introduction}
\label{sec:intro}
In an attempt to better understand a system as complex as the human brain, multimodal measurements can be beneficial since they are able to provide information on complementary aspects of the same system. Through jointly analyzing the data resulting from different modalities, their individual advantages may be exploited and at the same time some of their disadvantages can be mitigated~\cite{2015_Lahat_Multimodal,2015_Adali_Fusiona}. In this way, a more accurate localization of the activated brain areas can be performed.
Two of the most commonly used modalities for monitoring the brain activity are the electroencephalography (EEG) and the functional Magnetic Resonance Imaging (fMRI). fMRI is a noninvasive brain imaging technique, which indirectly studies brain activity by measuring fluctuations of the blood-oxygen-level dependent (BOLD) signal~\cite{2008_lindquist_statistical}. The first BOLD fluctuation occurs roughly 2--3 seconds after the onset of the neural activity, when the oxygen-rich (oxygenated) blood starts displacing the oxygen-depleted (deoxygenated) blood. This rises to a peak after 4–6 seconds, before falling back to the baseline (and typically undershooting slightly).
The time course of the BOLD signal corresponding to a transient neural activity is called the Haemodynamic Response Function (HRF). Although fMRI has a high spatial resolution, often at the millimeter scale, it is a ``delayed'' measure of the brain activity, with its temporal resolution being limited by the repetition time of the scanner (TR), usually of the order of seconds~\cite{2008_lindquist_statistical}.
EEG provides information with respect to the neural electrical activity in the brain as a function of time. This is done via the use of multiple electrodes that are placed at certain locations over the scalp or (in more rare cases) over the cortex under the skull. The EEG signal results from the electrical measurement of the neuronal activation, realized through the movement of charged ions at the junction between the synapses of (the dendrites of) the neurons. This provides a more direct measure of the neuronal activity compared to fMRI (sensitive to millisecond changes in neural processing) and hence a better temporal resolution. However, EEG has poor spatial resolution, limited by the number of electrodes employed and the resistive properties of the extra-cerebral tissues. Furthermore, due to the fact that electrodes are more sensitive to neural activations that occur closer to the scalp, the determination of the exact location of activations that take place in deeper areas is more challenging~\cite{2007_sanei_eeg}. The complementary nature of their spatiotemporal resolutions motivates the fusion of EEG and fMRI towards a better localization of the brain activity, both in time and space~\cite{2015_Karahan_spacecoupling,2017_schward_thesis}.
Data fusion generally refers to the analysis of several datasets in a way that allows them to interact with and inform each other. Different types of fusion can be realized~\cite{2015_Lahat_Multimodal,2015_Karahan_spacecoupling,2017_ramachandram_typefusion}, and the exact definition may differ in its degree of generality as well as across research areas~\cite{2015_cocchi_fusion}. Many different applications, involving diverse sets of inter-related data, have been proposed. These include metabolomics~\cite{2007_Acar_metabolomics}, array processing~\cite{2013_sorensen_array}, sentiment analysis~\cite{2013_zadeh_sentiment}, multidimensional harmonic retrieval~\cite{2013_sorensen_harmonics1,2013_sorensen_harmonics2}, link prediction~\cite{2013_ermis_links} and, of course, biomedical applications~\cite{2015_Lahat_Multimodal,2015_Karahan_spacecoupling,2017_ramachandram_typefusion,2008_calhoun_ica,2015_Adali_Fusiona,2015_Adali_Fusionb}, among many others. Fusion of EEG and fMRI data is expected to be of practical value given their complementary nature as described above.
\subsection{Categorization of data fusion}
Data fusion techniques can be categorized in various ways. The main categorizations are based on a) the level at which the fusion is performed and b) the way the fusion is performed (Fig.~\ref{fusionlevel}). Two main levels and two sub-levels have been defined and have become a reference classification~\cite{2017_ramachandram_typefusion,2019_hall_typefusion}, namely, ``early''/low-level fusion and ``late'' fusion, which is subdivided into mid-level fusion and high-level fusion. In ``early''/low-level fusion (or observational-level fusion), raw datasets (or blocks of data) are used. Mid-level (feature-level or state-vector-level) fusion is considered when the data fusion methods operate on features extracted from each dataset separately; so, instead of using raw data for modelling the task at hand (e.g., classification), features of the data are used. The high-level (decision/information-level) fusion methods model each dataset separately, and only decisions (model outcomes) from the processing of each data block are fused.
The categorization based on the way the fusion is performed is two-fold. The earliest approaches for fusion of fMRI and EEG (and a large number of recent ones, e.g.,~\cite{2015_ferdowsi_new}) are essentially ``integrative'' in nature. The rationale behind these methods is to employ objective functions for the decomposition of the fMRI signal with constraints based on information from EEG (or vice versa). Recently, the emphasis has turned to ``true'' fusion, e.g.,~\cite{2016_hunyadi_fusion,2017_acar_acmtf1,2017_acar_acmtf2,2017_Eyndhoven_HRF,2004_Martinez_PLS,2015_Karahan_spacecoupling}, where the decomposition of the data from each modality can influence the other using all the common information that may exist. During optimization, the factors which have been identified as shared are appropriately ``coupled'' and thus a bridge between the two modalities is established. For a detailed literature review of such methods, the interested reader is referred to~\cite{2018_kofidis_partially,2015_Adali_Fusiona}.
\subsection{Fusion of EEG and fMRI}
Multivariate bilinear (i.e., matrix-based) methods, mainly based on Independent Component Analysis (ICA)~\cite{2006_calhoun_jica_pica,2012_mijovic_whyhowjica,2014_swinnen_jica,2015_hunyadi_parallel} and relying on the concatenation of different modes, have been, up to recently, the state of the art for jointly analyzing EEG and fMRI. However, by definition such methods fall short in exploiting the inherently multi-way nature of these data. fMRI and EEG datasets are inherently multi-dimensional, comprising information in time and along different voxels or channels, subjects, trials, etc. For EEG, in order to better exploit the information, the signal can be expanded in additional dimensions, e.g. through incorporating spectral features by computing a wavelet transform of the EEG data or using the segment/Event Related Potential (ERP) mode (ERP is the response to a specific sensory, cognitive, or motor stimulus)~\cite{2015_cong_tensorseeg}. This multi-dimensional nature of the EEG and fMRI datasets points to the adoption of tensor (multi-linear) models instead of the bi-linear ones. Several tensor decomposition methods have been applied in fMRI and EEG Blind Source Separation (BSS), including Canonical Polyadic Decomposition (CPD) or Parallel Factor Analysis (PARAFAC)~\cite{2004_andersen_structure-seeking,2007_Acar_cpdeeg}, and its generalizations known as PARAFAC2~\cite{2017_chatzichristos_BTD2,2017_Loukianos_PFAC2} and Block Term Decomposition (BTD)~\cite{2012_de_lathauwer_block,2019_chatzichristos_journal}.
The representations that are possible with tensor models can a) improve the ability of extracting spatiotemporal modes of interest~\cite{2004_andersen_structure-seeking,2007_stegeman_comparing,2013_helwig_critique}, b) facilitate neurophysiologically meaningful interpretations~\cite{2004_andersen_structure-seeking}, and c) produce unique (modulo scaling and permutation ambiguities) representations under mild conditions~\cite{2000_sidiropoulos_uniqueness}. Those mild conditions can be even more relaxed in the case of coupled tensor decompositions than their single-tensor counterparts. It has been demonstrated that coupling through one or more common factors that are shared among tensors can ensure uniqueness beyond what is possible
when considering separate decompositions~\cite{2015_sorensen_coupled}. Moreover, tensorial methods are able to make predictions more robustly in the presence of noise, compared to their two-way counterparts~\cite{2004_andersen_structure-seeking,2017_sidiropoulos_reviewtensor,2019_chatzichristos_journal}. It should be noted that the biomedical data are usually highly corrupted by noise~\cite{2004_andersen_structure-seeking}.
\begin{figure}
\hspace*{-1.5cm}
\begin{tikzpicture}[all tensors/.style={dim={2,2,2}, fancy}, node distance=0.4cm, chain]
\node at (-2,0) [tensor] (t) {$\ten{T}_1$};
\node [frontal matrix, dim={2,2},right=0.2cm of t, 2d] (m) {$\mat{M}_1$};
\node [right of = m, node distance = .75cm] (X1) {};
\node at (5,3) [tensor] (t2) {$\ten{T}_1$};
\node [frontal matrix, dim={2,2}, right=-0.75cm of t2,1d] (m2) {$\mat{M}_1$};
\draw [->,thick] (X1) to [bend left=35] node[above] {} (4,3);
\draw [->,thick] (9.5,3) -- (11,3) ;
\draw [line width=0.8mm] (0,1.5) -- (11,1.5) ;
\draw [line width=0.2mm] (3,-2.4) -- (11,-2.4) ;
\draw [->,thick] (X1) to node {} (3.8,-0.4);
\node at (3.5,0.1) [tensor,dim={1,1,1}] (t3) {$\ten{T}_1$};
\node at (3.75,-1.1)[frontal matrix, dim={1,1}, 1d] (m3) {$\mat{M}_1$};
\node at (6.5,0.1)[frontal matrix, dim={1,1}, front fill=green, 2d] (m4) {$\mat{f}_1$};
\node at (6.5,-1.1)[frontal matrix, dim={1,1}, front fill=ForestGreen, 2d] (m5) {$\mat{f}_2$};
\node at (8.475,-0.5)[frontal matrix, dim={2.05,1.05}, 1d] (m6) {};
\node at (8.5,-1)[frontal matrix, dim={1,1}, front fill=ForestGreen, color=ForestGreen, 2d] (m6) {};
\node at (8.5,0)[frontal matrix, dim={1,1}, front
fill=green,color=green, 2d] (m7) {};
\node[text width=3cm] at (8.75, -0.5) {\textbf{F}};
\draw [->,thick] (5.5,0.1) -- (6.8,0.1) ;
\node at (4.85,-0.5) [align=center,draw,rounded corners,fill=red!20] {feature\\selection};
\draw [->,thick] (5.5,-1.1) -- (6.8,-1.1) ;
\draw [->,thick] (8,0.1) -- (8.7,-0.45) ;
\draw [->,thick] (8,-1.1) -- (8.7,-0.55) ;
\draw [->,thick] (10,-0.5) -- (11,-0.5) ;
\draw [->,thick] (X1) to [bend right=32] node[above] {} (3.5,-3.5);
\node at (3.5,-3.5) [tensor,dim={1,1,1}] (t3) {$\ten{T}_1$};
\node at (3.75,-4.6)[frontal matrix, dim={1,1}, 1d] (m3) {$\mat{M}_1$};
\node at (6.5,-3.5)[align=center, draw,fill=Dandelion] {Modelling};
\node at (6.5,-4.6)[align=center,draw,fill=Dandelion] {Modelling};
\draw [->,thick] (5.5,-3.5) -- (6.8,-3.5) ;
\draw [->,thick] (5.5,-4.6) -- (6.8,-4.6) ;
\draw [->,thick] (8.8,-3.5) -- (11,-4) ;
\draw [->,thick] (8.8,-4.6) -- (11,-4.2) ;
\node at (10.7,-0.4)[frontal matrix, dim={10,1}, front
fill=Dandelion, 2d] (m10) {};
\node[align=center,font=\large,rotate=270] at (11.2,1) {\textbf{Modelling}};
\node[align=center,font=\large] at (-3,2.5) {\textbf{\underline{Early fusion}}};
\node[align=center,font=\large] at (-3,-3) {\textbf{\underline{Late fusion}}};
\node[align=center,font=\large] at (3,1.8) {\underline{Low level}};
\node[align=center,font=\large] at (3,-2.1) {\underline{Mid level}};
\node[align=center,font=\large] at (3,-5.5) {\underline{High level}};
\end{tikzpicture}
\caption{Different types of data fusion approaches.}
\label{fusionlevel}
\end{figure}
Various ways to realize the coupling have been proposed, depending on the coupled mode: a) coupling in the spatial domain with the use of the so-called lead-field matrix, which summarizes the volume conduction effects in the head (by transforming the 2D spatial information of the EEG to the 3D spatial information of the fMRI) ~\cite{2015_Karahan_spacecoupling}, b) coupling in the time domain using the convolution of the EEG time course with an HRF~\cite{2004_Martinez_PLS}, and c) coupling in the subject domain, using the assumption that the same neural processes are reflected in both modalities with the same covariation~\cite{2006_calhoun_jica,2014_swinnen_jica,2016_hunyadi_fusion}.
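To make the temporal coupling of option (b) concrete, the following minimal NumPy sketch builds a matrix that convolves an EEG-rate time course with a sampled HRF and then down-samples the result to the fMRI sampling grid; this is the role played by the matrix $\boldsymbol{H}$ appearing in the coupled cost functions later on. The sampling rates, the placeholder HRF shape and the function name are illustrative assumptions, not part of any specific toolbox.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def coupling_matrix(hrf, n_eeg, fs_eeg, tr_fmri):
    """Matrix mapping an EEG-rate time course to fMRI time:
    causal convolution with the sampled HRF (lower-triangular
    Toeplitz) followed by down-sampling to one sample per TR."""
    first_col = np.zeros(n_eeg)
    first_col[:len(hrf)] = hrf
    C = toeplitz(first_col, np.zeros(n_eeg))   # convolution matrix
    step = int(round(fs_eeg * tr_fmri))        # EEG samples per TR
    D = np.eye(n_eeg)[::step]                  # down-sampling rows
    return D @ C

# toy example: 100 Hz EEG, TR = 2 s, a crude 5 s placeholder "HRF"
hrf = np.hanning(500)
H = coupling_matrix(hrf, n_eeg=2000, fs_eeg=100, tr_fmri=2.0)
print(H.shape)                                 # (10, 2000)
\end{verbatim}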
Heterogeneity in the datasets is also manifested in the models used to represent them. In the EEG-fMRI fusion example, classical approaches adopt a space (channels) $\times$ time $\times$ frequency/ERP tensor model for EEG (for the single-subject case) whereas the fMRI signal is commonly represented as a matrix with its dimensions corresponding to space (voxels) $\times$ time. Their fusion relies on the coupling of the EEG tensor and the fMRI matrix along their common mode (in one of the ways described before). Thus, although the multi-way nature of EEG has been exploited in earlier fusion methods~\cite{2017_acar_acmtf1,2016_hunyadi_fusion}, it has been so far neglected for fMRI. Furthermore, those methods rely on preprocessing of the fMRI data using the General Linear Model (GLM) framework. A spatial map of interest (areas of activation) per subject is extracted from the fMRI data and all the spatial maps are stacked in a matrix (space $\times$ subjects), hence discarding the extra dimension of time and relying on Coupled Matrix Tensor Factorization (CMTF) to solve the joint BSS problem.\footnote{Advanced CMTF (ACMTF)~\cite{2017_acar_acmtf1,2017_acar_acmtf2} allows the presence of both shared and unshared components in the coupled factor(s) and provides a way to automatically determine them. Recently the uniqueness properties of such partially coupled decompositions have been studied~\cite{2018_kofidis_partially,2019_sorensen_coupledpartially}.} In the GLM framework, a canonical HRF is assumed to be known (and be invariant in space and among subjects), the expected signal changes are defined as regressors of interest in a multiple linear regression analysis and the estimated coefficients are tested against a null hypothesis. In the EEG and fMRI studies using GLM, the EEG signal (or part of it) is used as the regressor of interest. Intra- and inter-subject variability of HRF is known to exist~\cite{2004_handweker_bold}, hence a possible misspecification of the HRF may lead to biased estimates of widespread activity in the brain~\cite{2008_lindquist_statistical,2004_handweker_bold}. Moreover, the mismatch of the temporal characteristics of EEG and fMRI further limits the potential of GLM analysis~\cite{2015_hunyadi_parallel}. The use of the spatial maps of GLM categorizes such CMTF-based methods as late fusion~\cite{2017_ramachandram_typefusion}.
In all of the approaches previously described, the coupling between the corresponding modes is ``hard'', meaning that the shared factors are constrained to be equal in the two datasets (after any transformation applied, e.g., convolution with an HRF). Such an assumption is very restrictive, since it implies that the employed transformation is valid for every area of the brain and for every subject. In order to alleviate the problems caused in the modelling by forcing the shared factors to be the same between modalities, a ``softer'' assumption of similarity (rather than strict equality) can be made instead~\cite{2014_seichepine_soft,2015_Farias_softMultimodal}. Furthermore, different methods can be used to account for a possible misspecification of the HRF. Constraining the HRF to a class of ``plausible'' waveforms and estimating the optimal one from the data itself has been proposed in~\cite{2017_Eyndhoven_HRF} for the single-subject case. Such approaches will be called ``flexible''.
In this work, we investigate early~\cite{2017_ramachandram_typefusion} fusion of fMRI and EEG via soft (assuming similarity rather than strong/hard equality) and flexible coupling. As explained previously, soft and flexible coupling are different ways to accommodate a possible mismodelling of the HRF. Their main difference is that with soft coupling the HRFs of the different subjects are assumed to be similar (but not equal) to an a priori known HRF, while in the flexible approach only the model of the HRF is known a priori and the variables of the model, which determine the exact shape of the HRF, are estimated via optimization. With our approach, we want to demonstrate the gains from:
\begin{itemize}
\item Using raw data instead of features (early fusion), omitting the GLM preprocessing step in an effort to fully exploit the information underlying the raw data~\cite{2017_ramachandram_typefusion,2015_Lahat_Multimodal}
\item Exploiting the multi-way nature of both modalities either by multi-way tensors (when possible) for both modalities or double CMTFs
\item Using flexible and soft coupling models in order to alleviate the problem of mismodelling of the HRF.
\end{itemize}
We also want to compare the flexible and soft coupling methods via simulated data. Furthermore, we propose an alternative model for the HRF, and we demonstrate the advantage of the proposed methods over methods based on ICA, hard coupling and uncoupled CPD per modality.
\subsection{Notation}
Vectors, matrices and higher-order tensors are denoted by bold lower-case, upper-case and calligraphic upper-case letters, respectively. For a matrix $\boldsymbol{A}$, $\boldsymbol{A}^\top$ and $\boldsymbol{A}^{\dag}$ denote its transpose and pseudo-inverse, respectively. An entry of a vector $\boldsymbol{a}$, a matrix $\boldsymbol{A}$, or a (3rd-order) tensor $\boldsymbol{\mathcal{A}}$ is denoted by $a_i$, $a_{i,j}$, or $a_{i,j,k}$, respectively. Matlab notation is used to denote a column of a matrix $\mathbf{A}$, namely $\mathbf{A}(:,j)$ is its $j$th column. $\boldsymbol{I}_m$ is the $m$th-order identity matrix and $\boldsymbol{1}_{m}$ denotes the $m\times 1$ vector of all ones. The symbols $\otimes$ and $*$ denote the Kronecker and the Hadamard (elementwise) products, respectively. The column-wise Khatri--Rao product of two matrices, $\boldsymbol{A} \in \mathbb{R} ^{I\times R}$ and $\boldsymbol{B} \in \mathbb{R} ^{J\times R}$, is denoted by $\boldsymbol{A}\odot\boldsymbol{B}=\begin{bmatrix}\boldsymbol{a}_1\otimes \boldsymbol{b}_1, \boldsymbol{a}_2\otimes \boldsymbol{b}_2,\ldots, \boldsymbol{a}_R\otimes \boldsymbol{b}_R \end{bmatrix}$, with $\boldsymbol{a}_j,\boldsymbol{b}_j$ being the $j$th columns of $\boldsymbol{A},\boldsymbol{B}$, respectively. The outer product of two tensors is denoted by $\circ$. For an $N$th-order tensor, $\boldsymbol{\mathcal{A}} \in \mathbb{R} ^{I_1 \times I_2 \times \cdots \times I_N}$, $\boldsymbol{A}_{(n)}\in \mathbb{R} ^{I_n \times I_1I_2 \cdots I_{n-1}I_{n+1} \cdots I_N}$ is its mode--$n$ unfolded (matricized) version (whose rank is known as mode--$n$ rank), which results from mapping the tensor element with indices $(i_1,i_2,\ldots,i_N)$ to a matrix element $(i_n,j)$, with $j=1 + \sum_{k=1,k\neq n}^N [ ( i_k -1 ) J_k]$, $J_k=~\begin{cases}
1, \qquad \mathrm{for} \quad k=1 \: \mathrm{or} \: k=2 \: \mathrm{and} \: n=1, \\
\prod_{m=1,m\neq n}^{k-1}I_m, \qquad \mathrm{otherwise}.
\end{cases}$ \\
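For concreteness, the short NumPy sketch below reproduces the column-wise Khatri--Rao product and the mode-$n$ unfolding in the index convention given above; it is only an illustrative helper (the analyses in this paper rely on Tensorlab), and all names are ours.
\begin{verbatim}
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R):
    column r equals kron(A[:, r], B[:, r])."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    """Mode-n unfolding T_(n), matching the index map in the text
    (the first remaining mode varies fastest along the columns)."""
    return np.reshape(np.moveaxis(T, mode, 0),
                      (T.shape[mode], -1), order='F')

# quick self-check on a random 3rd-order tensor
T = np.random.randn(4, 5, 6)
print(unfold(T, 1).shape)   # (5, 24)
\end{verbatim}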
\section{Methods}
\label{sec:theory}
\subsection{Canonical Polyadic Decomposition (CPD)}
CPD (or PARAFAC) \cite{2017_sidiropoulos_reviewtensor} approximates a 3rd-order tensor, $\boldsymbol{\mathcal{T}} \in \mathbb{R} ^{I_1 \times I_2\times I_3}$ (naturally extended to tensors of higher order), by a sum of $R$ rank-1 tensors,
\begin{equation}
\label{cpd1}
\boldsymbol{\mathcal{T}} \approx \sum_{r=1}^{R} \boldsymbol{a}_r \circ \boldsymbol{b}_r \circ \boldsymbol{c}_r
\end{equation}
Equivalently, for the $k$th frontal slice of $\boldsymbol{\mathcal{T}}$,
\begin{equation}
\label{cpd3}
\boldsymbol{T}_{k} \approx \boldsymbol{A} \boldsymbol{D}_k \boldsymbol{B}^\top, \quad k=1,2,\ldots,I_3
\end{equation}
\noindent where $\boldsymbol{A}=\begin{bmatrix}\boldsymbol{a}_1,\boldsymbol{a}_2,\ldots,\boldsymbol{a}_R\end{bmatrix}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ are similarly defined matrices, and $\boldsymbol{D}_k$ is the diagonal matrix having the elements of the $k$th row of $\boldsymbol{C}$ on its diagonal. The main advantage of the CPD, besides its simplicity, is the fact that it is unique (up to permutation and scaling) under mild conditions~\cite{2017_sidiropoulos_reviewtensor}. Uniqueness of CPD is crucial to its application in BSS problems. Its performance is, however, largely dependent on the correct estimation of the tensor rank, $R$. Several heuristic methods have been proposed for the latter problem~\cite{2003_bro_new}.
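As a minimal illustration of how the CPD factors can be computed, the sketch below implements a plain alternating least squares (ALS) loop for a 3rd-order tensor; it is a didactic NumPy version, not the NLS-based solver used later in this paper, and all names are ours.
\begin{verbatim}
import numpy as np

def cpd_als(T, R, n_iter=200, seed=0):
    """Plain ALS for a rank-R CPD of a 3rd-order tensor T."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    for _ in range(n_iter):
        # each factor solves a linear LS problem given the other two
        A = np.einsum('ijk,jr,kr->ir', T, B, C) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# sanity check on an exactly rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, R=2)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
print(err / np.linalg.norm(T))   # typically close to zero
\end{verbatim}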
\subsection{PARAFAC2}
PARAFAC2~\cite{2017_sidiropoulos_reviewtensor} differs from CPD in that strict multilinearity is no longer a requirement. CPD applies the same factors across all the different slices, whereas PARAFAC2 relaxes this constraint and allows variation across one of the modes (in terms of the values and/or the size of the corresponding factor matrix). For this reason, PARAFAC2 is not a tensor model in the strict sense as it can represent both regular tensors, with weaker constraints than CPD, as well as irregular tensors (collections of matrices of different dimensions) with size variations along one of the modes. It can be written in terms of the (here frontal) slices of the tensor $\boldsymbol{\mathcal{T}}$ as
\begin{equation}
\label{parafac22}
\boldsymbol{T}_{k} \approx \boldsymbol{A}_k \boldsymbol{D}_k \boldsymbol{B}^\top, \quad k=1,2,\ldots,I_3 ,
\end{equation}
\noindent with $\boldsymbol{A}_k$ being different for different $k$'s. This type of decomposition is clearly non-unique. Thus, in order to allow for uniqueness, it has been proposed to add the constraint that the cross products $\boldsymbol{A}_k^\top \boldsymbol{A}_k$ be constant over $k$. This has been shown~\cite{1999_kiers} to be equivalent to setting $\boldsymbol{A}_k=\boldsymbol{P}_k\boldsymbol{F}$, where the $R \times R$ matrix $\boldsymbol{F}$ is the same for all slices, while the variability is represented by the columnwise orthonormal $I_1 \times R$ matrix $\boldsymbol{P}_k$. Under this constraint, one has to fit the equivalent model
\vspace{-1mm}
\begin{equation}
\label{parafac23}
\boldsymbol{P}_k^\top\boldsymbol{T}_{k} \approx \boldsymbol{F} \boldsymbol{D}_k \boldsymbol{B}^\top, \quad k=1,2,\ldots,I_3.
\end{equation}
\vspace{-2mm}
As shown in~\cite{1999_kiers}, $\boldsymbol{P}_k$ can be computed as $\boldsymbol{P}_k=\boldsymbol{V}_k\boldsymbol{U}_k^\top$, where $\boldsymbol{U}_k$ and $\boldsymbol{V}_k$ are the left and right singular matrices of $\boldsymbol{F} \boldsymbol{D}_k \boldsymbol{B}^\top\boldsymbol{T}_{k}^\top$. As can be seen from Eq.~(\ref{parafac23}), the problem of fitting PARAFAC2 has been transformed into that of fitting a CPD model with transformed data. Applications of PARAFAC2 in fMRI and EEG analysis include~\cite{2015_ferdowsi_new,2017_chatzichristos_BTD2} and ~\cite{2017_Loukianos_PFAC2}, respectively.
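The update of the orthonormal matrices $\boldsymbol{P}_k$ described above is easy to state explicitly; the sketch below performs it for a single slice, assuming that current estimates of $\boldsymbol{F}$, $\boldsymbol{D}_k$ and $\boldsymbol{B}$ are available (all shapes and names are illustrative).
\begin{verbatim}
import numpy as np

def parafac2_Pk(Tk, F, Dk, B):
    """P_k = V_k U_k^T, with U_k, V_k the left/right singular
    matrices of F D_k B^T T_k^T, as described in the text."""
    M = F @ Dk @ B.T @ Tk.T
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt.T @ U.T                      # orthonormal columns

# toy slice of size 8 x 10 with R = 3 components
rng = np.random.default_rng(0)
Tk = rng.standard_normal((8, 10))
F = rng.standard_normal((3, 3))
B = rng.standard_normal((10, 3))
Dk = np.diag(rng.standard_normal(3))
Pk = parafac2_Pk(Tk, F, Dk, B)
print(np.allclose(Pk.T @ Pk, np.eye(3)))   # True
\end{verbatim}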
\subsection{ICA-based methods}
Classical approaches for jointly analyzing fMRI and EEG include Joint Independent Component Analysis (JICA) (using one~\cite{2006_calhoun_jica,2006_calhoun_jica_pica} or multiple~\cite{2014_swinnen_jica} electrodes for EEG), and Parallel ICA~\cite{2015_hunyadi_parallel,2006_calhoun_jica_pica}. JICA jointly analyzes data from the same subjects from both modalities simultaneously. To achieve this, it uses the features derived from the first-level analysis of fMRI (spatial maps) and the averaged ERP epochs of EEG, hence JICA is also classified as a late fusion model. JICA assumes that a stronger ERP yields a stronger BOLD fluctuation in the same area (and vice versa), which supports the common assumption of having the same linear mixing system in the two modalities (in the subjects domain). Furthermore, each pair of coupled components is assumed to be dependent between the modalities and at the same time statistically independent of the rest of the components~\cite{2012_mijovic_whyhowjica}.
Parallel ICA first identifies components separately for each modality, performing a temporal ICA in EEG and a spatial ICA in fMRI. In a second step, the corresponding extracted components are identified based on their correlation in the temporal domain. Parallel ICA can be performed either at a single-subject level~\cite{2015_hunyadi_parallel} or at a multi-subject level using Group ICA~\cite{2010_lei_steff}.
\subsection{Modelling of the HRF}
As mentioned in the introduction, the GLM framework is the one most commonly adopted in fMRI analysis. Analysis within the GLM is rooted in the simple assumption that the variance in the fMRI BOLD signal can be modeled by the convolution of an (assumed known) HRF with the event/stimulus. The haemodynamic response is composite and nonlinear, resulting from neuronal and vascular changes, and it is known to vary among different subjects as well as among different areas of the same brain (inter-subject and intra-subject variability)~\cite{2004_handweker_bold}.
GLM-based methods explicitly need an estimate of the functional shape of the HRF to infer the expected activation pattern from the experimental task. Among the different available models for the HRF, the one that is more widely used is the model based on the two Gamma distributions~\cite{2008_lindquist_statistical,2004_handweker_bold}, usually referred to as double Gamma HRF model:
\begin{equation}
H(t,z)=\Gamma^{-1}(z_{(1)}) z_{(2)}^{z_{(1)}} t^{z_{(1)}-1} \mathrm{e} ^{-z_{(2)}t} - z_{(3)} \Gamma^{-1}(z_{(4)}) z_{(5)}^{z_{(4)}} t^{z_{(4)}-1} \mathrm{e} ^{-z_{(5)}t},
\end{equation}
\noindent where $\Gamma(\cdot)$ is the Gamma function, $\Gamma(\cdot)^{-n}=1/\Gamma(\cdot)^{n}$, and $z_{(1,2,3,4,5)}$ are the parameters that control the functional shape of the HRF. The values $z_{(1)}=6$, $z_{(2)}=1$, $z_{(3)}=\frac{1}{6}$, $z_{(4)}=16$, $z_{(5)}=1$ are used to generate the canonical HRF used in GLM.
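For reference, the canonical HRF follows directly from the double-Gamma expression above; the sketch below samples it on a regular time grid (the grid itself is an arbitrary choice made for illustration).
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def double_gamma_hrf(t, z=(6.0, 1.0, 1/6, 16.0, 1.0)):
    """Double-Gamma HRF; the default z gives the canonical HRF."""
    z1, z2, z3, z4, z5 = z
    return (z2**z1 / gamma(z1)) * t**(z1 - 1) * np.exp(-z2 * t) \
         - z3 * (z5**z4 / gamma(z4)) * t**(z4 - 1) * np.exp(-z5 * t)

t = np.arange(0, 30, 0.1)       # 30 s support, 0.1 s resolution
h = double_gamma_hrf(t)
print(t[np.argmax(h)])          # peak around 5 s, as expected
\end{verbatim}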
Several other models have been proposed, such as the methods based on the cosine function~\cite{2004_zarahn_hrf}, radial bases~\cite{2004_riera_hrf}, and spectral basis functions~\cite{2002_liao_hrf}. Furthermore, neuro-physiologically informed non-linear models of the HRF have been proposed, which describe the dynamic changes in deoxyhemoglobin content as a function of blood oxygenation and blood volume~\cite{1998_buxton_dynamics,2004_buxton_modeling}: in the so-called ``balloon'' model, the blood flow dynamics during brain activation are modelled with the neuronal activity approximated by the stimulus/task input scaled by a factor called neural efficiency. However, it must be pointed out that the models exhibit differences both in capturing the evoked changes of the HRF and in the number of parameters used to model the HRF~\cite{2009_lindquist_hrf}.
In this work, a new lighter model for the functional shape of the HRF will be tested, based on the Lennard-Jones potential~\cite{2018_LennardJones_wiki}. The latter is used in physics to model the repulsive and attractive forces between neutral atoms or
molecules. Due to its computational simplicity, the Lennard-Jones potential is used extensively in computer simulations even though more accurate potentials exist. This lighter model is adopted here in view of its smaller number of parameters and its smoother partial derivatives, which are needed during the optimization\footnote{A detailed description and motivation of the use of the Lennard-Jones potential along with a fit analysis with real data can be found in~\cite{2020_morante_hrf}.}. The Lennard-Jones model (as it will be henceforth referred to) is defined over the non-negative real numbers and can be expressed as:
\begin{equation}
H(t,z)=\Gamma^{-3}(z_{(1)} t) - z_{(2)} \Gamma^{-6}(z_{(3)} t)
\end{equation}
\noindent where $z_{(1,2,3)}$ are the parameters that control the functional shape of the HRF. Therefore, it can be noted that the proposed model only has three such parameters, compared to the five parameters of the double Gamma distribution model above.
Its time derivative can be obtained as follows:
\begin{equation}
\frac{\partial H}{\partial t}=-3 z_{(1)} \Gamma^{-3}(z_{(1)}t)\psi_{0}(z_{(1)}t)+6z_{(2)} z_{(3)}\Gamma^{-6}(z_{(3)}t)\psi_{0}(z_{(3)}t),
\end{equation}
\noindent where $\psi_{0}$ is the polygamma function~\cite{2020_LennardJones_polygamma} of order zero, also called the digamma function.
Furthermore, the partial derivatives of the function with respect to each of the parameters are given as:
\textbf{Parameter $z_{(1)}$:}
\begin{equation}
\frac{\partial H}{\partial z_{(1)}}=-3 t\Gamma^{-3}(z_{(1)}t)\psi_{0}(z_{(1)}t)
\end{equation}
\textbf{Parameter $z_{(2)}$:}
\begin{equation}
\frac{\partial H}{\partial z_{(2)}}=-\Gamma^{-6}(z_{(3)} t),
\end{equation}
\textbf{Parameter $z_{(3)}$:}
\begin{equation}
\frac{\partial H}{\partial z_{(3)}}=6 z_{(2)} t \Gamma^{-6}(z_{(3)}t)\psi_{0}(z_{(3)}t)
\end{equation}
Note that these partial derivatives, which will be used in the nonlinear least squares (NLS) optimization framework, are much simpler than the corresponding derivatives of the double Gamma HRF model.
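The expressions above translate directly into code; the sketch below evaluates the Lennard-Jones HRF and its parameter gradient with SciPy's Gamma and digamma functions. The parameter values used are placeholders chosen only for illustration, not fitted values.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, psi   # psi = digamma (order-0 polygamma)

def lj_hrf(t, z):
    """Lennard-Jones HRF: Gamma^{-3}(z1 t) - z2 Gamma^{-6}(z3 t)."""
    z1, z2, z3 = z
    return gamma(z1 * t)**-3 - z2 * gamma(z3 * t)**-6

def lj_hrf_grad(t, z):
    """Partial derivatives w.r.t. z, stacked as rows (dz1, dz2, dz3)."""
    z1, z2, z3 = z
    g1, g3 = gamma(z1 * t), gamma(z3 * t)
    d1 = -3 * t * g1**-3 * psi(z1 * t)
    d2 = -g3**-6
    d3 = 6 * z2 * t * g3**-6 * psi(z3 * t)
    return np.vstack([d1, d2, d3])

z = np.array([0.3, 0.5, 0.2])          # placeholder parameters
t = np.linspace(0.1, 30.0, 300)        # start after 0, where Gamma diverges
print(lj_hrf(t, z).shape, lj_hrf_grad(t, z).shape)   # (300,) (3, 300)
\end{verbatim}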
\section{Soft-Coupled Tensor Decompositions}
Coupling through equality (hard coupling), which is used both in CMTF-based methods~\cite{2016_hunyadi_fusion,2017_acar_acmtf1,2017_acar_acmtf2} and in JICA~\cite{2012_mijovic_whyhowjica,2014_swinnen_jica,2006_calhoun_jica} approaches, arises from the assumption that the neural sources are reflected, with exactly the same power, in both modalities; however, this is restrictive. Even if the exact equality and independence assumptions used by JICA are valid, the result of the first-level analysis of fMRI (used as an initial step~\cite{2016_hunyadi_fusion,2017_acar_acmtf1,2017_acar_acmtf2,2006_calhoun_jica,2012_mijovic_whyhowjica,2014_swinnen_jica}) still does not take into account the complementary information of EEG. Furthermore, as reported in~\cite{2012_mijovic_whyhowjica}, the result obtained with JICA is mostly influenced by the quality of the ERPs (EEG) and less by the fMRI data. This may indicate that the preprocessing of the fMRI with GLM may fail to retrieve all the information ``hidden'' in the raw fMRI data, due to the constraints of GLM~\cite{2008_lindquist_statistical}.
We propose a framework for early fusion of fMRI and EEG using coupled CPD with soft coupling~\cite{2014_seichepine_soft}, which means similarity and not exact equality (Fig.~\ref{softcoupl}). Fusion based on raw data, though potentially more challenging, may allow better inference~\cite{2017_ramachandram_typefusion}. The coupling could be attempted in any of the modes, depending on the problem at hand.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{softcoupl.png}
\caption{Schematic representation of coupled CPDs with ``soft'' coupling.}
\label{softcoupl}
\end{figure}
Consider the 3rd-order fMRI tensor, $\boldsymbol{\mathcal{T}} \in \mathbb{R} ^{I_a \times I_b \times I_3}$ (space $\times$ time $\times$ subjects), and the 4th-order EEG tensor, $\boldsymbol{\mathcal{\tilde{T}}} \in \mathbb{R} ^{I_e \times I_{\tilde{a}}\times I_{\tilde{b}} \times I_3}$ (ERPs/frequency $\times$ space $\times$ trials amplitude $\times$ subjects). Their CPDs can be written as $\boldsymbol{T}_k \approx \boldsymbol{A} \boldsymbol{D}_k \boldsymbol{B}^\top$ and $\boldsymbol{\tilde{T}}_{k(1)} \approx \boldsymbol{E}\boldsymbol{\tilde{D}}_k (\boldsymbol{\tilde{B}} \odot \boldsymbol{\tilde{A}} )^\top $, respectively, with $\boldsymbol{\tilde{T}}_{k(1)}$ being the mode-1 matricization of $\boldsymbol{\mathcal{\tilde{T}}}_k=\boldsymbol{\mathcal{\tilde{T}}}(:,:,:,k)$~\cite{2017_chatzichristos_BTD2}. $\boldsymbol{A}=\begin{bmatrix}\boldsymbol{a}_1,\boldsymbol{a}_2,\ldots,\boldsymbol{a}_R\end{bmatrix}$ is a matrix that contains the weights of the $R$ spatial components ($I_a$ voxels), $\boldsymbol{B},\boldsymbol{C}$ contain the associated time courses $(I_b)$ and subject activation levels of fMRI $(I_3)$, respectively, and $\boldsymbol{D}_k$ is the diagonal matrix formed from the $k$th row of $\boldsymbol{C}$. For the EEG case, matrices $\boldsymbol{E},\boldsymbol{\tilde{A}},\boldsymbol{\tilde{B}},\boldsymbol{\tilde{C}}$ contain the weights of the associated ERPs $(I_e)$, electrodes $(I_{\tilde{a}})$, trials amplitude $(I_{\tilde{b}})$ and the subject activation levels of EEG $(I_3)$, respectively, and $\boldsymbol{\tilde{D}}_k$ is the diagonal matrix formed from the $k$th row of $\boldsymbol{\tilde{C}}$. The proposed cost function to be minimized is given by
\begin{align}
\label{hatzi}
&\sum_{k=1}^{I_3} \| \boldsymbol{T}_k - \boldsymbol{A} \boldsymbol{D}_k \boldsymbol{B}^\top \|_F^2
+ \sum_{k=1}^{I_3} \| \boldsymbol{\tilde{T}}_{k(1)} - \boldsymbol{E} \boldsymbol{\tilde{D}}_k (\boldsymbol{\tilde{B}} \odot \boldsymbol{\tilde{A}} )^\top\|_F^2 \nonumber \\ & + \lambda_A \| \boldsymbol{LA}_{1:R_c} - \boldsymbol{ \tilde{A}}_{1:R_c} \|_F^2
+ \lambda_B \| \boldsymbol{B}_{1:R_c} - \boldsymbol{H\tilde{B}}_{1:R_c} \|_F^2 \\ & + \lambda_C \| \boldsymbol{C}_{1:R_c} - \boldsymbol{\tilde{C}}_{1:R_c} \|_F^2 \nonumber,
\end{align}
\noindent with $\boldsymbol{L}$ being the lead-field matrix used for the EEG forward problem and $\boldsymbol{H}$ the matrix representing the convolution with the HRF and the down-sampling (due to the different sampling rates of the two modalities). The values of the $\lambda$'s quantify the degree of coupling. It should be noted that the weights of the two modalities are set to one, since both datasets have been normalised to unit norm prior to the analysis (an important preprocessing step). $R_c$ is the number of common components in the coupled mode(s), so there are $R-R_c$ and $\tilde{R}-R_c$ distinct components of fMRI and EEG, respectively. In this way, different model orders can be assigned to the decompositions of the modalities as long as the number of common components remains the same (without loss of generality, in (11), we assume that the common components are the first $R_c$ ones).
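In our implementation the model is declared in Tensorlab's SDF language and minimized with its NLS solver; purely for illustration, the NumPy sketch below evaluates the above cost for given factor matrices, in the setting used in our simulations where only the temporal coupling is active ($\lambda_A=\lambda_C=0$). All shapes and names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def khatri_rao(A, B):
    R = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

def soft_coupled_cost(T, Tt, A, B, C, E, At, Bt, Ct, H, lam_B, Rc):
    """Soft-coupled cost with lambda_A = lambda_C = 0.
    T: fMRI (voxels x time x subjects); Tt: EEG (ERP x channels x
    trials x subjects); H maps the EEG temporal mode to fMRI time."""
    cost = 0.0
    for k in range(T.shape[2]):
        cost += np.linalg.norm(T[:, :, k] - A @ np.diag(C[k]) @ B.T)**2
        Tk1 = Tt[:, :, :, k].reshape(Tt.shape[0], -1, order='F')
        cost += np.linalg.norm(
            Tk1 - E @ np.diag(Ct[k]) @ khatri_rao(Bt, At).T)**2
    cost += lam_B * np.linalg.norm(B[:, :Rc] - H @ Bt[:, :Rc])**2
    return cost

# toy shapes: 50 voxels, 40 TRs, 3 subjects; 30 ERPs, 16 channels, 200 trials
rng = np.random.default_rng(0)
T = rng.standard_normal((50, 40, 3))
Tt = rng.standard_normal((30, 16, 200, 3))
A, B, C = (rng.standard_normal(s) for s in ((50, 4), (40, 4), (3, 4)))
E, At, Bt, Ct = (rng.standard_normal(s)
                 for s in ((30, 4), (16, 4), (200, 4), (3, 4)))
H = rng.standard_normal((40, 200))   # stands in for HRF conv. + down-sampling
print(soft_coupled_cost(T, Tt, A, B, C, E, At, Bt, Ct, H, lam_B=1.0, Rc=2))
\end{verbatim}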
As can be noted in Eq.~(11), the quadrilinear model of CPD selected for decomposing the EEG tensor assumes that every subject has exactly the same ERP, an assumption which is restrictive~\cite{2009_sur_erp} and can be relaxed with the adoption of PARAFAC2~\cite{2010_weis_parafac2,2009_sur_erp}, where $\boldsymbol{E}$ may vary with $k$. Thus, the CPD used for EEG can be replaced by PARAFAC2, with $\boldsymbol{E}_k=\boldsymbol{P}_k \boldsymbol{F}$ and $\boldsymbol{P}_k$ and $\boldsymbol{F}$ computed as in Section 2.2, and the cost function (11) becomes
\begin{align}
\label{hatzi2}
&\sum_{k=1}^{I_3} \| \boldsymbol{T}_k - \boldsymbol{A} \boldsymbol{D}_k \boldsymbol{B}^\top \|_F^2
+ \sum_{k=1}^{I_3} \| \boldsymbol{P}_k^\top\boldsymbol{\tilde{T}}_{k(1)} - \boldsymbol{F} \boldsymbol{\tilde{D}}_k (\boldsymbol{\tilde{B}} \odot \boldsymbol{\tilde{A}} )^\top\|_F^2 \nonumber \\ & + \lambda_A \| \boldsymbol{LA}_{1:R_c} - \boldsymbol{ \tilde{A}}_{1:R_c} \|_F^2
+ \lambda_B \| \boldsymbol{B}_{1:R_c} - \boldsymbol{H\tilde{B}}_{1:R_c} \|_F^2 \\ & + \lambda_C \| \boldsymbol{C}_{1:R_c} - \boldsymbol{\tilde{C}}_{1:R_c} \|_F^2 \nonumber.
\end{align}
\section{Double Coupled Matrix \\ Tensor Factorization
(DCMTF)}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth]{blockdesign.png}
\caption{Types of experimental fMRI design: a) Block event design b) Event-related design.}
\label{designs}
\end{figure}
As noted previously, the CPD model assumes multi-linearity for all the modes. The multi-linearity in the ERP/frequency mode can be relaxed with the use of PARAFAC2. Depending on the design of the experiment, another assumption, needed in order to stack different subjects in a tensor, may be inaccurate. In task-related fMRI, two major classes of experimental designs currently exist: block designs and event-related designs~\cite{2008_lindquist_statistical,2009_Tie_designs}. In a blocked design, a condition is presented continuously for an extended time interval (block) to maintain cognitive engagement, with different task conditions usually alternating in time. The time course of the stimuli (both the sequence of the stimuli and the time intervals) remains the same among subjects (Fig.~\ref{designs}.a). In an event-related design, discrete and short-duration events are presented with randomized timing and order (both during the acquisition of a single subject and among different subjects). Both designs have advantages and disadvantages. For example, the block design is more robust since relatively large BOLD signal changes with increased statistical power are detected. Moreover, it is statistically powerful and straightforward to analyze, in the sense that the exact shape of the HRF does not influence the result of the analysis much and hence can be assumed to be simple (equal to the canonical one) with smaller impact. On the other hand, the predictability of the block design makes it inappropriate for some cognitive tasks, such as an `oddball' paradigm where the reaction to an unexpected stimulus is examined. Furthermore, it also increases the chance of low-frequency artifacts. An event-related design can detect transient variations in the haemodynamic response and allows for the analysis of individual responses to trials. Furthermore, a study connected to the detection of a specific disease, e.g., seizure detection, follows a design similar to an event-related design, since a possible seizure onset cannot be aligned among all subjects.
Hence, in event-related designs as well as in studies like seizure detection, the different subjects cannot be stacked in the same tensor since the multi-linearity assumption will certainly not be valid. Furthermore, a PARAFAC2 approach cannot be followed either, since the extra constraint of the constant cross product of PARAFAC2 is not valid. Although no connection among the time courses of the different subjects exists, similar areas are probably activated by similar stimuli (hence similar or identical spatial maps). It is therefore still beneficial to retain the neighborhood information exploited by the tensor formulation. In order to retain this multi-way structure (but still respect the difference in time courses), the formulation of the problem can be transformed to a Double (in time between EEG and fMRI and in space among subjects in fMRI) CMTF (DCMTF), as shown in Fig.~\ref{dcmtf}.
The 3rd-order EEG tensors, $\boldsymbol{\mathcal{\tilde{T}}}_k \in \mathbb{R} ^{I_e \times I_{\tilde{a}}\times I_{\tilde{b}}}$, describe the variation over the spatial $(\boldsymbol{\tilde{a}}_{k_r})$, the temporal $(\boldsymbol{\tilde{b}}_{k_r})$ and the spectral/ERP $(\boldsymbol{e}_{k_r})$ modes, for $K$ different subjects. The fMRI matrices, $\boldsymbol{X}_k \in \mathbb{R} ^{I_{a}\times I_{b}}$, contain the variation over the temporal $(\boldsymbol{b}_{k_r})$ and spatial $(\boldsymbol{a}_{k_r})$ modes, with the matrix $\boldsymbol{A}_k=\begin{bmatrix}\boldsymbol{a}_{k_1},\boldsymbol{a}_{k_2},\ldots,\boldsymbol{a}_{k_R}\end{bmatrix}$ comprising the weights of the $R$ spatial components of the $k$th subject and $\boldsymbol{A}$ being a spatial map with which all the subject spatial maps are similar (imposed through a regularization term). The parameter sets $\{z_k\}$ describe the subject-specific HRF matrix, $\boldsymbol{H}_k$, which will be optimized using either the double Gamma model~\cite{2017_Eyndhoven_HRF} or the Lennard-Jones model or any other appropriate model selected. The proposed cost function is given by:
\begin{align}
\label{hatzi3}
\sum_{k=1}^{K} ( \| \boldsymbol{\mathcal{\tilde{T}}}_k - \sum_{r=1}^{R} \boldsymbol{\tilde{a}}_{k_r} \circ \boldsymbol{\tilde{b}}_{k_r} \circ \boldsymbol{e}_{k_r} \|_F^2 +
\| \boldsymbol{X}_k &- \sum_{r=1}^{R} \boldsymbol{a}_{k_r} \circ ( \boldsymbol{H}_k (t,\{z_k\}) \boldsymbol{b}_{k_r}) \|_F^2 \nonumber \\
+ \lambda_1 \| \boldsymbol{A}_{k} - \boldsymbol{A} \|_F^2 )
\end{align}
For the coupling in the time domain, instead of using the flexible approximation with the subject-specific HRF, another soft coupling can be used and, hence, the cost function will become
\begin{align}
\label{hatzi4}
\sum_{k=1}^{K} ( \| \boldsymbol{\mathcal{\tilde{T}}}_k - & \sum_{r=1}^{R} \boldsymbol{\tilde{a}}_{k_r} \circ \boldsymbol{\tilde{b}}_{k_r} \circ \boldsymbol{e}_{k_r} \|_F^2 +
\| \boldsymbol{X}_k- \sum_{r=1}^{R} (\boldsymbol{a}_{k_r} \circ \boldsymbol{b}_{k_r})\|_F^2 \\ \nonumber
& + \lambda_1 \| \boldsymbol{A}_{k} - \boldsymbol{A} \|_F^2 + \lambda_2 \| \boldsymbol{B}_{k} - \boldsymbol{H\tilde{B}}_{k} \|_F^2)
\end{align}
It should be noted that the tuning of two different $\lambda$ parameters might be difficult, but decomposing each subject separately can provide information about the similarity of the spatial maps. A high value of $\lambda_1$ means that the spatial maps of all subjects are the same, hence imposing the same constraint in the spatial domain as Equation (12) (the assumption of the same spatial maps is implicitly made by the tensor decomposition introduced in Section~3). Despite the fact that matrices, not higher-order tensors, are considered, the coupling among the spatial components retains the multi-way nature of the multi-subject fMRI case (keep in mind that a 3-way tensor can also be represented as a set of matrices hard coupled in both of their modes~\cite{2013_sorber_optimization}), so the multi-way nature of the data is still exploited. The tuning of $\lambda_2$ is equivalent to that of $\lambda_B$ in Equation (12).
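Purely as an illustration of the flexible DCMTF objective, the sketch below evaluates it for a list of subjects. In this sketch the temporal factor passed through the subject-specific HRF matrix $\boldsymbol{H}_k$ is taken to be the shared (EEG) temporal signature, which is one reading of how the flexible approach ties the two modalities in time; in practice $\boldsymbol{H}_k$ would be rebuilt from $h(t,\{z_k\})$ at every optimization step. All names, shapes and this particular choice are assumptions made for the example.
\begin{verbatim}
import numpy as np

def dcmtf_cost(Tt_list, X_list, At_list, Bt_list, E_list,
               A_list, Hk_list, A_ref, lam1):
    """Flexible DCMTF cost: per-subject EEG CPD fit, fMRI fit through a
    subject-specific HRF matrix H_k, and soft spatial coupling of every
    subject map A_k towards a reference map A_ref."""
    cost = 0.0
    for Tt, X, At, Bt, E, A, Hk in zip(Tt_list, X_list, At_list,
                                       Bt_list, E_list, A_list, Hk_list):
        cost += np.linalg.norm(
            Tt - np.einsum('ar,br,er->abe', At, Bt, E))**2
        cost += np.linalg.norm(X - A @ (Hk @ Bt).T)**2
        cost += lam1 * np.linalg.norm(A - A_ref)**2
    return cost

# toy example with K = 2 subjects, R = 3 components
rng = np.random.default_rng(0)
K, R = 2, 3
Tt_list = [rng.standard_normal((16, 200, 30)) for _ in range(K)]
X_list  = [rng.standard_normal((50, 40)) for _ in range(K)]
At_list = [rng.standard_normal((16, R)) for _ in range(K)]
Bt_list = [rng.standard_normal((200, R)) for _ in range(K)]
E_list  = [rng.standard_normal((30, R)) for _ in range(K)]
A_list  = [rng.standard_normal((50, R)) for _ in range(K)]
Hk_list = [rng.standard_normal((40, 200)) for _ in range(K)]  # from h(t, z_k)
A_ref   = rng.standard_normal((50, R))
print(dcmtf_cost(Tt_list, X_list, At_list, Bt_list, E_list,
                 A_list, Hk_list, A_ref, lam1=1.0))
\end{verbatim}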
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{doublecoupl.png}
\caption{Schematic representation of DCMTF for $K$ subjects.}
\label{dcmtf}
\end{figure}
\section{Simulation results}
A simulated dataset similar to the one used in~\cite{2010_lei_steff} and~\cite{2014_dong_steff2} has been employed in our analysis. A disc with 2452 voxels (dipoles) was created in order to generate the data. For EEG, a concentric three-sphere model with 128 electrodes was set to wrap the disc, and the lead-field matrix computed in~\cite{2014_dong_steff2} has been used. The temporal sampling rate of EEG was 1 kHz while the epoch of the ERPs was set to 400 ms. The fMRI spatial maps were simulated as 2D images of $70 \times 70$ voxels, with the aid of the SimTB~\cite{2012_erhardt_simtb} toolbox. In comparison to~\cite{2010_lei_steff,2014_dong_steff2}, the overlap in time for EEG and in space for fMRI has been increased. In Fig.~5, the assumed neurophysiological sources can be viewed, from left to right: ``vision area'' S1, ``default mode network'' S2, ``auditory cortex'' S3, ``sensory networks'' S4, ``cognition areas'' S5 and ``dorsal attention network'' S6. The activity level at each active voxel was randomly sampled from a Uniform~[0.8,1.2] distribution for each replication of each simulation condition. These assumed active neural sources (rows a, b) along with the assumed ERPs (row d) yield scalp distributions and single-trial images in EEG and spatial maps and time courses of fMRI. Single-trial images (row c) are generated by multiplying each ERP (row d) with the trial amplitude (row a). Scalp potential distribution maps (topoplots, row e) are computed by solving the forward problem for each spatial map of row a. The fMRI BOLD signals (time courses, row f) were computed through the convolution between the trial amplitude (row a) with the canonical HRF.
\emph{In all of the scenarios, we assume coupling of fMRI and EEG in the time domain only, hence $\lambda_A=\lambda_C=0$. Similar conclusions can be reached if the coupling is assumed in one of the other modes.}
This section is split into three subsections:
\begin{itemize}
\item We will exhibit the difference between the soft coupling approximation and the flexible approximation proposed in~\cite{2017_Eyndhoven_HRF} through a comparison study (based on Pearson correlation). Furthermore, in this subsection, we will study the tuning of the $\lambda$ value in the soft coupling method.
\item Different methods will be examined in the case where all the subjects have the same time course: Parallel ICA, uncoupled CPDs (separately decomposing each tensor), hard and soft coupling in the time domain with different $\lambda_B$ values.
\item The same methods will also be tested in the last subsection, but different time courses per subject will be considered, in order to point out the need for an alternative formulation of the problem in such a case.
\end{itemize}
The implementations of the proposed soft coupled decomposition and the DCMTF were performed within the Structured Data Fusion (SDF) framework~\cite{2015_sorber_structured} of Tensorlab~\cite{2016_vervliet_tensorlab}, and Nonlinear Least Squares (NLS) was adopted as the optimization scheme. Parallel ICA was implemented (using Group ICA) as in~\cite{2010_lei_steff,2015_hunyadi_parallel}, based on InfoMax~\cite{1995_bell_infomax} for the ICA step.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{simulation.png}
\caption{Simulated sources in EEG and fMRI.}
\label{fig:sim}
\end{figure}
In order to estimate $R_c$, $\boldsymbol{\mathcal{T}}$ and $\boldsymbol{\mathcal{\tilde{T}}}$ are separately decomposed, and a correlation matrix is computed based on the coupled modes of the tensors. Components with similarity exceeding a predefined threshold $t$ comprise the common components~\cite{2016_genicot_initfusion}. The computation of the number of coupled components, $R_c$, could also be incorporated in the cost function, similarly to~\cite{2017_acar_acmtf1,2017_acar_acmtf2}.
In this way, we can also get an indication of the appropriate $\lambda$ values to be used: higher correlation indicates higher values for $\lambda$; hence, the $\lambda$'s of the modes which will not be coupled are set to zero. It is important that the data of both modalities be normalized (to unit norm) beforehand (so that the first two terms in (11) have the same weight in the cost function) and preprocessed for removal of artifacts~\cite{2018_walach_normalization}.
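A minimal sketch of this heuristic for selecting $R_c$ is given below: the coupled-mode factors of the two separate decompositions (with the EEG factor already mapped to fMRI time, e.g., multiplied by $\boldsymbol{H}$) are correlated column-wise, and pairs whose absolute correlation exceeds the threshold are counted greedily. The threshold value, shapes and names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def estimate_Rc(B_fmri, B_eeg_mapped, thr=0.7):
    """Count common components between the coupled-mode factors of
    the two separate decompositions (greedy matching on |corr|)."""
    Bf = (B_fmri - B_fmri.mean(0)) / B_fmri.std(0)
    Be = (B_eeg_mapped - B_eeg_mapped.mean(0)) / B_eeg_mapped.std(0)
    Corr = np.abs(Bf.T @ Be) / Bf.shape[0]   # R x R_tilde correlations
    Rc = 0
    while Corr.size and Corr.max() > thr:
        i, j = np.unravel_index(np.argmax(Corr), Corr.shape)
        Rc += 1
        Corr = np.delete(np.delete(Corr, i, axis=0), j, axis=1)
    return Rc

# toy example: two shared temporal components plus unshared ones
rng = np.random.default_rng(0)
shared = rng.standard_normal((100, 2))
Bf = np.hstack([shared + 0.1 * rng.standard_normal((100, 2)),
                rng.standard_normal((100, 2))])
Be = np.hstack([shared + 0.1 * rng.standard_normal((100, 2)),
                rng.standard_normal((100, 3))])
print(estimate_Rc(Bf, Be))   # 2
\end{verbatim}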
The optimal initialization for each modality separately is not guaranteed to be the optimal one for their combination; furthermore, the permutation issues must be taken into consideration. Hence, an initialization method designed specifically for coupled decompositions must be used. For the initialization of the coupled tensor decomposition, the Generalized EigenValue Decomposition (GEVD)-based method proposed in~\cite{2015_sorensen_coupled} is used.\footnote{Special thanks to Nico Vervliet, KU Leuven, for sharing the code by M. S{\o}rensen, University of Virginia, USA.} When prior information is available for any of the modes (or part of them), the respective columns can be excluded from the optimization function and set equal (or almost equal) to the known factors.
Every experiment has been run 30 times (same map and time course, different activation amplitude and different instance of random noise each time). The Pearson correlation values presented in the following figures and tables are the mean Pearson correlation of all the obtained sources with the ground truth. Since the same algebraic initialization is used for every run, the standard deviations of all methods are relatively small, hence they will be reported only in the case that there are differences among the methods.
\subsection{Soft versus flexible coupling}
We will compare the two alternative methods that we use to replace hard coupling. Additionally, we will examine the significance of the tuning of $\lambda$, which controls how ``strong'' the assumption of coupling will be.
Fig.~\ref{fig:lambda} visualizes the importance of the choice of the $\lambda_B$ value for soft coupling. We can distinguish two cases. In the first case (the solid lines), where the coupling assumption is exact (the simulated data were generated with the use of the canonical HRF, $\boldsymbol{H}$), it can be readily seen that the hard coupling is the best to use. However, the soft coupling analysis can reach the same performance with the appropriate tuning of $\lambda_B$. In the second case (dotted lines), the assumption of exact coupling is violated as the time courses were generated by convolution with different HRFs (the 5 different HRFs presented in~\cite{2018_morante_info} have been used, which have a mean correlation of 0.8 with the canonical HRF), while $\boldsymbol{H}$ (Equation (6)) was constructed based on the canonical HRF. The fact that the time courses are similar but not equal deteriorates the performance of the hard coupling. Hard coupling still performs better than the uncoupled version but it is outperformed by the soft coupling for $\lambda_B > 0.1$. In cases where we move the HRF farther from the canonical HRF, the hard coupled case can become even worse than the uncoupled one.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{lambdas.PNG}
\caption{Correlation of the obtained sources with Uncoupled, Hard coupled and Soft coupled CPDs with different $\lambda_B$ values. }
\label{fig:lambda}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{hrfs.jpg}
\caption{Comparison, based on Pearson correlation and time till convergence, of the soft coupling method with the flexible coupling method using the double Gamma HRF model and the Lennard-Jones HRF model. }
\label{fig:hrfs}
\end{figure}
For the comparison between the soft coupling and the flexible coupling, we simulated a similar single-subject scenario in order to test the performance and the computational burden of every method. We have simulated 4 different scenarios; in each scenario we slightly modify the HRF from which the data are generated. Initially we generated the data with the canonical HRF, while for the other scenarios an HRF with 0.9 correlation with the canonical one was used (0.8 and 0.7, respectively). We can see (Fig.~\ref{fig:hrfs}) that if we manage to tune the $\lambda_B$ value appropriately, the soft coupling method outperforms the other methods (red dot), but its deviation (under suboptimal selection of $\lambda_B$, randomly chosen from $\{0.01,0.1,10,100,1000\}$) is large and its performance can be even worse than that of the flexible methods. It seems that the selection of the appropriate model (Lennard-Jones or double Gamma) is a compromise between accuracy and time complexity. In the cases where the HRF is closer to the canonical one, the Lennard-Jones model has similar performance to the double Gamma model in a significantly shorter time. In the cases where the HRF differs more from the canonical one, the performance deteriorates and the time needed to converge can also become longer (while also a higher standard deviation is observed). It should be noted that the time needed for the selection of the $\lambda$ value is not represented in the figure since it depends on the intervals of the grid used in the grid search approach followed.
\subsection{Soft coupled tensor decompositions}
To compare the soft coupled tensor decompositions and the double coupled matrix tensor decomposition, multi-subject scenarios were simulated. The data from each subject contained all the six sources presented in Fig.~\ref{fig:sim} with different activation levels; the activation patterns have strengths randomly sampled from a Uniform~[2,5] distribution. Five different subjects are simulated, and for the simulations presented in this section each subject is assumed to have the same time course for every source (differing only in the noise) while in the simulation used in the next section differences are incorporated in the time courses and HRFs of some of the subjects.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{ticks.png}
\caption{Accuracy of different methods. EEG: diamonds, fMRI: discs}
\label{fig:ticks}
\end{figure}
\begin{figure} [b]
\centering
\includegraphics[width=\linewidth]{ica.jpg}
\caption{Resulting fMRI spatial maps with ICA, at SNR=0.1.}
\label{fig:ica}
\end{figure}
In Fig.~\ref{fig:ticks}, the mean correlation between the obtained sources and the ground truth per method and per modality (diamonds for EEG and discs for fMRI) at different Signal to Noise Ratios (SNR = squared Frobenius norm of the signal over the squared Frobenius norm of the noise) can be observed. In cases (a) (same noise level as in~\cite{2010_lei_steff}) and (b), different levels of noise are tested, while in case (c) the assumption of the same ERP per subject is violated and the ERPs are shifted (the first subject has 0 msec shift while subjects 2--5 have time shifts at increments of 10~msecs with respect to the 1st subject, hence a shift of 40~msecs in the 5th subject). Parallel ICA exhibits inferior performance compared to both the uncoupled (Unc) and soft coupling methods (Coupled CPDs, ``CPDs'' and Coupled PARAFAC2 CPD, ``PF2-CPD'') in all of the cases, due to the overlap of the sources, which violates the independence assumption. The resulting spatial maps obtained by spatial ICA in case (a) can be viewed in Fig.~\ref{fig:ica}. Note that, in the areas of overlap, there is crosstalk between the maps. S4, which overlaps with most of the rest of the sources, cannot be identified (for comparison with the ground truth, observe row g of Fig.~\ref{fig:sim}). It can be seen that, in case (a), the correlation for EEG with uncoupled analysis is higher than with soft coupling. This is caused by the performance gain for the fMRI source in the coupled case, which results in a slight loss for EEG. Overall, the correlation is increased with soft coupling. In case (b), where the SNR is the same for both modalities, soft coupling yields better results. PF2-CPD in both (a) and (b) cases yields a slightly worse result than coupled CPDs (since the multilinearity assumption used by CPD is valid here). The last case, (c), is the one where the advantage of PARAFAC2 becomes apparent. We observe that ICA is affected by the ERP shifting much less than the uncoupled and the coupled CPD methods, but it still has the worst performance.
\subsection{Flexible double coupled matrix tensor decomposition}
In this subsection, we will test the case where subject variability exists, in time or in space. Hence, three different scenarios and two subscenarios (for each) will be tested. In the first scenario, all the subjects have the same time courses (similar to the previous subsection). In the second scenario, each subject has a different HRF and a time shift in each of the time courses of the sources. The 5 different HRFs presented in~\cite{2018_morante_info} have been used, while also the time courses are shifted (the first subject has no shift while subjects 2--5 have time shifts at increments of 1~sec with respect to the 1st subject, hence a shift of 4~secs in the 5th subject). In the last scenario, the time courses of the subjects are the same but the spatial maps of every subject are different: subject variability was introduced in the spatial domain of two of the sources (2 and 4), namely a rotation (in increments of 4~degrees per subject) of one source (2) and voxel shifts (at increments of 2 voxels with respect to the 1st subject) in the other source (4).
In every scenario, two subscenarios are also simulated: a) Only sources 2, 3 and 4 (Fig.~\ref{fig:sim}) with low spatial overlap are used, and b) all the sources are used. We considered these subscenarios in order to examine the impact of overlap, time shift and subject spatial variability, separately.
\begin{table}
\centering
\begin{adjustbox}{width=.98\linewidth}
\begin{tabular}{ | c | c || c | c || c | c|| c | c| }
\hline
\multicolumn{2}{|c||}{\multirow{2}{*}{\textbf{Methods}}} & \multicolumn{2}{|c||} {\textbf{Same time and space} } & \multicolumn{2}{|c||} {\textbf{Diff. time courses}} & \multicolumn{2}{|c|} {\textbf{Diff. spatial maps}} \\
\hhline{~~------}
\multicolumn{2}{|c||} {} & {Low overlap} & {High overlap} & {Low overlap} & {High overlap} & {Low overlap} & {High overlap} \\
\hline
\hline
\multicolumn{2}{|c||} {Parallel ICA} & \textbf{0.95} $\pm$ 0.02 & 0.80 $\pm$ 0.02 & 0.82 $\pm$ 0.02 & 0.68 $\pm$ 0.11 & \textbf{0.88} $\pm$ 0.02 & 0.75 $\pm$ 0.02\\
\multicolumn{2}{|c||} {Uncoupled} & 0.85 $\pm$ 0.02 & 0.82 $\pm$ 0.02 & 0.70 $\pm$ 0.6 & 0.69 $\pm$ 0.08 & 0.70 $\pm$ 0.09 & 0.69 $\pm$ 0.10\\
\multicolumn{2}{|c||} {Coupled Tensors} & \textbf{0.95} $\pm$ 0.02& \textbf{0.92} $\pm$ 0.02& 0.75 $\pm$ 0.02& 0.65 $\pm$ 0.03& 0.78 $\pm$ 0.02 & 0.70 $\pm$ 0.03\\
\multicolumn{2}{|c||} {DCMTF} & 0.91 $\pm$ 0.03& 0.90 $\pm$ 0.03 & \textbf{0.90} $\pm$ 0.04 & \textbf{0.90} $\pm$ 0.03 & \textbf{0.88} $\pm$ 0.04& \textbf{0.87} $\pm$ 0.04\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Performance of the different fusion methods under different scenarios.}
\label{table:tab2}
\end{table}
The mean Pearson correlation of the obtained sources with the ground truth is given in Table~\ref{table:tab2} for the two scenarios. It can be noted that the Parallel ICA method outperforms the other methods in the case where no severe spatial overlap and the same time courses per subject exist. However (as mentioned previously), this method is also affected more severely by the spatial overlap of the sources (since the assumption of the joint distribution of the sources is violated) and additionally it is affected by different time courses per subject since it is based on the assumptions imposed by Group ICA (GICA)~\cite{2005_beckmann_tensorial}. In the case of high overlap and same time course per subject, the soft coupled tensor decomposition exhibits the best performance but on the other hand this method is most affected by the differences in the time courses, since the assumption of multi-linearity is violated in the time domain. The Uncoupled tensors have similar behaviour since the difference in the time course per subject remains even if the EEG and fMRI tensors are decomposed separately. The DCMTF model allows the successful estimation of the underlying sources much better than the other methods in the case of different time courses and HRFs per subject. It should be noted that the performance of DCMTF is similar to that of Soft Coupled Tensors in the case of the same HRF and time course per subject. With different spatial maps per subject, we can note that the ICA-based method is affected less since the assumption of same time course used by GICA is then valid. Concerning the standard deviation of the Pearson correlation we can note that GICA is the more stable method with slightly higher standard deviation in the cases where it fails.
The tuning of $\lambda_1$ is less significant than that of $\lambda_2$. The performance of DCMTF reported in Table~\ref{table:tab2} for the first two scenarios was obtained with a high value of $\lambda_1$ ($\lambda_1=10^6$), since the spatial maps per subject are the same, while for the last scenario the value of $\lambda_1$ was selected via grid search.
\subsection{Discussion}
From the results obtained in the previous subsections we can understand that if the correct model is selected (based on the type of the problem at hand), the use of ``non-hard'' (soft or flexible) coupling methods and raw data can improve the obtained result.
In every case, the method has to be selected a-priori by the user based on the type of problem. An initial analysis of the data of both modalities separately is recommended. This initial analysis can provide relevant information to the user regarding the model to be selected as well as hints on the values of the (hyper)parameters.
For example, a simple mean-ERP analysis prior to the joint analysis could provide an indication of the amount of the shift in the ERPs, in order to select which of the soft coupled tensor decompositions should be used (PARAFAC2-CPD or coupled CPDs). As mentioned previously, the number of coupled components, $R_c$, can also be obtained by setting a threshold on the correlation among the components of the two modalities in the initial ``separate'' decomposition, as sketched below.
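A possible implementation of this heuristic (illustrative only; the threshold value and the function name are ours) could look as follows.
\begin{verbatim}
import numpy as np

def estimate_Rc(S_eeg, S_fmri, threshold=0.5):
    """S_eeg, S_fmri: (n_voxels, R) spatial factors from the separate
    decompositions of the two modalities (same R assumed); returns the
    number of component pairs correlated above the threshold."""
    R = S_eeg.shape[1]
    corr = np.corrcoef(S_eeg.T, S_fmri.T)[:R, R:]   # EEG-vs-fMRI block
    best_match = np.abs(corr).max(axis=1)           # best fMRI match per EEG component
    return int(np.sum(best_match >= threshold))
\end{verbatim}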
Furthermore, it should be kept in mind that the design of the experiment plays a significant role in the model selection. An experiment with a different time course per subject will lead to the adoption of DCMTF; this does not mean that the multi-way nature of the data is no longer exploited (as previously noted), since the coupling among the spatial modes of fMRI enriches the optimization problem with spatial neighborhood information across the different subjects.
Concerning reproducibility of the results, we have noted that all the methods have a low standard deviation provided they succeed in correctly separating the sources (high mean Pearson correlation). We have also noted that, although the standard deviation of the correlation with the ground truth is low for the Uncoupled tensors method in the cases where it fails, the standard deviation of the correlation among the sources estimated in different runs is high (it can reach up to 0.40). This means that, although the method produces (almost) equally bad results in every run (since its assumption is not valid), the estimated sources differ from one run to another. On the other hand, the coupled tensors method, though it also fails when different time courses exist, produces similar (bad) results in every run. This difference can possibly be explained by the coupling constraints, which enhance the uniqueness properties of the decomposition.
We have demonstrated that the use of raw data in the problem of fusion of EEG and fMRI, provided the heterogeneity of the data variables~\cite{2018_walach_normalization} is carefully handled, facilitates accurate source identification. As it has been pointed out~\cite{2015_Lahat_Multimodal,2015_Adali_Fusionb}, the use of the raw data can improve the result of the decomposition by exploiting latent correlations between the different datasets, which might have been attenuated by the use of intermediate feature extraction methods (such as GLM). Our findings (especially those of Section~5.3) confirm the inability of GLM (and hence all methods relying on GLM as a preprocessing step, e.g., Parallel ICA) to cope with HRF variability~\cite{2009_lindquist_hrf,2014_swinnen_jica}. Moreover, we have confirmed that ICA-based methods fail to correctly decompose overlapped sources~\cite{2007_stegeman_comparing,2019_chatzichristos_journal}.
\section{Conclusions}
This pre-print briefly reviews the literature on the problem of EEG-fMRI fusion and reports our recent results on this topic, which are based on the adoption of two different tensor models for jointly analyzing fMRI and EEG data. This is an attempt to benefit from the multi-way nature of \emph{both} modalities, performing an early fusion, and hence bypassing the need to rely on features. Performance gains have been reported compared to ICA methods as well as to the separate analyses of the datasets. The use of the coupled PARAFAC2-CPD was seen to outperform the coupled CPD in the presence of shifts in the ERPs per subject. A comparison between flexible and soft coupling approaches has been presented, and an alternative HRF model has been tested for the first time. Future work will include studies with real data, comparisons with methods based on Independent Vector Analysis (IVA)~\cite{2015_Adali_Fusiona} and alternative tensor models (e.g., Block Term Decomposition~\cite{2017_chatzichristos_BTD}). Moreover, a more systematic selection of the $\lambda$ values will be sought.
\section*{Acknowledgment}
The authors would like to thank Dr. Li Dong, UEST, China for providing the lead-field matrix used in~\cite{2014_dong_steff2}, Dr. Loukianos Spyrou, Univ. of Edinburgh, UK, Dr. Nico Vervliet and Simon Van Eyndhoven, KU Leuven, Belgium for fruitful discussions on the topics of EEG, soft and flexible coupling in Tensorlab, respectively and Manuel Morante Moreno, University of Athens, for the cooperation in the Lennard-Jones model. Furthermore, we would like to thank Prof. M. Davies and J. Escudero, Univ. of Edinburgh, UK who were coauthors in the conference paper~\cite{2018_chatzichristos_fusion}, a preliminary version of this work. The research leading to these results was funded by the European Union's $7^{\mathrm{th}}$ Framework Program under the ERC Advanced Grant: BIOTENSORS ($n^{\circ}~339804$). This work was also funded by EU H2020 MSCA-ITN-2018: On integrating Magnetic Resonance SPectroscopy and Multimodal Imaging for Research and Education in MEDicine (INSPiRE-MED) Grant Agreement $n^{\circ}~339804$. This research also received funding from the Flemish Government (AI Research Program).
\bibliographystyle{IEEEbib}
\section{Introduction}
Deep reinforcement learning with neural network policies and value functions has had enormous success in recent years across a wide range of domains~\cite{DDPG,TRPO, TRPOGAE, DQN}. In particular, model-free reinforcement learning with policy gradient methods has been used to solve complex robotic control tasks~\cite{levine, Gu-Model-Based}. Policy gradient methods can be generally divided into two groups: off-policy gradient methods, such as Deep Deterministic Policy Gradients (DDPG)~\cite{DDPG}, and on-policy methods, such as Trust Region Policy Optimization (TRPO)~\cite{TRPO}.
However, there are often many sources of instability and variance that can lead to difficulties in reproducing deep policy gradient methods. In this work, we investigate the sources of these difficulties with both on- and off-policy gradient methods for continuous control. We use two MuJoCo~\cite{MuJoCo} physics simulator tasks from OpenAI Gym~\cite{Gym} (Hopper-v1 and Half-Cheetah-v1) for our experimental tasks. We investigate two policy gradient algorithms here: DDPG and TRPO. To our knowledge, there are few works~\cite{rllab} which reproduce existing policy gradient methods in reinforcement learning, yet many use these algorithms as baselines to compare their novel work against~\cite{rllab,QPROP,SDQN,IPG}. We use the code provided in~\cite{rllab} and~\cite{QPROP} for TRPO and DDPG (respectively), as these implementations are used directly for comparison in several works~\cite{IPG,QPROP,rllab,houthooft2016vime,rajeswaran2016epopt}.
\textbf{Performance Measures : } We examine the general variance of the algorithms and address the importance of presenting all possible metrics across a large number of trials. Four performance measures are commonly used in the literature: Maximum Average Return, Maximum Return, Standard Deviation of Returns, and Average Return. However, the first two measures are considered to be highly biased, while the last two are considered to be the most stable measures for comparing the performance of proposed algorithms. Thereby, in the rest of this work we only use the Average Return as our comparison measure unless stated otherwise, with final results displaying all metrics\footnote{We leave out the Maximum Return unaveraged across several trials; we posit this to be an unsuitable metric for reporting results. High-variance policies and environments may yield a vastly larger maximum return in an outlying trial.}.
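For clarity, these metrics can be computed as in the following sketch (ours, not taken from any of the cited codebases), where \texttt{returns} holds the average undiscounted return per iteration for several trials run with different random seeds; reading the Standard Deviation of Returns as the spread across trials is one possible interpretation.
\begin{verbatim}
import numpy as np

def summarize(returns):
    """returns: (n_trials, n_iterations) array of per-iteration average returns."""
    mean_curve = returns.mean(axis=0)          # learning curve averaged over trials
    return {
        "average_return": float(mean_curve.mean()),        # Average Return
        "std_return": float(returns.mean(axis=1).std()),   # Std of Returns across trials
        "max_average_return": float(mean_curve.max()),     # Maximum Average Return
    }
\end{verbatim}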
\textbf{Hyper-parameter Settings : } We also highlight that there can be difficulty in properly fine-tuning hyper-parameter settings, leading to large variations of reported results across a wide range of works as different hyper-parameters are used. As in Tables~\ref{table:ddpg_papers} and~\ref{table:trpo_papers}, this inconsistency within the wide range of reported results makes it difficult to compare DDPG and TRPO as baseline algorithms without careful detailing of hyper-parameters, attention to the fairness of the comparison, and proper tuning of the parameters. Each cited work uses a different set of experimental hyper-parameters for supposed baseline comparisons\footnote{hyper-parameters for each paper can be found in detail in the references provided}. Running these algorithms with suboptimal hyper-parameter configurations may result in inaccurate comparisons against these baseline methods. As such, we highlight the significance of tuning various hyper-parameters and assess which of these yield the most significant differences in performance.
Based on our analysis, we encourage that careful consistency should be maintained when reporting results with both of these algorithms, as they are quite susceptible to hyper-parameters and the external sources of variance or randomness.
\section{Experimental Analysis}
We evaluate the off-policy DDPG~\cite{DDPG} and on-policy TRPO~\cite{TRPO} algorithms on continuous control environments from the OpenAI Gym benchmark~\cite{Gym}, using the MuJoCo physics simulator \cite{MuJoCo}. We empirically show the susceptibility and variance in results due to hyper-parameter configurations on two environments: Hopper ( $\mathcal{S} \subseteq \mathbb{R}^{20}, \mathcal{A} \subseteq \mathbb{R}^{3}$ ) and Half-Cheetah ($\mathcal{S} \subseteq \mathbb{R}^{20}, \mathcal{A} \subseteq \mathbb{R}^{6}$ ). All experiments\footnote{For code used, see: \url{https://github.com/Breakend/ReproducibilityInContinuousPolicyGradientMethods}.} are performed building upon the rllab Tensorflow implementation of TRPO~\cite{rllab} and the Q-Prop Tensorflow implementation of DDPG for our experiments~\cite{QPROP}.
\textbf{Experiment Details : }We run all variations for 5000 iterations and average all results across 5 runs. We investigate several hyper-parameters: batch size, policy network architecture, step size (TRPO), regularization coefficient (TRPO), generalized advantage estimation ($\lambda$) (TRPO), reward scale (DDPG), and actor-critic learning rates (DDPG). For each of these hyper-parameters we hold all others constant at default settings and vary the one under investigation across commonly used values. Lastly, we run a final set of experiments using the overall best cross-section of hyper-parameters for 10 trials using random seeds. We do this to investigate whether there is a significant difference in the results just due to variance caused by the random seeds.
For TRPO, the default hyper-parameters which we use are: a network architecture of (100,50,25) with ReLU hidden activations for a Gaussian Multilayer Perceptron Policy~\cite{rllab}; a step size of 0.01; a regularization coefficient of $1 \cdot 10^{-5}$; a Generalized Advantage Estimation $\lambda$ of 1.0~\cite{TRPOGAE}. For DDPG, we use default parameters as follows: a network architecture of ($100$,$50$,$25$) with ReLU hidden activations for a Gaussian Multilayer Perceptron Policy~\cite{rllab}; actor-critic learning rates of $1\cdot10^{-3}$ and $1\cdot10^{-4}$; batch sizes of $64$; and a reward scale of $0.1$.
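The sweep protocol can be summarized by the following sketch (ours). The TRPO defaults shown are those listed above; the default batch size and the \texttt{run\_experiment} launcher are placeholders, not values prescribed by the implementations we build on.
\begin{verbatim}
DEFAULTS = {"hidden_sizes": (100, 50, 25), "step_size": 0.01,
            "reg_coeff": 1e-5, "gae_lambda": 1.0, "batch_size": 25000}
SWEEPS = {"hidden_sizes": [(64, 64), (100, 50, 25), (400, 300)],
          "batch_size": [1000, 5000, 25000],
          "step_size": [0.001, 0.01, 0.1]}

def sweep(run_experiment, n_seeds=5):
    """run_experiment(config, seed) -> average return of one trial (placeholder)."""
    results = {}
    for name, values in SWEEPS.items():
        for value in values:
            config = dict(DEFAULTS, **{name: value})   # vary one parameter at a time
            scores = [run_experiment(config, seed=s) for s in range(n_seeds)]
            results[(name, str(value))] = sum(scores) / n_seeds
    return results
\end{verbatim}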
\subsection{Common Hyper-Parameters}
First, we investigate several hyper-parameters common to both TRPO and DDPG: policy architecture and batch size. We use the same sets of hyper-parameters as reported in previous works using these implementations in an attempt to reproduce the results reported in these works.
\textbf{{Policy Network Architecture : }} The policy network architecture can play an important role in the maximum reward achieved by the algorithm due to the amount of information storage provided by the network. We use hidden layer sizes of ($64$,$64$) as in~\cite{TRPO}, ($400$,$300$) as in~\cite{rllab,DDPG}, and ($100$,$50$,$25$) as in~\cite{QPROP} for comparing the results of these algorithms\footnote{All of these use RELU activations for the hidden layers and a Gaussian MLP Policy.}.
Our results can be found in Figures~\ref{trpo_arch} and~\ref{ddpg_network}. Notably, the ($400$,$300$) architecture significantly outperforms both other smaller architectures for Half-Cheetah and to a less significant extent Hopper as well\footnote{For TRPO Half-Cheetah using a two-sample t-test on the sample rollouts: against ($64$,$64$) $t=-13.4165,p=0.0000$; against ($100$,$50$,$25$) $t=-11.3368,p=0.0016$. For TRPO Hopper: against ($100$,$50$,$25$) $t=-0.5904,p=0.2952$; against ($64$,$64$) $t=-1.9081,p=0.2198$}. This is true for both TRPO and DDPG. However, the architecture which we found to be the best ($400$,$300$) is not the one which is used in reporting results for baselines results in~\cite{QPROP,IPG}.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/hc_arch_trpo}
\includegraphics[width=.5\textwidth]{images/hp_arch_trpo}
\caption{TRPO on Half-Cheetah and Hopper with different network configurations}
\label{trpo_arch}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/ddpg_HalfCheetah_Network_Structure}
\includegraphics[width=.5\textwidth]{images/ddpg_hopper_network_structure}
\caption{DDPG on Half-Cheetah and Hopper with different network configurations}
\label{ddpg_network}
\end{figure}
For the Hopper environment, for both TRPO and DDPG, results are not as significantly impacted by varying the network architecture, unlike the Half-Cheetah environment. This is somewhat thematic of what we find across all hyper-parameter variations on Hopper, as will be further discussed later. In particular, our investigation of DDPG on different network configurations shows that for the Hopper environment, DDPG is quite unstable no matter the network architecture. This can be attributed partially to the high variance of DDPG itself, but also to the increased stochasticity of the Hopper task. As can be seen in Figure~\ref{ddpg_network}, even with varied network architectures, it is difficult to tune DDPG to reproduce results from other works even when using their reported hyper-parameter settings.
\textbf{Batch Size : }The batch size parameter plays an important role in both DDPG and TRPO. In the off-policy DDPG algorithm, the actor and critic updates are made by sampling a mini-batch uniformly from the replay buffer. Typically, the replay buffer is allowed to be large. In~\cite{DDPG} and~\cite{QPROP}, a batch size of $64$ was used, whereas the original rllab implementation uses a batch size of $32$. Our analysis with different mini-batches for DDPG $(32, 64, 128)$ shows that similar performance can be obtained with mini-batch sizes of $32$ and $64$, whereas significant improvements can be obtained with a batch size of $128$.
For TRPO, larger batch sizes are necessary in general. We investigate the same batch sizes as used in~\cite{QPROP,TRPO} of ($1000$,$5000$,$25000$). As expected, a batch size of $25000$ produces the best results. As we constrain learning to $5000$ iterations, it is intuitive that a larger batch size would perform better in this time frame as more samples are seen. Furthermore, as can be seen in Figure~\ref{trpo_batch} for Half-Cheetah, the smaller batch sizes begin to plateau to a much lower optimum.
By intuition, this may be due to TRPO's use of conjugate gradient optimization with a KL constraint. With small sample batch sizes, gradients differences between steps may be much larger in a high variance environment and results in a more unstable training regime.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/trpo_batch_size}
\includegraphics[width=.5\textwidth]{images/trpo_batch_size_hopper}
\caption{TRPO on Half-Cheetah and Hopper - Significance of batch size}
\label{trpo_batch}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/ddpg_HalfCheetah_Batch_Size}
\includegraphics[width=.5\textwidth]{images/ddpg_hopper_batch_size}
\caption{DDPG on Half-Cheetah and Hopper - Significance of the mini batch size}
\label{ddpg_batch}
\end{figure}
We also highlight that the DDPG algorithm with different batch sizes produces similar results for the Hopper environment. While other works have reported different tuned parameters for DDPG, we establish the high variance of this algorithm, producing similar results with different batch sizes for the Hopper environment, while a larger batch size improves performance in Half-Cheetah as seen in Figure~\ref{ddpg_batch}.
\subsection{TRPO-Specific Hyper-Parameters}
\textbf{{Regularization Coefficient} : }The regularization coefficient (RC) (or conjugate gradient damping factor) is used as a regularizer by adding a multiple of the identity matrix to the Fisher matrix (or to the finite difference HVP in~\cite{rllab}) during the conjugate gradient step. We investigate a range of values from $1\cdot10^{-5}$ to $0.1$ based on values used in the aforementioned works. We do not see a significant difference\footnote{Using an average of 2-sample t-test comparisons, the largest difference from the default parameter in Hopper is $t=2.8965,p=0.1443$ with RC=0.1 and $t=0.8020, p=0.4540$ with RC=.0001.} in using one particular value of RC over another, though the coefficient seems to have a more significant effect on Hopper. Figure~\ref{trpo_reg} shows the average learning graphs for these variations.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/trpo_reg_coeff_hc}
\includegraphics[width=.5\textwidth]{images/trpo_reg_coeff_hp}
\caption{Regularization coefficient variations for TRPO. Cited implementation values may use different sets of hyper-parameters. See associated works for specific details.}
\label{trpo_reg}
\end{figure}
\textbf{{Generalized Advantage Estimation : }} Generalized advantage estimation~\cite{TRPOGAE} has been shown to improve results dramatically for TRPO. Here, we investigate using $\lambda=1.0$ and $\lambda=.97$. We find that a lower GAE $\lambda$ does in fact improve results over longer training runs in Half-Cheetah and, mildly, in Hopper\footnote{For Half-Cheetah, $t=2.9109,p=0.0652$ for last 500 iterations and $t=1.9231,p=0.1978$ overall. For Hopper, $t=1.9772,p=0.1741$ for last 500 iterations and $t=-0.1255,p=0.2292$ overall.}. Figure~\ref{trpo_gae} shows the average learning graphs for these variations.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/hc_trpo_gae}
\includegraphics[width=.5\textwidth]{images/hp_trpo_gae}
\caption{Generalized advantage estimation lambda value variations for TRPO. Cited implementation values may use different sets of hyper-parameters. See associated works for specific details.}
\label{trpo_gae}
\end{figure}
\textbf{{Step Size : }}The step size (SS) (effectively the learning rate of TRPO) is the same as the KL-divergence bound for the conjugate gradient steps. Here, we find that the default value of 0.01 appears to work generally the best for both Hopper and Half-Cheetah\footnote{Hopper most significant t-test difference from default is SS=0.1 with $t=1.0302,p=0.2929$, and for Half-Cheetah difference from default and SS=0.001 $t=-3.1255,p=0.0404$}. Figure~\ref{trpo_step} shows the average learning curves for these variations. The intuition here is the same behind adjusting learning rates in standard gradient optimization methods, though the formulation is through a constraint rather than a learning rate, it effectively has the same characteristics when tuning it.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/trpo_step_size_hc}
\includegraphics[width=.5\textwidth]{images/trpo_step_size_hp}
\caption{Step size variations for TRPO. Cited implementation values may use different sets of hyper-parameters. See associated works for specific details.}
\label{trpo_step}
\end{figure}
\subsection{DDPG-Specific Hyper-Parameters}
We investigate two hyper-parameters which are unique to DDPG which previous works have described as important for improving results~\cite{rllab,QPROP}: reward scale and actor-critic learning rates.
\textbf{Reward Scale : }As in \cite{rllab}, all the rewards for all tasks were rescaled by a factor of $0.1$ to improve the stability of DDPG. It has been claimed that this external hyper-parameter, depending on the task, can make the DDPG algorithm unstable. Experimental results in \cite{QPROP} give indication that DDPG is particularly sensitive to different reward scale settings.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/ddpg_HalfCheetah_Reward_Scale}
\includegraphics[width=.5\textwidth]{images/ddpg_hopper_reward_scale}
\caption{DDPG on Half-Cheetah and Hopper - Significance of the Reward Scaling parameter}
\label{ddpg_reward_scale}
\end{figure}
Figure~\ref{ddpg_reward_scale} shows that even though DDPG performance have been reported to be highly susceptible to the reward scaling parameter, our analysis shows that DDPG does not improve by rescaling the rewards. In fact, for the Half-Cheetah environment, we find that no reward scaling (RS=$1$) yields much higher returns, even though \cite{QPROP} and \cite{rllab} have reported an optimal reward scale value to be $0.1$. Furthermore, we highlight that often for DDPG, learning curves are not shown for all environments and only tabular results are presented, making it difficult to compare how reward scaling has affected results in prior work.
\textbf{Actor-Critic Learning Rates : }We further investigate the effects of the actor and critic base learning rates as given in \cite{QPROP} and \cite{rllab}, which both use $0.001$ and $0.0001$ (for the critic and actor, respectively). Interestingly, we find that the actor and critic learning rates for DDPG have less of an effect on the Hopper environment than on the Half-Cheetah environment, as shown in Figure~\ref{ddpg_lr}. This suggests that, with the other parameters kept fixed, DDPG is not only susceptible to the learning rates, but is also subject to other sources of variation and randomness.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/ddpg_HalfCheetah_Learning_Rates}
\includegraphics[width=.5\textwidth]{images/ddpg_hopper_learning_rate}
\caption{DDPG on Half-Cheetah and Hopper - Actor and Critic Learning Rates}
\label{ddpg_lr}
\end{figure}
\subsection{General Variance}
We investigate the general variance of multiple trials with different random seeds. Variance across random seeds is of particular interest since it has been noted that in several known codebases, there are implementations\footnote{One such example in the codebase we use here: \url{https://github.com/openai/rllab/blob/master/contrib/rllab\_hyperopt/example/main.py\#L21}.} for searching for the best random seed to use. In particular, we determine whether it is possible to generate learning curves by randomly averaging trials together (with only the seed varied) such that we see statistically significant differences in the average reward learning curve distributions. Thereby, we wish to determine if it is possible to report significantly worse results on a baseline policy gradient method such as TRPO or DDPG, just by varying the random seed (or significantly better results for the algorithm under investigation by doing so).
We run a total of $10$ trials with our best tuned hyper-parameter configurations as examined previously. We randomly split the trials into two groups of $5$, average each group, and plot the results. We find that there can be a significant\footnote{Average 2-sample t-test run across entire training distributions resulting in $t=-9.0916,p=0.0016$ for Half-Cheetah and $t=2.2243,p=0.1825$ for Hopper} difference, as seen in Figure~\ref{trpo_variance}. Particularly for Half-Cheetah, it is possible to get training curves that do not fall within the same distribution at all, just by averaging different runs with the same hyper-parameters but different random seeds.
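This seed-sensitivity check can be reproduced with a few lines, as in the sketch below (ours), under the assumption that the per-iteration average returns of the 10 trials are stored row-wise in \texttt{returns}.
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

def seed_split_test(returns, split_seed=0):
    """returns: (10, n_iterations) array, one row per random-seed trial."""
    idx = np.random.default_rng(split_seed).permutation(returns.shape[0])
    curve_a = returns[idx[:5]].mean(axis=0)    # average of the first group of 5 trials
    curve_b = returns[idx[5:]].mean(axis=0)    # average of the remaining 5 trials
    return ttest_ind(curve_a, curve_b)         # two-sample t-test on the two curves
\end{verbatim}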
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/trpo_random_trials_hc}
\includegraphics[width=.5\textwidth]{images/trpo_random_trials_hopper}
\caption{TRPO with best hyper-parameter configurations, with averages of 5 runs for 2 different sets of experiments under the same configuration, producing differing results.}
\label{trpo_variance}
\end{figure}
Figure~\ref{ddpg_variance} also shows the significance of DDPG instability. Even with fine-tuned hyper-parameter configurations, our analysis shows that stable results with DDPG, on either of the environments cannot be achieved. This further suggests that there might be randomness due to other external sources which affects performance of DDPG on these continuous control tasks.
\begin{figure}[ht!]
\includegraphics[width=.5\textwidth]{images/ddpg_HalfCheetah_tuned_result}
\includegraphics[width=.5\textwidth]{images/ddpg_hopper_tuned_result}
\caption{DDPG with tuned hyper-parameter configurations, with averages of 5 runs for 2 different sets of experiments under the same configuration, producing differing results.}
\label{ddpg_variance}
\end{figure}
Our results show that, for both DDPG and TRPO, taking two different averages over 5 experiment runs does not necessarily produce the same result, and in fact, there is high variance in the obtained results. This emphasizes the need for averaging many runs together when reporting results, using a different random seed for each. In this way, future works should attempt to negate the effect of random seeds and environment stochasticity when reporting their results.
\section{Discussion and Conclusion}
Tables~\ref{table:trpo_papers} and~\ref{table:ddpg_papers} highlight results and metrics presented in various related works which compares to TRPO and DDPG (respectively). We include results from an average of 5 runs across the best cross-section of hyper-parameters (based on our previous investigations). We show various metrics at different numbers of iterations such that a fair comparison can be made against reported results from other works. It can be noted that while some works demonstrate similar results to our own, others vary wildly from our own findings. Furthermore, many works only include the Max Average Return, which can be misleading. Due to the variance we have demonstrated here and the difficulty in reproducing these algorithms, it is extremely important for future works to: (1) report all possible metrics to characterize their own algorithms against TRPO and DDPG (particularly Average Return and Standard Deviation of the returns); (2) report all hyper-parameters used for optimization; (3) attempt to use a somewhat optimal set of hyper-parameters; (4) average results on greater than 5 trials and report how many trials are averaged together\footnote{Further investigation needs be done to determine the amount of trials ($N$) necessary to ensure a fair comparison (i.e. for what $N$ would any $N$-sample average always result in a similarly distributed returns, unlike as has been demonstrated to be possible in Figure~\ref{trpo_variance})}. We intend this work to act as both a guide for accomplishing this and a starting point for determining whether observed values are in line with possible best results on Hopper and Half-Cheetah environments for novice researchers in policy gradients.
\begin{table}[htp!]
\tiny\centering\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
Environment & Metric & rllab~\cite{rllab} & QProp~\cite{QPROP} & IPG~\cite{IPG} & TRPO~\cite{TRPO,TRPOGAE}\footnotemark & Ours & Ours & Ours & Ours\\
\hline
\multirow{4}{*}{Half-Cheetah} & Length (iters) & 500 & -- & -- & 500& 500 & 1000 & 2500 & 5000 \\
& Length (episodes)&$\sim$25k&30k&10k&$\sim$12.5k&$\sim$ 12.5k&$\sim$25k&$\sim$62.5k&$\sim$125k\\
& Average Return & 1914.0 & -- & -- & -- & 3576.08 & 3995.4 & 4638.52 & 5010.83 \\
& Max Average Return & -- & 4734 & 2889 & 4855.00 & 3980.61 & 4360.77 & 4889.18 & 5197.40 \\
& Std Return & 120.1 & -- & -- & -- & 434.78 & 502.57 & 419.08 & 443.87 \\
\hline
\multirow{4}{*}{Hopper} & Length (iters) & 500 & -- & -- & 500& 500 & 1000 & 2500 & 5000 \\
& Length (episodes)&$\sim$25k&30k&10k&$\sim$22k &$\sim$ 12.5k&$\sim$25k&$\sim$62.5k&$\sim$125k\\
& Average Return & 1183.3 & -- & -- & -- & 2021.34 & 2285.73 & 2526.41 & 2421.067\\
& Max Average Return & -- & 2486 & -- & 3668.81 & 3229.14 & 3442.26 & 3456.05 & 3476.00\\
& Std Return & 150.0 & -- & -- & -- & 654.37 & 757.68 & 714.07 & 796.58 \\
\hline
\end{tabular}
\caption{Results and descriptions of reported values by various works using TRPO (Hopper and Half-Cheetah environments) as a baseline. "Length(iters)" denotes algorithm iterations and "Length(episodes)" denotes number of episodes.}
\label{table:trpo_papers}
\end{table}
\footnotetext{Results from original implementation evaluation on OpenAI Gym: \url{https://gym.openai.com/evaluations/eval_W27eCzLQBy60FciaSGSJw}; \url{https://gym.openai.com/evaluations/eval_Gudf6XDS2WL76S7wZicLA}}
\begin{table}[htp!]
\tiny\centering\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Environment & Metric & rllab~\cite{rllab} & QProp~\cite{QPROP} & SDQN~\cite{SDQN} & Ours & Ours & Ours & Ours\\
\hline
\multirow{4}{*}{Half-Cheetah} & Length (iters) & 500 & $1e^6$ (steps) & -- & 500 & 1000 & 2500 & 5000 \\
& Length (episodes)&$\sim$25k&30k&10k&$\sim$ 12.5k&$\sim$25k&$\sim$62.5k&$\sim$125k\\
& Average Return & 2148.6 & -- & -- & 2707.1 & 3127.9 & 3547.1 & 3725.3 \\
& Max Average Return & -- & 7490 & 6614.26 & 3788.2 & 4029.2 & 4460.7 & 4460.7 \\
& Std Return & -- & -- & -- & 907.1& 784.3 & 634.9 & 512.8 \\
\hline
\multirow{4}{*}{Hopper} & Length (iters) & 500 & -- & -- & 500 & 1000 & 2500 & 5000 \\
& Length (episodes)&$\sim$25k&30k&10k&$\sim$ 12.5k&$\sim$25k&$\sim$62.5k&$\sim$125k\\
& Average Return & 267.1 & -- & -- & 790.6 & 883.6 & 838.7 & 857.6 \\
& Max Average Return & -- & 2604 & 3296.49 & 1642.1 & 1642.1 & 1642.1 & 1642.1 \\
& Std Return & -- & -- & -- & 367.9 & 305.2 & 230.9 & 213.7\\
\hline
\end{tabular}
\caption{Results and descriptions of reported values by various works using DDPG (Hopper and Half-Cheetah environments) as a baseline. "Length(iters)" denotes algorithm iterations and "Length(episodes)" denotes number of episodes.}
\label{table:ddpg_papers}
\end{table}
We present a set of results, highlighting the difficulty in reproducing results with policy gradient methods in reinforcement learning. We show the difficulty of fine-tuning and the significant sources of variance in hyper-parameter selection for both TRPO and DDPG algorithms. Our analysis shows that these state-of-the-art on-policy and off-policy policy gradient methods often suffer from large variations as a result of different hyper-parameter settings. In addition, results across different continuous control domains are not always consistent, as shown in the Hopper and Half-Cheetah experiment results. We find that Half-Cheetah is more susceptible to performance variations from hyper-parameter tuning, while Hopper is not. We posit that this may be due to the difference in stochasticity within the environments themselves. Half-Cheetah has a much more stable dynamics model, and thus is less variant in failure modes. Hopper, on the other hand, is prone to quick failure modes which introduce larger external variance, possibly making tuning difficult.
Based on our experiments, we suggest that the ML research community requires better fine-tuned implementations of these algorithms with provided hyper-parameter presets. These implementations should have benchmark results for a wide range of commonly used tasks. Our analysis shows that due to the under-reporting of hyper-parameters, different works often report different baseline results and performance measures for both TRPO and DDPG. This leads to an unfair comparison of baselines in continuous control environments. Here, we provide some insight into the impact of different hyper-parameters to aid future researchers in finding the ideal baseline configurations.
However, we also suggest that these algorithms are often susceptible to external randomness, introduced by the environment and other external hyper-parameters (e.g reward scale in DDPG) which makes it quite difficult to reproduce results with these state-of-the-art policy gradient algorithms. As such, we provide the aforementioned recommendations in reporting implementation details (provide all hyper-parameters and number of trial experiments), reporting results (report averages and standard deviations, not maximum returns), and implementing proper experimental procedures (average together many trials using different random seeds for each).
\section*{Acknowledgements}
The authors would like to thank Joelle Pineau and David Meger for comments on the paper draft. We would like to thank the McGill University Reasoning and Learning Lab and the Mobile Robotics Lab for allowing an engaging and productive research environment. We would also like to thank Alex Lamb and Anirudh Goyal for providing initial feedback on the direction of this work, as part of the ICML Reproducibility in Machine Learning workshop. This work was supported by the \textit{AWS Cloud Credits for Research} program, McGill Graduate Excellence Scholarships, and \textit{NSERC}.
\input{RepML_2017.bbl}
\end{document}
\section{Introduction}
Determining the ground state and properties of $N$ interacting particles in some fixed geometry is
at the core of many disciplines in physics and other natural sciences. However,
in general even for moderate values of $N$, methods based on first principles are either
intractable or extremely time-consuming. Fortunately, the properties of many
systems can be described by correlations that involve just a few particles and
the problem of many particles can be reduced to consider a much smaller number
in the background of the remaining particles. Two-body interactions still
have a tendency to make even such few-body problems very difficult to solve and
insights gained from approximations that allow analytic treatments are therefore
very useful.
In configuration interaction methods, where large basis states of properly symmetrized
wave functions are built and diagonalized to determine system properties, a
basis of harmonic oscillator states can be very convenient as matrix elements of the
two-body interaction are
easy to calculate. This approach was used early on in the context of the
nuclear shell model \cite{goep55,heyd90}.
Turning this method upside down the potentially complicated
two-body interaction could be reproduced or simulated by a simple
harmonic oscillator potential. The great advantage is obviously that
the coupled set of differential equations of motion are much simpler
to solve, and a number of properties are easily systematically
obtained as functions of interaction parameters and particle number.
In subfields of physics where the structure is already established
at a given level of accuracy,
the insight gained from oscillator approximations
is most often insufficient for improvements.
However, in cold atomic gas physics it is now possible to
prepare systems with very exotic and often unknown properties, to use
controlled two-body interactions, to study different geometries and
dimensions, and to vary trapping conditions \cite{bloch08}.
Analytical oscillator approximations can therefore
be expected to be very valuable, as it
has been in other fields of physics. While we are concerned mainly
with the quantum mechanical many-body problem here, we note that
similar techniques are currently used also in the study of classical
dynamics \cite{ludvig2010}.
Obviously, the accuracy of the oscillator approximation increases as
the potentials resemble oscillators. This means that pronounced smooth
potential single minima with room for bound states are directly suited
for investigations of system structures as functions of particle
number and other characteristics. However, also much weaker
binding potentials would reveal correct overall qualitative, and
perhaps also semi-quantitative, behaviour in an appropriate oscillator
approximation. This is especially emphasized by the universal
behaviour of a number of weakly bound structures which only depend on
integral properties like scattering length.
In that case, the bulk part of the potentials are not crucial by themselves but
rather the large distance effect or equivalently the tail of the wave
function or the binding energy. All these quantities are related in
the correct model descriptions but for specific purposes a subset may
suffice. It seems clear that continuum properties like scattering
behaviour are beyond the regime where useful results can be expected. Still
even phase shifts can be extracted from properly discretized continuum
states \cite{zhan08}.
The harmonic oscillator has been used in virtually every aspect of
physics where potentials are needed. Still, in full generality, the
procedure is not well described probably for several reasons. First,
relative and center-of-mass (cm) motions are not separable even for two
interacting particles with different masses in a trapping one-body
potential. Second, the simplest approximation for a self bound
$N$-body system is found by the mean-field approximation where the
spurious cm-motion is ignored. Third, one- or two-body potentials
centered at different points in space have only been of interest when
the harmonic approximation is insufficient, as e.g., in chemistry and
for crystal structures. Now optical lattices and split traps offer the possibility to also
use multi-centered potentials.
If only the relative motion is of interest,
a decoupling scheme is necessary in order to separate the full solution to
relative and cm-motions.
For cold atoms the oscillator approximation has been applied recently
in \cite{brosens97a,brosens97b,brosens97e,brosens97c,brosens97d,tempere98a,lemmens99a,foulon1999,tempere00a}
and in \cite{zalu00,yan03,gajd06}. These works considered
Bose-condensate properties for equal mass particles, and this
is almost the only case where the center-of-mass motion separates.
In this report we develop formalism to treat
the most general harmonically interacting
system for bosons (or distinguishable particles).
In turn, we solve the $N$-body problem for arbitrary
quadratic forms of the one- and two-body interactions. The use of
Cartesian coordinates allows simple solutions for both one, two and
three spatial dimensions, and any non-spherical behaviour of the
corresponding potentials. First we derive the transformation
matrices from initial to final particle coordinates where the
differential equations are completely decoupled. We also derive
expressions for various quantities like energies, root-mean-square
radii, density matrix and its dominating eigenvalue. The hope is to
provide simple tools to reveal at least qualitative features of the
new and unknown systems under design and investigations particularly in
the field of cold atomic gases.
We demonstrate our method by finding solutions in two (2D) and three (3D)
dimensions for $N$ identical and pairwise interacting bosons in
external harmonic one-body potentials. The pairwise interactions
are taken from the celebrated results of Busch {\it et al.} \cite{busch98}
for two particles with zero-range interaction in a harmonic trap, the
validity of which have been tested in ultracold atomic gas experiments \cite{stoferle06}.
Our method thus provides an analytical approximation to the $N$-boson
problem with short-range interactions. The condensate fraction is
readily available from our calculations and shows interesting behaviour
as the scattering length is tuned and the number of particles changes.
In particular, at small positive scattering length we find a highly
fragmented state in both two and three dimensions.
\section{Theoretical derivations}
We first define the Hamiltonian in its most general quadratic form
for both coordinates and kinetic energy derivative operators.
The spin degrees of freedom are omitted and symmetries of the spatial
wave functions can in principle easily be imposed by permutations of
the coordinates. We use matrices to simplify the derivations.
We then derive the coordinate transformation to decouple the set of
coupled oscillators and distinguish between cases where the
cm motion is free and confined by one-body potentials. Lastly,
we calculate pertinent properties.
\subsection{Hamiltonian}
We consider a system of $N$, possibly different, particles of mass
$m_k (k=1,\dots,N)$ interacting through deformed harmonic potentials $V_{\textrm{int}}$. The
particles are in addition subject to external one-body potentials,
$V_{\textrm{ext}}$, for each particle constructed as a sum of harmonic
oscillators with different centers. The total Hamiltonian $H=T+V$
with kinetic energy $T$ and potential $V = V_{\textrm{int}} + V_{\textrm{ext}}$ is then
given by
\begin{eqnarray} \label{e40}
T &=& - \sum_{k=1}^{N} \frac{\hbar^2}{2m_k}\Bigg(
\frac{\partial^2}{\partial x_{k}^2} +
\frac{\partial^2}{\partial y_{k}^2} +
\frac{\partial^2}{\partial z_{k}^2}\Bigg) \;\;, \\ \label{e50}
V_{\textrm{int}} &=& \frac{1}{4} \sum_{i,k=1}^{N} \Bigg( V_{ik,0 } + \mu_{ik}
\Big(\omega^2_{x,ik}(x_{i}-x_{k} + x_{ik,0})^2 \\ \nonumber
&+& \omega^2_{y,ik} (y_{i}-y_{k}+ y_{ik,0})^2 +
\omega^2_{z,ik} (z_{i}-z_{k}+ z_{ik,0})^2 \Big) \Bigg) ,
\\ \label{e60}
V_{\textrm{ext}} &=& \frac{1}{2} \sum_{k=1}^{N} m_k \bigg(
\omega^2_{x,k} (x_{k}-x_{k,0})^2 \\ \nonumber &+&
\omega^2_{y,k} (y_{k}-y_{k,0})^2 + \omega^2_{z,k} (z_{k}-z_{k,0})^2 \bigg)\;,
\end{eqnarray}
where $(x_{k},y_{k},z_{k})$ are the $(x,y,z)$-coordinates of the
$k'$th particle, $\mu_{ik} = m_im_k/(m_i+m_k)$ is the reduced mass of
particles $i$ and $k$,
$(\omega_{x,ik},\omega_{y,ik},\omega_{z,ik})$ are the
frequencies in the $(x,y,z)$-directions for the interaction potential
between particles $i$ and $k$, and
($\omega_{x,k},\omega_{y,k},\omega_{z,k}$) are the frequencies in the
$(x,y,z)$-directions for the one-body potential on the $k'$th particle
with centers specified by $(x_{k,0},y_{k,0},z_{k,0})$. The factor
$1/4$ is made of two factors $1/2$ where one of them is the
conventional notation for an oscillator potential. The other factor
$1/2$ is to count the two-body interaction only once when the $i,k$
summations are extended to assume all integer values from $1$ to $N$.
The shift of the interaction centers, $x_{ik,0}$, for each pair of the
two-body interactions should then change sign when the particles are
interchanged, $x_{ik,0}= - x_{ki,0}$, which implies that the diagonal
has to vanish, $x_{ii,0}=0$, in accordance with zero self interaction.
This Hamiltonian has the most general quadratic form expressed in
terms of the parameters for one- and two-body oscillator potentials.
The shift of potential energy, $V_{ik,0 }$, of the energy for each
pair allows adjustment without change of structure. The shifts of
both one- and two-body oscillator centers suggest applications
approximating optical lattice potentials. The frequencies
traditionally all enter as squares which suggest attraction but the
formalism is equally applicable for imaginary frequencies or
equivalently negative values of these squared frequencies. To produce
stable solutions with such repulsive interactions requires sufficient
attraction from the other two-body interactions or from the external
fields. The choice of Cartesian coordinates allows independent solution for
each dimension, and thereby treats deformations and different
dimensions without any additional complications. Obviously this also
prohibits direct use of symmetries and conserved quantum numbers where
the dimensions are mixed. One example is angular momentum
conservation in the absence of external fields.
We now proceed by rewriting the Hamiltonian in matrix form. For this
we define vectors, $\vec x = (x_{1},x_{2},...,x_{N})^T$, $\vec
\nabla_{x} = (\partial/\partial x_{1},\partial/\partial
x_{2},...,\partial/\partial x_{N})^T$, where a vector is given as a
column of its coordinates, which means the transposed of the row as
indicated by ``$T$''. The $y$ and $z$-direction can be defined
analogously if necessary. We treat each coordinate independently and
may therefore omit the $x$-index to simplify the notation in the
derivation. The $x$-part of the Hamiltonian, $H_{x}$, in
\eref{e40}-\eref{e60} is given by:
\begin{equation}
H_{x} = \frac{1}{2} \vec{\nabla}_{x}^T T \vec{\nabla}_{x} +
\frac{1}{2}\vec{x}^T A\vec{x} + \vec{c}\cdot\vec{x}+V_{\mathrm{shift}} \;,
\label{matrixH}
\end{equation}
where the kinetic energy matrix, $T_{ik}=-\delta_{ik} \hbar^2/(m_i)$,
is diagonal and depends only on inverse masses. The constant term,
$V_{\mathrm{shift}}$ consists of the sum of all separate shift energies:
\begin{eqnarray}\label{ShiftE}
V_{\mathrm{shift}}&=& \frac{1}{2}\sum_{k=1}^N m_k\omega_{x,k}^2x_{k,0}^2 \\ \nonumber
&+& \frac{1}{4}\sum_{i,k=1}^{N} \bigg(V_{ik,0} +
\mu_{ik}\omega_{x,ik}^2 x_{ik,0}^2 \bigg) \;\;.
\end{eqnarray}
The quadratic potential term in \eref{matrixH} contains the
symmetric matrix $A$ which is given in terms of masses and frequencies
by
\begin{eqnarray} \label{e90}
A_{i \neq k} &=& - \mu_ {ik} \omega^2_{x,ik} \\
A_{kk} &=& m_k \omega^2_{x,k} + \sum_{i, i\neq k}^{N}
\mu_{ik} \omega^2_{x,ik} \;. \label{e100}
\end{eqnarray}
The components of the coefficient vector $\vec{c}$ in the linear term are
\begin{equation} \label{e104}
c_k=-m_k\omega_{x,k}^2x_{k,0}+\sum_{i=1}^{N}\mu_{ik}\omega_{x,ik}^2x_{ki,0} \; .
\end{equation}
The $y$ and $z$-parts of the Hamiltonian, $H$, are completely analogous
and we have $H = H_{x} + H_{y} + H_{z}$.
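As a minimal numerical illustration (ours; $\hbar=1$, $x$-direction only, and the function name is an assumption), the matrices entering \eref{matrixH} can be assembled as follows.
\begin{verbatim}
import numpy as np

def build_matrices(m, omega_k, x_k0, omega_ik, x_ik0):
    """m, omega_k, x_k0: length-N arrays of masses, one-body frequencies and
    trap centres; omega_ik (symmetric, zero diagonal) and x_ik0 (antisymmetric)
    are N x N arrays of two-body frequencies and interaction-centre shifts."""
    m = np.asarray(m, dtype=float)
    mu = np.outer(m, m) / np.add.outer(m, m)          # reduced masses mu_ik
    W = mu * omega_ik**2                              # pairwise spring constants
    T = np.diag(-1.0 / m)                             # kinetic-energy matrix (hbar = 1)
    A = -W + np.diag(m * omega_k**2 + W.sum(axis=0))  # quadratic-potential matrix
    c = -m * omega_k**2 * x_k0 + (W * x_ik0).sum(axis=1)   # linear-term vector
    return T, A, c
\end{verbatim}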
\subsection{Reduction to standard form}
The linear term in \eref{matrixH} can be eliminated by translating
the coordinates by
\begin{eqnarray} \label{e93}
\vec{x}^{\prime} = \vec{x} - \vec{a} \;\;\;,\;\;\;
\vec{x} = \vec{x}^{\prime} + \vec{a} \;,
\end{eqnarray}
where the translation vector $\vec{a}$ is determined by the
requirement that all terms linear in $x_{k}^{\prime}$ must vanish from
the Hamiltonian. This condition amounts to $ A\vec{a} = -\vec c$. As
containing only second derivatives the kinetic energy operator remains
unchanged by this linear translation. In total the Hamiltonian in the
new coordinates has only the same quadratic terms in both
coordinates and derivatives. However, linear terms have disappeared
and the term, $-1/2\vec{a}^TA\vec{a}$, should be added to $V_{\mathrm{shift}}$
in \eref{ShiftE}.
Any solution, $\vec{a}$, to $A\vec{a} = -\vec c$ eliminates the linear
terms. Many solutions always can be found, but a unique solution only
exists when $A$ is non-singular, that is $A$ is invertible. A
singular $A$-matrix is equivalent to a subset of non-interacting
particles which also all are unaffected by the one-body
potentials. Then the corresponding degrees of freedom are already
decoupled and following the trivial motion in free space. As already
decoupled they should therefore from the beginning be removed from the
reduction procedure. The dimension of the $A$-matrix is
correspondingly reduced.
In practice, only one frequently occurring example is interesting and
requiring special attention. This is when all interactions are
translation invariant and only depending on coordinate differences between
particles. Then the center-of-mass motion is as in free space.
External one-body fields acting on all particles act on the
center-of-mass coordinate and remove the corresponding degeneracy.
Previous works with oscillators \cite{zalu00,yan03,gajd06} considered
equal mass systems where separation of the center-of-mass motion is
straightforward. Here we provide a general procedure valid for all
sets of masses and all one- and two-body interactions. In all cases we
transform to relative and center-of-mass, $X$, coordinates, where the
latter is defined by
\begin{equation} \label{e114}
X M = \sum_{k=1}^{N} m_k x_k \;\;\;,\;\;\; M = \sum_{k=1}^{N} m_k \;,
\end{equation}
where $M$ is the total mass of the system. Choosing the new set of
coordinates, $\vec{\tilde{x}}$, by $\tilde{x}_{i} \equiv x_i-X$, for
$i=1,2,...,N-1$, supplemented by $\tilde{x}_{N} \equiv X$, we define the
transformation matrix, $F$, by
\begin{equation} \label{e119}
\vec{x} = F \vec{\tilde{x}},
\end{equation}
where the transformation matrix takes the form
\begin{eqnarray}
F=\left( \begin{array}{cccc}
1 & 0 & \ldots &1 \\
0 & 1 & \ldots &1 \\
\vdots &\vdots &\ddots &1 \\
-\frac{m_1}{m_N}& -\frac{m_2}{m_N}& \ldots& 1
\end{array} \right). \label{e117}
\end{eqnarray}
The specific labelling singles out the $N'$th (last) particle with
mass, $m_N$, in the transformation $F$. The inverse transformation,
$F^{-1}$, from original to new coordinates is explicitly given by
\begin{eqnarray}
F^{-1}=\left( \begin{array}{cccc}
1-\frac{m_1}{M} & -\frac{m_2}{M} & \ldots &-\frac{m_N}{M} \\
-\frac{m_1}{M} & 1-\frac{m_2}{M} & \ldots & -\frac{m_N}{M}\\
\vdots &\vdots &\ddots &-\frac{m_N}{M} \\
+\frac{m_1}{M}& +\frac{m_2}{M}& \ldots& +\frac{m_N}{M}
\end{array} \right). \label{e123}
\end{eqnarray}
The Hamiltonian is now easily transformed to the new set of
coordinates, by direct insertion of \eref{e117} for the
coordinates and the inverse and transposed, $(F^{-1})^T$, for the
derivative kinetic energy operators, $\vec{\nabla}_{\tilde{x}}$. The
kinetic energy immediately separates into a sum of relative and
center-of-mass dependent terms, whereas the potential part separates
when it is translationally invariant, and otherwise not. In any case
these coordinates can be used in the following derivations.
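As an explicit numerical check (a sketch; the helper name is ours), the matrices $F$ and $F^{-1}$ of \eref{e117} and \eref{e123} can be constructed and verified as follows.
\begin{verbatim}
import numpy as np

def com_transform(m):
    """Return F and its inverse for masses m (length-N array)."""
    m = np.asarray(m, dtype=float)
    N, M = len(m), m.sum()
    F = np.eye(N)
    F[:, -1] = 1.0                    # every particle coordinate contains +X
    F[-1, :-1] = -m[:-1] / m[-1]      # last row reconstructs x_N
    Finv = -np.tile(m / M, (N, 1))    # -m_k/M in every row ...
    Finv[:-1, :-1] += np.eye(N - 1)   # ... plus the identity in the relative block
    Finv[-1, :] = m / M               # last row gives the centre of mass X
    assert np.allclose(F @ Finv, np.eye(N))
    return F, Finv
\end{verbatim}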
\subsection{Decoupling the oscillators}
We assume that we have the quadratic form without linear terms. We
transform to relative and center-of-mass coordinates in all cases even
though we could have worked with the initial coordinates. We rename
the coordinates and omit the primes in the following expressions. We begin by
diagonalizing the quadratic part of the potential, which then is the
$F^TAF$ matrix (see \eref{matrixH}, \eref{e90} and \eref{e100}). The
orthonormal coordinate transformation $Q$ corresponding to the
diagonalization is defined by the requirements
\begin{eqnarray} \label{110}
Q^{T} F^T A FQ = D \;\;, \;\; \vec x = F \vec{\tilde{x}} = FQ \vec t \;,
\end{eqnarray}
where $D$ is diagonal with eigenvalues $d_i$.
If $A$ is singular at least one of the eigenvalues, $d_i$, is zero, as
when the interactions only depend on relative coordinates. One of
these zero eigenvalues then has an eigenvector corresponding to the
center-of-mass coordinate. This mode now has to be fully decoupled
after the kinetic energy is expressed in the same coordinates. This is
achieved in general by replacing the zero eigenvalue by any finite
number. If only relative motion is of interest the Hilbert space
spanned by this eigenvector should simply be removed.
We proceed to perform a new non-orthonormal transformation of the
coordinates defined by $\vec t = \bar D \vec u$, where $\bar D$ is
diagonal and given by $\bar D_{ik} = \delta_{ik} \sqrt{d_0/d_i}$. The
number $d_0$ can be chosen arbitrarily. We choose to maintain the
total norm which implies that $\Pi_{i=1}^{N} d_i^{(x)} = d_0^{N}$.
This transformation is nothing but scaling the lengths of each of the
eigenvectors to let $\bar D$ become proportional to the unit matrix. The
transformation from $x$ to $u$-space of the derivative vector is then
given as $\vec{\nabla}_x = (F^{-1})^T Q (\bar D^{-1})^T \vec{\nabla}_u$.
The kinetic energy matrix in the $u$-coordinates is $\bar D^{-1} Q^T
(F^{-1}) T (F^{-1})^T Q (\bar D^{-1})^T$, where the space
corresponding to the center-of-mass coordinate decouples from all the
other degrees of freedom. This kinetic energy matrix is diagonalized,
i.e.
\begin{eqnarray}
\nonumber T_x &=& \frac{1}{2}
\vec{\nabla}_u^T \bar D^{-1} Q^T F^{-1} T (F^{-1})^T Q (\bar D^{-1})^T
\vec{\nabla}_u \nonumber \\ &=& \frac{1}{2} \vec{\nabla}_v^T P^{T}
\bar D^{-1} Q^T F^{-1} T (F^{-1})^T Q (\bar D^{-1})^T P \vec{\nabla}_v
\nonumber \\ &=& \label{e130}
\frac{1}{2} \vec{\nabla}_v^T \bar T \vec{\nabla}_v \;,
\end{eqnarray}
where the orthonormal transformation $\vec{\nabla}_u = P
\vec{\nabla}_v$, or $\vec u = P^T \vec v$, is chosen to make $\bar T$
diagonal, i.e. $\bar T_{kk} = - \hbar^2/(\bar m_k)$, or equivalently
diagonalize $\bar D^{-1} Q^T F^{-1} T (F^{-1})^T Q (\bar D^{-1})^T$.
This final orthonormal transformation leaves the potential energy part as
a diagonal matrix with unchanged diagonal elements, i.e.
\begin{eqnarray}
V_x &=& \frac{1}{2} \vec x^T F^T A F\vec x =
\frac{1}{2} \vec t^T D \vec t =
\frac{1}{2} \vec u^T \bar D^{T} D \bar D\vec u \nonumber \\ &=&
\frac{1}{2} d_0 \vec u^T \vec u = \frac{1}{2} d_0 \vec v^T \vec v
= \frac{1}{2} \vec v^T \bar V_x \vec v \; , \label{e120}
\end{eqnarray}
where $\bar V_x$ is the diagonal unit matrix with elements $d_0=\bar
m_k \bar{\omega}_k^2$, which defines the output frequencies, $\bar{\omega}_k$. This factorization returns us to the ordinary
oscillator notation by using the masses determined from the kinetic
energy eigenvalues in \eref{e130}. If zero eigenvalues were generated in the process of transforming from the original $\vec x$-coordinates to the $\vec v$-coordinates, then care must be taken to ensure that they do not contribute to any final observable. This is easily
achieved with $\bar{\omega}_k = 0$ for these modes.
The total Hamiltonian for the $x$-coordinate is now a set of
decoupled oscillators in the new coordinates $\vec v$, i.e.
\begin{eqnarray} \label{e140}
H_x &=& V_{\mathrm{shift}} \\ \nonumber
&+& \sum_{k=1}^{N} \Big(- \frac{\hbar^2}{2\bar m_{x,k}}
\frac{\partial^2}{\partial v_{x,k}^2} + \frac{1}{2}
\bar m_{x,k} \bar{\omega}_{x,k}^2 v_{x,k}^2\Big) \;,
\end{eqnarray}
where we inserted the label $x$ on masses, oscillator parameter and
$v$-coordinates. The harmonic oscillator eigenvalues and
eigenfunctions correspond to the frequencies $\bar{\omega}_{x,k}$ and
the length parameter $b_{x,k}$ as usual given by $b_{x,k}^2 = \hbar/(\bar
m_{x,k} \bar{\omega}_{x,k})$.
In total the transformation $M$ from initial to new coordinates and vice
versa are
\begin{eqnarray} \label{e143}
\vec v &=& M (\vec x - \vec a) \;,\;\;\; \vec{x}-\vec a= M^{-1}\vec v \;,
\\ \label{e143a} M &=& P \bar D^{-1} Q^{T}F^{-1} \;.
\end{eqnarray}
The normal mode of the system is expressed by the eigenvector, and the
corresponding eigenvalue indicates the ease or difficulty of exciting
that particular normal mode.
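Although the construction above yields the explicit transformation $M$, the output frequencies alone can be cross-checked with a few lines, since the coordinate changes leave the oscillation spectrum unchanged. The following sketch (ours; $\hbar=1$, $x$-direction only, unstable modes excluded) uses the equivalent mass-weighted formulation, in which the $\bar{\omega}_{k}^2$ are the eigenvalues of the matrix with elements $A_{ik}/\sqrt{m_i m_k}$.
\begin{verbatim}
import numpy as np

def normal_mode_frequencies(m, A, hbar=1.0):
    """Output frequencies and zero-point energy (x-direction, V_shift excluded)."""
    sqrt_m = np.sqrt(np.asarray(m, dtype=float))
    K = A / np.outer(sqrt_m, sqrt_m)              # mass-weighted potential matrix
    w2 = np.linalg.eigvalsh(K)                    # squared output frequencies
    w2[np.abs(w2) < 1e-12] = 0.0                  # free (e.g. centre-of-mass) modes
    omega = np.sqrt(np.clip(w2, 0.0, None))       # negative w2 would signal instability
    E0 = 0.5 * hbar * omega.sum()                 # ground-state contribution
    return omega, E0
\end{verbatim}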
\subsection{Basic properties}
Observables expressed as expectation values of operators $O$ are found
by
\begin{eqnarray} \label{e158}
\langle \Psi | O |\Psi\rangle &=& \int
d^N \vec x d^N \vec y d^N \vec z \\ \nonumber &&
\Psi^*(\vec v_x,\vec v_y,\vec v_z) O(\vec x,\vec y,\vec z)
\Psi(\vec v_x,\vec v_y,\vec v_z) \; ,
\end{eqnarray}
where the wave functions are simplest in the transformed coordinates
while the operators probably are simpler in the original particle
coordinates.
We have only considered the spatial wave function $\Psi$. Effects of
spin dependence of interaction or quantum statistics have to be
inserted separately.
The wave functions are products of the three one-dimensional harmonic
oscillator wave functions in the new coordinates, that is Gaussians,
$\exp(-v_{x,k}^2/(2b_{x,k}^2))$, times Hermite polynomials with
arguments $v_{x,k}/b_{x,k}$ for each contributing mode $k$. Other
analogous products arise from the $y$ and $z$-directions. The
normalization is as usual when we use the final ($v_{x,k}$) coordinates
in the wave function. All the above transformations, including that
of the non-orthonormal matrix, $\bar D$, was chosen as total norm
conserving. Therefore the volume elements have the same structure in
initial and new coordinates, i.e. $\Pi_{k=1}^{N} dx_k = \Pi_{k=1}^{N}
d v_{x,k}$.
The expectation value of the Hamiltonian gives the energy, which for
the eigenmode, $k$, of given quantum number, $n_{x,k}$, is $\hbar
\bar{\omega}_{x,k} (n_{x,k} + 1/2)$. Adding the similarly obtained
results from the $y$ and $z$-direction we get the total energy for a
given set, $(n_{x,k},n_{y,k},n_{z,k})$, of quantum numbers
\begin{eqnarray} \label{160}
E_{n_{x,k},n_{y,k},n_{z,k}} &=&
V_\mathrm{shift} + \hbar \sum_{k=1}^{N} \big(\bar \omega_{x,k}
(n_{x,k} + 1/2) \\ \nonumber &+& \bar \omega_{y,k} (n_{y,k} + 1/2) +
\bar \omega_{z,k} (n_{z,k} + 1/2)\big) \;,
\end{eqnarray}
where the frequencies of the non-contributing modes are inserted as
zero.
Besides energies, the simplest observables are sizes, i.e.
average values of the interparticle distance(s) in the system. The spatial
extension of the probability for each particle is measured by its root
mean square radius. With the relation in \eref{e143} we find for the
ground state $\Psi$
\begin{eqnarray} \label{e187}
&& \langle \Psi | x_i^2 |\Psi\rangle =
\langle \Psi | \bigg((A^{-1} \vec c)_i
+ \sum_{k=1}^{N }( F Q \bar D P^{T})_{ik} v_k \bigg)^2 |\Psi\rangle
\nonumber \\ && =
((A^{-1} \vec c)_i)^2 +
\sum_{k=1}^{N} \frac{1}{2}b_{x,k}^2 ((F Q \bar D P^{T})_{ik})^2 \; ,
\end{eqnarray}
where we used that the odd powers of the coordinates vanish after
integration. If the center-of-mass coordinate $X$ is decoupled, we
exclude that term from the summation, and the resulting sum gives the
expectation value of $(x_i-X)^2$. The contributions from the other
dimensions should be added.
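A short sketch of how this size estimate can be evaluated numerically is given below; here \texttt{Minv} stands for the composite back-transformation written as $F Q \bar D P^{T}$ above, \texttt{b} for the mode lengths $b_{x,k}$, and \texttt{shift} for $A^{-1}\vec c$, all assumed to be available from the preceding diagonalization.
\begin{verbatim}
import numpy as np

def mean_square_coordinate(i, shift, Minv, b, cm_index=None):
    # Ground-state <x_i^2>: squared shift plus a sum over modes of
    # (1/2) b_k^2 times the squared back-transformation coefficients.
    terms = 0.5 * b**2 * Minv[i, :]**2
    if cm_index is not None:
        # Excluding the decoupled center-of-mass mode gives <(x_i - X)^2>.
        terms = np.delete(terms, cm_index)
    return shift[i]**2 + np.sum(terms)

# Trivial illustration with placeholder inputs.
print(mean_square_coordinate(0, np.zeros(3), np.eye(3), np.ones(3), cm_index=2))
\end{verbatim}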
\subsection{One-body density matrix}
The simplest correlation function is the one-body density matrix,
$\rho(x_1,x_1')$, which is the starting point of the calculation of
statistical properties. It is interesting in itself but serves here
also as a simple illustration of analytical calculations. The one-body
density matrix is relevant for bosons (or distinguishable particles)
which is our main application of the method to be discussed in the
subsequent section. For fermions, the one-body density matrix
is a rather uninteresting object due to the Pauli principle and
off-diagonal long-range order (as arising for instance from pairing)
is determined by the two-body density matrix \cite{yang60}.
While we do not consider fermions explicitly in the current
presentation, we address the general issue
of symmetrization in the subsection below for completeness.
Note here that
we have taken great care to remove the center-of-mass coordinate so
that only intrinsic coherence is studied. This is particularly important
if $N$ is very small \cite{zinner08} or if the system is subjected to
a temporally or spatially varying additional potential \cite{pit00}.
The ground state wave function is a product of Gaussians in the new
coordinates, $\vec v$ with different units of length $b_{k}$. The
exponent can be written $-\vec v^T B \vec v$ where the matrix $B$ is
diagonal with elements $B_{kk}=1/(2b_{k}^2)$. Using \eref{e143}
we return to the initial coordinate where
\begin{equation}
\ln \Psi = - \vec v^T B \vec v = \vec x^T Z \vec x + \textrm{constant} \;,
\end{equation}
which defines the matrix $Z$ when we assume all interaction centers
are at the origin. The exponent of the density matrix for identical
bosons, where we select the particle labeled $1$, is then
\begin{eqnarray}\label{densstart}
&-& x_1Z_{11}x_1 - x_1'Z_{11}x_1' \\
&-& (x_1+x_1')\sum_{k=2}^N(Z_{1k}+Z_{k1})x_k+2\sum_{i,k=2}^Nx_iZ_{ik}x_k.\nonumber
\end{eqnarray}
To integrate over all other coordinates (from $-\infty$ to $+\infty$)
than $x_1$ and $x_1'$ we complete the squares. We define the vector
$\vec{w}=Z_{1k}+Z_{k1}$, for $2 \leq k \leq N$ and make the
substitution $\vec x = \vec f - \vec q$, where $\vec q =
1/2(x_1+x_1')(\bar{Z}+\bar{Z}^T)^{-1}\vec w$, and $\bar Z$ is the $Z$ matrix
without the first row and column. After integration we are left with
the density matrix
\begin{eqnarray}
\rho(x_1,x_1')&=&\mathcal{N}\exp\big[-(x_1Z_{11}x_1+x_1'Z_{11}x_1')\\
&&+\frac{1}{4}(x_1+x_1')^2\vec{w}^T(\bar{Z}+\bar{Z}^T)^{-1} \vec w\big] \;, \nonumber
\end{eqnarray}
where the normalization, $\mathcal{N}$ is
\begin{equation}
\mathcal{N}=\sqrt{\pi(Z_{11}-\frac{1}{2}\vec{w}^T (\bar{Z}+\bar{Z}^T)^{-1}\vec{w})}.
\end{equation}
Going further, we can re-write the exponent as
\begin{eqnarray} \label{rhorewrite}
&-& \frac{1}{2} Z_{11} (x_1-x_1')^2 \\ &+& \nonumber
\bigg(\frac{1}{4} \vec{w}^T (\bar{Z}+\bar{Z}^T)^{-1} \vec w
- \frac{1}{2} Z_{11}\bigg) (x_1+x_1')^2 \;,
\end{eqnarray}
where $d_x$, the ratio of the coefficient of the $(x_1-x_1')^2$ term to that of the $(x_1+x_1')^2$ term, is given as
\begin{eqnarray} \label{e163}
d_x = \frac{2 Z_{11}}{ 2 Z_{11}-\vec{w}^T (\bar{Z}+\bar{Z}^T)^{-1} \vec{w} } \; .
\end{eqnarray}
This ratio determines the largest eigenvalue, $\lambda$, obtained after
diagonalization of the density matrix, i.e.
\begin{equation}
\lambda=\frac{2}{1+\sqrt{d_x}},
\end{equation}
where the subscript $x$ is to remind us that this expression is valid
for one dimension. In higher dimensions the wave functions are products,
and consequently so are the density matrix and its eigenvalues. Thus,
for example, in three dimensions we have
\begin{equation}
\lambda=\left(\frac{2}{1+\sqrt{d_x}}\right)\left(\frac{2}{1+\sqrt{d_y}}\right)\left(\frac{2}{1+\sqrt{d_z}}\right).
\label{lambdaeq}
\end{equation}
The size of this eigenvalue is the established measure for the content
of condensate in the wave function. The remaining eigenvalues can be found in terms of the largest one:
\begin{equation}
\lambda_n=\lambda\left(1-\lambda\right)^n,
\label{denseigval}
\end{equation}
where $n$ is a non-negative integer \cite{gajd00}. We thus see that if $\lambda$ is close to
one then all other eigenvalues will fall off very fast, whereas for small $\lambda$ one has a distribution of many non-zero eigenvalues and a highly fragmented state.
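As a simple numerical check (a sketch assuming NumPy and an arbitrary example value of the ratio $d_x$), the spectrum generated by this relation indeed sums to one:
\begin{verbatim}
import numpy as np

d_x = 1.3                              # example ratio, assumed already computed
lam = 2.0 / (1.0 + np.sqrt(d_x))       # largest eigenvalue in one dimension
n = np.arange(400)
spectrum = lam * (1.0 - lam)**n        # remaining eigenvalues
print(lam, spectrum.sum())             # the geometric series sums to one

# In three dimensions the wave function factorizes, so the largest
# eigenvalue is the product of the one-dimensional factors.
d_y = d_z = d_x
lam_3d = np.prod([2.0 / (1.0 + np.sqrt(d)) for d in (d_x, d_y, d_z)])
print(lam_3d)
\end{verbatim}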
\subsection{Symmetries imposed by boson and fermion statistics}
The solutions discussed above
are all obtained without any requirements of symmetry
due to groups of identical particles in the system. The hamiltonian
must necessarily commute with the corresponding permutation operators, and
the solutions should then have the proper symmetry. It is then only a
question of selecting those states among the complete set of all
solutions. Unfortunately, this tempting conclusion is wrong in
general because it is based on the assumption of non-degenerate
states. Degeneracies allow mixing of different symmetries, which
implies that the solutions are linear combinations of the required
symmetries. Thus, a simple method to restore the required symmetry is
to construct linear combinations of the available solutions with the
same energy.
We have assumed that the hamiltonian is independent of intrinsic
properties like spin of the particles. This means that the total
wavefunction factorizes into intrinsic (including particle spins) and
spatial parts. Here we only consider the symmetries of the spatial
part which subsequently has to be combined with the remaining parts of
the wavefunction. The task is not easily formulated in general since
the system may consist of different groups of particles each with
their own symmetry requirement. This could be a combination of
identical bosons and fermions, or identical particles placed in
different geometries by external fields and consequently effectively
behaving as non-identical particles.
To illustrate the method we assume that all particles are identical,
or alternatively we omit reference to the non-identical particles left
in their orbits while only considering those explicitly mentioned in
the following formulation. We write the symmetrized or
antisymmetrized spatial wavefunction $\Phi$
\begin{equation} \label{e124}
\Phi(\vec r_{1},\vec r_{2},...,\vec r_{N}) = N_p \sum_{p} \pi_{p}
\Psi(\vec r_{p(1)},\vec r_{p(2)},...,\vec r_{p(N)}) \; ,
\end{equation}
where $\vec r_{i} =(x_i,y_i,z_i)$ is the coordinate for particle $i$,
$p$ is a permutation of the set of numbers $\{1,2,...,N\}$, $N_p$ is a
normalization constant and $\pi_{p}$ is $+1$ for spatial (boson)
symmetry, and $\pi_{p} = \pm 1$ for even and odd permutations $p$,
respectively, when we construct wavefunctions with spatial (fermion)
antisymmetry. The energy of $\Phi$ is given as the energy,
$E_{n_{x,k},n_{y,k},n_{z,k}}$ from Eq.(\ref{160}), which is the same
for each term of $\Phi$. This conclusion is due to the fact that the
hamiltonian is invariant under these coordinate changes, and the
energy of the state in Eq.(\ref{e124}) is the expectation value of the
hamiltonian. The wavefunction in Eq.(\ref{e124}) may be identically
vanishing, e.g. when a symmetric, $\Psi$, is antisymmetrized or vice
versa, when an antisymmetric, $\Psi$, is symmetrized. In these cases
the state, $\Phi$, does not describe any physical system.
The procedure can be specified in more detail with matrix
manipulations. If the coordinate transformation corresponding to one
of the permutations in Eq.(\ref{e124}) for one of the Cartesian
coordinates, $x$, is described by $P_x$, i.e. $\vec x_p = P_x \vec x$
(with $\vec x = (x_{1},x_{2},...,x_{N})^T$), we
can express the equivalent transformation in the final coordinates by
$\vec v_p = M P_x M^{-1} \vec v$, with $M$ defined in
Eq.(\ref{e143a}). Thus by replacing $\vec v$ by $\vec v_p$ in the
arguments of the terms in the wavefunctions in Eq.(\ref{e124}) the
same variables $\vec v$ are used as in the unpermuted term. The same
coordinates are then used to express all terms in the full
(anti)symmetrized wavefunction.
The permutations leave the exponent in the wavefunction unchanged,
i.e. $\vec v_p^T B_p \vec v_p = \vec v^T B \vec v$. The reason is
that the $N-1$ oscillator frequencies, masses and lengths are
identical. The last length is related to the center of mass motion,
which either has to be omitted without confining field or remains
invariant under the permutations as decoupled from the intrinsic
motion described by the $N-1$ frequencies. The Hermite polynomials
change arguments but remain polynomials of the same order in the new
coordinates $\vec v_p$. In total, each term in Eq.(\ref{e124}) is a
different polynomial of the same order in the coordinates
corresponding to the permutation. The expectation value of an
arbitrary operator is then laborious to evaluate due to the many
terms. Such expectation values remain analytic whenever they can be
computed in an oscillator basis, which in particular is the case for
any polynomial structure of the coordinate or momentum operators,
although in general they may consist of many terms.
Small excitations leave only few different oscillator quanta in the
wavefunction. The non-vanishing terms in Eq.(\ref{e124}) are then
limited to rather few. The extreme case is the state with all
$n_{x,k}=0$ which is completely symmetric under all permutations, that
is only one term describing the ground state for identical bosons. This
case will be our main concern in the remaining part of the paper.
When all quanta are equal to zero except one with an arbitrary
$n_{x,k} \neq 0$, the number of different permutations is $N$, which
is a manageable number.
Another practical limit is when the number of identical particles
effectively is small, e.g. when identical particles are confined by
external fields to different states or spatial locations. Then only
very few permutations are present, and the brute force method is
easily applicable. In any case the brute force methods are only
necessary when information beyond the energy is required. More
suitable procedures can no doubt be designed for specific systems and
corresponding operators, as for instance discussed recently
in the context of the hyperspherical harmonic approach \cite{gattobigio2009}.
However, all derivations can still be made
analytically if the operator expectations are calculable for
oscillator states.
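For small systems the (anti)symmetrization in Eq.(\ref{e124}) can be carried out by brute force; the following sketch (Python, illustrative only, with the normalization $N_p$ omitted) sums a given spatial function over all permutations with the appropriate sign.
\begin{verbatim}
import itertools, math

def permutation_sign(p):
    # Sign from the cycle decomposition: an even-length cycle flips the sign.
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = p[j]
            length += 1
        if length % 2 == 0:
            sign = -sign
    return sign

def symmetrize(psi, coords, fermionic=False):
    # Brute-force (anti)symmetrization over all particle permutations.
    total = 0.0
    for p in itertools.permutations(range(len(coords))):
        weight = permutation_sign(p) if fermionic else 1
        total += weight * psi([coords[i] for i in p])
    return total

# A symmetric two-particle function: its antisymmetrized version vanishes.
psi = lambda r: math.exp(-(r[0]**2 + r[1]**2))
print(symmetrize(psi, [0.3, -0.7]), symmetrize(psi, [0.3, -0.7], fermionic=True))
\end{verbatim}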
\section{Bosons interacting in a trap}
Two particles interacting in a trap is obviously the simplest
non-trivial system of $N$ interacting particles. Still very
interesting features for two particles were discovered for the extreme
limit of a contact interaction and a confining trap potential
\cite{busch98}.
It would not be surprising if an oscillator approximation has difficulties in a
quantitative description of such systems. On the other hand this is a
challenging problem, and the structure of
$N$ particles in a trap with such pairwise interactions is an
active field of research. We shall
therefore attempt to extract systematic overall features within our
analytical formalism. The
philosophy is to reproduce the two-body properties as closely as
possible, and then systematically calculate properties of the
many-body system.
Several other approaches have been used for few-body systems
of atoms interacting via the contact interaction.
Among these are effective field theory
approaches \cite{stet07,alhassid08,rotu10,stet10}, shell-model \cite{zinner09,cem09},
and Monte Carlo calculations \cite{carlson03,chang07,gezerlis08} in three
dimensions for fermions, various hyperspherical
and variational approaches in three \cite{ole02,ole03,ole04a,han06,thoger07}
and in two dimensions for bosons \cite{lim80,ole04b,blume05,chri09}, and
exact diagonalization in two dimensions for fermions \cite{ront09,liu10} and
for bosons \cite{liu10}.
The zero-range interaction
does not allow the diagonalization of the many-body Hamiltonian
in spatial dimensions greater than one,
so all these methods must address that in some way.
Most often, this manifests as a cut-off to a particular subspace,
which effectively renormalizes the interaction strength \cite{stet07,ront08,zinner09}.
In \cite{chri09}, a Gaussian form of the interaction is used to
approximate the zero-range potential, and only repulsive interactions
are considered. With the exception of \cite{ront09}, all discuss energy
results as a function of model space size, and do not yet discuss other observables.
\subsection{Adjusting the oscillator parameters}
First we must decide how to choose the parameters in the oscillator
model. For two identical bosons we have initially five parameters,
i.e. mass, interaction frequency, energy shift, confining
frequency, and center position of the confining field. One of these can
always be chosen as a scale parameter or a unit without consequences
for any of the properties. In the present case, the external field is
provided by its frequency, $\omega_0$, independent of the two-body interaction,
which then leaves the interaction frequency, $\omega_{ik}=\omega$ for all $i,k$, and the
energy shift, $V_{ik}=V$ for all $i,k$, to be determined. These parameters are all identical
since we consider indistinguishable particles. We can also immediately
set the center position of the confining field to zero, as there is no such
center shifting or multi-center effect in the external field in the original two-body problem.
Our strategy will be to reproduce the ground state properties of the
model of Busch {\it et al.} \cite{busch98} as much as possible using
a harmonic oscillator. We work exclusively in the
domain where the lowest state is the molecular branch that
represents a bound state when the
trap is removed. The population of this particular state was considered
previously in \cite{bert06,bert07}.
In three dimensions, this means that we consider only the positive scattering length
side of the resonance,
whereas in two dimensions this branch is always present. This procedure
provides our effective mapping from the two-body interaction to the
solvable $N$-body problem. We note that our choice of branch means
that for large scattering lengths the two-body energy goes
to $1/2\hbar\omega_0$ (three dimensions) and $\hbar\omega_0$ (two dimensions), where $\omega_0$ is the external field frequency. For small scattering lengths it goes as the inverse of the scattering length squared in both cases, since
it represents the universal bound state that is also present when $\omega_0\rightarrow 0$.
The Hamiltonian, solved in \cite{busch98} for two particles, is
\begin{eqnarray} \label{H3D}
H =-\frac{\hbar^2}{2m}\nabla_r^2 +\frac{1}{2}m\omega_0^2r^2
+ \frac{4\pi\hbar^2a}{m}\delta^{(3)}
(\mathbf{r})\frac{\partial}{\partial r}r \; ,
\end{eqnarray}
where $m$ is the mass of the particles, $\omega_0$ is the external
trap frequency as mentioned above, $r$ is the relative coordinate $\mathbf{r}=\sqrt{1/2}(\mathbf{r}_1-\mathbf{r}_2)$ and $a$ is the scattering length of the two-body potential assumed to be a regularized $\delta$-function.
The solutions are given as eigenvalue equations and corresponding wave
functions, $\psi$. The form of $\psi$ for both two and three
dimensions is found to be
\begin{eqnarray} \label{e227}
\psi(\mathbf{r})\propto\frac{1}{2}\pi^{-D/2}e^{-r^2/(2\ell^2)}
\Gamma(-\nu)U(-\nu,D/2,r^2/\ell^2) \; ,
\end{eqnarray}
where the dimension is $D=2,3$, the length $\ell$ is given by
$\ell^2=2\hbar/(m\omega_0)$, and the relative energy
$E_{rel}/(\hbar\omega_0)=2\nu+D/2$ is given in terms of the
non-integer quantum number, $\nu$. The eigenvalue equations are
respectively
\begin{eqnarray}
2\frac{\Gamma(-\nu)}{\Gamma(-\nu-1/2)} &=& \frac{\ell}{a}, \quad D=3
\label{BuschE3D} \\ \label{e213}
\gamma+\frac{1}{2}\psi(-\nu) &=& \ln\left(\frac{\ell}{a}\right),\quad D=2 \;,
\end{eqnarray}
where $\gamma$ is the Euler-Mascheroni constant and $\psi$ in \eref{e213} denotes the digamma function. For a given
scattering length and trap frequency $\nu$ is obtained and both
energies and wave functions are determined. There are many solutions to the above equations, but we repeat that we work exclusively with the lowest molecular bound state, corresponding to the lowest solution for $\nu$.
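Solving these transcendental equations for the lowest branch is straightforward numerically; the sketch below (assuming SciPy, with an arbitrary example value of $\ell/a$) brackets the root at $\nu<-1/2$ in 3D and $\nu<0$ in 2D, using the log-Gamma and digamma functions.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, digamma
from scipy.optimize import brentq

ratio = 1.0   # example value of ell/a

# 3D: 2 Gamma(-nu)/Gamma(-nu-1/2) = ell/a, lowest (molecular) branch nu < -1/2.
f3d = lambda nu: 2.0 * np.exp(gammaln(-nu) - gammaln(-nu - 0.5)) - ratio
nu3d = brentq(f3d, -50.0, -0.5 - 1e-10)

# 2D: gamma_E + psi(-nu)/2 = ln(ell/a), lowest branch nu < 0
# (psi here is the digamma function).
f2d = lambda nu: np.euler_gamma + 0.5 * digamma(-nu) - np.log(ratio)
nu2d = brentq(f2d, -50.0, -1e-12)

print("3D: nu =", nu3d, " E_rel =", 2 * nu3d + 1.5, "hbar*omega0")
print("2D: nu =", nu2d, " E_rel =", 2 * nu2d + 1.0, "hbar*omega0")
\end{verbatim}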
Pertinent features from these solutions are now used to choose the
oscillator parameters. First we directly choose the same external
frequency $\omega_0$ for both particles. Second, we compute the mean
square radii for $D=2,3$ from \eref{e227} and equate to the
corresponding oscillators, i.e.
\begin{equation}\label{freq}
\frac{\langle\psi|r^2|\psi\rangle}{\langle\psi|\psi\rangle}
= \frac{D\hbar}{2\mu\sqrt{\omega_{}^2+\omega_0^2}} \;.
\end{equation}
This determines the oscillator frequency $\omega_{}$. Finally, we
adjust the energy shift for the oscillator model to reproduce the
correct two-body energies, i.e.
\begin{equation} \label{e243}
(2\nu+D/2)\hbar\omega_0=\frac{D}{2}\hbar\sqrt{\omega_{}^2+\omega_0^2}+ V_{} \;,
\end{equation}
where $\nu$ is obtained by solving the relevant eigenvalue equation from
\eref{BuschE3D} and \eref{e213}. The energy shift is
$V_{\textrm{shift}}(N=2)= V_{}$ as seen in
\eref{ShiftE}.
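In practice the two conditions are easily inverted; the following sketch uses illustrative units with $\hbar=\omega_0=1$, treats the mass parameter $\mu$ entering \eref{freq} simply as an input (an assumption about its value), and assumes the mean square radius has already been computed from the two-body wave function.
\begin{verbatim}
import numpy as np

hbar, omega0, D = 1.0, 1.0, 3          # units with hbar = omega0 = 1
mu = 0.5                               # mass parameter in the radius relation (assumed)
nu = -0.78                             # from the eigenvalue equation above
r2 = 0.9                               # <psi|r^2|psi>/<psi|psi>, assumed precomputed

root = D * hbar / (2.0 * mu * r2)      # sqrt(omega^2 + omega0^2)
omega = np.sqrt(root**2 - omega0**2)   # interaction frequency
V = (2 * nu + D / 2.0) * hbar * omega0 - (D / 2.0) * hbar * root
print("omega =", omega, " V =", V)
\end{verbatim}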
The interaction frequency combined with the trap frequency determines
all structure. The size of the system is crucial and we have chosen
to reproduce the radius. It is not as meaningful to adjust to energies
since the oscillator cannot be expected to describe very weakly bound
and spatially extended structures. Still we expect to get an
indication of the energy variation with $N$ from the shift in the
energy zero point. All oscillator parameters are now determined for
identical bosons, and we can proceed to investigate consequences for
the many-body system.
We note the method to determine the oscillator parameters for the two-body
potential can be considered much more general and other states may be used,
like for instance a higher excited state in the zero-range model. In that case
certain technical considerations arise, for example with respect to the nodal
surfaces of the many-body wavefunction in relation to the two-body interactions
from which it is built. This is particularly important in the case of fermions
due to the requirement of antisymmetry. A discussion of these questions will be
presented elsewhere.
\subsection{The $N$-body system}
The total energy shift for $N$ particles is simply the number of pairs
times the two-body shift, i.e.
\begin{equation}
V_{\textrm{shift}}=\frac{1}{2}N(N-1) V_{} \; .
\label{EnergyShift}
\end{equation}
The external frequency is $\omega_0$ for all particles. Solving the
oscillator model leads to a set of frequencies, one of which equals
the external trap frequency and corresponds to the decoupled
center-of-mass motion. The remaining $N-1$ frequencies are
degenerate and for each direction are given by
\begin{equation}
\bar{\omega}_{x}= \sqrt{N/2}\sqrt{\omega_{}^2+2\omega_0^2/N} \; .
\label{outputfreq}
\end{equation}
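This degeneracy pattern is easy to verify numerically; the sketch below (NumPy, illustrative parameters) assumes a pair potential of the form $\frac{1}{4}m\omega^2(x_i-x_j)^2$ (an assumption about the normalization used here) and recovers one mode at $\omega_0$ plus $N-1$ modes at the frequency of \eref{outputfreq}.
\begin{verbatim}
import numpy as np

N, omega0, omega = 6, 1.0, 0.7

# Dynamical matrix (Hessian of the potential divided by the common mass)
# for the assumed pair potential (1/4) m omega^2 (x_i - x_j)^2.
K = np.full((N, N), -omega**2 / 2.0)
np.fill_diagonal(K, omega0**2 + (N - 1) * omega**2 / 2.0)
freqs = np.sort(np.sqrt(np.linalg.eigvalsh(K)))

print(freqs)   # omega0 once (center of mass), then N-1 equal frequencies
print(np.sqrt(N / 2.0) * np.sqrt(omega**2 + 2.0 * omega0**2 / N))
\end{verbatim}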
The total ground state energy of the relative motion is then
\begin{equation}
E_{gs} = \frac{1}{2}N(N-1) V_{} + \frac{D}{2} \hbar (N-1)
\bar{\omega}_{x} \; .
\label{e303}
\end{equation}
For repulsive interactions the sign of $\omega_{}^2$ in
\eref{outputfreq} and \eref{e303} is changed. If
$\bar{\omega}_{x}$ becomes imaginary the system becomes unstable,
i.e. if $N \omega_{}^2 < - 2 \omega_0^2$, then the pairwise
repulsion is too strong for the external field to confine the system.
The energy expression in \eref{e303} for a gas of many identical
bosons has $N$-dependent terms of different origin. The term
proportional to $N^2$ is solely from the interaction, whereas the
$N^{3/2}$ term originates from kinetic energy, two-body interaction,
and external one-body potential, as given by the solution. The
relative influence of the external potential decreases with $N$. The
remaining part still varies as $N^{3/2}$ which is a compromise between
the pairwise interaction increasing as $N^2$ and the linear kinetic
energy depending on $N$.
We can compare with energy relations directly derived for a gas of
bosons that interact via a $\delta$-function two-body potential \cite{lovelace87,baym96},
which in
the limit of weak interaction in three dimensions is
\begin{equation} \label{e353}
E\simeq \frac{3N}{2}\hbar\omega_0+\frac{N^2U_0}{2(2\pi)^{3/2}b^3} \;\;,\;\;
U_0=\frac{4\pi\hbar^2a}{m} \;,
\end{equation}
where $b$ is the external trap length. This expression has the same
$N^2$ scaling from the interaction as $V_{\textrm{shift}}$ in the oscillator
model. The proportionality factor in $V_{\textrm{shift}}$ is obtained through
the frequency and strongly depends on which state is used for the
adjustment. A more detailed comparison is then less direct. The
linear term in \eref{e353} is obviously arising from kinetic
energy and external field which for the oscillator leads to the term
proportional to $N^{3/2}$.
For a strongly interacting system, where $Na/b\gg 1$, a variational
calculation where part of the kinetic energy is neglected gives
a different dependence \cite{lovelace87,baym96}
\begin{equation}
E=\frac{5}{4}\left(\frac{2}{\pi}\right)^{1/5}
\left(\frac{Na}{b}\right)^{2/5}N\hbar\omega_0 \;.
\end{equation}
The overall energy scales as $N^{7/5}$ which presumably should be
compared to our result when $\omega_{} \ll \omega_0$ where we get a
linear energy scaling from kinetic energy and part of the two-body
interaction. Another part of the interaction is still scaling as
$N^2$. A similar result is, not surprisingly, found in the Thomas-Fermi
approximation, where the kinetic energy term is ignored, but where
higher order terms can be included to improve the description \cite{fu03,zinner09b,thoger09}.
The mean square distance from the center-of-mass is also calculated to
be
\begin{equation} \label{e313}
\langle\left(\mathbf{r}-\mathbf{R}\right)^2\rangle=
\frac{D(N-1)}{2N^{3/2}}\frac{\hbar}{m}
\frac{1}{\sqrt{\omega_{}^2/2+\omega_0^2/N}}\;,
\end{equation}
where $\mathbf{R}$ is the location of the center-of-mass. This
measure of the spatial extension is a reflection of the inverse
behaviour of energies and mean square radii.
\subsection{Energies and radii}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure1.eps}
\caption{Relative energies per particle at several different values of the 3D scattering length as a function of particle number. The different values of the ratio are, from bottom to top: $a/\ell=100,10,5,2,4/3,1,2/3,1/2,1/5,$ and $1/10$. }
\label{3DErel}
\end{figure}
The dependence on particle number $N$ is very explicit, but still two
or rather three terms compete with their different $N$-scaling. We
first show the relative energy per particle (i.e., the energy of the relative motion of the particles without the center-of-mass contribution
corresponding to one particle moving in the external field) in \fref{3DErel} as function of $N$ for three dimensions. It happens that a two-body system is ``unbound'' in the
oscillator model corresponding to positive relative energy. This
occurs when the positive contribution in \eref{e243} is larger
than the negative $V_{\textrm{shift}}$. However, $V_{\textrm{shift}}$ must dominate as
$N$ increases, and in fact we find that once another particle is
added, the relative energy is less than zero for any of the studied
scattering lengths. The $N$-dependence is very smooth and steadily
increases the binding which varies strongly when the scattering
length approaches zero in units of the trap length. This is also the region where the magnitude of the two-body bound state energy increases rapidly.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure2.eps}
\caption{Degenerate frequency, $\bar \omega$ of the $N$-particle systems in 3D plotted logarithmically as a function of the ratio of the scattering length to the external confinement length. The different values of $N$ plotted are, from bottom to top: 3, 4, 5, 6, 10, 20, and 30.}
\label{freq3d}
\end{figure}
The frequency dependent term in the energy proportional to $\hbar
\bar{\omega}_{x}$ is simply the zero point motion of an oscillator.
It is therefore also the smallest unit of excitation of the system
corresponding to one particle lifted in one dimension from the ground
to first excited state. This is the energy of the normal mode of
excitation. The dependence is shown in \fref{freq3d} as
function of scattering length from the Busch {\it et al.} model in
\eref{H3D}. The frequency is
small and constant for scattering lengths larger than the trap length,
while it begins to grow very quickly as soon as the trap length
exceeds the scattering length.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure3.eps}
\caption{Relative sizes, $(r-R_{cm})^2$, at several different values of the 3D scattering length as a function of particle number. From top to bottom, the ratios are $a/\ell=100,10,5,2,4/3,1,2/3,1/2,1/5,$ and $1/10$.}
\label{3Dsize}
\end{figure}
The size of the system is, along with the energy, the most fundamental
property. The intuitive implication that smaller radii follow larger
binding is also observed in \fref{3Dsize}, regardless of the
size of the scattering length, though at the weaker interactions the
change is rather flat for the first few added particles. The
$N$-dependence is again rather simple as seen in \eref{e313}
where the mean square radius decrease with $1/\sqrt{N}$ for large
$N$. Otherwise the radii are varying with frequency as usual. The
interesting part is rather that the sizes increase substantially with
scattering length for given particle number.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure4.eps}
\caption{Relative energies per particle at several different values of the 2D scattering length as a function of particle number. From bottom to top, the ratios are $a/\ell=100,10,5,2,4/3,1,1/2,1/5,$ and $1/10$. Note that a few of the bottom lines at large scattering length over trap length ratios appear to stop at small particle numbers. This is because there is a sign change in the energy, which is discussed around \eref{e373}.}
\label{2DErel}
\end{figure}
We now repeat the procedure in two dimensions. The major difference is
obviously from the adjusted oscillator input parameters. The energy
and radius expressions are already given in \eref{e303} and
\eref{e313}. The $N$-dependence of the energy is shown in
\fref{2DErel} where the same overall structure as in
\fref{3DErel} appear. As in 3D, the energy decreases monotonically with the addition of more atoms for all scattering lengths.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure5.eps}
\caption{The critical number $N_c$ for 2D as function of the ratio of
the scattering length to the external length. The vertical line is $a/\ell=3.3$, at which point (to the left of this line) all systems of more than two particles become self-bound.}
\label{Ncrit}
\end{figure}
The two-body energy shift is less negative because the size
requirement to determine the frequency better matches the one-body
frequency. Then several of the $N$-body energies are positive,
although ``binding'' (negative energy) finally occurs by adding a
sufficient number of particles. This critical number, $N_c$, where
the system becomes self-bound is given by
\begin{equation} \label{e373}
N_c = \frac{(\hbar\omega)^2}{V_{}^2}\left[1 + \sqrt{1
+ 4 \frac{\omega_{0}^2}{\omega^2}\left(\frac{V}{\hbar\omega}\right)^2}\right]\; ,
\end{equation}
which for the two dimensional case depends strongly on the initial
scattering length as seen in \fref{Ncrit}. For the largest
scattering length in \fref{Ncrit} $N_c$ is about $12$, whereas
there is binding for all particles for scattering lengths smaller than about three. As the scattering length becomes arbitrarily large, the critical number also increases, meaning that the addition of any number of particles is not sufficient to bind the system at infinite scattering length.
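The critical number is a simple closed expression and can be tabulated directly; a short sketch, with purely illustrative parameter values in units where $\hbar=\omega_0=1$, is the following.
\begin{verbatim}
import numpy as np

def critical_number(V, omega, omega0=1.0, hbar=1.0):
    # Critical particle number from the expression above; the system is
    # self-bound (negative energy) once N exceeds this value.
    x = (V / (hbar * omega))**2
    return (1.0 / x) * (1.0 + np.sqrt(1.0 + 4.0 * (omega0 / omega)**2 * x))

for V, omega in [(-0.3, 0.5), (-0.6, 0.8), (-1.5, 1.2)]:   # illustrative pairs
    print(V, omega, critical_number(V, omega))
\end{verbatim}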
In \fref{freq2d} we show how the degenerate frequency for ten
particles behaves as a function of the scattering length. The behaviour
is very similar to the 3D result, with both curves turning up strongly
after the scattering length becomes less than the external confinement
length.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure6.eps}
\caption{Degenerate frequencies, $\bar{\omega}$, of the $N$-particle systems in 2D plotted logarithmically as a function of the ratio of the scattering length to the external confinement length. The particle numbers plotted are, from bottom to top, 3, 4, 5, 6, 10, 20, and 30.}
\label{freq2d}
\end{figure}
\Fref{2Dsize} shows the size results in two dimensions. These
also follow a slightly different trend than in three dimensions. In
two dimensions, for several scattering lengths, the size actually
increases for the addition of a particle (going from three to four
particles), before falling for all additional particles. The
scattering lengths where this behaviour is seen correspond roughly to
those that have positive three-body energies. For smaller scattering
lengths, the relative sizes decrease monotonically with an increase
in particle number.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure7.eps}
\caption{Relative sizes at several different values of the 2D scattering length as a function of particle number. From top to bottom, the ratios plotted are $a/\ell=100, 10, 5, 2, 4/3, 1, 1/2, 1/5, 1/10.$}
\label{2Dsize}
\end{figure}
\subsection{One-body density}
The one-body density matrix has information about the mean-field
content of the wave function \cite{yang60}. This is quantified by the eigenvalue in
\eref{lambdaeq}, which directly measures how much this state has the
structure of a coherent (condensed) state. In \fref{3Dlambda}, we see how this
eigenvalue, $\lambda$, evolves for 3D with interaction strength and
particle number. For the most part it increases with particle number,
though at weaker interactions there is a small minimum around five or
six particles and it increases thereafter, while it decreases
uniformly with interaction strength.
The overall increase with $N$ is in agreement with the theorem that
the mean-field wave function is approached as $N$ tends to infinity \cite{yang60}.
The condensate fraction is large for large scattering lengths where
the external field is decisive, and consequently favors the
corresponding mean-field structure. This follows from the fact noted above
that the two-body wave functions become essentially non-interacting oscillator
states given by the confinement ($\omega$ becomes small).
On the other hand, for small
scattering lengths the structure is far from that of a condensate. The
particles are much more tightly bound with strong correlations. Again this
is consistent with the fact that we have very strongly bound two-body
states in this limit that are very different from the non-interacting harmonic
states.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure8.eps}
\caption{The value of $\lambda$ at several different values of the 3D scattering length as a function of particle number. From top to bottom, the ratios plotted are: $a/\ell=100, 10, 5, 2, 4/3, 1, 2/3, 1/2, 1/5, 1/10.$}
\label{3Dlambda}
\end{figure}
The question of whether a true condensate actually
exists in a realistic quasi-2D setup with harmonic trapping potentials
below a certain critical temperature \cite{bagnato91,bloch08} will not be addressed
here. We simply take the appearance of a large eigenvalue in the one-body density
matrix as our working definition of a condensate as in the 3D case above.
The condensate fraction is shown in \fref{2Dlambda} for the 2D
system.
The same tendency of increase with $N$ is present here. Again
the condensate fraction is large (small) for large (small) scattering
lengths compared to the size of the external field. This reflects the
amount of correlation in the wave function precisely as for 3D.
Quantitatively we find that $\lambda$ is even flatter as a function of
particle number than in 3D, and it is also consistently higher for the
same scattering length. We note that the 2D results consistently produce
larger condensate fractions than in 3D. As we discuss below this
is connected with the fact that the 2D case resembles a non-interacting
system when $a\rightarrow\infty$ better than the 3D case does. The remaining
deviation of $\lambda$ from unity is due to the separation of center-of-mass.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure9.eps}
\caption{Condensate fraction at several different values of the 2D scattering length as a function of particle number. From top to bottom, the ratios plotted are: $a/\ell=100, 10, 5, 2, 4/3, 1, 1/2, 1/5, 1/10.$}
\label{2Dlambda}
\end{figure}
\subsection{Normal modes and symmetries}
The characteristic properties of a system are reflected in the
structures and energies of the normal modes. Here we discuss the energies
of the degenerate modes, that is, the amount of energy required to
excite them. Unfortunately, the
degeneracy is itself an obstacle for understanding the uncoupled
structures of single excitations. The reason is that any linear
combination of modes with the same energy is an equally valid choice.
The set of normal modes only has to be
orthogonal, but that can be achieved in infinitely many ways. One
general exception is if other conserved quantum numbers have to be
restored by specific linear combinations of the degenerate states.
Angular momentum is a prominent example in the absence of external
fields.
We now discuss what determines the degeneracy, or
equivalently what would break it. First, if all masses, all two-body
interaction frequencies, and all one-body frequencies are equal, then
$N-1$ degenerate frequencies are produced along with the one-body
frequency, which is returned unchanged.
If masses are changed, then the degeneracies are broken, and if $N-2$
masses are changed then all degeneracies will be broken. This is the
same for the one-body frequencies (those in \eref{e60}): if $N-2$
of them are different from each other, then all the output frequencies
will be different.
For the interaction frequencies in \eref{e50} the situation is
slightly more complex. If one row and column in the $A$-matrix
of \eref{e90} have the same interaction frequency, i.e., if this
particle interacts the same with all other particles, then there will
be at least one pair of degenerate frequencies. In general, of the
$N(N-1)/2$ interaction frequencies (off-diagonal elements of the $A$-matrix),
if $(N-1)(N-4)/2$ ($N>4$) of them are
different, then that is enough to guarantee all symmetries are
destroyed.
However, all degeneracies can be broken with a wiser
distribution of the different frequencies. If, for example, the
frequencies immediately above the main diagonal of $A$ are all different and all
the remaining ones are the same (a total of $N$ different off-diagonal
frequencies), then that is enough to destroy all degeneracies of the
resulting normal modes.
Thus, degeneracy can be broken or reached through many different
paths. The structure of the resulting degenerate normal modes depends
on the chosen path. We still find it interesting to investigate
specially selected normal modes. If all particles are distinguishable
it should in principle be possible to observe corresponding
vibrational structures where each particle is detected. To probe the
underlying structure, revealed by the normal modes, we therefore
approach the degenerate limit from an entirely non-degenerate system.
The two simplest ways are to differentiate the particles by minute differences in mass, or alternatively in one-body frequency.
We first notice that the normal modes are the results of transforming
the set of oscillators to diagonal form. The energies and eigenvalues
are obtained from the matrix depending on masses, external field
frequency, and two-body interaction frequency. Thus by choosing all
these parameters to be identical the normal modes in two and three
dimensions are the same. This is achieved by the same masses, same
external field, and corresponding choices of scattering lengths in 2D
and 3D such that the two-body interaction frequencies from \eref{outputfreq} are identical. The correspondence is seen in
\fref{scatlength} where both scattering lengths are small at the same time
and also increase simultaneously. The $2D$ scattering length
approaches an upper limit of about $1.14 \ell_{2D}$ for increasing
$a_{3D}$. The size in $2D$ is much smaller than in $3D$.
In \fref{scatlength} we also show the relation between 2D and 3D
oscillator shifts obtained from \eref{e243} for corresponding
scattering lengths.
It is clear from the figure that it is possible to map 3D results onto
2D results, but the reverse is not always true. This is due to the
different behaviour of the systems for large scattering length, where
the energy and square radius of the two-body system
are controlled by the properties of the
external trap.
In two dimensions we find that $\langle r^2\rangle=\ell^2$
when $a\rightarrow\infty$ and \eref{freq} then tells us that
we have to choose $\omega=0$, i.e. we have a non-interacting system.
For three dimensions, the situation is somewhat different as the virial
theorem applied for $a\rightarrow\infty$, tells us that $\langle r^2\rangle=\ell^2/2$.
From \eref{freq} we deduce that $\omega=2\sqrt{2}\omega_0$, i.e.
our harmonic equivalent is an interacting system still.
This also implies that the interaction frequency
in 3D, $\omega_{}$ in \eref{outputfreq}, can never be smaller than
$2\sqrt{2}\omega_0$. In 2D, there is no lower limit for the interaction
frequency and it eventually vanishes at a large enough scattering
length where the external field determines all properties.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure10.eps}
\caption{The relation between the 2D scattering length to external
length ratio to the same quantity in 3D for both the interaction
frequency (red curve) and energy shift (green curve).}
\label{scatlength}
\end{figure}
From \fref{scatlength} we can, in principle, extract 3D results from a
series of 2D calculations for different scattering lengths. The
procedure is to start with the desired 3D scattering length, find the
corresponding 2D scattering length, read off the related 2D results for
normal modes, radii, and oscillator energy shift. The normal modes
and radii are now the 3D results, and the 3D energy is obtained by
finding the related 3D oscillator energy shift from \fref{scatlength}
combined with the expression in \eref{e303}. The procedure can be reversed provided the desired 2D scattering length is within the range accessible to conversion from 3D (i.e., $a_{2D}/\ell_{2D}\leq 1.14$).
The hamiltonian we obtain in our oscillator approximation can be decoupled
by a change of coordinates as discussed above.
The normal modes can therefore be viewed in one dimension at a time as the
amplitudes with which
each of the individual particles are moved when the corresponding
mode is excited on top of the ground state. More explicitly, consider
exciting the $i$'th mode with probability $p$, so that the wavefunction
of the system becomes
$|\Psi(t)\rangle=\sqrt{1-p}\,|0\rangle+\sqrt{p}\,e^{-i\bar{\omega}_i t} |i\rangle$,
where $\bar{\omega}_i$ is the frequency of mode $i$. The displacement in
time is then $x_i(t)=\langle\Psi(t)|x|\Psi(t)\rangle=A_{0i}\cos(\bar{\omega}_it)$,
with amplitude given by $A_{0i}=2\sqrt{p(1-p)}\langle 0|x|i\rangle$. We
illustrate the modes pictorially below.
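A sketch of how such amplitudes can be evaluated is the following; it is illustrative only, with \texttt{Minv} denoting the back-transformation from normal to particle coordinates and \texttt{b} the mode lengths, both assumed known from the diagonalization, and with the harmonic-mode matrix element $\langle 0|v_i|1_i\rangle=b_i/\sqrt{2}$.
\begin{verbatim}
import numpy as np

def mode_amplitudes(Minv, b, i, p=0.1):
    # Displacement amplitude of every particle when mode i carries the
    # excitation with probability p; rescaled so the largest amplitude
    # is one, as in the figures below.
    amp = 2.0 * np.sqrt(p * (1.0 - p)) * Minv[:, i] * b[i] / np.sqrt(2.0)
    return amp / np.max(np.abs(amp))

# Placeholder inputs purely to show the call signature.
print(mode_amplitudes(np.eye(3), np.ones(3), i=1))
\end{verbatim}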
The fact that the different
spatial directions are decoupled also implies that a spherical
external field produces degenerate modes. This degeneracy can be
lifted by deforming the field but such a symmetry breaking would only
reflect the properties of the field. We therefore consider the
one-dimensional eigenmodes which apply for both $2D$ and $3D$
when using a correspondence of scattering lengths down to $1D$
as was done between $2D$ and $3D$ in \fref{scatlength}.
In \fref{3DNM6} and \fref{3DNM8} we show a sequence of modes for
six and eight particles respectively. The dependence on
scattering length is rather weak for both small and large $a/\ell$.
The general
picture that emerges from breaking the symmetry by mass differences is
first that the energy of the center-of-mass oscillation is maintained.
Second, the energies of the smallest and the largest mode correspond to
oscillation where either the lightest or the heaviest particles move in one
direction while all the others move less and in the opposite direction.
The remaining
modes with intermediate energies correspond to one particle joining
the lone particle, moving in the order of increasing mass for each mode,
though in most cases it appears that only one or two particles are displaced
significantly, while the remainder stay in a slightly displaced clump.
It should be noted that the normal modes are highly sensitive to the
symmetries of the interaction, and how we broke the symmetry in order
to show non-degenerate normal modes. In this present case, we
changed the masses slightly to make the particles distinguishable, but still with
identical interactions between all particles. We expect that if
the interactions are changed slightly in a certain
prescribed manner, then the normal modes will reflect the symmetries
of that change.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure11.eps}
\caption{Amplitudes of the normal modes in the $x$ direction of a six particle system in 3D calculated with $a/\ell=1$, which is equivalent to a ratio of $0.61$ in 2D. The largest amplitude is rescaled to unit magnitude. Points in the diagram are numbered from one to six with the mass increasing from particles one to six. Overlapping points are not labeled, and appear in groups usually close to the origin, which is indicated by the black solid horizontal line. The points oscillate through the equilibrium position (the black solid line) with a frequency of the given normal mode (e.g., $x_i(t)=A_{0i}\cos(\bar\omega_i t)$, where $A_{0i}$ are the amplitudes shown in the figure). The frequencies from left to right are 1.003, 10.433, 10.447, 10.456, 10.470, and 10.487 in units of $\omega_0$ and are not plotted to scale. The center of mass mode is the first mode shown.}
\label{3DNM6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure12.eps}
\caption{Amplitudes of the modes in the $x$ direction of an eight particle system in 3D with $a/\ell=1$, which has an equivalent ratio in 2D of 0.61. As in \fref{3DNM6}, the largest amplitude is normalized to one unit of length. Points in the diagram are numbered from one to eight with the mass increasing from particles one to eight (overlapping points are not labeled). These points oscillate through the equilibrium position, indicated by the solid black horizontal line, with a frequency of the given normal mode. The frequencies of the normal modes are, from left to right: 1.004, 12.030, 12.040, 12.050, 12.059, 12.069, 12.085, 12.097 in units of $\omega_0$ and are not plotted to scale. The center of mass mode is the first mode shown.}
\label{3DNM8}
\end{figure}
\section{Summary and Conclusion}
The properties of the quantum mechanical $N$-body system are
determined from the basic one- and two-body interactions. However, in
general this problem is very hard or impossible to solve.
Here we use an approximate approach which replaces
the interactions with quadratic forms in the coordinates, either by
direct fits of the potentials or by adjusting parameters to reproduce
crucial properties. This approximation allows analytical
investigations of the $N$-body system with the properties expressed in
terms of the two-body characteristics. Having such a direct approach
to the general many-body problem can provide both important analytical
insight and be a valuable benchmark for more intricate methods.
The most general harmonic oscillator potentials to describe one- and
two-body interactions allow analytical solutions of the
$N$-body Schr\"{o}dinger equation. However, the center-of-mass degree of
freedom requires special attention as we have described above.
We have employed Cartesian coordinates and
one-, two-, and three-dimensional systems are therefore equally simple
to handle. We have discussed the
properties of the resulting set of decoupled oscillators
and their relation to the initial interactions. More explicitly we calculated
energies, radii, and the one-body density matrix and presented its
spectrum.
As an application of our formalism, we consider first the simplest case of $N$ bosons in a trap
interacting through
contact potentials. This was done by using a mapping from the energies and radii
for the two-body system to the oscillator parameters for
the analytical calculations. We note that while our choice of energy and
radius as the essential parameters to reproduce in the two-body system was
perhaps the most physically reasonable one, other choices are also possible as long
as one has two quantities that fix the oscillator frequency and the shift.
We calculated energies and radii as
function of boson number and the scattering length as a measure of the
interaction strength. Typically the binding energies increase and the
radii decrease with $N$. We also discussed characteristic limits for
scattering lengths much smaller and much larger than the length scale
of the external trap.
An interesting calculation was done for the
one-body density matrix for which we calculated the dependence of the
lowest eigenvalue with the number of bosons and the scattering length,
with a careful treatment
removing the center-of-mass degree of freedom. The limits of
large scattering length gave large condensate fractions, whereas for
small scattering length we found a fragmented state. The former is due
to the near non-interacting system, whereas the latter is caused by the
strongly bound molecular state which introduces substantial amounts
of correlations in the system. These conclusions apply equally for
both two and three dimensions.
We also computed the critical number of particles that can form a self-bound system of
negative energy. This number increases with increasing
scattering length. In three dimensions this happens already for a few
particles while in two dimensions more than 50 bosons are needed for
large scattering lengths.
As a novelty, we considered the normal modes which are characteristic properties of any
system. However, in the case of the $N$-boson system, there is a large degree of
degeneracy that can obscure the detailed behaviour.
The ambiguity
due to degenerate eigenmodes is circumvented by
breaking the symmetry and then approaching the limit of full degeneracy for
the identical boson system. A different way of breaking the symmetry
is to deform the external field but the resulting eigenmodes then
reflect precisely the chosen deviation from spherical symmetry. We
then computed the one-dimensional oscillatory eigenmodes which can
be related to equivalent values of the scattering lengths in two and three dimensions.
What we found in the normal modes was an interesting tendency for the
particles to cluster in smaller groups and then perform motion with
respect to other such groups. This implies that excitations can induce
strong correlations in a many-boson system even when all interactions are
equal.
The method that we have presented in this report is completely general and
can be applied to systems in external fields or to self-bound structures.
The treatment of deformation of the trap or the two-body interaction
is straightforward and we expect that rotation of the external
trapping potential poses no problem as well. While we have only treated
bosons in this work, the extension to fermions is achieved through
proper (anti-) symmetrization of the wave function and also Bose-Fermi
mixtures are accessible. With the possibility of having displaced
centers we can also apply the method for split traps, and also
even more exotic geometries with mixed dimensions which are
under current study within Fermi-Fermi mixtures
of ultracold atoms \cite{nishida08,nishida09,levinsen09,lampo10}. Our
method can also be applied to cold polar molecules, in particular
to the case where two-dimensional confinement is induced to make
layered systems \cite{jin10} where non-trivial two- and three-body bound states
appear in the bilayer \cite{shih09,arm10,kla10,artem10,wunsch10,wunsch11} that open
for study of various exotic many-body states in both bilayer and
multi-layer systems \cite{wang06,wang07,lut09,potter10,pikovski10,zin10}.
The form of the dipolar potential has harmonic oscillator
shape in the inner part when particles are confined in multi-layers
and we therefore expect the method presented here to readily
provide analytical results valid for intermediate and strong
dipolar strength. The fact that normal modes are easily accessible
with the harmonic method presented makes this even more interesting.
This can help understand the modes of excitation in
chains and complexes for use in thermodynamic calculations of
system properties.
In conclusion, the analytic solutions of coupled harmonic oscillators
can be used to study the overall properties of $N$-body systems
and structures can be calculated in general from two-body properties for
different particle number and geometries. This can serve as a valuable
complement to the understanding of results obtained with more intricate
methods or help solve systems that are intractable in other approaches.
\section*{References}
|
2,869,038,156,775 | arxiv | \section{Introduction}
In the magnetospheric accretion scenario, the accretion of material from a circumstellar disk in low-mass pre-main sequence (PMS) stars is funneled by the stellar magnetic field, which disrupts the disk at a few stellar radii (\citealt{hart}, and reference therein).
Our understanding of this process is still not entirely clear.
A key parameter describing the star--disk evolution is the rate of mass accretion, that is, the rate at which mass from the circumstellar disk is transferred onto the central PMS star (see, e.g., review by \citealt{hart}).
In particular, it is important to evaluate the relation between mass accretion rate, stellar mass, and age, how the mass accretion rate changes as a star approaches its main sequence (MS), and how the metallicity or in general the chemical composition of the parent molecular cloud could impact the formation and evolution of the star.
Usually, mass accretion rates are derived from the analysis of continuum veiling, ultraviolet (UV) excess emission, or through a detailed study of the profile and intensity of hydrogen emission lines (e.g., H$\alpha$, Pa$\beta$, Br$\gamma$), which requires medium- to high-resolution spectroscopy for each individual object. Even with modern multi-object spectrographs at the largest ground-based telescopes, these methods can only be applied to relatively nearby star-forming regions ($d \lesssim 1-2$\,kpc) because of crowding.
For this reason, the properties of low-metallicity PMS stars located in extra-galactic star-forming regions remain poorly known.
In the last decade, \cite{demarchi10} developed an efficient method based on {\it Hubble Space Telescope} (HST) photometry that allows the identification of hundreds of PMS stars simultaneously, and the determination of their physical parameters, including effective temperature, luminosity, age, mass, H$\alpha$ luminosity, accretion luminosity, and mass accretion rate, with an uncertainty of between 15\% and 20\%, comparable to that allowed by spectroscopy.
This method has been successfully applied not only to regions of the Milky Way \citep{beccari10,beccari15,zeidler16}, but also to regions of the Small \citep{demarchi11,demarchi13} and Large Magellanic Clouds (e.g., \citealt{katia19}, \citealt{demarchi17}, \citealt{spezzi12}).
This method combines $V$ (F555W) and $I$ (F814W) broadband photometry with narrow-band H$\alpha$ (F656N or F658N) imaging to identify the stars with excess in H$\alpha$ emission and to determine their associated H$\alpha$ emission equivalent width, $EW_{\rm H\alpha}$, the H$\alpha$ luminosity and the accretion properties of the PMS stars selected.
In this work, we use this method to select and study the PMS populations of the stellar system LH 91 \citep{lucke74} in the northeast outer edge of the super-giant shell LMC\,4 in the Large Magellanic Cloud (LMC). This area, investigated with $H\alpha$ and radio observations by \cite{book}, also covers LH 91\,I in the southeast of LH 91 \citep{konti94} and LH 95 in the north of LH 91 \citep{lucke74}.
The most recent work on LH 91 was presented by \cite{gou02} using ground-based $BVR$ and H$\alpha$ photometry. Studying the H$\alpha$ topography of the area, the authors found that LH 91 is loosely related to an \ion{H}{ii} region, which seems to be large and rather diffuse.
In agreement with \cite{lucke74}, the authors confirm that LH 91 does not seem to represent a "classical" stellar system in which the stars are physically related to each other.
Analyzing the color--magnitude diagram (CMD) in the $B$ and $V$ band, the authors estimated the color excess $E(B-V)$ = 0.16 $\pm$ 0.04 using the reddening-free Wesenheit function. Moreover, by fitting the Geneva isochrones \citep{geneva} computed adopting metallicity Z=0.008, \cite{gou02} derived the age of the system, finding it to be younger than 10 Myr, similar to that of LH 95 and LH 91\,I, and in agreement with, for example, \cite{braun97,braun00}.
Instead, \cite{konti94} estimated an age of about 20 Myr.
Finally, \cite{gou02} also estimated the age of the background field, the population of the observed area around LH 91, to be older than 50 Myr and up to 1.25 Gyr.
This paper is organized as follows:
in Section \ref{phot} we describe the HST photometric observations, in Section \ref{idpms} we illustrate the analysis needed to identify the PMS stars and to estimate the luminosity and the equivalent width (EW) associated to the H$\alpha$ excess. In Section \ref{physpam} we measure the physical properties of the stars selected. In Section \ref{proacc} we determine the accretion properties of the selected PMS stars, that is, the accretion luminosity and mass accretion rate, and we show the relation between the mass accretion rate and the stellar properties of the PMS objects, such as their mass and age. We also compare our results with the findings for other star-forming regions in the LMC with the same metallicity, and in particular with LH 95, the closest region to LH 91 for which accretion properties of PMS candidates have been derived \citep{katia19}. We present our conclusions in the last section.
\section{Photometric observations}
\label{phot}
The LH 91 region was observed with the Wide Field Camera 3 (WFC3/UV) on board the HST in the broad-band filters $F555W$ and $F814W$, and in the narrow-band filter $F656N$, the latter centered on the H$\alpha$ line.
The data were collected as part of HST programs \#12872 (PI: Da Rio) and
\#13009 (PI: De Marchi). A short logbook of the observations is shown in Table~\ref{tab_obs}.
\begin{table}
\centering
\caption{Logbook of the observations}
\begin{tabular}{@{}cccc@{}}
\hline
Camera & Number of exposures & Filter & Exposure time \\
& & & (s) \\
\hline
\multicolumn{4}{c}{Prop ID 12872, PI: Da Rio}\\
\hline
WFC3 & 2 & $F555W$ & 2804 + 2970 \\
& 1 & $F814W$ & 2804 \\
& 1 & $F656N$ & 2970 \\
\hline
\multicolumn{4}{c}{ Prop ID 13009, PI: De Marchi}\\
\hline
WFC3 & 1 & $F656N$ & 2949 \\
\hline
\label{tab_obs}
\end{tabular}
\end{table}
The data were reduced using the standard $DAOPHOTII$ \citep{stetson} procedure. A list of 10 to 20 well-sampled and isolated stars was used to model the point spread function (PSF) on the $F555W$ and $F814W$ images, and a deep photometric catalog of stars was derived via PSF fitting on the images acquired with the broad-band filters. The final magnitude of each star in a given filter is estimated as the mean of the photometric measurements in each individual image taken with that filter, while the standard deviation is taken as the associated error. Aperture photometry was then used
to extract the $F656N$ magnitude for each star detected in the optical bands.
The choice of performing aperture photometry on the narrow-band images is driven by the fact that such images are characterized by very little stellar
crowding. As such, aperture photometry is the ideal choice as it allows accurate estimation of the magnitude, free from any uncertainty that is unavoidably associated with the choice of the PSF model. We stress here that the background is locally estimated and subtracted in an annulus around the star.
The final catalog of the overlapping fields contains 9423 objects, of which 6980 have a measure in the $F656N$ band.
These $F555W$ and $F814W$ band observations are among the deepest ever taken toward the LH 91 region.
The instrumental magnitudes in $F555W$, $F814W,$ and $F656N$ were calibrated to the
VEGAMAG photometric system using the zero-point values made available by the Space Telescope Science Institute\footnote{https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration}.\\
\section{Data analysis}
\label{idpms}
\subsection{PMS star identification}
\label{analisi}
We applied the method developed by \cite{demarchi10} to identify the PMS stars characterized by an active mass accretion process. We measured the physical and accretion properties of these objects (i.e., H$\alpha$ luminosity, H$\alpha$ emission $EW_{\rm H\alpha}$, mass accretion rate, and accretion luminosity) using photometric data. We refer to \cite{demarchi10} for a detailed discussion of the method, while in this work we describe some fundamental steps.
We selected PMS stars on the basis of their H$\alpha$ excess emission \citep{white03}.
First of all, we identified the H$\alpha$ excess emitters in the $(m_{555}-m_{656})$ versus
$(m_{555} - m_{814})$ color--color diagram shown in Fig. \ref{Vi}. The magnitudes were corrected for the extinction contribution of the Milky Way considering the values $A^{MW}_{555}$ = 0.22 mag and $E(m_{555}-m_{814})^{MW}$ = 0.1 \citep{fitz}.
To this aim, we selected from our catalog in $F555W$, $F814W$, and $F656N$ bands all those stars whose photometric uncertainties, that is $\delta_{555}$, $\delta_{814}$, and $\delta_{656}$, are less than 0.05 in each individual band.
A total of 254 stars satisfied these conditions (gray filled dots in the color--color diagram in Fig.\ref{Vi}), out of 9423 sources in the whole catalog. These are typically MS stars that do not present an appreciable H$\alpha$ excess.
With these stars, we define a reference sequence (dashed black line) with respect to which the excess H$\alpha$ emission is computed.
The dotted blue line of Fig. \ref{Vi} represents the theoretical color relationship obtained using the \cite{bessel} model atmospheres for MS stars with the chemical and physical parameters appropriate for the LMC (effective temperature $T_{\rm eff}$ in the range of 3500-40000 K, surface gravity $\log g$ = 4.5, and metallicity index $[M/H]$ $\simeq $ -0.5, \citealt{colucci}). The agreement between our reference sequence and the theoretical one is evident at $m_{F555W}-m_{F814W}$ < 1.
The discrepancy between the models and the data at $m_{F555W}-m_{F814W}$ > 1 can be attributed to small number statistics and to the fact that the majority of these objects are red giants, with different physical characteristics from those assumed in the models.
To select the most probable PMS stars, after the exclusion of the 254 stars taken as reference, we first selected the targets with photometric uncertainties in each individual band as follows: $\delta_{555}$ and $\delta_{814}$ $<$ 0.1 mag, and $\delta_{656} <$ 0.3 mag, for a total of 1309 objects.
As highlighted by \cite{demarchi10}, the contribution of the H$\alpha$ line to the $m_{555}$ magnitude is negligible, and therefore we can define the magnitude of the excess emission as:
\begin{equation}
\Delta H\alpha= (m_{555}-m_{656})^{obs}-(m_{555}-m_{656})^{ref}
,\end{equation}
where the superscripts "obs" and "ref" refer to the observation and reference sequence, respectively.
We then considered the stars with $\Delta H\alpha$ exceeding at least three times the combined mean photometric uncertainties in the three bands $\delta_3$:
\begin{equation}
\delta_3=\sqrt{\frac{\delta_{555}^{2}+\delta_{656}^2+\delta_{814}^2}{3}}
.\end{equation}
A total of 187 stars satisfy these conditions; they are indicated with large red dots in Fig \ref{Vi}.
This means that 187 stars have $(m_{555}-m_{656})$ colors exceeding that of the reference template at the given $(m_{555}-m_{814})$ color by more than three times the combined uncertainties on their $(m_{555}-m_{656})$ values.
The large green dots in Fig. \ref{Vi} are the selected targets with $m_{555} < 20$ mag, which we exclude from the following analysis as we are interested primarily in low-mass PMS candidates. Our final sample of PMS candidates is therefore composed of 181 targets.
As in these bands the reddening vector due to LH 91 runs almost parallel to the median of the reference sequence \citep{demarchi10}, the color--color diagram provides a robust identification of stars with H$\alpha$ excess even before correction for LH 91 reddening.
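For completeness, the selection just described can be summarized in a few lines of Python; this is only a schematic sketch, in which the input arrays and the interpolator of the reference sequence (\texttt{ref\_color}) are hypothetical placeholders rather than part of our actual reduction.
\begin{verbatim}
import numpy as np

def select_halpha_excess(m555, m814, m656, d555, d814, d656, ref_color):
    """Boolean mask of 3-sigma H-alpha excess emitters.

    ref_color(x) returns the reference (m555 - m656) color of the
    median photospheric sequence at a given (m555 - m814) color.
    """
    # Excess with respect to the reference sequence (Delta Halpha above)
    delta_ha = (m555 - m656) - ref_color(m555 - m814)
    # Combined mean photometric uncertainty in the three bands (delta_3)
    delta3 = np.sqrt((d555**2 + d656**2 + d814**2) / 3.0)
    # Photometric-quality cuts and 3-sigma excess condition;
    # bright stars (m555 < 20) are removed afterwards.
    good = (d555 < 0.1) & (d814 < 0.1) & (d656 < 0.3)
    return good & (delta_ha > 3.0 * delta3)
\end{verbatim}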
\begin{figure}
\includegraphics[width=10cm]{VI_VHalpha_LH91.pdf}
\caption{Color--color diagram of the selected stars in the field of LH 91. All magnitudes are already corrected for the extinction contribution of our Galaxy, $A^{MW}_{555}$ = 0.22 mag and $E(m_{555}-m_{814})^{MW}$ = 0.1. The arrow shows the reddening vector of $E(m_{555}-m_{814})$ =0.25 and $E(m_{555}-m_{656})$=0.13 for the adopted LH 91 extinction law.
The dashed line represents the median photospheric $(m_{555}-m_{656})$ color for the 254 stars with $\delta_{555}$, $\delta_{814}$, and $\delta_{656}$ < 0.05 (gray filled dots).
The dotted line shows the model atmospheres of \cite{bessel} computed for the three WFC3/UVIS filters.
The PMS star candidates with H$\alpha$ emission excess at the $3\sigma$ level are represented with large red dots.
The large green dots are the brightest PMS star candidates, with $m_{555}$ < 20 mag. Error bars are also shown.
}
\label{Vi}
\end{figure}
\subsection{The color--magnitude diagram}
\label{reddening}
We applied the correction for the extinction contribution of the Milky Way and LH 91 to the magnitudes in each band.
For the Milky Way, we report the values in the previous section.
We estimated the extinction for LH 91 from the value of $E(B-V)$ = $0.16$ $\pm$ $0.04$ color excess in the photometry of \cite{gou02} and converted into $A_V$ assuming the average LMC reddening law $R_{555}$ = $A_{555}/E(m_{555}-m_{814})$ $=$ 2.97 calculated by \cite{demarchi14}.
As \cite{gou02} found that the density of the ambient medium in LH 91 is similar to the value for LH 95, and as \cite{dario09} did not find a significant level of differential extinction while studying the upper
MS stars of the latter, we also consider the differential reddening to be negligible in LH 91.
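As a simple consistency check of the numbers quoted above, and taking the LH 91 reddening vector given in the caption of Fig. \ref{Vi} as representative of the association itself, the adopted law implies an extinction in the F555W band of
\[
A_{555} = R_{555}\,E(m_{555}-m_{814}) \simeq 2.97 \times 0.25 \simeq 0.74~{\rm mag},
\]
to be added to the Galactic foreground contribution given in Sect. \ref{analisi}; this is shown only as an illustrative estimate.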
We show the CMD $(m_{555}-m_{814})_0 $ versus $(m_{555})_0$ in Fig. \ref{cmd}. The small black dots are the targets of the whole sample, namely 9423 stars.
To estimate the age of the system, we fit the CMD with the isochrone models for Z=0.007 ---which is typical of young LMC stars (e.g., \citealt{colucci})--- taken from the PAdova-Trieste Stellar Evolution Code (PARSEC, \citealt{bressan2012}) and distance modulus $(m-M)_0$=18.55 \citep{panagia91,panagia99}.
The turnoff at $m_{555} \sim 20.5$\,mag and the red clump at $m_{555} \sim 19.5$\,mag and $m_{555}-m_{814} \sim 1.0$ are best matched by a 1.5 Gyr isochrone (dashed light-blue line), in agreement with the age of the background field stars evaluated by \cite{gou02}.
Stars with H$\alpha$ excess show a wide apparent spread towards young ages and can be divided into two groups, separated by an isochrone at 8 Myr (solid green line).
\begin{figure}
\centering
\includegraphics[width=10cm]{VI_V_LH91_1.5Gyr.pdf}
\caption{Color--magnitude diagram of the field of LH 91. All magnitudes are already corrected for the extinction contribution of our Galaxy and LH 91. The small black dots are the targets of the whole sample (9423 stars).
Small gray-filled dots are the stars with photometric uncertainties < 0.05 mag in each band. The large red dots represent the PMS star candidates with H$\alpha$ excess emission at the $3\sigma$ level.
Solid green and dashed light-blue lines show the theoretical isochrones from \cite{bressan2012} for ages 8 Myr and 1.5 Gyr, respectively, metallicity Z=0.007, and a distance modulus $(m_V -M_V)_0$=18.55.}
\label{cmd}
\end{figure}
\subsection{From H$\alpha$ color excess to H$\alpha$ luminosity}
\label{Ha_luminosity}
To avoid contamination by stars with significant chromospheric activity, we also imposed constraints on $EW_{\rm H\alpha}$, selecting only stars with $EW_{\rm H\alpha}$ $\geqslant$ 10 \AA, because according to \cite{demarchi10} this is a reliable cutoff to separate accretors from non-accreting objects.
For details of the method used here to derive $EW_{\rm H\alpha}$ from the photometry, we refer to \cite{demarchi10,demarchi11,demarchi13}. Here we recall that, as the width of the H$\alpha$ line is narrow with respect to the width of the filter, the measure of $EW_{\rm H\alpha}$ is given by the difference between the observed H$\alpha$ line magnitude and the level of the H$\alpha$ continuum ($\Delta H\alpha$).
If we assume that the stars used to define the reference sequence have no H$\alpha$ absorption features, their $(m_{555} - m_{656})$ color represents the color of the pure continuum.
Consequently, we calculated the $EW_{\rm H\alpha}$ from the following relation:
\begin{equation}
EW_{\rm H\alpha}=RECTW \times [1-10^{-0.4\Delta H\alpha}]
,\end{equation}
\noindent{where $RECTW$ is the rectangular width of the F656N filter. The uncertainty on the $EW_{\rm H\alpha}$ measure is dominated by the uncertainty on the H$\alpha$ magnitude.}
Moreover, because of the width of the F656N filter, the small contribution due to the emission of the forbidden $\ion{N}{ii}$ line at $\lambda 6548$ is included in $\Delta H\alpha$.
Therefore, following the prescriptions by \cite{demarchi10}, we estimated corrections ranging from 0.2 to 1.4 $\AA$, to be subtracted from the $EW_{\rm H\alpha}$ of our targets.
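As a purely illustrative numerical example (the input excess is hypothetical), an excess of $\Delta H\alpha \simeq 0.9$ mag in the relation above, with the rectangular width RECTW = 17.679 \AA\ of the F656N filter (see below), corresponds to
\[
EW_{\rm H\alpha} \simeq 17.679 \times \left(1-10^{-0.4\times0.9}\right) \simeq 10~\mbox{\AA},
\]
that is, approximately the accretion threshold adopted at the beginning of this subsection.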
Figure \ref{ew} shows the $EW_{\rm H\alpha}$ measured for the selected low-mass PMS candidates as a function of the de-reddened $m_{555}-m_{814}$ color.
We performed a preliminary study of the $EW_{\rm H\alpha}$ distribution of the PMS candidates at different ages using the isochrone at 8 Myr as a discriminating factor. We divided the sample into stars older (blue dots) and younger (red squares) than 8 Myr (Fig. \ref{ew}).
\begin{figure}
\centering
\includegraphics[width=10cm]{EW.pdf}
\caption{H${\alpha}$ equivalent width of the selected low-mass PMS candidates, as a function of the de-reddened ($m_{555}-m_{814}$) color. The red squares represent the values of the PMS stars younger than 8 Myr, the blue dots are the oldest PMS stars.
}
\label{ew}
\end{figure}
The values of $EW_{\rm H\alpha}$ for the sample range from $\sim$ 3 $\AA$ to $\sim$ 17 $\AA$, with a median of $\sim$ 9 $\AA$ that applies to both the whole sample and the two subgroups.
After the selection on the $EW_{\rm H\alpha}$, a total of 75 objects satisfy the conservative condition ($EW_{\rm H\alpha}$ $\geq$ 10 $\AA$). The median value of the $EW_{\rm H\alpha}$ is about 12 $\AA$, regardless of age, smaller than the values found in other star formation fields in the LMC, such as LH 95 ($EW_{\rm H\alpha}$ $\sim$ $30$ $\AA$, \citealt{katia19}) and SN 1987A ($EW_{\rm H\alpha}$ $\sim$ $20$ $\AA$, \citealt{demarchi10}).
The difference could be due to the paucity of our sample and to its stellar mass range.
Moreover, the figure shows an almost clear separation in color between the two subgroups in LH 91 with the exception of the target with $(m_{555} - m_{814})_{0}$ $\sim$ $0.4$ and $EW_{\rm H\alpha}$ $\sim$ $10$ $\AA$. As the coordinates of this target correspond to those of a massive star in the 2MASS catalog, it could be a Be star. A similar separation in color between the two subgroups of PMS stars was found in LH 95 \citep{katia19}.
The H$\alpha$ emission line luminosity $L_{\rm H\alpha}$ can be determined from the absolute sensitivity of the instrumental setup, the photometric zero point (ZP), the distance of the stars, and from the magnitude in the H$\alpha$ band:
\begin{equation}
L_{\rm H\alpha}=4\pi d^210^{0.4(ZP-m_{656})}\rm{PHOTFLAM}\times \rm{RECTW}
.\end{equation}
The values of the photometric properties of the instruments were taken from \cite{ryon18}, namely the inverse sensitivity PHOTFLAM= 1.714 $\times$ $10^{-17}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$, and the zero point in the VEGAmag system for the H$\alpha$ filter, ZP= 19.84 \citep{calamida}.
Assuming a distance of 51.4 $\pm$ 1.2 kpc \citep{panagia99} and considering the rectangular width of the $F656N$ filter RECTW= 17.679 \AA, we determined the H$\alpha$ luminosity for the 75 targets, finding a median value of about 8.7 $\times$ $10^{30}$ erg s$^{-1}$ ($0.2 \times 10^{-2} L_\sun$).
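The conversion above from H$\alpha$ magnitude to luminosity is straightforward to reproduce numerically; the short Python sketch below implements it with the adopted calibration constants, and the example magnitude is purely illustrative.
\begin{verbatim}
import numpy as np

KPC_CM   = 3.0857e21        # cm per kpc
PHOTFLAM = 1.714e-17        # erg cm^-2 s^-1 A^-1 (inverse sensitivity)
ZP       = 19.84            # F656N VEGAmag zero point
RECTW    = 17.679           # A, rectangular width of F656N
D        = 51.4 * KPC_CM    # adopted LMC distance in cm

def l_halpha(m656):
    """H-alpha luminosity (erg/s) from the F656N magnitude."""
    return 4.0 * np.pi * D**2 * 10.0**(0.4 * (ZP - m656)) * PHOTFLAM * RECTW

print(l_halpha(22.4))       # ~9e30 erg/s, close to the median quoted above
\end{verbatim}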
This value is slightly lower than the one found by \cite{katia19} in the LH 95 association ($\sim$ 1.2$\times$ $10^{31}$ erg s$^{-1}$, $ 0.3 \times 10^{-2} L_\sun$).
This is not surprising because \cite{gou02} found that the mean H$\alpha$ intensity of the \ion{H}{ii} region related to LH 95 (DEM L 252) is about two times higher than the corresponding intensity of the \ion{H}{ii} region associated with LH 91 (DEM L 251).
In addition, we can compare our result with the median H$\alpha$ luminosity of other regions of the LMC, namely the SN 1987A field and the 30 Doradus Nebula; in these cases too our value is lower: the mean $L_{\rm H\alpha}$ estimated in those regions is $\sim$ 4 $\times$ $10^{31}$ erg s$^{-1}$ ($\sim$ $10^{-2}$ $L_\sun$, \citealt{demarchi10}) and $\sim$ 1.5 $\times$ $10^{32}$ erg s$^{-1}$ ($\sim$ 4 $\times$ $10^{-2}$ $L_\sun$, \citealt{demarchi17}), respectively.
The uncertainty on $L_{\rm H\alpha}$ is dominated by the uncertainties on the H$\alpha$ photometry, on the distance ($\sim$ 5\%) and on instrumental setup ($\sim$ 3\%) (see \citealt{demarchi10}).
The total uncertainty on $L_{\rm H\alpha}$ is about 16 \%.
\section{Physical parameters of the PMS candidates}
\label{physpam}
\subsection{Effective temperature and bolometric luminosity}
We evaluated the effective temperature of the PMS candidates by comparing the theoretical models with the $m_{555}-m_{814}$ color of our sample corrected for the reddening due to the Milky Way and LH 91, as explained in Sect. \ref{reddening}.
To convert the color to $T_{\rm eff}$ we used the models of \cite{bessel} for 3500 K $\leq$ $T_{\rm eff}$ $\leq$ 40000 K, $\log g$=4.5, and metallicity index [M/H] =-0.5 dex.
As the models of \cite{bessel} are not available for temperatures lower than 3500 K, we used the $T_{\rm eff}$-$(V-I_C)$ calibration by \cite{mamajek}, with the assumption that the calibrated $m_{555}$ and $m_{814}$ magnitudes coincide with the $V$ and $I_C$ magnitudes (see \citealt{katia19}).
To obtain the luminosity of the stars $L_{\star}$, we considered the magnitude $m_{555}$ corrected for the interstellar extinction, a distance to LH 91 of 51.4 kpc \citep{panagia99}, and a bolometric solar magnitude of 4.74 mag \citep{mamajek}.
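In practice, for each star the luminosity follows from the de-reddened magnitude through the standard relations
\[
(m-M)_0 = 5\log_{10}\!\left(\frac{51.4\times10^{3}}{10}\right) \simeq 18.55, \qquad
\log\frac{L_\star}{L_\odot} = 0.4\,\Bigl[4.74 - \bigl(m_{555,0} - 18.55 + BC_{555}\bigr)\Bigr],
\]
where $BC_{555}$ is the bolometric correction appropriate for the effective temperature of the star; we write the expressions here only as a summary of the procedure.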
The uncertainties on the effective temperature and stellar luminosity are dominated by the uncertainties on the magnitudes and distance.
In Fig. \ref{hr}, we show the location of the PMS candidates in the HR diagram, with the relative uncertainties, which in some cases are smaller than the symbol size.
We highlight that the majority of the PMS candidates are close to the MS and we could only identify them thanks to the information on their H$\alpha$ excess.
We also plot in Fig. \ref{hr} the theoretical isochrones for ages of 2, 4, 8, 16, 32, and 64 Myr for Z=0.007 \citep{bressan2012}.
The red squares represent the PMS candidates younger than 8 Myr, while the blue dots the older ones.
\begin{figure}
\centering
\includegraphics[width=10cm]{HR.pdf}
\caption{HR diagram with the location of the low-mass young (red squares) and old (blue dots) PMS candidates. The theoretical isochrones \citep{bressan2012} are calculated for ages of 2, 4, 8, 16, 32, and 64 Myr (lines from right to left, respectively) and Z=0.007. The small black dots are all the stars in our catalog.
}
\label{hr}
\end{figure}
From the HR diagram, it appears that LH 91 is characterized by a more or less continuous star formation, from a few million years to $\sim$ 60 Myr, with a smaller number of PMS candidates for ages younger than 8 Myr.
From the effective temperature and the luminosity of the PMS stars, we derive the stellar radius $R_\star$ of these stars, which we use to estimate the mass accretion rate of the selected PMS objects in Section\,\ref{proacc}. Typical mean errors on $R_\star$ are around 7\% and include both uncertainties on $T_{\rm eff}$ and $L_\star$.
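For reference, the radius follows from the Stefan--Boltzmann relation
\[
R_\star = \sqrt{\frac{L_\star}{4\pi\sigma T_{\rm eff}^{4}}}\,;
\]
purely as an illustration, a star with $L_\star \simeq 0.4\,L_\odot$ and $T_{\rm eff} \simeq 4500$ K (values chosen only for this example) would have $R_\star \simeq 1\,R_\odot$.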
\subsection{Mass and age}
\label{mage}
We derived the mass and age for each target by comparing the location of each star in the HR diagram (Fig. \ref{tracce}) with theoretical PMS evolutionary tracks. We adopted the PARSEC tracks for metallicity $Z=0.007$ \citep{bressan2012} from 0.1 $M_\sun$ to $3.0$ $M_\sun$.
We followed the approach discussed in \cite{Romaniello98} and refined by \citet{demarchi11,demarchi13}.
According to these authors, we define a grid in luminosity and temperature consisting of evenly spaced cells with sizes comparable to the typical observational errors. Given an evolutionary track of a star of a certain mass, we identify the cells crossed by the star during its evolution. For each cell, we extrapolate the information associated with the evolutionary track, namely mass and age. The information is then associated with the observed star belonging to a particular cell (for further details, see \citealt{demarchi17}).
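A minimal sketch of this cell-based bookkeeping is given below; the array layout and the bin sizes are hypothetical and serve only to illustrate the association between track points and observed stars.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def build_cells(track_points, dlogT=0.01, dlogL=0.05):
    """Map each (logTeff, logL) cell to the (mass, age) values of the
    evolutionary-track points crossing it.

    track_points is an iterable of (mass, logTeff, logL, age) tuples.
    """
    cells = defaultdict(list)
    for mass, logT, logL, age in track_points:
        key = (int(np.floor(logT / dlogT)), int(np.floor(logL / dlogL)))
        cells[key].append((mass, age))
    return cells

def assign_star(logT, logL, cells, dlogT=0.01, dlogL=0.05):
    """Mean mass and age of the track points sharing the star's cell
    (None if no track crosses that cell)."""
    key = (int(np.floor(logT / dlogT)), int(np.floor(logL / dlogL)))
    if key not in cells:
        return None
    vals = np.array(cells[key])
    return vals[:, 0].mean(), vals[:, 1].mean()
\end{verbatim}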
Figure \ref{tracce} shows the masses of the PMS candidates spanning from $0.2\,M_\odot$ for the cooler objects up to $1.0\,M_\odot$ for the hottest ones. The median value of the sample is about $0.8 M_\odot$. In the figure, we divided the PMS stars in two subsamples: the younger PMS candidates with an age of less than 8 Myr (red squares), and the PMS candidates older than 8 Myr (blue dots).
The sizes of the dots and squares are proportional to the mass accretion rate, which we determine and discuss in Section\,\ref{proacc}.
Here, we simply want to investigate whether and how the rate of mass accretion is correlated with evolutionary phase and stellar mass. As one can see in Fig.\,\ref{tracce}, the targets with the highest mass accretion rates (the largest symbols) are the youngest PMS stars, while the mass accretion rate decreases at older ages. Furthermore, the stars with higher mass have higher mass accretion values ($\dot{M}_{\rm acc}$) at all ages.
\begin{figure}
\centering
\includegraphics[width=10cm]{HR_tracks_k.pdf}
\caption{HR diagram of our PMS candidates. Red squares and blue dots represent young and old PMS stars, respectively. The size of the symbols is proportional to the mass accretion rate, as in the legend. We adopted the PARSEC tracks for metallicity $Z=0.007$ \citep{bressan2012} from 0.2 $M_\sun$ to $1.2$ $M_\sun$ (dashed lines).
}
\label{tracce}
\end{figure}
In Fig. \ref{histo} we show the histograms of the mass (upper panel) and age (lower panel) distribution of the PMS candidates with the bin sizes compatible with the uncertainties on mass and age, respectively.
The black line corresponds to the whole sample, the dashed red line corresponds to the young PMS candidates, while the dotted blue line represents the old PMS candidates.
The older PMS stars include preferentially higher mass stars, with the mass distribution presenting a peak at $\sim$ 0.7 $M_\odot$. The young PMS objects show a continuous distribution in mass, with no evident peak, but this is probably mostly due to the paucity of the subsample.
\begin{figure}
\centering
\includegraphics[width=10cm]{isto.pdf}
\caption{Histograms of the stellar mass (upper panel, bin of 0.05) and age (lower panel, bin of 0.2) for the 75 low-mass PMS candidates in logarithmic scale. The red dashed and blue dotted lines represent the distribution of the young and older populations, respectively.
}
\label{histo}
\end{figure}
The age distribution could suggest a separation between older and younger PMS stars, with a gap in the range between 5 and 10 Myr. The younger population shows a continuous distribution in age up to 5 Myr. The older population constitutes about 90\% of the objects, with ages between 10 and $\sim$ 60 Myr and a peak at $\sim 50$\,Myr.
\section{Accretion properties}
\label{proacc}
In the following subsections we describe how we determined the accretion properties of our sample of PMS candidates and we present our study of their relation with the physical properties of the stars.
\subsection{Accretion luminosity}
\label{acclum}
The luminosity of the H$\alpha$ line generated along the funnel flows of circumstellar gas during the magnetospheric accretion process can be used as a tracer to estimate the accretion luminosity.
To determine the accretion luminosity of our sample of PMS candidates, we adopted the relationship obtained by \cite{demarchi10}, who analyzed the data of a group of T Tauri stars in Taurus-Auriga compiled by \cite{dahm}:
\begin{equation}
\log {\frac{L_{\rm acc}}{L_\odot}}= \log{\frac{L_{\rm H\alpha}}{L_\odot}} + (1.72 \pm 0.25)
.\end{equation}
The median of the accretion luminosity of our 75 PMS stars is 0.12 $L_\odot$.
The uncertainty on $L_{\rm acc}$ is dominated by the uncertainty on $L_{\rm H\alpha}$, which is about $16\%$, related to the photometric error on the H$\alpha$ magnitude. There is also a systematic error to take into account due to the uncertainties on the ratio $L_{\rm acc}/L_{\rm H\alpha}$ \citep{dahm,demarchi11}, but as the relation is the same for all stars, this uncertainty does not interfere with the comparison between the targets.
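As a consistency check of the numbers quoted so far, the median H$\alpha$ luminosity of Sect. \ref{Ha_luminosity} (adopting $L_\odot = 3.83\times10^{33}$ erg s$^{-1}$) translates into
\[
\log\frac{L_{\rm acc}}{L_\odot} \simeq \log\!\left(\frac{8.7\times10^{30}}{3.83\times10^{33}}\right) + 1.72 \simeq -0.92,
\]
that is, $L_{\rm acc} \simeq 0.12\,L_\odot$, as quoted above.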
\begin{figure}
\centering
\includegraphics[width=10cm]{L_Lacc_tot.pdf}
\caption{Accretion luminosity as a function of stellar luminosity. Blue and red dots represent the older (age greater than 8\,Myr) and younger (age smaller than 8\,Myr) PMS candidates of LH 91, respectively. The gray filled dots, green diamonds, and black empty dots are the PMS of LH 95 by \cite{katia19}, SN 1987A by \cite{demarchi10}, and 30 Dor by \cite{demarchi17}, respectively. The dashed lines show the linear $L_{\rm acc}-L_\star$ relationship for different values of the coefficient, as indicated. }
\label{lacc}
\end{figure}
In Fig. \ref{lacc} we show the accretion luminosity versus $L_\star$ of the PMS candidates, the blue dots and red squares representing the old and young ones, respectively.
In each star formation region, $L_{\rm acc}$ increases with stellar luminosity, but the range and dispersion of the data are quite different.
For comparison, we show also the data of LH 95, with gray filled dots, SN 1987A with green empty diamonds, and 30 Dor with black empty dots.
In LH 91 and LH 95, the dispersion in $L_{\rm acc}$ seems to decrease with the increase of $L_\star$.
The accretion luminosity spans a range between 0.1 and 1 $L_\star$, with the peak of the distribution at about $0.3\,L_\star$ for LH 91. In Section \ref{Ha_luminosity}, we showed that the median $L_{\rm H\alpha}$ in LH 91 is lower than that found in LH 95, and therefore it is not surprising that the values of the accretion luminosity in LH 91 are
also slightly lower than those of the PMS objects in LH 95 ($\sim$ $0.17$ $L_\odot$). This result could be due to two main factors: in LH 95 the mass range of the sample is larger (0.2-1.8 $M_\odot$), and at the same mass the stars are younger.
The samples of 30 Dor and SN 1987A are richer than those of LH 91 and LH 95, and the range of the accretion luminosity is larger, from 0.1 $L_\star$ to values higher than 1.0 $L_\star$. For a comparison, we focus on the range in stellar luminosity in common between the regions, namely $-0.65 \lesssim \log(L_\star/L_\odot) \lesssim 0.0$. We evaluated the median accretion luminosity only for the regions 30 Dor, LH 95, and LH 91, finding values of $\sim$ 0.17 $L_\odot$, 0.22 $L_\odot$, and 0.13 $L_\odot$, respectively.
Unfortunately, the range in star luminosity of SN 1987A does not cross-match with those of LH 91 and LH 95, and therefore we cannot make a
direct comparison.
\begin{figure}
\centering
\includegraphics[width=10cm]{Teff_Lacc.pdf}
\caption{Accretion luminosity versus effective temperature. The blue dots and red squares are as in Fig. \ref{lacc}, the gray filled dots represent the PMS stars of LH 95 by \cite{katia19}.
}
\label{tef}
\end{figure}
In Fig. \ref{tef} we show the accretion luminosity versus the effective temperature in logarithmic scale of the old (blue dots) and young (red squares) PMS stars, together with the sample of LH 95 (\citealt{katia19}; gray filled dots).
This plot is very similar to the HR diagram (Fig. \ref{tracce}). While a separation between the old and young candidates in $T_{\rm
eff}$ is evident in the LH 95 sample (see Fig. 9 in \citealt{katia19}), in LH 91 there is a continuous distribution in $T_{\rm eff}$, the PMS stars with the highest accretion luminosity being close to the old subgroup.
\subsection{Mass accretion rate versus stellar age}
Finally, we derived the mass accretion rate $\dot{M}_{\rm acc}$ of our PMS candidates from the free-fall equation \citep{koenigl91,calvet98}:
\begin{equation}
L_{\rm acc} \simeq \frac{GM_\star\dot{M}_{\rm acc}}{R_\star}\left(1-\frac{R_\star}{R_{in}}\right)
,\end{equation}
\noindent{where $G$ is the gravitational constant, $M_\star$ and $R_\star$ are the mass and radius of the PMS candidates, and $R_{\rm in}$ is the inner radius of the accretion disk. $R_{\rm in}$ depends on how exactly the accretion disk is coupled with the magnetic field of the star, and so its value is quite uncertain. We adopt $R_{\rm in} = 5 R_\star$, following \cite{gul98}. The median value of the mass accretion rate of our sample is $\sim$ $4.8$ $\times$ $10^{-9}$ $M_\odot$ $yr^{-1}$, with higher values for the younger population ($\sim$ 1.2 $\times$ $10^{-8}$ $M_\odot$ $yr^{-1}$), and lower values for the older candidates ($\sim$ 4.7 $\times$ $10^{-9}$ $M_\odot$ $yr^{-1}$). The values we find are slightly lower than those found by \cite{katia19} for LH 95, as shown in Fig. \ref{tmacc}, where the median rate is about $7.5$ $\times$ $10^{-9}$ $M_\odot yr^{-1}$.
The mass accretion rate in LH 91 is also lower than the median value measured in the field of SN 1987A
($2.6$ $\times$ $10^{-8}$ $M_\odot$ yr$^{-1}$, as found by \citealt{romaniello04}, and $2.9$ $\times$ $10^{-8}$ $M_\odot$ yr$^{-1}$, as measured by \citealt{demarchi10}) and in 30 Dor ($\sim$ 8 $\times$ $10^{-8}$ $M_\odot$ yr$^{-1}$; \citealt{demarchi17}).}
The uncertainty on $\dot{M}_{\rm acc}$ is dominated by the uncertainty on $L_{\rm H\alpha}$, which is of about $16\%$, but we have to consider also the contribution of $R_\star$ ($7\%$, including a $5\%$ systematic uncertainty on the distance modulus), stellar mass ($\sim$ $7\%$), the intrinsic uncertainties due to the evolutionary tracks ($2\%$-$6\%$, for more details see the Appendix A of \citealt{katia19}), and knowledge of the relation $L_{\rm acc}$--$L_{\rm H\alpha}$, which in this case is not very accurate (a factor of $\sim$ 2; \citealt{demarchi10}).
Finally, the contribution of other sources of systematic error ---such as physical processes different from accretion (e.g., chromospheric activity or ionization of nearby massive stars) or nebular continuum--- that could affect the determination of $\dot{M}_{\rm acc}$ are considered to be negligible \citep{demarchi10}. In summary, the combined statistical uncertainty on $\dot{M}_{\rm acc}$ is of about $20\%$.
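A compact numerical sketch of the free-fall relation above, with $R_{\rm in} = 5R_\star$, is given below; the stellar parameters in the example are representative median values and are intended only as an illustration.
\begin{verbatim}
import numpy as np

G    = 6.674e-8     # cm^3 g^-1 s^-2
MSUN = 1.989e33     # g
RSUN = 6.957e10     # cm
LSUN = 3.828e33     # erg/s
YR   = 3.156e7      # s

def mdot_acc(l_acc_lsun, m_msun, r_rsun, r_in_over_r=5.0):
    """Mass accretion rate (Msun/yr) from the free-fall relation."""
    l_acc = l_acc_lsun * LSUN
    mdot  = l_acc * r_rsun * RSUN / (G * m_msun * MSUN
                                     * (1.0 - 1.0 / r_in_over_r))
    return mdot * YR / MSUN

print(mdot_acc(0.12, 0.8, 1.0))  # ~6e-9 Msun/yr, same order as the median
\end{verbatim}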
A snapshot of the mass accretion rate as a function of the age is shown in Fig. \ref{tmacc}.
We divided the sample in two subsamples with the stellar mass larger (yellow filled squares) and smaller (empty black triangles) than the median stellar mass ($\sim 0.8 \,M_\odot$).
The gray filled dots are the PMS candidate in LH 95 \citep{katia19}.
As expected, the accretion appears to decrease with time, in line with the predicted evolution of viscous disks \citep{hart}, but there is a large spread in mass accretion rate at a given age.
We performed a linear fit to the two subsamples, and find a similar slope: $-0.31 \pm 0.07$ for the high masses and $-0.39 \pm 0.04$ for the low masses. These values are in agreement within the error with those evaluated in other MCs regions \citep{demarchi11,demarchi13,demarchi17,katia19}.
This plot also shows that the mean mass accretion rate of the PMS stars in LH 91 is slightly lower than in LH 95, because our sample is composed mainly of older stars (older than $\sim$ 30 Myr), closer to the MS, where the accretion process is less powerful.
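The slopes quoted above correspond to straight-line fits in the $\log({\rm age})$--$\log\dot{M}_{\rm acc}$ plane; a minimal example of such a fit, with placeholder input arrays, is
\begin{verbatim}
import numpy as np

def accretion_slope(age_myr, mdot_msun_yr):
    """Best-fit slope of log(Mdot_acc) versus log(age) for one subsample."""
    slope, _ = np.polyfit(np.log10(age_myr), np.log10(mdot_msun_yr), 1)
    return slope

# e.g. accretion_slope(age_high_mass, mdot_high_mass) would return a value
# comparable to the -0.31 quoted for the high-mass subsample (arrays here
# are placeholders, not our actual measurements).
\end{verbatim}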
\begin{figure}
\centering
\includegraphics[width=10cm]{age_Macc_err.pdf}
\caption{Observed mass accretion rate versus age in LH 91. Yellow filled squares and empty black triangles represent the targets with mass greater and smaller than the median mass of the PMS candidates sample, respectively. The gray-filled dots are the PMS candidates in LH 95 \citep{katia19}. The error bars on the age and mass accretion rate are reported. When the uncertainties are smaller than the symbol size, they are not visible. The arrows represent lower limits. The dashed yellow and solid black lines represent the regression fit of the two subsamples of LH 91.
}
\label{tmacc}
\end{figure}
\subsection{Mass accretion rate versus stellar mass}
Figure \ref{mmacc} shows the mass accretion rate as a function of the stellar mass of our PMS candidates. Younger PMS candidates are represented by red squares, while the older PMS candidates are marked with blue dots.
The gray filled dots represent the PMS candidates in LH 95 \citep{katia19}. From Figs. \ref{tmacc} and \ref{mmacc} it is evident that the mass accretion rate is typically higher for the younger and more massive stars. Only two low-mass stars (with masses of about 0.3 $M_\odot$ and 0.4 $M_\odot$) have a high mass accretion rate (2.2 $\times$ $10^{-8}$ $M_{\odot}$/yr and 1.4 $\times$ $10^{-8}$ $M_{\odot}$/yr respectively).
Figure \ref{mmacc} reveals a large spread in $\dot{M}_{\rm acc}$ values for a given stellar mass.
This is hardly surprising considering the large spread of ages (see also \citealt{rigliaco11}).
Moreover, the older sample of PMS candidates in LH 91 reaches lower values of mass accretion rate when compared to the stars at similar masses in LH 95.
Again, it is interesting to note how the stars of any given mass appear to be younger in LH 95 than in LH 91. This could simply be the result of different evolutionary stages between stars in LH 91 and LH 95, with the former sample being
more evolved than the latter. This could in turn justify the smaller number of accretors and the lower values of the mass accretion rate in LH 91 compared to LH 95.
This difference might in turn be caused by other physical differences in star formation environment, for example the gas density of the regions.
It would seem natural that an environment with lower gas density would result in less massive circumstellar disks, and therefore a more modest mass accretion rate. To verify this effect, we compare the median mass accretion rate in LH 91 with that of the three star-forming regions at similar metallicity in the LMC, namely
LH 95, SN 1987A, and 30\,Dor, for which a study of accretion properties was performed.
Considering targets with the same mass range ($0.4-1.0\,M_\odot$) and younger than 8\,Myr, we obtained a mean $\dot{M}_{\rm acc}$ value of $1.1 \times 10^{-8}\,M_\odot$/yr for LH 91, $4.4 \times 10^{-8}\,M_\odot$/yr for LH 95, $3.7 \times 10^{-7}\,M_\odot$/yr for SN 1987A, and $5.9 \times 10^{-8}\,M_\odot$/yr for 30\,Dor.
We also estimated the mean dust density of the aforementioned four regions taking into account the mass surface density map\footnote{https://www.asc.ohio-state.edu/astronomy/dustmaps/} by \cite{utomo}.
Considering regions with a radius of 1.5 arcmin, we found values of $0.11 \pm 0.01\,M_\odot$/pc$^2$ for LH 91, $0.16 \pm 0.01\,M_\odot$/pc$^2$ for LH 95 and a similar value for SN 1987A, and 0.65 $\pm$ 0.09 $M_\odot$/pc$^2$ for 30 Dor. Even though this kind of analysis is only qualitative, we find some tentative indication that regions with higher dust densities also
have higher mass accretion rates (and possibly higher gas densities), with the exception of SN 1987A. A much more detailed analysis, which goes beyond the scope of this work, would be necessary to address this issue.
\begin{figure}
\centering
\includegraphics[width=10cm]{M_macc.pdf}
\caption{Distribution of $\dot{M}_{\rm acc}$ as a function of $M_{\star}$. The symbols are as in Fig. \ref{lacc}.
}
\label{mmacc}
\end{figure}
\subsection{Spatial distribution of the PMS candidates}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{prova_distr_3.pdf}
\caption{{\it Left panel}: Color-composite image of LH 91 from WFC3 observations in the F555W, F814W, and F656N filters. The image is rotated 30 degrees to align the two figures. {\it Right panel}: Distribution of the PMS stars. Symbols are the same as in Fig. \ref{tracce}. The crosses represent the massive stars from the 2MASS catalog, and the triangles are Be stars from Table\,1 of \cite{gou02}. North is up and east to the left.}
\label{space}
\end{center}
\end{figure*}
The left panel of Figure \ref{space} shows a color-composite image from WFC3 observations in the F555W (red), F814W (blue), and F656N (green) filters of LH 91.
The right panel in the same figure shows the spatial distribution of the PMS stars in our sample projected onto the sky.
As in Fig. \ref{tracce}, the sizes of the dots and squares are proportional to the mass accretion rate. Different colors represent different ages: older than 8\,Myr (blue dots) and younger than 8\,Myr (red squares).
The crosses are massive stars selected from the 2MASS catalog \citep{cutri} with
$J - H < 0.8$ and $J < 15$ mag (see also \citealt{katia19}), while the triangles are Be stars found by \cite{gou02} (see their Table\,1).
To align the orientation of the two figures, the left image is rotated 30 degrees.
With the aim of better understanding the correspondence between the fields, we indicate some bright stars with the letters A to D.
Regions with a lack of stars in the right figure correspond to those rich in gas in the left figure, shown in green. The dust associated with the gas could be obscuring the stars behind it. The PMS objects appear to be distributed more or less uniformly over the region, and are not clustered around the massive stars, unlike the younger population of LH 95 \citep{katia19}.
This result is in agreement with the conclusions of \cite{gou02}, who found only a weak match between the \ion{H}{ii} region of LH 91 and the two Be stars located on the southwest side of the region. Therefore, it appears that in LH 91 there is no obvious region of higher star-formation intensity, at least currently.
\section{Conclusions}
\label{conclusion}
We presented a multiwavelength analysis of the stellar populations in LH 91, a star-forming region in the LMC, observed with the WFC3 on board the {\it HST}.
We applied a photometric detection method to identify PMS candidates still actively accreting matter from their circumstellar disks.
The method combines HST broad-band F555W and F814W photometry with narrow-band F656N imaging in order to identify stars with H$\alpha$ excess emission and to subsequently measure their accretion luminosity $L_{\rm acc}$ and equivalent width $EW_{\rm H\alpha}$, and to derive their mass accretion rate $\dot{M}_{\rm acc}$.
The main results of our analysis can be summarized as follows:
\begin{enumerate}
\item From the photometric catalog of 9423 well-detected stars, we identified about 180 low-mass PMS candidates on the basis of their excess H$\alpha$ emission, that is, with
their ($m_{555} - m_{656}$) color exceeding that of the reference template at the same ($m_{555}-m_{814}$) color by more than three times the combined uncertainties on their ($m_{555}-m_{656}$) values.
\item We measured the $EW_{\rm H\alpha}$ of the PMS stars, finding values in the range of $\sim$ 3 $\AA$ - 17 $\AA$, with a median of 9 $\AA$. We selected stars with $EW_{\rm H\alpha}$ $\geq$ 10 $\AA$, which are typical values of actively accreting PMS stars. A total of 75 objects satisfy this condition.
\item We estimated the stellar effective temperature and luminosity thanks to the \cite{bessel} relations for $3500 \leq T_{\rm eff} \leq$ 40000 K, and the \cite{mamajek} calibrations for $T_{\rm eff} < 3500$ K.
\item We obtained the mass and age of the PMS candidates by comparing the location of each star in the HR diagram with theoretical PMS evolutionary tracks \citep{bressan2012}.
The range of the stellar masses in our sample is between $\sim$ 0.2 $M_\sun$ and $\sim$ 1.0 $M_\sun$ with a median of $\sim$ 0.8 $M_\sun$. The age of the stars is distributed between a few million years and as much as $\sim$ 60 Myr with an apparent gap between 5 Myr and 10 Myr.
For this reason we divided our sample in two populations, which we call younger (t $\leq$ 8 Myr with median age $\sim$ 3.5 Myr) and older PMS candidates (t $>$ 8 Myr with median age $\sim$ 35 Myr).
\item We measured the H$\alpha$ luminosity of the PMS candidates and consequently their accretion luminosity. We find a median value of $\sim$ $0.12$ $L_\sun$. The accretion luminosity increases with $L_\star$, while the dispersion in $L_{\rm acc}$ seems to decrease with $L_\star$.
We also find that the accretion luminosity spans the range 0.1-1 $L_\star$, with a peak in the distribution at about 0.3 $L_\star$.
\item Through the accretion luminosity and other physical parameters, we determined the mass accretion rate of PMS stars, finding a median value of $\sim$ 4.8 $\times$ $10^{-9}$ $M_\sun yr^{-1}$, with higher values for the
younger population ($\sim 1.2 \times 10^{-8} M_\sun$\,yr$^{-1}$), and lower values
for the older candidates ($\sim 4.7 \times 10^{-9} M_\sun$ yr$^{-1}$).
\item We studied the relation between the mass accretion rate and both age and stellar mass. As expected, the mass accretion rate appears to decrease with time and to increase with stellar mass.
\item We compared our results with other star formation regions in the Large Magellanic Cloud, in particular with LH 95, which is the closest region to LH 91 for which accretion properties of PMS candidates have been derived.
LH 91 is a star-forming region that is less rich in PMS stars than LH 95, with lower stellar masses (0.2-1.0 $M_\odot$ $vs$ 0.2-1.8 $M_\odot$)
but a similar range in age (from a few Myr up to $\sim$ 60 Myr).
The accretion luminosity and the mass accretion rate of PMS candidates in LH 91 are both slightly lower than in LH 95; in particular, the median values are 0.12 $L_\odot$ versus 0.17 $L_\odot$, and 4.8 $\times$ $10^{-9}$ $M_\sun$ yr$^{-1}$ versus 7.5 $\times$ $10^{-9}$ $M_\sun$ yr$^{-1}$, respectively.
\item We explored the possibility that the density of the environment (which we probe using dust emission) could affect the mass accretion rate. We compared the median mass accretion rate of star-forming regions with similar metallicity but different dust density, namely LH 91, LH 95, SN 1987A, and 30\,Dor.
We considered targets in the same mass range (0.4-1.1 $M_\odot$) and younger than 8 Myr. From a qualitative analysis, we find that the mass accretion rate increases with dust density of the environment in which the stars are formed.
\item Finally, we find the spatial distribution of the PMS stars to be rather uniform, without any evidence of clumps around more massive stars.
\end{enumerate}
The advent of the {\it James Webb Space Telescope} will allow us to put strong constraints on accretion phenomena of members in star-forming regions with different stellar properties (such as metallicity, age, and distance). In particular, the spectroscopic observations would give us information on the density and ionization state of the material undergoing accretion as well as on its kinematics, thereby providing a clearer picture of the accretion
process itself in different environmental conditions.
\begin{acknowledgements}
We are very thankful to the anonymous referee for precious comments and suggestions that have helped
us to improve this paper.
RC is grateful to ESA for the support during the data analysis useful for the preparation of this paper. This work was based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in
Astronomy, Inc., under NASA contract NAS5-26555. This research made use of the SIMBAD database, operated at the CDS (Strasbourg, France) and data
products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made also use of the SVO Filter Profile Service supported from the Spanish MINECO through grant AyA2014-55216.
\end{acknowledgements}
\bibliographystyle{aa}
The most recent work on LH 91 was presented by \cite{gou02} using ground-based $BVR$ and H$\alpha$ photometry. Studying the H$\alpha$ topography of the area, the authors found that LH 91 is loosely related to an \ion{H}{ii} region, which seems to be large and rather diffuse.
In agreement with \cite{lucke74}, the authors confirm that LH 91 does not seem to represent a "classical" stellar system in which the stars are physically related to each other.
Analyzing the color--magnitude diagram (CMD) in the $B$ and $V$ band, the authors estimated the color excess $E(B-V)$ = 0.16 $\pm$ 0.04 using the reddening-free Wesenheit function. Moreover, fitting the Geneva isochrones \citep{geneva} derived adopting metallicity Z=0.008, \cite{gou02} derived the age of the system, finding it to be younger than 10 Myr, similar to that of LH 95 and LH 91I, and in agreement with for example \cite{braun97,braun00}.
Instead, \cite{konti94} estimated an age of about 20 Myr.
Finally, \cite{gou02} also estimated the age of the background field, the population of the observed area around LH 91, to be older than 50 Myr and up to 1.25 Gyr.
This paper is organized as follows:
in Section \ref{phot} we describe the HST photometric observations, in Section \ref{idpms} we illustrate the analysis needed to identify the PMS stars and to estimate the luminosity and the equivalent width (EW) associated to the H$\alpha$ excess. In Section \ref{physpam} we measure the physical properties of the stars selected. In Section \ref{proacc} we determine the accretion properties of the selected PMS stars, that is, the accretion luminosity and mass accretion rate, and we show the relation between the mass accretion rate and the stellar properties of the PMS objects, such as their mass and age. We also compare our results with the findings for other star-forming regions in the LMC with the same metallicity, and in particular with LH 95, the closest region to LH 91 for which accretion properties of PMS candidates have been derived \citep{katia19}. We present our conclusions in the last section.
\section{Photometric observations}
\label{phot}
The LH 91 region was observed with the Wide Field Camera 3 (WFC3/UV) on board the HST in the broad-band filters $F555W$ and $F814W$, and in the narrow-band filter $F656N$, the latest centered on the H$\alpha$ line.
The data were collected as part of HST programs \#12872 (PI: Da Rio) and
\#13009 (PI: De Marchi). A short logbook of the observations is shown in Table~\ref{tab_obs}.
\begin{table}
\centering
\caption{Logbook of the observations}
\begin{tabular}{@{}cccc@{}}
\hline
Camera & Number of exposures & Filter & Exposure time (s) \\
& & & (s) \\
\hline
\multicolumn{4}{c}{Prop ID 12872, PI: Da Rio}\\
\hline
WFC3 & 2 & $F555W$ & 2804 + 2970 \\
& 1 & $F814W$ & 2804 \\
& 1 & $F656N$ & 2970 \\
\hline
\multicolumn{4}{c}{ Prop ID 13009, PI: De Marchi}\\
\hline
WFC3 & 1 & $F656N$ & 2949 \\
\hline
\label{tab_obs}
\end{tabular}
\end{table}
The data were reduced using the standard $DAOPHOTII$ \citep{stetson} procedure. A list of 10 to 20 well-sampled and isolated stars were used to model the point spread function (PSF) on the $F555W$ and $F814W$ images, and a deep photometric catalog of stars was derived via PSF fitting on the images acquired with the broad-band filters. The final magnitude of each star in a given filter is estimated as the mean of the photometric measures in each individual image taken with that filter, while the standard deviation is taken as the associated error. Aperture photometry was then used
to extract the $F656N$ magnitude for each star detected in the optical bands.
The choice of performing aperture photometry on the narrow-band images is driven by the fact that such images are characterized by very little stellar
crowding. As such, the aperture photometry is the ideal choice as it allows accurate estimation of the magnitude free from any uncertainty that is unavoidably associated with the choice of PSF model. We stress here that the background is locally estimated and subtracted in an annulus around the start.
The final catalog of the overlapping fields contains 9423 objects, of which 6980 have a measure in the $F656N$ band.
These $F555W$ and $F814W$ band observations are among the deepest ever taken toward the LH 91 region.
The instrumental magnitudes in $F555W$, $F814W,$ and $F656N$ were calibrated to the
VEGAMAG photometric system using the zero-point values made available by the Space Telescope Science Institute\footnote{https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration}.\\
\section{Data analysis}
\label{idpms}
\subsection{PMS star identification}
\label{analisi}
We applied the method developed by \cite{demarchi10} to identify the PMS stars characterized by an active mass accretion process. We measured the physical and accretion properties of these objects (i.e., H$\alpha$ luminosity, H$\alpha$ emission $EW_{\rm H\alpha}$, mass accretion rate, and accretion luminosity) using photometric data. We refer to \cite{demarchi10} for a detailed discussion of the method, while in this work we describe some fundamental steps.
We selected PMS stars on the basis of their H$\alpha$ excess emission \citep{white03}.
First of all, we identified the H$\alpha$ excess emitters in the $(m_{555}-m_{656})$ versus
$(m_{555} - m_{814})$ color--color diagram shown in Fig. \ref{Vi}. The magnitudes were corrected for the extinction contribution of the Milky Way considering the values $A^{MW}_{555}$ = 0.22 mag and $E(m_{555}-m_{814})^{MW}$ = 0.1 \citep{fitz}.
To this aim, we selected from our catalog in $F555W$, $F814W$, and $F656N$ bands all those stars whose photometric uncertainties, that is $\delta_{555}$, $\delta_{814}$, and $\delta_{656}$, are less than 0.05 in each individual band.
A total of 254 stars satisfied these conditions (gray filled dots in the color--color diagram in Fig.\ref{Vi}), out of 9423 sources in the whole catalog. These are typically MS stars that do not present an appreciable H$\alpha$ excess.
With these stars, we define a reference sequence (dashed black line) with respect to which the excess H$\alpha$ emission is computed.
The dotted blue line of Fig. \ref{Vi} represents the theoretical color relationship obtained using the \cite{bessel} model atmospheres for MS stars with the chemical and physical parameters appropriate for the LMC (effective temperature $T_{\rm eff}$ in the range of 3500-40000 K, surface gravity $\log g$ = 4.5, and metallicity index $[M/H]$ $\simeq $ -0.5, \citealt{colucci}). The agreement between our reference sequence and the theoretical one is evident at $m_{F555W}-m_{F814W}$ < 1.
The discrepancy between the models and the data at $m_{F555W}-m_{F814W}$ > 1 can be attributed to small number statistics and to the fact that the majority of these objects are red giants, with different physical characteristics from those assumed in the models.
To select the most probable PMS stars, after the exclusion of the 254 stars taken as reference, we first selected the targets with photometric uncertainties in each individual band as follows: $\delta_{555}$ and $\delta_{814}$ $<$ 0.1 mag, and $\delta_{656} <$ 0.3 mag, for a total of 1309 objects.
As highlighted by \cite{demarchi10}, the contribution of the H$\alpha$ line to the $m_{555}$ magnitude is negligible, and therefore we can define the magnitude of the excess emission as:
\begin{equation}
\Delta H\alpha= (m_{555}-m_{656})^{obs}-(m_{555}-m_{656})^{ref}
,\end{equation}
where the superscripts "obs" and "ref" refer to the observation and reference sequence, respectively.
We then considered the stars with $\Delta H\alpha$ exceeding at least three times the combined mean photometric uncertainties in the three bands $\delta_3$:
\begin{equation}
\delta_3=\sqrt{\frac{\delta_{555}^{2}+\delta_{656}^2+\delta_{814}^2}{3}}
.\end{equation}
A total of 187 stars satisfy these conditions; they are indicated with large red dots in Fig \ref{Vi}.
This means that 187 stars have $(m_{555}-m_{656})$ colors exceeding that of the reference template at the given $(m_{555}-m_{814})$ color by more than three times the combined uncertainties on their $(m_{555}-m_{656})$ values.
The large green dots in Fig.1 are the targets selected with $m_{555}
< 20 mag, which we exclude from our following analysis as we are interested primarily in low-mass PMS candidates. Our final sample of PMS candidates is therefore composed by 181 targets.
As in these bands the reddening vector due to LH 91 runs almost parallel to the median of the reference sequence \citep{demarchi10}, the color--color diagram provides a robust identification of stars with H$\alpha$ excess even before correction for LH 91 reddening.
\begin{figure}
\includegraphics[width=10cm]{VI_VHalpha_LH91.pdf}
\caption{Color--color diagram of the selected stars in the field of LH 91. All magnitudes are already corrected for the extinction contribution of our Galaxy, $A^{MW}_{555}$ = 0.22 mag and $E(m_{555}-m_{814})^{MW}$ = 0.1. The arrow shows the reddening vector of $E(m_{555}-m_{814})$ =0.25 and $E(m_{555}-m_{656})$=0.13 for the adopted LH 91 extinction law.
The dashed line represents the median photospheric ($m_{555}-m_{814})$ color for the 254 stars with $\delta_{555}$, $\delta_{814}$, and $\delta_{656}$ < 0.05 (gray filled dots).
The dotted line shows the model atmospheres of \cite{bessel} computed for the three WFC3/UVES filters.
The PMS star candidates with H$\alpha$ emission excess at the $3\sigma$ level are represented with large red dots.
The large green dots are the brightest PMS star candidates, with $m_{555}$ < 20 mag. Error bars are also shown.
}
\label{Vi}
\end{figure}
\subsection{The color--magnitude diagram}
\label{reddening}
We applied the correction for the extinction contribution of the Milky Way and LH 91 to the magnitudes in each band.
For the Milky Way, we report the values in the previous section.
We estimated the extinction for LH 91 from the value of $E(B-V)$ = $0.16$ $\pm$ $0.04$ color excess in the photometry of \cite{gou02} and converted into $A_V$ assuming the average LMC reddening law $R_{555}$ = $A_{555}/E(m_{555}-m_{814})$ $=$ 2.97 calculated by \cite{demarchi14}.
As \cite{gou02} found that the density of the ambient medium in LH 91 is similar to the value for LH 95, and as \cite{dario09} did not find a significant level of differential extinction while studying the upper
MS stars of the latter, we also consider the differential reddening to be negligible in LH 91.
We show the CMD $(m_{555}-m_{814})_0 $ versus $(m_{555})_0$ in Fig. \ref{cmd}. The small black dots are the targets of the whole sample, namely 9423 stars.
To estimate the age of the system, we fit the CMD with the isochrone models for Z=0.007 ---which is typical of young LMC stars (e.g., \citealt{colucci})--- taken from the PAdova-Trieste Stellar Evolution Code (PARSEC, \citealt{bressan2012}) and distance modulus $(m-M)_0$=18.55 \citep{panagia91,panagia99}.
The turnoff at $m_{555} \sim 20.5$\,mag and the red clump at $m_{555} \sim 19.5$\,mag and $m_{555}-m_{814} \sim 1.0$ are best matched by a 1.5 Gyr isochrone (dashed light-blue line), in agreement with the age of the background field stars evaluated by \cite{gou02}.
Stars with H$\alpha$ excess show a wide apparent spread towards young age and could be divided in two groups, separated by an isochrone at 8 Myr (solid green line).
\begin{figure}
\centering
\includegraphics[width=10cm]{VI_V_LH91_1.5Gyr.pdf}
\caption{Color--magnitude diagram of the field of LH 91. All magnitudes are already corrected for the extinction contribution of our Galaxy and LH 91. The small black dots are the targets of the whole sample (9423 stars).
Small gray-filled dots are the stars with photometric uncertainties < 0.05 mag in each band. The large red dots represent the PMS star candidates with H$\alpha$ excess emission at the $3\sigma$ level.
Solid green and dashed light-blue lines show the theoretical isochrones from \cite{bressan2012} for ages 8 Myr and 1.5 Gyr, respectively, metallicity Z=0.007, and a distance modulus $(m_V -M_V)_0$=18.55.}
\label{cmd}
\end{figure}
\subsection{From H$\alpha$ color excess to H$\alpha$ luminosity}
\label{Ha_luminosity}
To avoid contamination by stars with significant chromospheric activity, we also imposed constraints on $EW_{\rm H\alpha}$, selecting only stars with $EW_{\rm H\alpha}$ $\geqslant$ 10 \AA, because according to \cite{demarchi10} this is a reliable cutoff to separate accreting objects from non-accreting ones.
For details of the method used here to derive $EW_{\rm H\alpha}$ from the photometry, we refer to \cite{demarchi10,demarchi11,demarchi13}. Here we recall that, as the H$\alpha$ line is narrow with respect to the width of the filter, $EW_{\rm H\alpha}$ can be derived from the difference between the observed H$\alpha$ line magnitude and the level of the H$\alpha$ continuum ($\Delta H\alpha$).
If we assume that the stars used to define the reference sequence have no H$\alpha$ absorption features, their $(m_{555} - m_{656})$ color represents the color of the pure continuum.
Consequently, we calculated the $EW_{\rm H\alpha}$ from the following relation:
\begin{equation}
EW_{\rm H\alpha}=RECTW \times [1-10^{-0.4\Delta H\alpha}]
,\end{equation}
\noindent{where $RECTW$ is the rectangular width of the F656N filter. The uncertainty on the $EW_{\rm H\alpha}$ measure is dominated by the uncertainty on the H$\alpha$ magnitude.}
Moreover, because of the width of the F656N filter, the small contribution due to the emission of the forbidden $\ion{N}{ii}$ line at $\lambda 6548$ is included in $\Delta H\alpha$.
Therefore, following the prescriptions by \cite{demarchi10}, we estimated corrections ranging from 0.2 to 1.4 $\AA$, to be subtracted from the $EW_{\rm H\alpha}$ of our targets.
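As a purely illustrative example of the relation above (the value of $\Delta H\alpha$ used here does not refer to any specific star in our catalog), an excess of $\Delta H\alpha \simeq 1$ mag translates, with the rectangular width RECTW $= 17.679$ \AA\ of the F656N filter quoted below, into
\[
EW_{\rm H\alpha} = 17.679 \times \left[1-10^{-0.4}\right] \simeq 10.6~\AA,
\]
that is, a value close to the 10 \AA\ threshold adopted above for the selection of accretors.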
Figure \ref{ew} shows the $EW_{\rm H\alpha}$ measured for the selected low-mass PMS candidates as a function of the de-reddened $m_{555}-m_{814}$ color.
We performed a preliminary study of the $EW_{\rm H\alpha}$ distribution of the PMS candidates at different ages using the isochrone at 8 Myr as a discriminating factor. We divided the sample into stars older (blue dots) and younger (red squares) than 8 Myr (Fig. \ref{ew}).
\begin{figure}
\centering
\includegraphics[width=10cm]{EW.pdf}
\caption{H${\alpha}$ equivalent width of the selected low-mass PMS candidates, as a function of the de-reddened ($m_{555}-m_{814}$) color. The red squares represent the values of the PMS stars younger than 8 Myr, the blue dots are the oldest PMS stars.
}
\label{ew}
\end{figure}
The values of $EW_{\rm H\alpha}$ for the sample range from $\sim$ 3 $\AA$ to $\sim$ 17 $\AA$, with a median of $\sim$ 9 $\AA$ that applies to both the whole sample and the two subgroups.
After the selection on the $EW_{\rm H\alpha}$, a total of 75 objects satisfy the conservative condition ($EW_{\rm H\alpha}$ $\geq$ 10 $\AA$). The median value of the $EW_{\rm H\alpha}$ is about 12 $\AA$, regardless of age, smaller than the values found in other star formation fields in the LMC, such as LH 95 ($EW_{\rm H\alpha}$ $\sim$ $30$ $\AA$, \citealt{katia19}) and SN 1987A ($EW_{\rm H\alpha}$ $\sim$ $20$ $\AA$, \citealt{demarchi10}).
The difference could be due to the paucity of our sample and to its stellar mass range.
Moreover, the figure shows a fairly clear separation in color between the two subgroups in LH 91, with the exception of the target with $(m_{555} - m_{814})_{0}$ $\sim$ $0.4$ and $EW_{\rm H\alpha}$ $\sim$ $10$ $\AA$. As the coordinates of this target correspond to those of a massive star in the 2MASS catalog, it could be a Be star. A similar separation in color between the two subgroups of PMS stars was found in LH 95 \citep{katia19}.
The H$\alpha$ emission line luminosity $L_{\rm H\alpha}$ can be determined from the absolute sensitivity of the instrumental setup, the photometric zero point (ZP), the distance of the stars, and from the magnitude in the H$\alpha$ band:
\begin{equation}
L_{\rm H\alpha}=4\pi d^210^{0.4(ZP-m_{656})}\rm{PHOTFLAM}\times \rm{RECTW}
.\end{equation}
The values of the photometric properties of the instruments were taken from \cite{ryon18}, namely the inverse sensitivity PHOTFLAM= 1.714 $\times$ $10^{-17}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$, and the zero point in the VEGAmag system for the H$\alpha$ filter, ZP= 19.84 \citep{calamida}.
Assuming a distance of 51.4 $\pm$ 1.2 kpc \citep{panagia99} and considering the rectangular width of the $F656N$ filter RECTW= 17.679 \AA, we determined the H$\alpha$ luminosity for the 75 targets, finding a median value of about 8.7 $\times$ $10^{30}$ erg s$^{-1}$ ($0.2 \times 10^{-2} L_\sun$).
This value is slightly lower than the one found by \cite{katia19} in the LH 95 association ($\sim$ 1.2$\times$ $10^{31}$ erg s$^{-1}$, $ 0.3 \times 10^{-2} L_\sun$).
This is not surprising, because \cite{gou02} found that the mean H$\alpha$ intensity of the HII region related to LH 95 (DEM L 252) is about two times higher than the corresponding intensity of the HII region associated with LH 91 (DEM L 251).
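As a consistency check of the relation above (the magnitude quoted here is purely indicative and does not refer to a specific star of our sample), a source with $m_{656} \simeq 22.4$ mag at the adopted distance corresponds to
\[
L_{\rm H\alpha} \simeq 3.16\times10^{47}\,\mathrm{cm^{2}} \times 2.9\times10^{-17}\,\mathrm{erg\,cm^{-2}\,s^{-1}} \simeq 9\times10^{30}~\mathrm{erg\,s^{-1}},
\]
where $3.16\times10^{47}\,\mathrm{cm^{2}} = 4\pi d^{2}$ and $2.9\times10^{-17}\,\mathrm{erg\,cm^{-2}\,s^{-1}} = 10^{0.4(ZP-m_{656})}\,\mathrm{PHOTFLAM}\times\mathrm{RECTW}$, i.e.\ a value close to the median quoted above.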
\\
In addition, we can compare our result with the median H$\alpha$ luminosity of other regions of the LMC, namely the 30 Doradus nebula and the SN 1987A field; in these cases too our value is lower: the mean L$_{\rm H\alpha}$ estimated in these regions is $\sim$ 4 $\times$ $10^{31}$ erg s$^{-1}$ ($\sim$ $10^{-2} L_\sun$, \citealt{demarchi10}) and $\sim$ 1.5 $\times$ $10^{32}$ erg s$^{-1}$ ($\sim$ 4 $\times$ $10^{-2}$ $L_\sun$, \citealt{demarchi17}), respectively.
The uncertainty on $L_{\rm H\alpha}$ is dominated by the uncertainties on the H$\alpha$ photometry, on the distance ($\sim$ 5\%), and on the instrumental setup ($\sim$ 3\%) (see \citealt{demarchi10}).
The total uncertainty on $L_{\rm H\alpha}$ is about 16 \%.
\section{Physical parameters of the PMS candidates}
\label{physpam}
\subsection{Effective temperature and bolometric luminosity}
We evaluated the effective temperature of the PMS candidates by comparing the theoretical models with the $m_{555}-m_{814}$ color of our sample corrected for the reddening due to the Milky Way and LH 91, as explained in Sect. \ref{reddening}.
To convert the color to $T_{\rm eff}$ we used the models of \cite{bessel} for 3500 K $\leq$ $T_{\rm eff}$ $\leq$ 40000 K, $\log g$=4.5, and metallicity index [M/H] =-0.5 dex.
As the models of \cite{bessel} are not available for temperatures lower than 3500 K, we used the $T_{\rm eff}$-$(V-I_C)$ calibration by \cite{mamajek}, with the assumption that the calibrated $m_{555}$ and $m_{814}$ magnitudes coincide with the $V$ and $I_C$ magnitudes (see \citealt{katia19}).
To obtain the luminosity of the stars $L_{\star}$, we considered the magnitude $m_{555}$ corrected for the interstellar extinction, a distance to LH 91 of 51.4 kpc \citep{panagia99}, and a bolometric solar magnitude of 4.74 mag \citep{mamajek}.
The uncertainty on the effective temperature and stellar luminosity are dominated by the uncertainties on the magnitudes and distance.
In Fig. \ref{hr}, we show the location of the PMS candidates in the HR diagram, with the relative uncertainties, which in some cases are smaller than the symbol size.
We highlight that the majority of the PMS candidates are close to the MS and we could only identify them thanks to the information on their H$\alpha$ excess.
We also plot in Fig. \ref{hr} the theoretical isochrones for ages of 2, 4, 8, 16, 32, and 64 Myr for Z=0.007 \citep{bressan2012}.
The red squares represent the PMS candidates younger than 8 Myr, while the blue dots the older ones.
\begin{figure}
\centering
\includegraphics[width=10cm]{HR.pdf}
\caption{HR diagram with the location of the low-mass young (red squares) and old (blue dots) PMS candidates. The theoretical isochrones \citep{bressan2012} are calculated for ages of 2, 4, 8, 16, 32, and 64 Myr (lines from right to left, respectively) and Z=0.007. The small black dots are the targets of our catalog.
}
\label{hr}
\end{figure}
From the HR diagram, it appears that LH 91 is characterized by a more or less continuous star formation, from a few million years to $\sim$ 60 Myr, with a smaller number of PMS candidates for ages younger than 8 Myr.
From the effective temperature and the luminosity of the PMS stars, we derive the stellar radius $R_\star$ of these stars, which we use to estimate the mass accretion rate of the selected PMS objects in Section\,\ref{proacc}. Typical mean errors on $R_\star$ are around 7\% and include both uncertainties on $T_{\rm eff}$ and $L_\star$.
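For completeness, we recall that the radius follows from the Stefan--Boltzmann law,
\[
\frac{R_\star}{R_\odot}=\left(\frac{L_\star}{L_\odot}\right)^{1/2}\left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{-2},
\]
with $T_{\rm eff,\odot} \simeq 5772$ K.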
\subsection{Mass and age}
\label{mage}
We derived the mass and age for each target by comparing the location of each star in the HR diagram (Fig. \ref{tracce}) with theoretical PMS evolutionary tracks. We adopted the PARSEC tracks for metallicity $Z=0.007$ \citep{bressan2012} from 0.1 $M_\sun$ to $3.0$ $M_\sun$.
We followed the approach discussed in \cite{Romaniello98} and refined by \citet{demarchi11,demarchi13}.
According to these authors, we define a grid in luminosity and temperature consisting of evenly spaced cells with sizes comparable to the typical observational errors. Given an evolutionary track of a star of a certain mass, we identify the cells crossed by the star during its evolution. For each cell, we extrapolate information associated with the evolutionary track, namely mass and age. The information is then associated with the observed star belonging to a particular cell (for further details, see \citealt{demarchi17}).
Figure \ref{tracce} shows the masses of the PMS candidates spanning from $0.2\,M_\odot$ for the cooler objects up to $1.0\,M_\odot$ for the hottest ones. The median value of the sample is about $0.8\,M_\odot$. In the figure, we divided the PMS stars into two subsamples: the younger PMS candidates with an age of less than 8 Myr (red squares), and the PMS candidates older than 8 Myr (blue dots).
The sizes of the dots and squares are proportional to the mass accretion rate, which we determine and discuss in Section\,\ref{proacc}.
Here, we simply want to investigate whether and how the rate of mass accretion is correlated with evolutionary phase and stellar mass. As one can see in Fig.\,\ref{tracce}, the targets with the highest mass accretion rates (the largest symbols) are the youngest PMS stars, while the mass accretion rate decreases at older ages. Furthermore, the stars with higher mass have higher mass accretion values ($\dot{M}_{\rm acc}$) at all ages.
\begin{figure}
\centering
\includegraphics[width=10cm]{HR_tracks_k.pdf}
\caption{HR diagram of our PMS candidates. Red squares and blue dots represent young and old PMS stars, respectively. The size of the symbols is proportional to the mass accretion rate, as in the legend. We adopted the PARSEC tracks for metallicity $Z=0.007$ \citep{bressan2012} from 0.2 $M_\sun$ to $1.2$ $M_\sun$ (dashed lines).
}
\label{tracce}
\end{figure}
In Fig. \ref{histo} we show the histograms of the mass (upper panel) and age (lower panel) distribution of the PMS candidates with the bin sizes compatible with the uncertainties on mass and age, respectively.
The black line corresponds to the whole sample, the dashed red line corresponds to the young PMS candidates, while the dotted blue line represents the old PMS candidates.
The older PMS stars include preferentially higher mass stars, with the mass distribution presenting a peak at $\sim$ 0.7 $M_\odot$. The young PMS objects show a continuous distribution in mass, with no evident peak, but this is probably mostly due to the paucity of the subsample.
\begin{figure}
\centering
\includegraphics[width=10cm]{isto.pdf}
\caption{Histograms of the stellar mass (upper panel, bin of 0.05) and age (lower panel, bin of 0.2) for the 75 low-mass PMS candidates in logarithmic scale. The red dashed and blue dotted lines represent the distribution of the young and older populations, respectively.
}
\label{histo}
\end{figure}
The age distribution could suggest a separation between older and younger PMS stars, with a gap in the range between 5 and 10 Myr. The younger population shows a continuous distribution in age up to 5 Myr. The older population constitutes about 90\% of the objects, with ages between 10 and $\sim$ 60 Myr and a peak at $\sim 50$\,Myr.
\section{Accretion properties}
\label{proacc}
In the following subsections we describe how we determined the accretion properties of our sample of PMS candidates and we present our study of their relation with the physical properties of the stars.
\subsection{Accretion luminosity}
\label{acclum}
The luminosity of the H$\alpha$ line generated along the funnel flows of circumstellar gas during the magnetospheric accretion process can be used as a tracer to estimate the accretion luminosity.
To determine the accretion luminosity of our sample of PMS candidates, we adopted the relationship obtained by \cite{demarchi10}, who analyzed the data of a group of T Tauri stars in Taurus-Auriga compiled by \cite{dahm}:
\begin{equation}
\log {\frac{L_{\rm acc}}{L_\odot}}= \log{\frac{L_{\rm H\alpha}}{L_\odot}} + (1.72 \pm 0.25)
.\end{equation}
The median of the accretion luminosity of our 75 PMS stars is 0.12 $L_\odot$.
The uncertainty on $L_{\rm acc}$ is dominated by the uncertainty on $L_{\rm H\alpha}$, which is about $16\%$, related to the photometric error on the H$\alpha$ magnitude. There is also a systematic error to take into account due to the uncertainties on the ratio $L_{\rm acc}/L_{\rm H\alpha}$ \citep{dahm,demarchi11}, but as the relation is the same for all stars, this uncertainty does not interfere with the comparison between the targets.
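As an illustrative check of this calibration, the median H$\alpha$ luminosity derived in Section \ref{Ha_luminosity}, $\sim$ 8.7 $\times$ $10^{30}$ erg s$^{-1}$ $\simeq 2.3\times10^{-3}\,L_\odot$ (adopting $L_\odot = 3.828\times10^{33}$ erg s$^{-1}$), corresponds to
\[
\log \frac{L_{\rm acc}}{L_\odot} \simeq \log\left(2.3\times10^{-3}\right) + 1.72 \simeq -0.92,
\]
that is, $L_{\rm acc} \simeq 0.12\,L_\odot$, consistent with the median value quoted above.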
\begin{figure}
\centering
\includegraphics[width=10cm]{L_Lacc_tot.pdf}
\caption{Accretion luminosity as a function of stellar luminosity. Blue and red dots represent the older (age greater than 8\,Myr) and younger (age smaller than 8\,Myr) PMS candidates of LH 91, respectively. The gray filled dots, green diamonds, and black empty dots are the PMS of LH 95 by \cite{katia19}, SN 1987A by \cite{demarchi10}, and 30 Dor by \cite{demarchi17}, respectively. The dashed lines show the linear $L_{\rm acc}-L_\star$ relationship for different values of the coefficient, as indicated. }
\label{lacc}
\end{figure}
In Fig. \ref{lacc} we show the accretion luminosity versus $L_\star$ of the PMS candidates, the blue dots and red squares representing the old and young ones, respectively.
In each star formation region, $L_{\rm acc}$ increases with stellar luminosity, but the range and dispersion of the data are quite different.
For comparison, we also show the data of LH 95 (gray filled dots), SN 1987A (green empty diamonds), and 30 Dor (black empty dots).
In LH 91 and LH 95, the dispersion in $L_{\rm acc}$ seems to decrease with the increase of $L_\star$.
The accretion luminosity spans a range between 0.1 and 1 $L_\star$, with the peak of the distribution at about $0.3\,L_\star$ for LH 91. In Section \ref{Ha_luminosity}, we showed that the median $L_{\rm H\alpha}$ in LH 91 is lower than that found in LH 95, and therefore it is not surprising that the values of the accretion luminosity in LH 91 are
also slightly lower than those of the PMS objects in LH 95 ($\sim$ $0.17$ $L_\odot$). This result could be due to two main factors: in LH 95 the mass range of the sample is larger (0.2-1.8 $M_\odot$), and at the same mass the stars are younger.
The samples of 30 Dor and SN 1987A are richer than LH 91 and LH 95, and the range of the accretion luminosity is larger, from 0.1 $L_\star$ to values higher than 1.0 $L_\star$. For a comparison, we focus on the range in stellar luminosity in common between the regions, namely between $-0.65$ and $0.0$ in $\log(L_\star/L_\odot)$. We evaluated the median accretion luminosity only for the regions 30 Dor, LH 95, and LH 91, finding values of $\sim$ 0.17 $L_\odot$, 0.22 $L_\odot$, and 0.13 $L_\odot$, respectively.
Unfortunately, the range in stellar luminosity of SN 1987A does not overlap with those of LH 91 and LH 95, and therefore we cannot make a direct comparison.
\begin{figure}
\centering
\includegraphics[width=10cm]{Teff_Lacc.pdf}
\caption{Accretion luminosity versus effective temperature. The blue dots and red squares are as in Fig. \ref{lacc}, the gray filled dots represent the PMS stars of LH 95 by \cite{katia19}.
}
\label{tef}
\end{figure}
In Fig. \ref{tef} we show the accretion luminosity versus the effective temperature in logarithmic scale of the old (blue dots) and young (red squares) PMS stars, together with the sample of LH 95 (\citealt{katia19}; gray filled dots).
This plot is very similar to the HR diagram (Fig. \ref{tracce}). While a separation between the old and young candidates in $T_{\rm
eff}$ is evident in the LH 95 sample (see Fig. 9 in \citealt{katia19}), in LH 91 there is a continuous distribution in $T_{\rm eff}$, the PMS stars with the highest accretion luminosity being close to the old subgroup.
\subsection{Mass accretion rate versus stellar age}
Finally, we derived the mass accretion rate $\dot{M}_{\rm acc}$ of our PMS candidates from the free-fall equation \citep{koenigl91,calvet98}:
\begin{equation}
L_{\rm acc} \simeq \frac{GM_\star\dot{M}_{\rm acc}}{R_\star}\left(1-\frac{R_\star}{R_{in}}\right)
,\end{equation}
\noindent{where $G$ is the gravitational constant, $M_\star$ and $R_\star$ are the mass and radius of the PMS candidates, and $R_{\rm in}$ is the inner radius of the accretion disk. $R_{\rm in}$ depends on how exactly the accretion disk is coupled with the magnetic field of the star, and so its value is quite uncertain. We adopt $R_{\rm in} = 5 R_\star$, following \cite{gul98}. The median value of the mass accretion rate of our sample is $\sim$ $4.8$ $\times$ $10^{-9}$ $M_\odot$ $yr^{-1}$, with higher values for the younger population ($\sim$ 1.2 $\times$ $10^{-8}$ $M_\odot$ $yr^{-1}$), and lower values for the older candidates ($\sim$ 4.7 $\times$ $10^{-9}$ $M_\odot$ $yr^{-1}$). The values we find are slightly lower than those found by \cite{katia19} for LH 95, as shown in Fig. \ref{tmacc}, where the median rate is about $7.5$ $\times$ $10^{-9}$ $M_\odot yr^{-1}$.
The mass accretion rate in LH 91 is also lower than the median value measured in the field of SN 1987A
($2.6$ $\times$ $10^{-8}$ $M_\odot$ $yr^{-1}$, as found by \citealt{romaniello04}, and $2.9$ $\times$ $10^{-8}$ $M_\odot yr^{-1}$ as measured by \citealt{demarchi10}) and in 30 Dor ($\sim$ 8 $\times$ $10^{-8}$ $M_\odot yr^{-1}$; \citealt{demarchi17}).}
The uncertainty on $\dot{M}_{\rm acc}$ is dominated by the uncertainty on $L_{\rm H\alpha}$, which is of about $16\%$, but we have to consider also the contribution of $R_\star$ ($7\%$, including a $5\%$ systematic uncertainty on the distance modulus), stellar mass ($\sim$ $7\%$), the intrinsic uncertainties due to the evolutionary tracks ($2\%$-$6\%$, for more details see the Appendix A of \citealt{katia19}), and knowledge of the relation $L_{\rm acc}$--$L_{\rm H\alpha}$, which in this case is not very accurate (a factor of $\sim$ 2; \citealt{demarchi10}).
Finally, the contribution of other sources of systematic error ---such as physical processes different from accretion (e.g., chromospheric activity or ionization of nearby massive stars) or nebular continuum--- that could affect the determination of $\dot{M}_{\rm acc}$ are considered to be negligible \citep{demarchi10}. In summary, the combined statistical uncertainty on $\dot{M}_{\rm acc}$ is of about $20\%$.
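To give a sense of the scale of these numbers (the stellar parameters used here are purely illustrative and are not taken from a specific object of our catalog), the free-fall relation with $L_{\rm acc} \simeq 0.12\,L_\odot$, $M_\star \simeq 0.8\,M_\odot$, $R_\star \simeq 1\,R_\odot$, and $R_{\rm in} = 5R_\star$ gives
\[
\dot{M}_{\rm acc} \simeq \frac{L_{\rm acc}\,R_\star}{0.8\,G M_\star} \simeq 6\times10^{-9}\,M_\odot\,\mathrm{yr^{-1}},
\]
of the same order as the median rate quoted above.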
A snapshot of the mass accretion rate as a function of the age is shown in Fig. \ref{tmacc}.
We divided the sample into two subsamples with stellar mass larger (yellow filled squares) and smaller (empty black triangles) than the median stellar mass ($\sim 0.8 \,M_\odot$).
The gray filled dots are the PMS candidates in LH 95 \citep{katia19}.
As expected, the accretion appears to decrease with time, in line with the predicted evolution of viscous disks \citep{hart}, but there is a large spread in mass accretion rate at a given age.
We performed a linear fit to the two subsamples, finding similar slopes: $-0.31 \pm 0.07$ for the high masses and $-0.39 \pm 0.04$ for the low masses. These values are in agreement, within the errors, with those evaluated in other MC regions \citep{demarchi11,demarchi13,demarchi17,katia19}.
This plot also shows that the mean mass accretion rate of the PMS stars in LH 91 is slightly lower than in LH 95, because our sample is composed mainly of older stars (older than 30 Myr), close to the MS, where the accretion process is less powerful.
\begin{figure}
\centering
\includegraphics[width=10cm]{age_Macc_err.pdf}
\caption{Observed mass accretion rate versus age in LH 91. Yellow filled squares and empty black triangles represent the targets with mass greater and smaller than the median mass of the PMS candidates sample, respectively. The gray-filled dots are the PMS candidates in LH 95 \citep{katia19}. The error bars on the age and mass accretion rate are reported. When the uncertainties are smaller than the symbol size, they are not visible. The arrows represent lower limits. The dashed yellow and solid black lines represent the regression fit of the two subsamples of LH 91.
}
\label{tmacc}
\end{figure}
\subsection{Mass accretion rate versus stellar mass}
Figure \ref{mmacc} shows the mass accretion rate as a function of the stellar mass of our PMS candidates. Younger PMS candidates are represented by red squares, while the older PMS candidates are marked with blue dots.
The gray filled dots represent the PMS candidates in LH 95 \citep{katia19}. From Figs. \ref{tmacc} and \ref{mmacc} it is evident that the mass accretion rate is typically higher for the younger and more massive stars. Only two low-mass stars (with masses of about 0.3 $M_\odot$ and 0.4 $M_\odot$) have a high mass accretion rate (2.2 $\times$ $10^{-8}$ $M_{\odot}$/yr and 1.4 $\times$ $10^{-8}$ $M_{\odot}$/yr respectively).
Figure \ref{mmacc} reveals a large spread in $\dot{M}_{\rm acc}$ values for a given stellar mass.
This is hardly surprising considering the large spread of ages (see also \citealt{rigliaco11}).
Moreover, the older sample of PMS candidates in LH 91 reaches lower values of mass accretion rate when compared to the stars at similar masses in LH 95.
Again, it is interesting to note how the stars of any given mass appear to be younger in LH 95 than in LH 91. This could simply be the result of different evolutionary stages between stars in LH 91 and LH 95, with the former sample being more evolved than the latter, which would in turn justify the smaller number of accretors and the lower values of the mass accretion rate in LH 91 compared to LH 95.
This difference might also be caused by other physical differences in the star formation environment, for example the gas density of the regions.
It would seem natural that an environment with lower gas density would result in less massive circumstellar disks, and therefore a more modest mass accretion rate. To verify this effect, we compare the median mass accretion rate in LH 91 with that of the three star-forming regions at similar metallicity in the LMC, namely
LH 95, SN 1987A, and 30\,Dor, for which a study of accretion properties was performed.
Considering targets with the same mass range ($0.4-1.0\,M_\odot$) and younger than 8\,Myr, we obtained a mean $\dot{M}_{\rm acc}$ value of $1.1 \times 10^{-8}\,M_\odot$/yr for LH 91, $4.4 \times 10^{-8}\,M_\odot$/yr for LH 95, $3.7 \times 10^{-7}\,M_\odot$/yr for SN 1987A, and $5.9 \times 10^{-8}\,M_\odot$/yr for 30\,Dor.
We also estimated the mean dust density of the aforementioned four regions taking into account the mass surface density map\footnote{https://www.asc.ohio-state.edu/astronomy/dustmaps/} by \cite{utomo}.
Considering regions with a radius of 1.5 arcmin, we found values of $0.11 \pm 0.01\,M_\odot$/pc$^2$ for LH 91, $0.16 \pm 0.01\,M_\odot$/pc$^2$ for LH 95 and a similar value for SN 1987A, and 0.65 $\pm$ 0.09 $M_\odot$/pc$^2$ for 30 Dor. Even though this kind of analysis is only qualitative, we find some tentative indication that regions with higher dust densities also
have higher mass accretion rates (and possibly higher gas density), with the exception of SN 1987A. A much more detailed analysis, which goes beyond the scope of this work, would be necessary to address this issue.
\begin{figure}
\centering
\includegraphics[width=10cm]{M_macc.pdf}
\caption{Distribution of $\dot{M}_{\rm acc}$ as a function of $M_{\star}$. The symbols are as in Fig. \ref{lacc}.
}
\label{mmacc}
\end{figure}
\subsection{Spatial distribution of the PMS candidates}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{prova_distr_3.pdf}
\caption{{\it Left panel}: Color-composite image of LH 91 from WFC3 observations in the F555W, F814W, and F656N filters. The image is rotated 30 degrees to align the two figures. {\it Right panel}: Distribution of the PMS stars. Symbols are the same as in Fig. \ref{tracce}. The crosses represent the massive stars from the 2MASS catalog, and the triangles are Be stars from Table\,1 of \cite{gou02}. North is up and east to the left.}
\label{space}
\end{center}
\end{figure*}
The left panel of Figure \ref{space} shows a color-composite image from WFC3 observations in the F555W (red), F814W (blue), and F656N (green) filters of LH 91.
The right panel in the same figure shows the spatial distribution of the PMS stars in our sample projected onto the sky.
As in Fig. \ref{tracce}, the sizes of the dots and squares are proportional to the mass accretion rate. Different colors represent different ages: older than 8\,Myr (blue dots) and younger than 8\,Myr (red squares).
The crosses are massive stars selected from the 2MASS catalog \citep{cutri} with
$J - H < 0.8$ and $J < 15$ mag (see also \citealt{katia19}), while the triangles are Be stars found by \cite{gou02} (see their Table\,1).
To align the orientation of the two figures, the left image is rotated 30 degrees.
With the aim of better understanding the correspondence between the fields, we indicate some bright stars with the letters A to D.
Regions with a lack of stars in the right figure correspond to those rich in gas in the left figure, shown in green. The dust associated with the gas could be obscuring the stars behind it. The PMS objects appear to be distributed more or less uniformly over the region, and are not clustered around the massive stars, unlike the younger population of LH 95 \citep{katia19}.
This result is in agreement with the conclusions of \cite{gou02}, who found only a weak match between the HII region of LH 91 and the two Be stars located to the southwest side of the region. Therefore, it appears that in LH 91 there is no obvious region of higher star-formation intensity, at least currently.
\section{Conclusions}
\label{conclusion}
We presented a multiwavelength analysis of the stellar populations in LH 91, a star-forming region in the LMC, observed with the WFC3 on board the {\it HST}.
We applied a photometric detection method to identify PMS candidates still actively accreting matter from their circumstellar disks.
The method combines HST broad-band F555W and F814W photometry with narrow-band F656N imaging in order to identify stars with H$\alpha$ excess emission and to subsequently measure their accretion luminosity $L_{\rm acc}$ and equivalent width $EW_{\rm H\alpha}$, and to derive their mass accretion rate $\dot{M}_{\rm acc}$.
The main results of our analysis can be summarized as follows:
\begin{enumerate}
\item From the photometric catalog of 9423 well-detected stars, we identified about 180 low-mass PMS candidates on the basis of their excess H$\alpha$ emission, that is, with
their ($m_{555} - m_{656}$) color exceeding that of the reference template at the same ($m_{555}-m_{814}$) color by more than three times the combined uncertainties on their ($m_{555}-m_{656}$) values.
\item We measured the $EW_{\rm H\alpha}$ of the PMS stars, finding values in the range of $\sim$ 3 $\AA$ - 17 $\AA$, with a median of 9 $\AA$. We selected stars with $EW_{\rm H\alpha}$ $\geq$ 10 $\AA$, which are typical values of actively accreting PMS stars. A total of 75 objects satisfy this condition.
\item We estimated the stellar effective temperature and luminosity using the \cite{bessel} relations for 3500 K $\leq T_{\rm eff} \leq$ 40000 K, and the \cite{mamajek} calibrations for $T_{\rm eff} < 3500$ K.
\item We obtained the mass and age of the PMS candidates by comparing the location of each star in the HR diagram with theoretical PMS evolutionary tracks \citep{bressan2012}.
The range of the stellar masses in our sample is between $\sim$ 0.2 $M_\sun$ and $\sim$ 1.0 $M_\sun$ with a median of $\sim$ 0.8 $M_\sun$. The age of the stars is distributed between a few million years and as much as $\sim$ 60 Myr with an apparent gap between 5 Myr and 10 Myr.
For this reason we divided our sample in two populations, which we call younger (t $\leq$ 8 Myr with median age $\sim$ 3.5 Myr) and older PMS candidates (t $>$ 8 Myr with median age $\sim$ 35 Myr).
\item We measured the H$\alpha$ luminosity of the PMS candidates and consequently their accretion luminosity. We find a median value of $\sim$ $0.12$ $L_\sun$. The accretion luminosity increases with $L_\star$, while the dispersion in $L_{\rm acc}$ seems to decrease with $L_\star$.
We also find that the accretion luminosity spans the range 0.1-1 $L_\star$, with a peak in the distribution at about 0.3 $L_\star$.
\item Through the accretion luminosity and other physical parameters, we determined the mass accretion rate of PMS stars, finding a median value of $\sim$ 4.8 $\times$ $10^{-9}$ $M_\sun yr^{-1}$, with higher values for the
younger population ($\sim 1.2 \times 10^{-8} M_\sun$\,yr$^{-1}$), and lower values
for the older candidates ($\sim 4.7 \times 10^{-9} M_\sun$ yr$^{-1}$).
\item We studied the relation between the mass accretion rate and both age and stellar mass. As expected, the mass accretion rate appears to decrease with time and to increase with stellar mass.
\item We compared our results with other star formation regions in the Large Magellanic Cloud, in particular with LH 95, which is the closest region to LH 91 for which accretion properties of PMS candidates have been derived.
LH 91 is a star-forming region that is less rich in PMS stars than LH 95, with lower stellar masses (0.2-1.0 $M_\odot$ $vs$ 0.2-1.8 $M_\odot$)
but a similar range in age (a few Myr up to $\sim$ 60 Myr).
The accretion luminosity and the mass accretion rate of PMS candidates in LH 91 are both slightly lower than in LH 95; in particular the median values are 0.12 $L_\odot$ versus 0.17 $L_\odot$, and 4.8 $\times$ $10^{-9}$ $M_\sun yr^{-1}$ versus 7.5 $\times$ $10^{-9}$ $M_\sun yr^{-1}$, respectively.
\item We explored the possibility that the density of the environment (which we probe using dust emission) could affect the mass accretion rate. We compared the median mass accretion rate of star-forming regions with similar metallicity but different dust density, namely LH 91, LH 95, SN 1987A, and 30\,Dor.
We considered targets in the same mass range (0.4-1.0 $M_\odot$) and younger than 8 Myr. From a qualitative analysis, we find that the mass accretion rate increases with the dust density of the environment in which the stars are formed.
\item Finally, we find the spatial distribution of the PMS stars to be rather uniform, without any evidence of clumps around more massive stars.
\end{enumerate}
The advent of the {\it James Webb Space Telescope} will allow us to put strong constraints on accretion phenomena of members in star-forming regions with different stellar properties (such as metallicity, age, and distance). In particular, the spectroscopic observations would give us information on the density and ionization state of the material undergoing accretion as well as on its kinematics, thereby providing a clearer picture of the accretion
process itself in different environmental conditions.
\begin{acknowledgements}
We are very thankful to the anonymous referee for precious comments and suggestions that have helped
us to improve this paper.
RC is grateful to ESA for the support during the data analysis useful for the preparation of this paper. This work was based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in
Astronomy, Inc., under NASA contract NAS5-26555. This research made use of the SIMBAD database, operated at the CDS (Strasbourg, France) and data
products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has also made use of the SVO Filter Profile Service supported by the Spanish MINECO through grant AyA2014-55216.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Let $\mu$ be a probability measure on $\mathbb{R}$ and $(B_t)_{t \geq 0}$ a $\mathcal{F}$-Brownian Motion with initial distribution $\mathrm{Law}(B_0) = \mu$ defined on a filtered probability space $(\Omega, \mathcal{A},\mathbb{P}, (\mathcal{F}_t)_{t \geq 0})$. We assume that the filtration $\mathcal{F} = (\mathcal{F}_t)_{t \geq 0}$ is right-continuous and completed w.r.t.\ $\mathbb{P}$.
Given another probability measure $\nu$, a finite $\mathcal{F}$-stopping time $\tau$ is said to be a solution to the Skorokhod Embedding Problem w.r.t.\ $\mu$ and $\nu$, if
\begin{equation}
\tag{$\mathrm{SEP}(\mu,\nu)$}
(B_{t \land \tau})_{t \geq 0} \text{ is uniformly integrable} \quad \text{and} \quad B_{\tau} \sim \nu.
\end{equation}
It is well known that there exists a solution to $\SEP(\mu,\nu)$ if and only if $\mu \leqc \nu$, i.e.\ if we have $\int_{\mathbb{R}} \varphi \de \mu \leq \int _{\mathbb{R}} \varphi \de \nu$ for all convex functions $\varphi$. In general there exist many different solutions to $\SEP(\mu,\nu)$ (cf.\ \cite{Ob04}).
\subsection*{Main Result}
In this article, we focus on the subclass of ``barrier solutions'' to the Skorokhod Embedding Problem which includes for instance the Root embedding \cite{Ro69}, the Az\'{e}ma-Yor embedding \cite{AzYo79}, the Vallois embedding \cite{Va83}, and the left-monotone embedding \cite{BeHeTo17}. These solutions can be described as the first time the process $(X_t,B_t)_{t \geq 0}$ hits a barrier in $[0, \infty) \times \mathbb{R}$ (cf.\ Definition \ref{def:Intro}) where $X$ is monotonously increasing, non-negative and $\mathcal{F}$-adapted.
We show that these embeddings are closely related to the concept of shadows introduced by Beiglb\"ock and Juillet in \cite{BeJu16}.
\begin{definition} \label{def:Intro}
\begin{itemize}
\item [(i)] A set $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ is called a barrier if $\mathcal{R}$ is closed and for all $(l,x) \in \mathcal{R}$ and $l \leq l'$ we have $(l',x) \in \mathcal{R}$.
\item [(ii)] Let $\xi$ and $\zeta$ be finite measures on $\mathbb{R}$. We say that $\xi$ is a submeasure of $\zeta$ if $\xi[A] \leqp \zeta[A]$ for all $A \in \mathcal{B}(\mathbb{R})$, denoted by $\xi \leqp \zeta$.
\item [(iii)] Let $\eta$ and $\zeta$ be finite measures on $\mathbb{R}$. A finite measure $\xi$ that satisfies $\eta \leqc \xi \leqp \zeta$ and $\xi \leqc \xi'$ for all $\xi'$ with $\eta \leqc \xi' \leqp \zeta$, is called the shadow of $\eta$ in $\zeta$ and is denoted by $\shadow{\zeta}{\eta}$.
\end{itemize}
\end{definition}
We want to mention that in the literature barriers defined as in Definition \ref{def:Intro}(i) are sometimes called ``right-barriers'' in contrast to ``left-barriers''.
The shadow $\shadow{\zeta}{\eta}$ exists whenever the set of possible candidates is not empty, i.e.\ if there exists $\xi$ such that $\eta \leqc \xi \leqp \zeta$.
This existence result was first shown by Rost \cite{Ro71}. Later Beiglb\"ock and Juillet \cite{BeJu16} rediscovered this object in the context of martingale optimal transport and coined the name shadow.
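The following elementary example, which is not taken from the literature above but follows directly from Definition \ref{def:Intro}, may help to illustrate the concept: let $\eta = \frac{1}{2}\delta_0$ and $\zeta = \frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_{1}$. Any candidate $\xi$ with $\eta \leqc \xi \leqp \zeta$ is of the form $a \delta_{-1} + b \delta_1$ with $a + b = \frac{1}{2}$ (equal mass) and $b - a = 0$ (equal barycenter), so that
\begin{equation*}
\shadow{\zeta}{\eta} = \frac{1}{4}\delta_{-1} + \frac{1}{4}\delta_1.
\end{equation*}
In particular, the shadow of $\eta$ in $\zeta$ is in general not of the form $\zeta$ restricted to a Borel set.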
In the following we use the notation $\Law(X;A)$ for the (sub-)probability measure which is given by the push-forward of $X$ under the restriction of $\mathbb{P}$ to the event $A$ (cf.\ Section \ref{ssec:Notation}).
\begin{theorem} \label{thm:intro}
Let $\mu \leqc \nu$ and $\tau$ a solution of $\mathrm{SEP}(\mu,\nu)$.
The following are equivalent:
\begin{itemize}
\item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that
\begin{equation*}
\tau = \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\} \quad a.s.
\end{equation*}
\item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have
\begin{equation} \label{eq:ShadowResid}
\mathrm{Law}(B_{\tau}; \tau \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)}.
\end{equation}
\end{itemize}
Moreover, we may choose $T_l = \inf \{t \geq 0: X_t \geq l\}$ or $X_t := \sup \{l \geq 0: T_l \leq t\}$ and $\mathcal{R} := \{ (l,x) \in [0, \infty) \times \mathbb{R} : U_{\Law(B_{T_l \land \tau})} (x) = U_{\nu}(x) \}$, respectively, where $U_\cdot$ denotes the potential function of a finite measure (see Definition \ref{def:PotentialFunction}).
\end{theorem}
\begin{remark}
We want to stress that this theorem also holds for randomized stopping times (cf.\ Theorem \ref{thm:MainEqui}). This concerns the implication $(ii) \Rightarrow (i)$, as part (i) already ensures that the randomized stopping time is induced by a (non-randomized) stopping time.
\end{remark}
To the best of our knowledge, the only known connection between shadows and the Skorokhod Embedding Problem is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling (see below). Theorem \ref{thm:intro} shows that this connection is not by accident, but just a special case of an intimate connection between shadows and barrier solutions rooted in potential theory.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[->] (0,-2.1) -- (0,2.1);
\node[above] at (0,2) {$B_t$};
\draw[->] (0,0) -- (4,0);
\node[right] at (4,0) {$t$};
\fill[fill = cyan, opacity = 0.2] (1,0) -- (2,1) -- (0,2) -- (4,2) -- (4,0) -- (1,0);
\fill[fill = cyan, opacity = 0.2] (0,-2) -- (2,-1.5) -- (3,-1) -- (1,0) -- (4,0) -- (4,-2) -- (0,-2);
\node[below] at (3.5,2) {$\mathcal{R}$};
\draw[color = red,thick] (1.5,0.5) -- (1.5,1.25);
\draw[color = green,thick] (1.5,0.5) -- (2,1) -- (1.5,1.25);
\draw[color = red,thick] (1.5,-0.25) -- (1.5,-1.625);
\draw[color = green,thick] (1.5,-0.25) -- (3,-1) -- (2,-1.5) -- (1.5,-1.625);
\draw[dashed,thin,color=gray] (1.5,-2) -- (1.5,2);
\node[below] at (1.5,-2) {$l$};
\fill[fill = red, opacity = 0.2] (6.375,-1.5) -- (6.375,-1.4) -- (6.5,-1.37) -- (7.75,-1.15) -- (7.75,-1.5);
\fill[fill = red, opacity = 0.2] (8.5,-1.5) -- (8.5,-1.2) -- (9,-1.3) -- (9.25,-1.35) -- (9.25,-1.5);
\draw[dotted, domain = 6:10,smooth,variable=\t]
plot({\t},{0.35*exp(-(\t-8)*(\t-8)/2)-1.5});
\draw[thick,red, domain = 6.375:7.75,smooth,variable=\t]
plot({\t},{0.35*exp(-(\t-8)*(\t-8)/2)-1.5});
\draw[thick,red, domain = 8.5:9.25,smooth,variable=\t]
plot({\t},{0.35*exp(-(\t-8)*(\t-8)/2)-1.5});
\draw[dotted, domain = 6:7,smooth,variable=\t]
plot({\t},{0.5*exp(-(\t-6.4)*(\t-6.5)/0.06)+0.5});
\draw[dotted, domain = 7:8,smooth,variable=\t]
plot({\t},{1.25*exp(-(\t-8)*(\t-8)/0.12)+0.5});
\draw[dotted, domain = 8:9,smooth,variable=\t]
plot({\t},{1.25*exp(-(\t-8)*(\t-8)/0.24)+0.5});
\draw[dotted, domain = 9:10,smooth,variable=\t]
plot({\t},{0.45*exp(-(\t-9.6)*(\t-9.5)/0.1)+0.5});
\fill[fill = green, opacity = 0.2] (6.375,0.5) -- (6.375,1) -- (6.45,1.02) -- (6.6,0.9) -- (6.75,0.6) -- (6.9,0.5);
\fill[fill = green, opacity = 0.2] (7.2,0.5) -- (7.3,0.5)-- (7.5,0.63) -- (7.6,0.8) -- (7.75,1.28) -- (7.75,0.5);
\fill[fill = green, opacity = 0.2] (8.5,0.5) -- (8.5,0.95) -- (8.7,0.65) -- (8.85,0.55) -- (9,0.5);
\fill[fill = green, opacity = 0.2] (9,0.5) -- (9.25,0.7) -- (9.25,0.5);
\draw[thick,green, domain = 6.375:7,smooth,variable=\t]
plot({\t},{0.5*exp(-(\t-6.4)*(\t-6.5)/0.06)+0.5});
\draw[thick,green, domain = 7:7.75,smooth,variable=\t]
plot({\t},{1.25*exp(-(\t-8)*(\t-8)/0.12)+0.5});
\draw[thick,green, domain = 8.5:9,smooth,variable=\t]
plot({\t},{1.25*exp(-(\t-8)*(\t-8)/0.24)+0.5});
\draw[thick, green, domain = 9:9.25,smooth,variable=\t]
plot({\t},{0.45*exp(-(\t-9.6)*(\t-9.5)/0.1)+0.5});
\draw[->] (5.9,0.5) -- (10.1,0.5);
\draw[->] (5.9,-1.5) -- (10.1,-1.5);
\end{tikzpicture}
\caption{This is a sketch of the support and densities of the measures appearing in \eqref{eq:RootExpl}. The supports of $\Law((l,B_l); \tau \geq l)$ (red) and $\Law((\tau,B_\tau); \tau \geq l)$ (green) are shown on the left and the densities of $\Law(B_l; \tau \geq l)$ (red) and $\Law(B_\tau; \tau \geq l)$ (green) on the right.}
\label{fig:RootExp}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[->] (0,-2.1) -- (0,2.1);
\node[above] at (0,2) {$B_t$};
\draw[->] (0,0) -- (4,0);
\node[right] at (4,0) {$e^{-B_0}$};
\draw[dotted, domain = 0.14:4,smooth,variable=\t]
plot({\t},{-ln(\t)});
\fill[fill = cyan, thick, opacity = 0.2] (0,2) --(0,1.5) -- (0.368,1.5) -- (0.5,1.234) -- (0.6,1.091) -- (0.75,0.932) -- (1,0.75) -- (1.25,0.626) -- (1.649,0.5) -- (1.649,-1) -- (0.368,-1.5) -- (0,-1.5) -- (0,-2) -- (4,-2) -- (4,2);
\draw[cyan,opacity = 0.2, domain = 0.368:1.649,smooth,variable=\t]
plot({\t},{1/6*ln(\t)*ln(\t) - 7/12 *ln(\t) +3/4});
\draw[cyan, opacity = 0.2, thick, domain = 0.368:1.649,smooth,variable=\t]
plot({\t},{1/6*ln(\t)*ln(\t)+5/12*ln(\t)-5/4});
\node[below] at (3.5,2) {$\mathcal{R}$};
\draw[red, thick,domain = 1:1.649,smooth,variable=\t]
plot({\t},{-ln(\t)});
\draw[blue, thick,domain = 1.649:2.718,smooth,variable=\t]
plot({\t},{-ln(\t)});
\draw[green, thick, domain = 1:1.649,smooth,variable=\t]
plot({\t},{1/6*ln(\t)*ln(\t)+5/12*ln(\t)-5/4});
\draw[green, thick, domain = 1:1.649,smooth,variable=\t]
plot({\t},{1/6*ln(\t)*ln(\t) - 7/12 *ln(\t) +3/4});
\draw[dashed,thin,color=gray] (1,-2) -- (1,2);
\node[below] at (1,-2) {$l$};
\draw[->] (5.9,0.5) -- (10.1,0.5);
\draw[dotted] (7,-1.5) -- (7,-1) -- (9,-1) -- (9,-1.5);
\fill[fill = blue, opacity = 0.2] (7,-1.5) -- (7,-1) -- (7.5,-1) -- (7.5,-1.5);
\draw[color = blue,thick] (7,-1) -- (7.5,-1);
\fill[fill = red, opacity = 0.2] (7.5,-1.5) -- (7.5,-1) -- (8,-1) -- (8,-1.5);
\draw[color = red,thick] (7.5,-1) -- (8,-1);
\draw[->] (5.9,-1.5) -- (10.1,-1.5);
\draw[dotted] (6.5,0.5) -- (6.5,1) -- (7.5,1) -- (7.5,0.5);
\draw[dotted] (8.5,0.5) -- (8.5,1) -- (9.5,1) -- (9.5,0.5);
\fill[fill = green, opacity = 0.2] (6.75,0.5) -- (6.75,1) -- (7,1) -- (7,0.5);
\draw[thick, color = green] (6.8,1) -- (7,1);
\fill[fill = blue, opacity = 0.2] (7,0.5) -- (7,1) -- (7.5,1) -- (7.5,0.5);
\draw[color = blue,thick] (7,1) -- (7.5,1);
\fill[fill = green, opacity = 0.2] (8.5,0.5) -- (8.5,1) -- (8.75,1) -- (8.75,0.5);
\draw[thick, color = green] (8.5,1) -- (8.8,1);
\end{tikzpicture}
\caption{This is a sketch of the support and densities of the measures appearing in \eqref{eq:LMExpl}. The supports of $\Law((l,B_0); B_0 \leq - \ln(l), \tau > 0)$ (red), $\Law((l,B_0); B_0 \leq - \ln(l), \tau = 0)$ (blue) and $\Law((\tau,B_{\tau}); B_0 \leq - \ln(l), \tau > 0)$ (green) are shown on the left and the densities of $\Law(B_0; B_0 \leq - \ln(l), \tau > 0)$ (red), $\Law(B_0; B_0 \leq - \ln(l), \tau = 0)$ (blue) and $\Law(B_{\tau}; B_0 \leq - \ln(l), \tau > 0)$ (green) on the right.}
\label{fig:LmExp}
\end{center}
\end{figure}
Since the time change in Theorem \ref{thm:intro} is given by $T_l = \inf \{t \geq 0 : X_t \geq l\}$ it is straightforward to compute the time changes for well known examples.
In the case of the Root-embedding, we have $X^{r}_t := t$, $T^r_l = l$ and Property \eqref{eq:ShadowResid} turns into
\begin{equation}\label{eq:RootExpl}
\Law(B_\tau; \tau \geq l) = \shadow{\nu}{\Law(B_l; \tau \geq l)}
\end{equation}
for all $l \geq 0$.
The measure $\Law(B_\tau; \tau \geq l)$ is the projection of $\Law((\tau,B_{\tau}); \tau \geq l)$ onto the second (the spatial) component. In the SEP context the joint law of $(\tau,B_\tau)$ describes when and where the Brownian motion is stopped. Since $\tau$ is a barrier stopping time the support of $\Law((\tau,B_{\tau}); \tau \geq l)$ is on the boundary of the barrier intersected with $[l,\infty)\times \R$. This is depicted on the left hand side of Figure \ref{fig:RootExp}. By \eqref{eq:RootExpl} we can characterize this measure using information from time $l$ only. For each $l \geq 0$, it is given
as the shadow of $\Law(B_l; \tau \geq l)$ in the prescribed terminal distribution $\nu$.
We have a similar situation in the case of the left-monotone embedding. We have $X^{lm}_t = \exp(-B_0)$,
\begin{equation*}
T^{lm}_l = \begin{cases}
0 & \exp(-B_0) \geq l \\
+ \infty &\exp(-B_0) < l
\end{cases}
\end{equation*}
and Property \eqref{eq:ShadowResid} becomes
\begin{equation} \label{eq:LMExpl}
\Law(B_{\tau}; B_0 \leq -\ln(l)) = \shadow{\nu}{\mathrm{Law}(B_0; B_0 \leq - \ln(l) )}
\end{equation}
for all $l \geq 0$. Again the measure $\Law(B_\tau; \tau \geq l)$ is the projection of $\Law((\tau,B_{\tau}); \tau \geq l)$ onto the second component and in the SEP context the latter measure is supported on the boundary of the barrier after time $l$ (left side of Figure \ref{fig:LmExp}). Recall that in the left-monotone phase space, the Brownian motion is only moving vertically. This time the characterization of $\Law(B_\tau; \tau \geq l)$ via the shadow of $\mathrm{Law}(B_0; B_0 \leq - \ln(l) )$ into $\nu$ is completely independent of $\tau$. In particular, \eqref{eq:LMExpl} yields that $\tau$ is the left-monotone embedding of $\mu$ into $\nu$ if and only if $(B_0,B_{\tau})$ is the left-curtain coupling of $\mu$ and $\nu$ (cf.\ \cite{BeJu16}).
The shadow $\shadow{\nu}{\eta}$ of a measure $\eta$ in the probability measure $\nu$ is the most concentrated (in the sense of $\leqc$) submeasure of $\nu$ which can be reached by an embedding of $\eta$ into $\nu$ via a (randomized) $\mathcal{F}$-stopping time (cf.\ Lemma \ref{lemma:ConvOrder}). Hence, Theorem \ref{thm:intro} characterizes barrier solutions in general as those solutions $\tau$ for which there exists a family of random times $(T_l)_{l \geq 0}$ such that, for all $l \geq 0$, the mass which is not stopped before $T_l$ under $\tau$ is allocated by $\tau$ as concentrated as possible in the target distribution $\nu$, without interfering with the mass that is stopped before $T_l$.
\subsection*{Interpolation}
If the time-change $(T_l)_{l \geq 0}$ is measurable w.r.t.\ the completion of the natural filtration $\mathcal{F}^B$ generated by the Brownian motion (as it is the case for the Root-embedding and the left-monotone embedding), we can assume that the Brownian motion $B$ is defined on the canonical path space $\Omega = C([0, \infty))$ and we can consider the natural shift operator $\theta$. In this case, for all $\lambda \in (0,\infty)$ we obtain an interpolation $(R^\lambda_l)_{l \geq 0}$ between two $\mathcal{F}$-time-changes $(T_l ^1)_{l \geq 0}$ and $(T_l^2)_{l \geq 0}$ by
\begin{equation*}
R^\lambda_l := T^1 _{l \land \lambda} + (T^2_{l-\lambda} \circ \theta_{T^1_\lambda}) \1_{\{l \geq \lambda\}} = \begin{cases}
T_l^1 & l \leq \lambda \\
T_\lambda^1 + T^2 _l \circ \theta_{T^1_\lambda} & l > \lambda
\end{cases}.
\end{equation*}
For the Root time-change $(T_l ^{r})_{l \geq 0}$ and the left-monotone time-change $(T_l^{lm})_{l \geq 0}$ the interpolation becomes
\begin{equation*}
R^{\lambda} _l :=
T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \1_{\{l \geq \lambda\}} = \begin{cases}
l & l \leq \lambda \\
l & \exp(-B_\lambda) + \lambda \geq l > \lambda \\
+ \infty & \exp(-B_\lambda) + \lambda < l, l > \lambda
\end{cases}.
\end{equation*}
A solution $\tau ^{\lambda}$ to $\SEP(\mu,\nu)$ that satisfies property \eqref{eq:ShadowResid} w.r.t.\ $(R^\lambda _l)_{l \geq 0}$, is by Theorem \ref{thm:intro} a barrier solution w.r.t.\ the level-process
\begin{equation} \label{eq:lvlPrc}
X_t ^\lambda := \sup \{l \geq 0 : R_l^\lambda \leq t\} = \begin{cases}
t & t < \lambda \\
\lambda + \exp(-B_0) & t \geq \lambda \end{cases}.
\end{equation}
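Note that, at least formally, the family $(R^\lambda_l)_{l \geq 0}$ recovers the two building blocks in the limiting cases: for $\lambda > l$ the second summand vanishes and $T^{r}_{l \land \lambda} = l$, while $T^{r}_0 = 0$ and $\theta_0 = \mathrm{id}$, so that for every $l \geq 0$
\begin{equation*}
\lim _{\lambda \rightarrow \infty} R^{\lambda}_l = T^{r}_l = l \qquad \text{and} \qquad R^{0}_l = T^{lm}_l.
\end{equation*}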
A natural guess is that $\lambda \mapsto \tau ^{\lambda}$ is a reasonable interpolation between the Root embedding ($\lambda \uparrow + \infty$) and the left-monotone embedding $(\lambda \downarrow 0)$. This is indeed the case:
\begin{proposition} \label{prop:Interpolation}
Let $\lambda \in (0, \infty)$. We define the stochastic process $(X^{\lambda}_t)_{t \geq 0}$ as in \eqref{eq:lvlPrc}.
There exists a barrier $\mathcal{R}^{\lambda} \subset [0, \infty) \times \mathbb{R}$ such that the first hitting time
\begin{equation*}
\tau ^{\lambda} := \inf \{t \geq 0: (X_t ^{\lambda}, B_t) \in \mathcal{R}^{\lambda} \}
\end{equation*}
is a solution to $\SEP(\mu,\nu)$. Moreover, $\Law(B,\tau ^\lambda)$ (as a measure on $\Omega \times [0, \infty))$, converges weakly to $\Law(B,\tau ^r)$ as $\lambda \rightarrow \infty$ and, if $\mu$ is atomless, converges weakly to $\Law(B,\tau ^{lm})$ as $\lambda \rightarrow 0$.
\end{proposition}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[->] (0,-2.1) -- (0,2.1);
\node[above] at (0,2) {$B_t$};
\draw[->] (0,0) -- (5,0);
\node[right] at (5,0) {$X ^{\lambda_1}_t$};
\draw[dashed,thin] (1.5,-2) -- (1.5,2);
\draw[dotted, domain = 1.75:5,smooth,variable=\t]
plot({\t},{-log2(\t-1.5)});
\fill[fill = cyan, opacity = 0.2] (0,2) -- (5,2) -- (5,0) -- (4,0) -- (2,1) -- (0,1.8);
\fill[fill = cyan, opacity = 0.2] (0,-2) -- (5,-2) -- (5,0) -- (4,0) -- (4,-0.5) -- (3,-1.25) -- (0,-1.8);
\node[below] at (1.5,-2) {$\lambda_1$};
\node[below] at (4,2) {$R^{\lambda_1}$};
\draw[red] (0,0) -- ++ (0.1,0.2) -- ++ (0.1,-0.1) -- ++ (0.1,0.1) -- ++ (0.1,-0.3) -- ++ (0.1,0.4) -- ++ (0.1,-0.1) -- ++ (0.1,0.4) -- ++ (0.1,-0.1) -- ++ (0.1,-0.1) -- ++ (0.1,0.3) -- ++ (0.1,-0.5)-- ++ (0.1,0.2)-- ++ (0.1,-0.3)-- ++ (0.1,-0.1) -- ++ (0.1,0.2);
\draw[green] (0,-0.5) -- ++ (0.1,-0.2) -- ++ (0.1,0.1) -- ++ (0.1,-0.1) -- ++ (0.1,0.2) -- ++ (0.1,0.1) -- ++ (0.1,-0.4) -- ++ (0.1,0.1) -- ++ (0.1,-0.5) -- ++ (0.1,-0.1) -- ++ (0.1,0.3) -- ++ (0.1,-0.5)-- ++ (0.1,0.2)-- ++ (0.1,-0.3);
\draw[red] (2.305,-1) -- (2.305, 0.85);
\end{tikzpicture}
\begin{tikzpicture}
\draw[->] (0,-2.1) -- (0,2.1);
\node[above] at (0,2) {$B_t$};
\draw[->] (0,0) -- (5,0);
\node[right] at (5,0) {$X ^\lambda_t$};
\draw[dashed,thin] (2.5,-2) -- (2.5,2);
\draw[dotted, domain = 2.75:5,smooth,variable=\t]
plot({\t},{-log2(\t-2.5)});
\fill[fill = cyan, opacity = 0.2] (0,2) -- (5,2) -- (5,0) -- (3.5,0) -- (3,0.3) -- (2,0.9) -- (1.5,1.2)-- (0,1.8);
\fill[fill = cyan, opacity = 0.2] (0,-2) -- (5,-2) -- (5,0) -- (3.5,0) -- (4,-0.25) -- (4.5, -1) -- (3,-1.25) -- (0,-1.8);
\node[below] at (2.5,-2) {$\lambda_2$};
\node[below] at (4,2) {$R^{\lambda_2}$};
\draw[red] (0,0) -- ++ (0.1,0.2) -- ++ (0.1,-0.1) -- ++ (0.1,0.1) -- ++ (0.1,-0.3) -- ++ (0.1,0.4) -- ++ (0.1,-0.1) -- ++ (0.1,0.4) -- ++ (0.1,-0.1) -- ++ (0.1,-0.1) -- ++ (0.1,0.3) -- ++ (0.1,-0.5)-- ++ (0.1,0.2)-- ++ (0.1,-0.3)-- ++ (0.1,-0.1) -- ++ (0.1,0.2) -- ++ (0.1,-0.05) -- ++ (0.1,0.3) -- ++ (0.1,0.1) -- ++ (0.1,-0.05) -- ++ (0.1,0.3) -- ++ (0.05,0.1) ;
\draw[green] (0,-0.5) -- ++ (0.1,-0.2) -- ++ (0.1,0.1) -- ++ (0.1,-0.1) -- ++ (0.1,0.2) -- ++ (0.1,0.1) -- ++ (0.1,-0.4) -- ++ (0.1,0.1) -- ++ (0.1,-0.5) -- ++ (0.1,-0.1) -- ++ (0.1,0.3) -- ++ (0.1,-0.5)-- ++ (0.1,0.2)-- ++ (0.1,-0.3);
\
\end{tikzpicture}
\caption{The sketch of two sample paths of $(X^\lambda_t,B_t)_{t \in [0,\tau ^{\lambda}]}$ in the context of Proposition \ref{prop:Interpolation} for two different $\lambda_1 < \lambda _2$ in $(0, \infty)$.}
\end{center}
\end{figure}
\begin{remark}
The choice of the Root embedding and the left-monotone embedding as the endpoints of the interpolation is partially arbitrary. As long as both time-changes are $\mathcal{F}^B$-measurable, this procedure can be applied to any two barrier solutions to obtain a new mixed barrier solution (see Lemma \ref{lemma:Nesting}). The continuity and convergence is then a question of the stability properties of the corresponding embeddings.
Other approaches to interpolate (in some sense) between two different barrier solutions can be found in \cite{CoHo07} and \cite{GaObZo19}.
\end{remark}
\subsection*{Multi-Marginal Embeddings}
Theorem \ref{thm:intro} can be extended to the case that the barrier solution is ``delayed'', in the sense that the solution can be written as the first hitting time of a barrier after it surpassed a fixed stopping time $\sigma$.
\begin{proposition} \label{prop:ShiftedThm}
Let $\tau$ be a $\mathcal{F}$-stopping-time that solves $\SEP(\mu,\nu)$. Let $\sigma \leq \tau$ be another $\mathcal{F}$-stopping time. The following are equivalent:
\begin{itemize}
\item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that
\begin{equation*}
\tau = \inf \{t \geq \sigma : (X_t,B_t) \in \mathcal{R} \} \quad a.s.
\end{equation*}
\item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have
\begin{equation*}
\Law(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\Law(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)}.
\end{equation*}
\end{itemize}
\end{proposition}
Motivated by financial applications, there has been an increased interest in the multi-marginal Skorokhod Embedding Problem and in particular in multi-marginal barrier solutions (cf.\ \cite{BeCoHu17b, NuStTa17}). Since this is essentially a sequence of delayed barrier solutions, we can extend Theorem \ref{thm:intro} to this case by an inductive application of Proposition \ref{prop:ShiftedThm}.
\begin{corollary} \label{cor:MultiMarginal}
Let $\mu \leqc \nu _1 \leqc ... \leqc \nu_n$ be probability measures in convex order and $\tau _1 \leq ... \leq \tau _n$ an increasing sequence of uniformly integrable $\mathcal{F}$-stopping times such that $\tau _i$ is a solution to $\SEP(\mu, \nu_i)$ for all $1 \leq i \leq n$. The following are equivalent:
\begin{itemize}
\item [(i)] There exists a suitable process $(X_t)_{t \geq 0}$, and closed barriers $\mathcal{R}^1,...,\mathcal{R}^n \subset [0, \infty) \times \mathbb{R}$ such that
\begin{align*}
\tau ^1 &= \inf\{ t \geq 0: (X_t,B_t) \in \mathcal{R}^1\} \quad\text{ and} \\
\tau ^i &= \inf\{ t \geq \tau ^{i-1}: (X_t,B_t) \in \mathcal{R}^i\} \quad \text{for all } 1 \leq i \leq n.
\end{align*}
\item [(ii)] There exists a suitable time-change $(T_l)_{l \geq 0}$, such that for all $l\geq 0$ we have
\begin{align*}
\Law(B_{\tau ^1}; \tau ^1 \geq T_l) &= \shadow{\nu _1}{\Law(B_{T_l}; \tau^1 \geq T_l)} \quad\text{ and} \\
\Law(B_{\tau ^i}; \tau ^i \geq \tau^{i-1} \lor T_l) &= \shadow{\nu _i}{\Law(B_{\tau ^{i-1} \lor T_l}; \tau ^i \geq \tau^{i-1} \lor T_l)} \quad \text{for all } 1 \leq i \leq n.
\end{align*}
\end{itemize}
\end{corollary}
\subsection*{Another Perspective on Theorem \ref{thm:intro}}
We will prove Theorem \ref{thm:intro} in Section \ref{sec:MainResult} using potential theory. However, there is an alternative point of view on this theorem using Choquet-type representations of the barrier stopping time $\tau$ and the terminal law $\mathsf{Law}(B_\tau)$ of the stopped process.
The most primitive version of a barrier embedding is a first hitting time of the form
\begin{equation*}
\tau^F := \inf \{t \geq 0 : B_t \in F\} = \inf \{ t \geq 0 : (t,B_t) \in [0, \infty) \times F\}
\end{equation*}
where $F \subset \mathbb{R}$ is a closed set. The terminal distribution $\Law(B_{\tau ^F})$ w.r.t.\ this stopping time can be characterized using the notion of Kellerer dilations. Given a closed set $F \subset \mathbb{R}$, the Kellerer dilation is defined as the probability kernel
\begin{equation} \label{eq:KellererDilation}
K^F(x,dy) = \begin{cases}
\frac{x^+ - x}{x^+ - x^-} \delta_{x^-} + \frac{x - x^-}{x^+ - x^-} \delta_{x^+} \quad & x \not \in F \\
\delta_x & x \in F
\end{cases}
\end{equation}
where $x^+ = \inf (F \cap [x, \infty))$ and $x^- = \sup (F \cap (-\infty,x])$. As a direct consequence of \cite[Satz 25]{Ke73}, for every closed set $F \subset \mathbb{R}$ a stopping time $\tau$ satisfies $\tau = \tau ^F$ a.e.\ if and only if $\Law(B_\tau) = \Law(B_0)K^F$.
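For instance, for $F = \{-1,1\}$ and $x = 0$ we have $x^- = -1$ and $x^+ = 1$, so that
\begin{equation*}
K^F(0,dy) = \tfrac{1}{2} \delta_{-1} + \tfrac{1}{2} \delta_{1}.
\end{equation*}
This is consistent with the fact that a Brownian motion started in $0$ and stopped at $\tau^F$, the first hitting time of $\{-1,1\}$, ends up in $-1$ or $1$ with equal probability.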
The main idea behind Theorem \ref{thm:intro} is now the following: In the same way that a barrier solution $\tau$ can be represented as a composition of first hitting times $(\tau ^{F_t})_{t \geq 0}$ for an increasing family of closed sets $(F_t)_{t \geq 0}$, the terminal law $\mathsf{Law}(B_\tau)$ w.r.t.\ a stopping time $\tau$ satisfying the shadow relation \eqref{eq:ShadowResid} can be represented using Kellerer dilations $(K^{F_a})_{a \in [0,1]}$ for an increasing family of closed sets $(F_a)_{a \in [0,1]}$. Since, for fixed $F$, $\tau ^F$ and $K^F$ are in a one-to-one correspondence, these two representations (one on the level of stopping times and one on the level of target distributions) are two sides of the same coin. In fact, up to reparametrization of the index set, these two families can be chosen identical. Let us explain these two representations in more detail.
To keep the notation simple, we will only consider the case of the Root-embedding ($X_t^r = t, T_l ^r = l$). For all $\mathcal{F}^B$-stopping times $\tau_1, \tau _2$ and $s \geq 0$, we define the composition
\begin{equation*}
C_{s}(\tau _1, \tau _2) := \tau _1 \land s + \tau _2 \circ \theta_{\tau _1 \land s}
\end{equation*}
where $\theta$ denotes the shift operator on the path space.
The composition $C_{s}(\tau _1, \tau _2)$ is again a stopping time. We also inductively define the stopping times
\begin{equation*}
C_{s_1, ... , s_n}(\tau_1, ... , \tau _n) := C_{s_n}(C_{s_1, ... , s_{n-1}}(\tau _1, ... , \tau _{n-1}), \tau _n)
\end{equation*}
for $0 \leq s_1 \leq ... \leq s_n$. For all $s > 0$ and for all closed sets $F$, we have $C_s(\tau^F,\tau^F) = \tau^F$. Conversely, if there exist $s \geq 0$ and stopping times $\tau_1, \tau_2$ s.t.\ $\tau^F = C_s(\tau_1,\tau_2)$ for a closed set $F \subset \mathbb{R}$, then $\tau_1 \land s = \tau ^F \land s$ and $\tau _2 \circ \theta_{\tau _1 \land s} = \tau^F \circ \theta_{\tau _1 \land s}$. Therefore, stopping times of the form $\tau^F$ are ``extremal'' or ``atomic'' w.r.t.\ the composition operation $C$.
\begin{lemma}
Let $\tau$ be a stopping time. The following are equivalent:
\begin{itemize}
\item[(i)] There exists a right-barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ s.t.\ $\tau = \inf \{t \geq 0 : (t,B_t) \in \mathcal{R}\}$.
\item[(ii)] There exists an increasing family of closed sets $(F_t)_{t \geq 0}$ such that
\begin{equation*}
\tau = \lim _{n \rightarrow \infty} C_{2^{-n}, ... , n} (\tau ^{F_{2^{-n}}}, ... , \tau ^{F_n}).
\end{equation*}
\end{itemize}
In this case a possible right-barrier is given by $\mathcal{R} := \overline{\bigcup _{t \geq 0} [t, \infty) \times F_t}$.
\end{lemma}
The proof of this equivalence is straightforward using the continuity of Brownian motion. We omit the details.
On the level of measures, we obtain a similar representation of the shadow.
For two probability measures $\zeta_1, \zeta _2$ and all $\alpha \in [0,1]$ the convex combination $(1- \alpha) \zeta_1 + \alpha \zeta_2$ is again a probability measure. By a result of Kellerer \cite[Theorem 1]{Ke73}, for every probability measure $\eta$ the extremal elements of the convex set $\{ \zeta : \eta \leqc \zeta\}$ are given by $\left\{ \eta K^F \, : \, F \subset \mathbb{R} \text{ closed}\right\}$.
\begin{lemma}
Let $\tau$ be a stopping time and set $l_b := \sup \{l \geq 0: \mathbb{P}[\tau \geq l] \geq b\}$ for $b \in [0,1]$. The following are equivalent:
\begin{itemize}
\item [(i)] For all $l \geq 0$ we have $\Law(B_\tau; \tau \geq l) = \shadow{\nu}{\Law(B_l; \tau \geq l)}$.
\item [(ii)] There exists an increasing family of closed sets $(F_a)_{a \in [0,1]}$ such that
\begin{equation*}
\Law(B_\tau) = \int _0 ^1 \eta_{1-a} K^{F_a} \de a
\end{equation*}
where the probability measures $\eta_a$ are defined by $\eta _a := \lim_{\varepsilon \rightarrow 0} \varepsilon ^{-1} \left( \overline{\eta}^{a + \varepsilon} - \overline{\eta}^a \right)$, $a \in [0,1]$, and $\overline{\eta}^\alpha := \Law(B_\tau; \tau \geq l_\alpha) - \frac{\mathbb{P}[\tau \geq l_\alpha] - \alpha}{\mathbb{P}[\tau = l_\alpha]}\Law(B_\tau; \tau = l_\alpha)$ for $\alpha \in [0,1]$.
\end{itemize}
In this case we have $\shadow{\nu}{\Law(B_{l_b}; \tau \geq l_b)} = \int_0 ^b \eta_{1-a} K^{F_a} \de a$ for all $b \in [0,1]$.
\end{lemma}
Similarly to \cite[Proposition 2.7]{BeJu16b}, one can show that (i) implies (ii). The reversed implication is an application of Lemma \ref{lemma:ShadowDecomp}. We leave the details to the reader.
\section{Related Literature}
The Skorokhod Embedding Problem goes back to Skorokhod's work \cite{Sk65} in 1965. After his own solution to the embedding problem, this problem gained considerable attention in the literature and a wide range of different embeddings exploiting different mathematical tools was found. The survey \cite{Ob04} alone covers more than 20 different solutions. Moreover, several interesting variants of the Skorokhod Embedding Problem are considered. Recently, there has been increased interest in a variant of the Skorokhod Embedding Problem which asks for embeddings that minimize or maximize a predetermined cost function of space and time. This variant of the Skorokhod Embedding Problem has a direct connection to robust mathematical finance, which was first noticed by Hobson \cite{Ho98}. For further background we refer to \cite{Ho03}. A novel mathematical exploration of properties of the optimal Skorokhod Embedding Problem in combination with optimal transport can be found in \cite{BeCoHu17}. Further variants are for instance the extensions to the embedding of multiple distributions (cf.\ \cite{BeCoHu17b}) and to higher dimensions (cf.\ \cite{GhKiPa19}).
Among the first solutions to the Skorokhod Embedding Problem was Root's construction \cite{Ro69} of a barrier solution in the time-space phase space in 1969. Shortly after, Rost \cite{Ro76} proved that the Root-embedding is the unique embedding with minimal variance among all embeddings and provided an alternative construction of this embedding based on the potential theory for Markov processes. The Root-embedding and properties of the corresponding barrier are still the subject of current research \cite{CoWa13,GaObZo19}. Moreover, the Root-embedding was recently used to construct a counterexample to the Cantelli-conjecture \cite{KlKu15}. The Root-embedding is presumably the most prominent barrier solution to the Skorokhod Embedding Problem. However, there are several other embeddings which can be characterized as first hitting times of barriers in a different phase space \cite{AzYo79, Va83}.
The shadow for finite measures on the real line was introduced by Beiglböck and Juillet \cite{BeJu16} as the main tool in their construction of the left-curtain coupling. They established important properties such as the associativity law and continuity, and coined the name shadow. Nevertheless, the essential concept of the shadow as well as its existence in a very broad framework already appeared in \cite{Ro71}.
The shadow is used to study properties of the left-curtain coupling (cf.\ \cite{BeJu16}, \cite{Ju14}, \cite{HoNo17}, \cite{HoNo21}).
Furthermore, the shadow can be used to construct and characterize a whole family of martingale couplings on the real line \cite{BeJu16b}, as well as finite-step martingales \cite{NuStTa17} and solutions to the peacock problem \cite{BrJuHu20}. To the best of our knowledge, the only known connection with the Skorokhod Embedding Problem so far is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling.
\section{Preliminary Results}
\subsection{Notation} \label{ssec:Notation}
$\Omega$ is a Polish space equipped with the Borel $\sigma$-algebra, $\mathcal{F}$ is a right-continuous filtration on $\Omega$ and $B$ is an $\mathcal{F}$-Brownian motion on the complete filtered probability space $(\Omega, \mathcal{B}(\Omega), \mathbb{P}, \mathcal{F})$. We use the notation $\Law(X;A)$ for the (sub-)probability measure which is given by the push-forward of the random variable $X$ under the restriction of $\mathbb{P}$ to the Borel set $A$. Alternatively, we sometimes use the notation $X_{\#}(\mathbb{P}_{|A})$ for this object.
Further, we denote the set of finite (resp.\ probability) measures on a measurable space $\mathsf{X}$ by $\mathcal{M}(\mathsf{X})$ (resp.\ $\mathcal{P}(\mathsf{X})$). In the case $\mathsf{X} = \mathbb{R}$, we denote by $\MO(\mathbb{R})$ (resp.\ $\PO(\mathbb{R})$) the subset of finite (resp.\ probability) measures with finite first moment.
We equip $\MO(\mathbb{R})$ with the initial topology generated by the functionals $(I_f)_{f \in C_b(\mathbb{R}) \cup \{\vert \cdot \vert\}}$ where
\begin{equation*}
I_f : \MO(\mathbb{R}) \ni \pi \mapsto \int _{\mathbb{R}} f \de \pi \in \mathbb{R},
\end{equation*}
$C_b(\mathbb{R})$ is the set of continuous and bounded functions, and $\vert \cdot \vert$ denotes the absolute value function.
We denote this topology on $\MO(\mathbb{R})$ by $\TO$.
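To illustrate the difference to the topology of weak convergence, consider the sequence $\mu_n := (1- \tfrac{1}{n}) \delta_0 + \tfrac{1}{n} \delta_n$. It converges weakly to $\delta_0$, but
\begin{equation*}
I_{\vert \cdot \vert}(\mu_n) = \int _{\mathbb{R}} \vert x \vert \de \mu_n(x) = 1 \not\rightarrow 0 = I_{\vert \cdot \vert}(\delta_0),
\end{equation*}
so $(\mu_n)_{n \in \mathbb{N}}$ does not converge to $\delta_0$ under $\TO$. In particular, $\TO$ is strictly finer than the topology of weak convergence.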
Finally, we define two order relations on $\MO(\mathbb{R})$. We say that $\mu \in \MO(\mathbb{R})$ is smaller than or equal to $\mu ' \in \MO(\mathbb{R})$ in convex order, $\mu \leqc \mu'$, if
\begin{equation} \label{eq:OrderRelation}
\int _{\mathbb{R}} \varphi \de \mu \leq \int _{\mathbb{R}} \varphi \de \mu '
\end{equation}
holds for all convex $\varphi$ and $\mu$ is smaller than or equal to $\mu'$ in positive order, $\mu \leqp \mu'$, if \eqref{eq:OrderRelation} holds for all non-negative $\varphi$.
\subsection{Randomized Stopping Times}
The product space $\Omega \times [0,\infty)$ equipped with the product topology and Borel $\sigma$-algebra is again a Polish space.
\begin{definition}
A randomized stopping time (RST) w.r.t.\ $\mathbb{P}$ is a subprobability measure $\xi$ on $\Omega \times [0, \infty)$ such that the projection of $\xi$ onto $\Omega$ is $\mathbb{P}$ and there exists a disintegration $(\xi_\omega)_{\omega \in \Omega}$ of $\xi$ w.r.t.\ $\mathbb{P}$ such that
\begin{equation} \label{eq:DecompRST}
\rho _u : \omega \mapsto \inf\{t \geq 0 : \xi_\omega[0,t] \geq u\}
\end{equation}
is an $\mathcal{F}$-stopping time for all $u \in [0,1]$. We call an RST $\xi$ finite if $\xi$ is a probability measure.
\end{definition}
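For example, stopping at time $0$ or at time $1$ according to an independent fair coin flip corresponds to the RST $\xi$ with disintegration
\begin{equation*}
\xi_\omega = \tfrac{1}{2} \delta_0 + \tfrac{1}{2} \delta_1, \quad \omega \in \Omega.
\end{equation*}
In this case, $\rho_u = 0$ for $u \leq \tfrac{1}{2}$ and $\rho_u = 1$ for $u > \tfrac{1}{2}$, so the times defined in \eqref{eq:DecompRST} are deterministic and in particular $\mathcal{F}$-stopping times.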
We equip the space of RST with the topology of weak convergence of measures on $\Omega \times [0, \infty)$, i.e.\ the coarsest topology for which the functionals $\xi \mapsto \int \varphi \de \xi$, $\varphi \in C_b(\Omega \times [0, \infty))$, are continuous. The RST property is preserved under limits in this topology (cf.\ \cite[Corollary 3.10]{BeCoHu17}).
Any $\mathcal{F}$-stopping time $\tau$ naturally induces a RST by $\xi^{\tau} := \Law_{\mathbb{P}}(B,\tau)$. Conversely, we can represent any randomized stopping time as a usual stopping time by enlarging the filtration.
\begin{lemma} [{\cite[Theorem 3.8]{BeCoHu17}}] \label{lemma:ReprRST}
For every RST $\xi$ there exists an $(\mathcal{B}([0,1]) \times \mathcal{F}_t)_{t \geq 0}$-stopping-time $\overline{\tau} ^\xi$ on the probability space $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$ where $\overline{\mathbb{P}}$ is the product of the Lebesgue measure and $\mathbb{P}$ such that
\begin{equation*}
\xi = \Law_{\overline{\mathbb{P}}}(\overline{\mathsf{Id}},\overline{\tau}^\xi)
\end{equation*}
where $\overline{\mathsf{Id}} : (u,\omega) \mapsto \omega$. Moreover, $\overline{B} : (u, \omega) \mapsto B(\omega)$ is a Brownian motion on $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$.
\end{lemma}
This representation is useful to justify the application of known theorems about stopping times to RSTs and will be used in the following. For further literature on randomized stopping times we refer to \cite{BeCoHu17} and references therein.
Provided that $\Law(B_0) = \mu \leqc \nu$, we say that $\xi$ is a solution of $\SEP(\mu,\nu)$ if
\begin{align*}
\sup _{s \geq 0} \int _{\Omega \times [0,\infty)} B_{s \land t} \de \xi(\omega,t) < + \infty \quad \text{and} \quad
((\omega,t) \mapsto B_t(\omega))_{\#} \xi = \nu.
\end{align*}
If $\xi$ is induced by an $\mathcal{F}$-stopping time $\tau$, this definition is consistent with the definition of $\SEP(\mu,\nu)$ in the introduction.
Especially in Section \ref{sec:MainResult} we will use the notational convention that $(\omega,t)$ always refers to an element of $\Omega \times [0, \infty)$. In particular, we will write $\xi[t \geq X]$ instead of $\xi[\{(\omega,t) : t \geq X(\omega)\}]$ where $X$ is a random variable and $\xi$ a RST.
\subsection{Potential Theory} \label{ssec:PotentialTheory}
Potential Theory is known to be a useful tool when dealing with barrier solutions (cf.\ \cite{Ro76}, \cite{CoWa13}) and the shadow (cf.\ \cite{Ro71}, \cite{BeJu16}). Since it is also a central part of our proof of Theorem \ref{thm:intro}, we recall some results below.
\begin{definition} \label{def:PotentialFunction}
Let $\eta \in \MO$. The potential function of $\eta$ is defined by
\begin{equation*}
U_\eta : \mathbb{R} \rightarrow [0, \infty) \quad U_\eta (x) := \int _{\mathbb{R}} |y - x| \de \eta (y).
\end{equation*}
\end{definition}
Since elements of $\MO$ have finite first moments, the potential function is always well-defined.
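For example, $U_{\delta_a}(x) = |x-a|$ for every $a \in \mathbb{R}$, and for $\eta = \frac{1}{2}(\delta_{-1} + \delta_1)$ we obtain
\begin{equation*}
U_\eta(x) = \tfrac{1}{2}\left( |x+1| + |x-1| \right) = \max(1,|x|), \quad x \in \mathbb{R}.
\end{equation*}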
\begin{lemma}[{cf.\ \cite[Proposition 4.2]{BeJu16}, \cite[p.\ 335]{Ob04}}] \label{lemma:ConvOrder}
Let $\mu, \nu \in \mathcal{P}_1(\mathbb{R})$. The following are equivalent:
\begin{itemize}
\item [(i)] $\mu \leqc \nu$
\item [(ii)] $U_\mu \leq U_\nu$
\item [(iii)] There exists a solution to $\SEP(\mu,\nu)$.
\end{itemize}
\end{lemma}
The equivalence between (i) and (ii) is not restricted to probability measures. Since both the convex order and the order of the potential functions are invariant w.r.t.\ scaling with positive factors, for all $\eta, \zeta \in \MO$ with $\eta(\mathbb{R}) = \zeta(\mathbb{R})$ we have $\eta \leqc \zeta$ if and only if $U_{\eta} \leq U_\zeta$.
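For instance, with $\mu = \delta_0$ and $\nu = \frac{1}{2}(\delta_{-1} + \delta_1)$ we have
\begin{equation*}
U_\mu(x) = |x| \leq \max(1,|x|) = U_\nu(x), \quad x \in \mathbb{R},
\end{equation*}
so $\mu \leqc \nu$ by (ii), and indeed the first hitting time of $\{-1,1\}$ by a Brownian motion started in $0$ is a solution to $\SEP(\mu,\nu)$, as guaranteed by (iii).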
\begin{lemma}[{cf.\ \cite[Proposition 4.1]{BeJu16}}]
\label{lemma:characPotF}
Let $m \in [0,\infty)$ and $x^* \in \mathbb{R}$.
For a function $u:\mathbb{R} \rightarrow \mathbb{R}$ the following statements are equivalent:
\begin{enumerate}
\item [(i)] There exists a finite measure $\mu \in \MO$ with mass $\mu(\mathbb{R}) = m$ and barycenter $x^* = \int _{\mathbb{R}} x \de \mu (x)$ such that $U_\mu = u$ .
\item [(ii)] The function $u$ is non-negative, convex and satisfies
\begin{equation} \label{eq:characPotF}
\lim _{x \rightarrow \pm \infty} u(x) - m|x - x^*| = 0.
\end{equation}
\end{enumerate}
Moreover, for all $\mu, \mu' \in \MO$ we have $\mu = \mu'$ if and only if $U_\mu = U_{\mu'}$.
\end{lemma}
\begin{lemma} \label{lemma:PropPotf}
Let $\eta$ be a positive measure on $\mathbb{R}$ and $x \in \mathbb{R}$. If there exists an $\varepsilon > 0$ such that $U_{\eta}$ is affine on $[x-\varepsilon, x+ \varepsilon]$, then $x \not \in \mathrm{supp}(\eta)$.
\end{lemma}
\begin{proof}
The claim follows from the observation that the potential function of the measure $\eta$ satisfies $\frac{1}{2} U_\eta '' = \eta$ in a distributional sense (cf.\ \cite[Proposition 2.1]{HiRo12}).
\end{proof}
\begin{corollary} \label{cor:EquaPotfToZero}
Let $\mu \leqc \nu$ and $\tau$ be a solution to $\SEP(\mu,\nu)$. We have
\begin{equation*}
\mathbb{P}[\tau > 0, U_{\mu}(B_0) = U_{\nu}(B_0)] = 0.
\end{equation*}
\end{corollary}
\begin{proof}
Let $A := \{x \in \mathbb{R} :U_{\mu}(x) = U_{\nu}(x)\}$ and set $\eta := \mathrm{Law}(B_{0}; B_{0} \in A)$. Fubini's Theorem yields
\begin{align*}
0 = \int U_{\nu} - U_{\mu} \de \eta = \mathbb{E}[U_{\eta}(B_\tau) - U_{\eta}(B_0)] = \mathbb{E}\left[\left(U_{\eta}(B_\tau) - U_{\eta}(B_0)\right) \1_{\{\tau > 0\}}\right].
\end{align*}
Since $U_\eta$ is a convex function and $(B_{t \land \tau})_{t \geq 0}$ is a uniformly integrable martingale, the (conditional) Jensen inequality yields that
$U_\eta$ is $\mathbb{P}$-a.s.\ affine at $B_0$ on the set $\{\tau > 0\}$.
Hence, by Lemma \ref{lemma:PropPotf} the claim follows.
\end{proof}
\begin{lemma} \label{lemma:T1Conv}
Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence in $\MO(\mathbb{R})$. The following are equivalent:
\begin{itemize}
\item [(i)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent and there exists a finite measure $\eta \in \MO(\mathbb{R})$ such that
\begin{equation*}
\int _{\mathbb{R}} \varphi \de \mu_n \leq \int _{\mathbb{R}} \varphi \de \eta
\end{equation*}
for all non-negative convex $\varphi$.
\item [(ii)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is convergent under $\TO$.
\item [(iii)] The sequence of potential functions is pointwise convergent and the limit is the potential function of a finite measure.
\end{itemize}
\end{lemma}
\begin{proof}
For the equivalence of (ii) and (iii) and the implication (i)$\Rightarrow$(ii) we refer to \cite[Lemma 3.6]{BrJuHu20} and \cite[Lemma 3.3]{BrJuHu20}. It remains to show that (ii) implies (i). Since $\TO$ is by definition stronger than the weak topology, $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent. Moreover, by \cite[Proposition 7.1.5]{AmGiSa08} the convergence in $\TO$ implies that
\begin{equation*}
\limsup _{K \rightarrow \infty} \sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \1_{\{\vert x \vert \geq K\}} \de \mu_n(x) = 0.
\end{equation*}
Hence, there exists a sequence $(K_m)_{m \in \mathbb{N}}$ with $K_{m+1} \geq K_m \geq 1$ such that $$\sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \1_{\{\vert x \vert \geq K_m\}} \de \mu_n(x) \leq 2^{-m}$$ for all $m \in \mathbb{N}$. The measure
\begin{equation*}
\eta := \sum _{m = 1} ^{\infty} \sup _{n \in \mathbb{N}} \mu_n \left( [-K_m,-K_{m-1}] \cup [K_{m-1},K_m] \right) \left( \delta _{-K_m} + \delta_{K_m} \right)
\end{equation*}
is an element of $\mathcal{M}_1(\mathbb{R})$ which satisfies the desired properties.
\end{proof}
\subsection{Shadows}
Recall the definition of the shadow in Definition \ref{def:Intro}. As direct consequences of this definition we obtain that
\begin{equation*}
\eta \leqp \nu \Rightarrow \shadow{\nu}{\eta} = \eta \quad \text{and} \quad \eta \leqc \eta' \Rightarrow \shadow{\nu}{\eta} \leqc \shadow{\nu}{\eta'}.
\end{equation*}
In the following we collect further properties of the shadow.
\begin{lemma}[{\cite[Theorem 4.8]{BeJu16}}] \label{lemma:ShadowAssz}
Let $\eta := \eta _1 + \eta _2 \leqc \nu$. Then the shadow of $\eta_2$ in $\nu - \shadow{\nu}{\eta_1}$ exists and we have
\begin{equation*}
\shadow{\nu}{\eta} = \shadow{\nu}{\eta _1} + \shadow{\nu - \shadow{\nu}{\eta _1}}{\eta _2}.
\end{equation*}
\end{lemma}
The statement in Lemma \ref{lemma:ShadowAssz} is the ``associativity law'' for shadows already mentioned in the introduction.
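As a simple illustration, take $\nu = \frac{1}{2}(\delta_{-1}+\delta_1)$ and $\eta_1 = \eta_2 = \frac{1}{2}\delta_0$. Every $\theta$ with $\eta_1 \leqc \theta \leqp \nu$ is of the form $a \delta_{-1} + b \delta_1$ with mass $a + b = \frac{1}{2}$ and barycenter $0$, which forces $a = b = \frac{1}{4}$, so $\shadow{\nu}{\eta_1} = \frac{1}{4}(\delta_{-1}+\delta_1)$. The same argument applied to $\nu - \shadow{\nu}{\eta_1} = \frac{1}{4}(\delta_{-1}+\delta_1)$ yields $\shadow{\nu - \shadow{\nu}{\eta_1}}{\eta_2} = \frac{1}{4}(\delta_{-1}+\delta_1)$, and indeed
\begin{equation*}
\shadow{\nu}{\eta _1} + \shadow{\nu - \shadow{\nu}{\eta _1}}{\eta _2} = \tfrac{1}{2}(\delta_{-1}+\delta_1) = \shadow{\nu}{\eta _1 + \eta _2}.
\end{equation*}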
\begin{corollary} \label{lemma:CharShad}
Let $\mu \leqc \nu$ be probability measures and $A \subset \mathbb{R}$ a Borel set such that $\mu(A) > 0$.
If a solution $\tau$ of $\SEP(\mu,\nu)$ satisfies
\begin{equation*}
\forall \tau' \text{ solution of } \SEP(\mu,\nu) \, : \, \Law(B_{\tau}; B_0 \in A) \leqc \Law(B_{\tau'}; B_0 \in A) ,
\end{equation*}
we have $\Law(B_\tau; B_0 \in A) = \shadow{\nu}{\mu _{|A}}$.
\end{corollary}
\begin{proof}
If $\alpha := \mu(A) = 1$, there is nothing to show because $\shadow{\nu}{\mu_{|A}} = \nu = \Law(B_\tau; B_0 \in A)$.
Assume $\alpha < 1$.
Since $\tau$ is a solution to $\SEP(\mu,\nu)$, we have
\begin{equation*}
\mu_{|A} = \Law(B_0; B_0 \in A) \leqc \Law(B_\tau; B_0 \in A) \leqp \nu
\end{equation*}
and hence we obtain $\shadow{\nu}{\mu_{|A}} \leqc \Law(B_\tau; B_0 \in A)$. It remains to show that also the reversed relation holds.
By definition of the shadow, we have $\mu_{|A} \leqc \shadow{\nu}{\mu_{|A}}$ and Lemma \ref{lemma:ConvOrder} yields that there exists a solution $\tau^A$ to $\SEP(\alpha^{-1}\mu_{|A}, \alpha^{-1}\shadow{\nu}{\mu_{|A}})$. By Lemma \ref{lemma:ShadowAssz} it is
\begin{equation*}
\mu_{|A^c} \leq_c \nu - \shadow{\nu}{\mu_{|A}}
\end{equation*}
and again Lemma \ref{lemma:ConvOrder} yields the existence of a solution $\tau ^{A^c}$ to $\SEP((1-\alpha)^{-1}\mu_{|A^c}, (1-\alpha)^{-1}(\nu - \shadow{\nu}{\mu_{|A}}))$. Since $\{B_0 \in A\} \in \mathcal{F}_0$,
\begin{equation*}
\tau' := \tau ^A \1_{\{B_0 \in A\}} + \tau ^{A^c} \1_{\{B_0 \not \in A\}}
\end{equation*}
is a solution to $\SEP(\mu,\nu)$ and thus
\begin{equation*}
\Law(B_{\tau}; B_0 \in A) \leqc \Law(B_{\tau'}; B_0 \in A) = \alpha \Law(B_{\tau^A}) = \shadow{\nu}{\mu_{|A}}. \qedhere
\end{equation*}
\end{proof}
\begin{corollary} \label{cor:ShadowOnEqualPart}
Let $\mu \leqc \nu$ and $\tau$ be a solution to $\SEP(\mu,\nu)$. Let $A \in \mathcal{F}_0$ such that $U_{\mu}(B_0) = U_{\nu}(B_0)$ on $A$. Then
\begin{equation*}
\shadow{\nu}{\Law(B_0; A^c)} = \Law(B_\tau; A^c).
\end{equation*}
\end{corollary}
\begin{proof}
Set $I := \{ x \in \mathbb{R} : U_{\mu}(x) < U_{\nu}(x)\}$. Since $I$ is the collection of irreducible components of $(\mu,\nu)$ (cf.\ \cite[Section A.1]{BeJu16} ), for any solution $\tau'$ of $\mathrm{SEP}(\mu,\nu)$, the stopped process $(B_{\tau' \land s})_{s \geq 0}$ stays in the irreducible component that it started in. Hence, the measure $\Law(B_{\tau'};B_0 \in I)$ is independent of the specific solution $\tau'$.
By Corollary \ref{lemma:CharShad}, we obtain
\begin{equation} \label{eq:IrredShadow}
\mathrm{Law}(B_{\tau'}; B_{0} \in I) = \shadow{\nu}{\mathrm{Law}(B_{0}; B_{0} \in I)}
\end{equation}
for any solution $\tau'$ of $\SEP(\mu,\nu)$.
Since $\{B_0 \in I\} \subset A^c$ and $\tau = 0$ on $\{B_0 \not \in I\}$ by Corollary \ref{cor:EquaPotfToZero}, we obtain
\begin{align*}
\Law(B_0; B_0 \not \in I, A^c) = \Law(B_\tau; B_0 \not \in I, A^c) \leq_+ \Law(B_\tau; B_0 \not \in I).
\end{align*}
Thus, with Lemma \ref{lemma:ShadowAssz} and \eqref{eq:IrredShadow} we obtain
\begin{align*}
\shadow{\nu}{\Law(B_0; A^c)} &= \shadow{\nu}{\Law(B_0; B_0 \in I)} + \shadow{\nu - \shadow{\nu}{\Law(B_0; B_0 \in I)}}{\Law(B_0; B_0 \not \in I, A^c)} \\
&= \Law(B_\tau ; B_0 \in I) + \shadow{\Law(B_\tau; B_0 \not \in I)}{\Law(B_0; B_0 \not \in I, A^c)} \\
&= \Law(B_\tau ; B_0 \in I) + \Law(B_\tau; B_0 \not \in I, A^c) = \Law(B_\tau ; A^c). \qedhere
\end{align*}
\end{proof}
The connection of shadows to potential theory is through the following characterization of the potential functions of the shadow.
\begin{lemma}[{\cite[Theorem 2]{BeHoNo20}}] \label{lemma:PotfShad}
Let $\hat{\mu} \leq \mu \leqc \nu$. The potential function of the shadow $\shadow{\nu}{\hat{\mu}}$ is given by
\begin{equation*}
U_{\shadow{\nu}{\hat{\mu}}} = U_{\nu} - \mathrm{conv} \left( U_{\nu} - U_{\hat{\mu}} \right)
\end{equation*}
where $\mathrm{conv}(f)$ denotes the convex hull of a function $f$, i.e.\ the largest convex function that is pointwise dominated by $f$.
\end{lemma}
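For instance, for $\hat{\mu} = \frac{1}{2}\delta_0$, $\mu = \delta_0$ and $\nu = \frac{1}{2}(\delta_{-1}+\delta_1)$ we have $U_\nu(x) - U_{\hat{\mu}}(x) = \max(1,|x|) - \frac{1}{2}|x|$ and $\mathrm{conv}\left( U_\nu - U_{\hat{\mu}} \right)(x) = \frac{1}{2}\max(1,|x|)$, so that Lemma \ref{lemma:PotfShad} yields
\begin{equation*}
U_{\shadow{\nu}{\hat{\mu}}}(x) = \max(1,|x|) - \tfrac{1}{2}\max(1,|x|) = \tfrac{1}{2}\max(1,|x|) = U_{\frac{1}{4}(\delta_{-1}+\delta_1)}(x),
\end{equation*}
i.e.\ $\shadow{\nu}{\hat{\mu}} = \frac{1}{4}(\delta_{-1}+\delta_1)$, consistent with the direct computation following Lemma \ref{lemma:ShadowAssz}.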
\begin{lemma} [{\cite[Lemma 1]{BeHoNo20}}] \label{lemma:PropConv}
Let $f$ be a continuous function bounded by an affine function from below. If $x \in \mathbb{R}$ satisfies $(\mathrm{conv}(f))(x) < f(x)$, there exists an $\varepsilon > 0$ such that $\mathrm{conv}(f)$ is affine on $[x - \varepsilon, x + \varepsilon]$.
\end{lemma}
\begin{lemma} \label{lemma:ShadowDecomp}
Let $(\mu_a)_{a \in [0,1]}$ be a family of probability measures, $(F_a)_{a \in [0,1]}$ a decreasing family of closed subsets of $\mathbb{R}$ and set $\nu := \int _0 ^1 \mu_a K^{F_a} \de a$. For all $b \in [0,1]$ we have
\begin{equation*}
\mathcal{S}^{\nu}\left(\int _0 ^b \mu_a \de a\right) = \int _0 ^b \mu_a K^{F_a} \de a.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\eta, \zeta \in \MO(\mathbb{R})$ and $F \subset \mathbb{R}$ a closed set with $\mathrm{supp}(\zeta) \subset F$. Since we have
\begin{equation*}
\eta \leqc \eta K^F \leqp \eta K^F + \zeta,
\end{equation*}
we obtain $\shadow{\eta K^F + \zeta}{\eta} \leqc \eta K^F$. Conversely, we also have
\begin{equation*}
\eta K^F \leqc \eta K^{\mathrm{supp}(\eta K^F + \zeta)} \leqc \shadow{\eta K^F + \zeta}{\eta}
\end{equation*}
because $\mathrm{supp}(\eta K^F + \zeta) \subset F$ and by definition $\eta K^{\mathrm{supp}(\eta K^F + \zeta)}$ is the smallest measure in convex order which dominates $\eta$ in convex order and is supported on $\mathrm{supp}(\eta K^F + \zeta)$ (cf.\ \eqref{eq:KellererDilation}). Hence, we have $ \shadow{\eta K^F + \zeta}{\eta} = \eta K^F$.
Furthermore, for all $n \in \mathbb{N}$, $\mu_1, \ldots , \mu_n \in \MO$ and decreasing closed sets $F_1 \supseteq \ldots \supseteq F_n$ in $\mathbb{R}$ we can apply this equality to get
\begin{equation*}
\mu_1 K^{F_1} = \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1}
\end{equation*}
and with Lemma \ref{lemma:ShadowAssz} we inductively obtain
\begin{align*}
&\shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k}} \\
&= \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} \\
& \quad \quad + \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n} - \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} }{\mu_{k}} \\
&= \mu _1K^{F_1} + \ldots + \mu_{k-1}K^{F_{k-1}} + \shadow{\mu_{k} K ^{F_{k}} + \ldots + \mu_n K^{F_n}}{\mu_{k}} \\
&= \mu_1 K ^{F_1} + \ldots + \mu_k K^{F_k}
\end{align*}
for all $2 \leq k \leq n$.
Since the map $(\mu,\nu) \mapsto \shadow{\nu}{\mu}$ is continuous under $\TO$ (cf.\ \cite{Ju14}), the claim follows.
\end{proof}
\section{Proof of the Main Result} \label{sec:MainResult}
We split the proof of Theorem \ref{thm:intro} into three parts. In Subsection \ref{ssec:adjoint} we show that the assumptions on the time-change and the level process in Theorem \ref{thm:intro} correspond to each other. In Subsection \ref{ssec:AprioriBound} we construct for every solution of the Skorokhod Embedding Problem an upper bound in the form of a barrier solution and we prove in Subsection \ref{ssec:ActualProof} that this upper bound is attained if and only if the properties of Theorem \ref{thm:intro} are satisfied.
\subsection{Monotonously Increasing Processes} \label{ssec:adjoint}
\begin{definition}
Two monotonically increasing and non-negative families of random variables $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ are adjoint if $\mathbb{P}[X_t \geq l \Leftrightarrow T_l \leq t] = 1$ for all $l,t \geq 0$.
\end{definition}
\begin{remark}
If $(X_t)_{t \geq 0}$ is right-continuous or $(T_l)_{l \geq 0}$ left-continuous and both families are adjoint, we have $\mathbb{P}[\forall l,t \geq 0 : X_t \geq l \Leftrightarrow T_l \leq t] = 1$.
\end{remark}
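For instance, the deterministic choices $X_t = t$ and $T_l = l$ corresponding to the Root case are adjoint, since
\begin{equation*}
\{X_t \geq l\} = \{t \geq l\} = \{T_l \leq t\}
\end{equation*}
for all $l,t \geq 0$.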
\begin{lemma} \label{lemma:ExAdjoint}
\begin{itemize}
\item [(i)] Let $(X_t)_{t \geq 0}$ be a right-continuous $\mathcal{F}$-adapted stochastic process which is non-negative, monotonically increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_t = l] = 0$ for all $l \geq 0$. Then, the family $(T_l)_{l \geq 0}$ defined by
\begin{equation*}
T_l := \inf \{t \geq 0 : X_t \geq l \}
\end{equation*}
is a left-continuous $\mathcal{F}$-time change with $T_0 = 0$, $T_\infty = + \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ which is adjoint to $(X_t)_{t \geq 0}$.
\item [(ii)] Let $(T_l)_{l \geq 0}$ be a left-continuous $\mathcal{F}$-time-change with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$. Then the family $(X_t)_{t \geq 0}$ defined by
\begin{equation*}
X_t := \sup \{l \geq 0 : T_l \leq t\}
\end{equation*}
is a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonically increasing and satisfies $\mathbb{P}[\exists s < t: X_s = X_{t} = l] = 0$ for all $l \geq 0$, and which is adjoint to $(T_l)_{l \geq 0}$.
\end{itemize}
\end{lemma}
\begin{proof}
Item (i): Let $t,l \geq 0$. If $X_t \geq l$, then $T_l \leq t$ directly by definition. Conversely, if $T_l \leq t$, then for all $u > t$ we obtain $X_u \geq l$ and thus $X_t = \lim _{u \downarrow t} X_u \geq l$ by right-continuity of $X$. Hence, $(T_l)_{l \geq 0}$ is adjoint to $(X_t)_{t \geq 0}$.
Clearly, $(T_l)_{l \geq 0}$ is monotonically increasing.
Since $(T_l)_{l \geq 0}$ and $(X_t)_{t \geq 0}$ are adjoint, the symmetric difference $\{T_l \leq t\} \triangle \{X_t \geq l\}$ is a $\mathbb{P}$-null-set and therefore contained in the completed filtration $\mathcal{F}_t$. Thus, $(T_l)_{l \geq 0}$ is a $\mathcal{F}$-time-change. Since $X_t$ is non-negative and finite, we obtain $T_0 = 0$ and $T_{\infty} = + \infty$. Moreover, $l \mapsto T_l$ is left-continuous by definition.
Furthermore, we have $\mathbb{P}[ \lim _{k \downarrow l} T_k > T_l ] \leq \mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$.
Item (ii): The proof is analogous to item (i), with the roles of $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ interchanged.
\end{proof}
\textbf{In the following} we fix a $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ and an adjoint $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ that satisfy the properties listed in Lemma \ref{lemma:ExAdjoint}.
\subsection{A-priori Bound} \label{ssec:AprioriBound}
Let $B$ be a Brownian motion that starts in $\mu$.
Fix a randomized stopping time $\xi$ that is a solution to $\SEP(\mu,\nu)$.
To simplify the exposition we will use the following notation for measures derived from $\xi$:
\begin{equation} \label{eq:DefRST2}
\Law(B_{\sigma \land \xi}) := ((\omega,t) \mapsto B_{\sigma(\omega) \land t}(\omega))_{\#} \xi
\end{equation}
where $\sigma$ is an $\mathcal{F}$-stopping time.
We set $u(l,x) := U_{\Law(B_{T_l \land \xi})}(x)$ and $v(x) := U_{\nu}(x)$ for $l \geq 0$ and $x \in \mathbb{R}$.
In this part we will show that $\xi$ is bounded from above by the stopping time
\begin{equation*}
\hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\},
\end{equation*}
i.e.\ we have $\xi[t \leq \hat{\tau}] = 1$. Since $u$ depends on $\xi$, $\hat{\tau}$ is obviously not a global bound for all solutions to $\mathrm{SEP}(\mu,\nu)$. Nevertheless, Lemma \ref{lemma:uCont} implies that $\hat{\tau}$ is a barrier solution.
\begin{lemma} \label{lemma:uCont}
The function $u$ is continuous and monotonically increasing in the first component. Moreover, for all $x \in \mathbb{R}$ we have $v(x) = \lim_{l \rightarrow \infty} u(l,x)$.
\end{lemma}
\begin{proof}
For all $x \in \mathbb{R}$ and $l \leq l'$, by Lemma \ref{lemma:ReprRST} we have
\begin{align*}
u(l',x) = \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l'} \land \overline{\tau}^\xi} - x| \right] \geq \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l} \land \overline{\tau}^\xi} - x| \right] = u(l,x)
\end{align*}
because $\Law_{\overline{\mathbb{P}}}(\overline{B}_{T_{l} \land \overline{\tau}^\xi})_{l \geq 0}$ is increasing in convex order by the optional stopping theorem.
We chose $(T_l)_{l \geq 0}$ such that, for fixed $l_0 \geq 0$, $l \mapsto T_l$ is $\mathbb{P}$-a.s.\ continuous at $l_0$. Hence, $l \mapsto \Law(B_{T_l \land \xi})$ is weakly continuous and by Lemma \ref{lemma:T1Conv}, $u$ is continuous in the first component because $\Law(B_{T_l \land \xi}) \leq_c \nu$ for all $l \geq 0$. Furthermore, $u$ is $1$-Lipschitz continuous in the second component because $u(l,\cdot)$ is the potential function of $\Law(B_{T_l \land \xi})$.
\end{proof}
\begin{lemma} \label{lemma:EquaToLeq}
Let $l \geq 0$ and let $\sigma$ be a finite $\mathcal{F}$-stopping time. Then
\begin{equation*}
\xi \left[ u(l,B_{\sigma}) = v(B_{\sigma}), t > \sigma \geq T_l \right] = 0.
\end{equation*}
\end{lemma}
\begin{proof}
This is a direct consequence of Lemma \ref{lemma:ReprRST} and Corollary \ref{cor:EquaPotfToZero}.
\end{proof}
\begin{proposition} \label{prop:EquaToLeq}
Let $\sigma$ be a finite $\mathcal{F}$-stopping time with $\mathbb{P}[u(X_{\sigma},B_{\sigma}) = v(B_{\sigma})] = 1$. We have $\xi[t \leq \sigma] = 1$.
\end{proposition}
\begin{proof}
Let $r(x) := \inf \{l \geq 0 : u(l,x) = v(x)\}$ for all $x \in \mathbb{R}$ and let
\begin{equation} \label{eq:DefOfL}
L := \{ r(x) : x \in \mathbb{R}, \exists \varepsilon > 0 \text{ s.t. } r(x) \leq r(y) \text{ f.a.\ } y \in (x-\varepsilon,x+\varepsilon)\}
\end{equation}
be the value set of all local minima of $r$. The set $L$ is countable. Indeed, setting $I_{p,q} := \{ x \in (p,q) : r(x) \leq r(y) \text{ f.a.\ } y \in (p,q)\}$, we have $L = \bigcup _{(p,q) \in \mathbb{Q}^2} r(I_{p,q})$ where $r(I_{p,q})$ is either empty or a singleton. Since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ and $X_\sigma = l \Rightarrow T_l \leq \sigma$ $\mathbb{P}$-a.s., we obtain
\begin{align*}
\xi[t > \sigma, X_{\sigma} \in L] &= \sum _{l \in L} \xi[t > \sigma, u(X_{\sigma},B_{\sigma}) = v(B_{\sigma}), X_{\sigma} = l] \\
&\leq \sum _{l \in L} \xi[t > \sigma \geq T_l, u(l,B_{\sigma}) = v(B_{\sigma})]
\end{align*}
and the r.h.s.\ is equal to $0$ by Lemma \ref{lemma:EquaToLeq}.
It remains to show that $\xi[t > \sigma, X_{\sigma} \not \in L] = 0$. To this end, we define
$[l]_n := \max \{ i/2^n : i \in \mathbb{N}, i/2^n \leq l \}$ for all $n \in \mathbb{N}$ and $l \geq 0$, and
\begin{equation*}
\sigma ^n := \inf \{t \geq 0: u([X_t]_n,B_t) = v(B_t) \}.
\end{equation*}
We claim that
\begin{equation} \label{eq:AuxClaim}
\mathbb{P}\left[X_{\sigma} \not \in L, \sigma < \inf _{n \in \mathbb{N}} \sigma ^n\right] = 0.
\end{equation}
Admitting \eqref{eq:AuxClaim}, since for all $n \in \mathbb{N}$ the function $t \mapsto u([X_t]_n,B_t)$ is right-continuous, we have a.s.\ $u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n})$ and hence \eqref{eq:AuxClaim} yields
\begin{align*}
\xi[t > \sigma, X_{\sigma} \not \in L]
&\leq \xi\left[ t > \inf _{n \in \mathbb{N}} \sigma^n \right] \\
&\leq \sum _{n \in \mathbb{N}} \xi[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n})] \\
&= \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi \left[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n}), \frac{i}{2^n} \leq X_{\sigma ^n} < \frac{i+1}{2^n} \right] \\
&\leq \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi[ t > \sigma^n \geq i/2^n, u(i/2^n,B_{\sigma^n}) = v(B_{\sigma^n})].
\end{align*}
By Lemma \ref{lemma:EquaToLeq}, these summands are zero for all $n,i \in \mathbb{N}$.
We are left with verifying \eqref{eq:AuxClaim}. By the definition of $L$ in \eqref{eq:DefOfL}, we see that for every pair $(l,x)$ where $l \not \in L$ and $x \in \mathbb{R}$ with $u(l,x) = v(x)$, there exists a sequence $(x_n)_{n \in \mathbb{N}}$ that converges to $x$ such that $u([l]_n,x_n) = v(x_n)$ for all $n \in \mathbb{N}$ large enough. Indeed, since $u(l,x) = v(x)$, we have $r(x) \leq l$ which leaves us with two cases: If $r(x) < l$, we just need to choose $n$ large enough such that $r(x) \leq [l]_n \leq l$. If $r(x) = l \not \in L$, $x$ cannot be a local minimum of $r$, therefore there exists a sequence $(x_m)_{m \in \mathbb{N}}$ that converges to $x$ with $r(x_m) < l$ and we just need to choose an appropriate subsequence $(x_{m_n})_{n \in \mathbb{N}}$ such that $r(x_{m_n}) \leq [l]_n \leq l$.
Thus, since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ $\mathbb{P}$-a.s., we obtain for $\mathbb{P}$-a.e.\ $\omega$
\begin{align*}
X_\sigma(\omega) \not \in L \quad &\Rightarrow \quad \forall \delta > 0 \, \exists n \in \mathbb{N} \, \exists y \in \mathcal{B}_\delta (B_{\sigma}(\omega)) \, : u([X_\sigma(\omega)]_n,y) = v(y)
\end{align*}
where $\mathcal{B}_{\delta}(x)$ denotes the open ball of radius $\delta$ around $x$.
Hence, for all $\varepsilon > 0$ we have
\begin{equation} \label{eq:NastyIneq}
\begin{split}
&\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_t]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\
\leq \,&\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_\sigma]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\
\leq \,& \mathbb{P}[\forall \delta > 0 \, \exists y\in \mathcal{B}_\delta (B_{\sigma}) \, \forall t \in (\sigma, \sigma + \varepsilon) : B_t \neq y].
\end{split}
\end{equation}
where we used the monotonicity of $u$ in the first component (cf.\ Lemma \ref{lemma:uCont}).
By the strong Markov property and the continuity of Brownian motion, we can bound the last term in \eqref{eq:NastyIneq} by the sum of $\mathbb{P}[\forall t \leq \varepsilon : B_t \leq 0]$ and $\mathbb{P}[\forall t \leq \varepsilon : B_t \geq 0]$, and this is clearly $0$. Since $\varepsilon > 0$ is arbitrary, \eqref{eq:AuxClaim} is shown.
\end{proof}
Recall that $\hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\}$ where $u(l,\cdot)$ is the potential function of $\Law(B_{\xi \land T_l})$ and $v$ is the potential function of $\nu$.
\begin{corollary} \label{cor:Leq}
We have $\xi[t \leq \hat{\tau}] = 1$. If $\xi$ is induced by an $\mathcal{F}$-stopping time $\tau$, we have $\tau \leq \hat{\tau}$ a.s.
\end{corollary}
\begin{proof}
Since $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, we obtain $\mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1$ and therefore we can apply Proposition \ref{prop:EquaToLeq}.
\end{proof}
\subsection{Proof of Theorem \ref{thm:MainEqui}} \label{ssec:ActualProof}
Recall once again the properties of $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ formulated at the end of subsection \ref{ssec:adjoint}.
Let $\xi$ be a RST which is a solution to $\SEP(\mu,\nu)$.
In addition to $\Law(B_{T_l \land \xi})$ (cf.\ \eqref{eq:DefRST2}), we introduce notation for the measures \begin{equation} \label{eq:DefRST3}
\begin{split}
\Law(B_{\xi}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_t(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}} \quad \text{and} \\
\Law(B_{T_l}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_{T_l(\omega)}(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}}
\end{split}
\end{equation}
The following Lemma \ref{lemma:ShadToSupp2} is the main observation that allows us to show in Lemma \ref{lemma:ShadToEqua} a counterpart to the upper bound stated in Corollary \ref{cor:Leq}.
\begin{lemma} \label{lemma:ShadToSupp2}
Let $l \geq 0$ and suppose that $\xi$ satisfies
\begin{equation} \label{eq:ShaodwProp}
\Law(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\Law(B_{T_l}; \xi \geq T_l)}.
\end{equation}
For all $x \in \mathbb{R}$, if $u(l,x) < v(x)$, then $x \not \in \mathrm{supp}(\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l))$.
\end{lemma}
\begin{proof}
Fix $l \geq 0$. By \eqref{eq:DefRST2} and \eqref{eq:DefRST3}, we have
\begin{equation} \label{eq:DefinitionId}
\Law(B_{T_l \land \xi}) - \Law(B_{T_l}; \xi \geq T_l) = \nu - \Law(B_{\xi}; \xi \geq T_l).
\end{equation}
Hence, Lemma \ref{lemma:PotfShad} and \eqref{eq:ShaodwProp} yield
\begin{align*}
v - u(l, \cdot) &= U_{\mathrm{Law}(B_{\xi}; \xi \geq T_l)} - U_{\mathrm{Law}(B_{T_l}; \xi \geq {T_l})} \\
&= v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} - \mathrm{conv} \left( v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right).
\end{align*}
Let $x \in \mathbb{R}$ with $u(l,x) < v(x)$. By Lemma \ref{lemma:PropConv}, there exists an $\varepsilon > 0$ such that on the interval $[x- \varepsilon, x + \varepsilon]$ the function
\begin{align*}
\mathrm{conv} \left( v - U _{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right)
\end{align*}
is affine. Rewriting with Lemma \ref{lemma:PotfShad} and \eqref{eq:DefinitionId}, we obtain that the function
\begin{align*}
u(l, \cdot) - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} = U_{\Law(B_{T_l \land \xi}) - \Law(B_{T_l}; \xi \geq T_l)} = U_{\nu - \Law(B_{\xi}; \xi \geq T_l)}
\end{align*}
is affine around $x$. Hence, Lemma \ref{lemma:PropPotf} yields that $x \not \in \mathrm{supp}(\nu - \Law(B_{\xi}; \xi \geq T_l))$.
\end{proof}
\begin{lemma} \label{lemma:ShadToEqua}
If $\xi$ satisfies $\mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}$ for all $l \geq 0$, we have $\xi\left[ u(X_t, B_{t}) < v(B_{t}) \right] = 0$.
\end{lemma}
\begin{proof}
Let $\varepsilon > 0$.
Since both $u(0,\cdot)$ and $v$ are potential functions of probability measures with the same mass and barycenter, they are continuous and their difference vanishes at $\pm \infty$. Hence, there exists an $M_1 \in \mathbb{N}$ such that $v(x) - u(0,x) \leq \frac{\varepsilon}{2}$ for all $|x| \geq M_1$. On the compact interval $[-M_1,M_1]$ the monotonically increasing family $(u(l,\cdot))_{l \geq 0}$ converges pointwise to $v$ (see Lemma \ref{lemma:uCont}). Dini's theorem yields that there exists $M_2 \in \mathbb{N}$ such that $\sup _{x \in [-M_1,M_1]} v(x) - u(l,x) \leq \frac{\varepsilon}{2}$ for all $l \geq M_2$. Moreover, since $u$ is jointly continuous on the compact rectangle $[0,M_2] \times [-M_1,M_1]$, there exists an $n \in \mathbb{N}$ such that
\begin{equation*}
\forall \, x \in \mathbb{R} \ \forall \, 0 \leq l \leq l' \leq l + \frac{1}{2^n} \, : \quad u(l',x) - u(l,x) \leq \frac{\varepsilon}{2}.
\end{equation*}
For this $n$, we obtain
\begin{align*}
\xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t) \right] &= \sum _{i = 1} ^{\infty} \xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t), \frac{i-1}{2^n} \leq X_t < \frac{i}{2^n} \right] \\
&\leq \sum _{i = 1} ^{\infty} \xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right].
\end{align*}
For each $i \in \mathbb{N}$, the summands on the r.h.s.\ are $0$ because Lemma \ref{lemma:ShadToSupp2} yields
\begin{align*}
\xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right] = \xi \left[ u\left(\frac{i}{2^n}, B_t \right) < v(B_t), t < T_{\frac{i}{2^n}} \right] = 0
\end{align*}
Since $\varepsilon >0$ is arbitrary, the claim follows.
\end{proof}
\begin{lemma} \label{lemma:RBtoEqua}
Let $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ be a closed barrier and $\tau$ an $\mathcal{F}$-stopping time. If $\tau = \inf \{t \geq 0: (X_t,B_t) \in \mathcal{R} \}$ $\mathbb{P}$-a.s., we have $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$.
\end{lemma}
\begin{proof}
For all $(l,x) \in \mathcal{R}$, the Brownian motion $B$ cannot pass through $x$ on $[T_l \land \tau, \tau]$. Indeed, if $t \in (T_l \land \tau, \tau]$, then $X_t \geq l$, and since $\tau$ is by assumption the first time the process $(X_t,B_t)_{t \geq 0}$ hits the barrier $\mathcal{R}$, the Brownian motion is stopped at the latest when it reaches $[l, \infty) \times \{x\} \subset \mathcal{R}$.
Hence, we have $(B_{\tau} - x)(B_{\tau \land T_l} - x) \geq 0$ $\mathbb{P}$-a.s., and thus we obtain $$u(l,x) = \mathbb{E}[|B_{T_l \land \tau }-x|] = \mathbb{E}[|B_{\tau} - x|] = v(x).$$
Since $u$ is continuous (cf.\ Lemma \ref{lemma:uCont}) and $t \mapsto (X_t,B_t)$ is right-continuous, we get $\mathbb{P}[u(X_{\tau},B_{\tau}) = v(B_{\tau})] = 1$.
\end{proof}
\begin{lemma} \label{lemma:InfimumToEqual}
Let $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$. For all $l \geq 0$ we have $\mathbb{P}[\hat{\tau} < T_l, u(l,B_{T_l \land \hat{\tau}}) < v(B_{T_l \land \hat{\tau}})] = 0$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:uCont}, $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, therefore we obtain from the definition of $\hat{\tau}$
\begin{equation*}
\mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1.
\end{equation*}
Let $l \geq 0$. Since $(X_t)_{t \geq 0}$ is adjoint to $(T_l)_{l \geq 0}$, we obtain
\begin{equation*}
\hat{\tau} < T_l \quad \Rightarrow \quad X_{\hat{\tau}} < l \quad \mathbb{P}\text{-a.s.}
\end{equation*}
Since $u$ is also monotonically increasing in the first component and $B_{T_l \land \hat{\tau}} = B_{\hat{\tau}}$ on the set $\{\hat{\tau} < T_l\}$, the claim follows.
\end{proof}
Recall the definitions from the end of Subsection \ref{ssec:adjoint}:
\begin{theorem} \label{thm:MainEqui}
The following are equivalent:
\begin{itemize}
\item [(i)] There exists a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that $\xi$ is induced by the $\mathcal{F}$-stopping time $\tau := \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$.
\item [(ii)] For all $l \geq 0$ we have $\mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}$.
\item [(iii)] $\xi$ is induced by the $\mathcal{F}$-stopping time $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$.
\end{itemize}
\end{theorem}
\begin{proof}
\textit{(i) $\Rightarrow$ (iii):} By Lemma \ref{lemma:RBtoEqua}, $\tau$ satisfies $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$ and thus we have $\mathbb{P}$-a.s. $\hat{\tau} \leq \tau$. The claim follows with Corollary \ref{cor:Leq}.
\textit{(iii) $\Rightarrow$ (i):}
Lemma \ref{lemma:uCont} yields that $u$ is a jointly continuous function which is monotonically increasing in $l$. Hence, the set $\mathcal{R} := \{(l,x) \in [0, \infty) \times \mathbb{R} : u(l,x) = v(x)\}$ is a closed barrier and, by the definition of $\hat{\tau}$, we have $\hat{\tau} = \inf\{t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$.
\textit{(ii) $\Rightarrow$ (iii):}
By Lemma \ref{lemma:ShadToEqua},
$\xi[t \geq \hat \tau] \geq \xi[u(X_t,B_t) = v(B_t)] = 1$ and Corollary \ref{cor:Leq} yields that $\xi [t \leq \hat{\tau}] = 1$.
\textit{(iii) $\Rightarrow$ (ii):}
Let $l \geq 0$.
Since $\xi$ is induced by $\hat{\tau}$ and a solution to $\SEP(\mu,\nu)$, $\hat \tau - \hat{\tau} \land T_l$ is a solution of $\SEP(\Law(B_{\hat{\tau} \land T_l}),\nu)$ w.r.t.\ the Brownian motion $B'_s = B_{s + \hat{\tau} \land T_l}$. Moreover, Lemma \ref{lemma:InfimumToEqual} yields
\begin{equation*}
\hat{\tau} < T_l \Rightarrow u(l,B'_0) = v(B'_{0}) \quad \mathbb{P}\text{-a.s.}
\end{equation*}
Hence, by Corollary \ref{cor:ShadowOnEqualPart} it is
\begin{align*}
\Law(B_{\hat{\tau}}; \hat{\tau} \geq T_l) &= \Law(B'_{\hat{\tau} - \hat{\tau} \land T_l}; \hat \tau \geq T_l) \\
&= \shadow{\nu}{\Law(B'_{0}; \hat \tau \geq T_l)} = \shadow{\nu}{\Law(B_{T_l}; \hat{\tau} \geq T_l)}. \qedhere
\end{align*}
\end{proof}
\section{Proof of Proposition \ref{prop:ShiftedThm}}
\begin{proof}
Let $\tilde{\mathcal{F}}$ be the filtration defined by $\tilde{\mathcal{F}}_s := \mathcal{F}_{\sigma + s}$ and let $\tilde{B}$ be the process defined by $\tilde{B}_s := B_{\sigma + s}$. $\tilde{B}$ is an $\tilde{\mathcal{F}}$-Brownian motion. Moreover, $\tilde{\tau} := \tau - \sigma$ is an $\tilde{\mathcal{F}}$-stopping time because $\{\tilde{\tau} \leq s \} = \{\tau \leq \sigma + s\} \in \tilde{\mathcal{F}}_s$ for all $s \geq 0$. Clearly, we have $\tilde{B}_{\tilde{\tau}} = B_\tau$.
Suppose (i) is satisfied. We set $\tilde{X}_s := X_{\sigma + s}$. Since $X$ is $\mathcal{F}$-adapted, $\tilde{X}$ is $\tilde{\mathcal{F}}$-adapted and, furthermore, we have
\begin{equation*}
\tilde{\tau} = \tau - \sigma = \inf \{ s \geq 0 : (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R} \}.
\end{equation*}
Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-time-change $(\tilde{T}_l)_{l \geq 0}$ such that for all $l \geq 0$ we have
\begin{equation*}
\Law(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) = \shadow{\nu}{\Law(\tilde{B}_{\tilde{T}_l}; \tilde{\tau} \geq \tilde{T}_l)}.
\end{equation*}
In particular, by Theorem \ref{thm:intro} we can choose $\tilde{T}_l = \inf \{ s \geq 0 : \tilde{X}_s \geq l\}$. Moreover, we set $T_l = \inf \{t \geq 0 : X_t \geq l\}$ and see that we have
\begin{align*}
\sigma + \tilde{T}_l &= \sigma + \inf \{ s \geq 0 : \tilde{X}_s \geq l\} = \sigma + \inf \{ s \geq 0 : X_{\sigma + s} \geq l\} \\
&= \max\{\sigma, T_l \}
\end{align*}
where the last equality follows from the fact that $X$ is monotonically increasing.
We easily verify that a.s.
\begin{equation} \label{eq:RelationTilde}
\tilde{B}_{\tilde{T}_l} = B_{\sigma + \tilde{T}_l} = B_{\sigma \lor T_l} \quad \text{ and } \quad \{ \tilde{\tau} \geq \tilde{T}_l\} = \{\tau - \sigma \geq \tilde{T}_l \} = \{\tau \geq \sigma \lor T_l\}.
\end{equation}
Hence, for all $l \geq 0$ we obtain
\begin{equation*}
\Law(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\Law(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l )}.
\end{equation*}
Conversely, suppose that (ii) is satisfied. We set $\tilde{T}_l := \max\{0,T_l - \sigma\}$. Since $(T_l)_{l \geq 0}$ is an $\mathcal{F}$-time-change, $(\tilde{T}_l)_{l \geq 0}$ is an $\tilde{\mathcal{F}}$-time-change, and by definition we have $\sigma + \tilde{T}_l = \sigma \lor T_l$ such that \eqref{eq:RelationTilde} holds as well. In particular, we obtain
\begin{align*}
\Law(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) &= \Law(B_\tau; \tau \geq \sigma \lor T_l) \\
&= \shadow{\nu}{\Law(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)} = \shadow{\nu}{\Law(\tilde{B}_{\tilde{T_l}}; \tilde{\tau} \geq \tilde{T}_l)}.
\end{align*}
Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-adapted stochastic process $(\tilde{X}_s)_{s \geq 0}$ and a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that
\begin{equation*}
\tilde{\tau} = \inf \{ s \geq 0: (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R}\}.
\end{equation*}
In particular, by Theorem \ref{thm:intro} we can choose $\tilde{X}_s := \sup \{l \geq 0 : \tilde{T}_l \leq s\}$. Moreover, we set $X_t := \sup \{l \geq 0 : T_l \leq t\}$ and see that
\begin{align*}
\tilde{X}_s &= \sup \{l \geq 0: \tilde{T}_l \leq s\} = \sup \{l \geq 0: \max\{0, T_l - \sigma\} \leq s\} \\
&= \sup \{l \geq 0 : T_l \leq \sigma + s\} = X_{\sigma + s}.
\end{align*}
Thus, we have a.s.
\begin{align*}
\inf \{ t \geq \sigma: (X_t, B_t) \in \mathcal{R} \} &= \sigma + \inf\{ s \geq 0 : (X_{\sigma + s}, B_{\sigma + s}) \in \mathcal{R}\} \\
&= \sigma + \tilde{\tau} = \tau. \qedhere
\end{align*}
\end{proof}
\begin{remark} \label{rem:CondLM}
For the left-monotone time-change $(T_l^{lm})_{l \geq 0}$, we have
\begin{equation*}
\{\tau^i \geq \tau^{i-1} \lor T_l ^{lm} \} = \{\tau^i \geq \tau^{i-1}, T^{lm}_l = 0\} = \{ B_0 \leq q_l\}
\end{equation*}
where $q_l := -\ln(l)$ and $T_l ^{lm} = 0$ on this set. Thus, the stopping times $\tau ^1 \leq ... \leq \tau ^n$ are $(T_l ^{lm})_{l \geq 0}$-shadow-residual if and only if
\begin{align*}
\Law(B_{\tau ^i}; B_0 \leq q_l) &= \Law(B_{\tau ^i}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm}) \\
&= \shadow{\nu _i}{\Law(B_{\tau ^{i-1} \lor T_l ^{lm}}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm})} \\
&= \shadow{\nu _i}{\Law(B_{\tau ^{i-1}}; B_0 \leq q_l)}
\end{align*}
for all $1 \leq i \leq n$. Applying this inductively, these stopping times are shadow-residual if and only if
\begin{equation*}
\Law(B_{\tau ^i}; B_0 \leq q_l) = \shadow{\nu _i}{ ... \, \shadow{\nu_1}{\Law(B_{0}; B_0 \leq q_l)}} =: \shadow{\nu_1,...,\nu _i}{\Law(B_{0}; B_0 \leq q_l)}
\end{equation*}
for all $1 \leq i \leq n$. This is the obstructed shadow defined by Nutz-Stebegg-Tan in \cite{NuStTa17}. Hence, $(\tau ^1, ... , \tau^n)$ is the multi-marginal lm-solution if and only if the joint distribution of $(B_0,B_{\tau ^1}, ... , B_{\tau ^n})$ is the multi-period left-monotone transport.
\end{remark}
\section{Proof of Proposition \ref{prop:Interpolation}}
\label{sec:Interpolation}
In this section we suppose that $\Omega = C([0,\infty))$ is the path space of continuous functions and that $\mathbb{P}$ is a probability measure on the path space such that the canonical process $B: \omega \mapsto \omega$ is a Brownian motion with $\Law_{\mathbb{P}}(B_0) = \mu$. Moreover, we denote by $\theta$ the shift operator on $\Omega$, i.e.\ $\theta_r : (\omega_s)_{s \geq 0} \mapsto (\omega_s)_{s \geq r}$ for all $r \geq 0$.
\subsection{Concatenation Method}
To simplify notation, we say that a finite stopping time $\tau$ is shadow-residual w.r.t.\ a time-change $(T_l)_{l \geq 0}$ if for all $l \geq 0$ we have
\begin{equation*}
\Law(B_\tau; \tau \geq T_l) = \shadow{\Law(B_\tau)}{\Law(B_{T_l}; \tau \geq T_l)}.
\end{equation*}
This is precisely the condition in part (ii) of Theorem \ref{thm:intro}.
\begin{lemma} \label{lemma:CombinedStoppingTime}
Let $\tau$ and $\sigma$ be two $\mathcal{F}$-stopping times such that $\tau$ is finite. The random variable $\tau + \sigma \circ \theta_{\tau}$ is again an $\mathcal{F}$-stopping time.
\end{lemma}
\begin{proof}
If $\tau$ takes only values in the countable set $A \subset [0, \infty)$, for all $t \geq 0$ we obtain
\begin{equation*}
\{\tau + \sigma \circ \theta_\tau \leq t\} = \bigcup _{k \in A \cap [0,t]} \{\tau = k\} \cap \{\sigma \circ \theta _k \leq t - k\} \in \mathcal{F}_t
\end{equation*}
because $\{\tau = k\} \in \mathcal{F}_k$ and $\{\sigma \circ \theta _k \leq t - k\} \in \mathcal{F}_{k + (t-k)} = \mathcal{F}_t$ for every $k \in A \cap [0,t]$.
A general $\tau$ can be approximated by discrete stopping times.
\end{proof}
\begin{corollary} \label{lemma:NestingPrep}
Let $(T_l)_{l \geq 0}$ be a finite $\mathcal{F}$-time-change, $(S_l)_{l \geq 0}$ a $\mathcal{F}$-time-change and $\lambda > 0$.
The family $(R_l)_{l \geq 0}$ defined by
\begin{equation*}
R_l := T_{l \land \lambda} + (S_{l-\lambda} \circ \theta _{T_\lambda}) \1 _{\{l \geq \lambda\}} = \begin{cases}
T_l & l < \lambda \\
T_\lambda + S_{l - \lambda} \circ \theta _{T_\lambda} &l \geq \lambda
\end{cases}
\end{equation*}
is an $\mathcal{F}$-time-change.
If additionally, both $(T_l)_{l \geq 0}$ and $(S_l)_{l \geq 0}$ are left-continuous, $T_0 = S_0 = 0$, $T_\infty = S_\infty = + \infty$ and
$\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] =\mathbb{P}[\lim _{k \downarrow l} S_k = S_l] = 1$ for all $l \geq 0$,
$(R_l)_{l \geq 0}$ satisfies these four properties as well.
\end{corollary}
\begin{lemma} \label{lemma:Nesting}
Suppose we are in the setting of Corollary \ref{lemma:NestingPrep}.
Additionally assume that $\tau$ is a solution of $\SEP(\mu,\nu)$ which is shadow-residual w.r.t.\ $(T_l)_{l \geq 0}$. If $\sigma$ is an $\mathcal{F}$-stopping time such that $\sigma$ is a solution to $\SEP(\mathrm{Law}(B_{\tau \land T_\lambda}),\nu)$ which is shadow-residual w.r.t.\ $(S_l)_{l \geq 0}$, then
\begin{equation*}
\rho := \tau \land T_{\lambda} + \sigma \circ \theta _{T_\lambda \land \tau}
\end{equation*}
is an $\mathcal{F}$-stopping time and a solution to $\SEP(\mu,\nu)$ which is shadow-residual w.r.t.\ $(R _l)_{l \geq 0}$.
\end{lemma}
\begin{proof}
We set $\tilde{\mathcal{F}}_s = \mathcal{F}_{s + \tau \land T_\lambda}$, $\tilde{B} := B \circ \theta _{\tau \land T_\lambda}$, $\tilde{\sigma} := \sigma \circ \theta_{\tau \land T_\lambda}$ and $\tilde{S}_l := S_l \circ \theta_{T_\lambda \land \tau}$. $\tilde{\sigma}$ is a stopping time w.r.t.\ the filtration generated by $\tilde{B}$. We also have $\Law(\tilde{B}_{\tilde{\sigma}}) = \nu$ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$-shadow-residual.
\texttt{STEP 1:}
We have $\Law(B_{\rho}) = \Law(B_{\tau \land T_\lambda + \tilde{\sigma}}) = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}) = \nu$. By Lemma \ref{lemma:CombinedStoppingTime}, $\rho$ is a $\mathcal{F}$-stopping time and $(B_{s \land \rho})_{s \geq 0}$ is uniformly integrable because $\tau$ and $\sigma$ are solutions to $\SEP(\mu,\nu)$ and $\SEP(\Law(\tilde{B}_0), \nu)$. Thus, $\rho$ is a solution to $\SEP(\mu,\nu)$. Moreover, we claim that we can represent $\rho$ as
\begin{equation} \label{eq:Step1}
\rho = \begin{cases}
\tau & \tau < T_\lambda \\
T_\lambda + \tilde{\sigma} & \tau \geq T_\lambda
\end{cases} \quad \mathbb{P}\text{-a.s.}
\end{equation}
Indeed, since $\tau$ is $(T_l)_{l \geq 0}$-shadow-residual, by Theorem \ref{thm:MainEqui} (iii) and Lemma \ref{lemma:InfimumToEqual}, we have $\tau < T_\lambda \Rightarrow u(\lambda,\tilde{B}_0) = v(\tilde{B}_0)$ $\mathbb{P}$-a.s.\ where $u(l,\cdot) := U_{\Law(B_{T_l \land \tau})}$ and $v = U_{\nu}$ for all $l \geq 0$. Thus, we get
\begin{align*}
\mathbb{P}[\tilde{\sigma} > 0, \tau < T_\lambda] &\leq \mathbb{P}[\tilde{\sigma} > 0, u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)] \\
&= \mathbb{P}[\tilde{\sigma} > 0, U_{\Law(\tilde{B}_0)}(\tilde{B}_0)
= U_{\nu}(\tilde{B}_0)].
\end{align*}
and the r.h.s. is equal to $0$ because $\tilde{\sigma}$ is a $\tilde{\mathcal{F}}$-stopping-time that solves $\SEP(\Law(\tilde{B}_0),\nu)$ (cf.\ Lemma \ref{lemma:EquaToLeq}).
It remains to show that $\rho$ is $(R_l)_{l \geq 0}$-shadow-residual. We split this up in the cases $l \geq \lambda$ and $l < \lambda$.
\texttt{STEP 2:}
Suppose $l \geq \lambda$. Since by \texttt{STEP 1} $\{ \rho \geq R_l \} = \{\tilde{\sigma} \geq \tilde{S}_{l - \lambda},\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s.\ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$ shadow-residual, Lemma \ref{lemma:ShadowAssz} yields
\begin{align*}
&\Law(B_\rho; \rho \geq R_l) + \Law(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \\ &\quad = \Law(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda})
= \shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l - \lambda}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda} )}
\\
& \quad = \shadow{\nu}{\Law(B_{R_l}; \rho \geq R_l)} \\
& \hspace{2cm} + \shadow{\nu - \shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda)}
\end{align*}
Thus, we obtain $\Law(B_\rho; \rho \geq R_l) = \shadow{\nu}{\Law(B_{R_l}; \rho \geq R_l)}$ if we show
\begin{align}
&\Law(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) = \Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \quad \text{and} \label{eq:Step2Eq1}\\
&\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \leq_+ \nu - \shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}. \label{eq:Step2Eq2}
\end{align}
By \texttt{STEP 1} we have $\tilde{\sigma} = 0$ on $\{\tau < T_\lambda\}$, and therefore \eqref{eq:Step2Eq1} follows immediately. Moreover, Lemma \ref{lemma:ShadowAssz} yields
\begin{align*}
\shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} &\leqp \shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)}
\end{align*}
On the one hand, by the definition of the shadow we have
\begin{equation} \label{eq:Aux2}
\begin{split}
\shadow{\nu}{\Law(\tilde{B}_{0}; \tau \geq T_\lambda)} &\leqc \shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)} \\
&\leq_c \shadow{\nu}{\Law(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda)} = \Law(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda)
\end{split}
\end{equation}
because $0 \leq \tilde{S}_{l-\lambda} \land \tilde{\sigma} \leq \tilde{\sigma}$ and $\Law(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda) \leqp \nu$. On the other hand, since $u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)$ on $\{\tau < T_\lambda\}$ by \texttt{STEP 1}, Corollary \ref{cor:ShadowOnEqualPart} yields
\begin{equation*}
\shadow{\nu}{\Law(\tilde{B}_{0}; \tau \geq T_\lambda)} = \Law(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda).
\end{equation*}
Thus, we have equality in \eqref{eq:Aux2} which implies
\begin{equation*}
\shadow{\nu}{\Law(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} \leqp \nu - \Law(\tilde{B}_{\tilde{\sigma}}; \tau < T_\lambda)
\end{equation*}
and thereby \eqref{eq:Step2Eq2}.
\texttt{STEP 3:}
Now suppose $l < \lambda$. Since $R_\lambda = T_\lambda$ and $\{\rho \geq R_\lambda\} = \{\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s., by \texttt{STEP 2} we have
\begin{align*}
\Law(B_\rho; \rho \geq R_\lambda) &= \shadow{\nu}{\Law(B_{R_\lambda}; \rho \geq R_\lambda)} = \shadow{\nu}{\Law(B_{T_\lambda}; \tau \geq T_\lambda)} \\
&= \Law(B_\tau; \tau \geq T_\lambda)
\end{align*}
because $\tau$ is $(T_l)_{l \geq 0}$ shadow residual.
In particular, we get
\begin{align*}
\Law(B_\rho; \rho \geq R_l) &= \Law(B_\rho; \rho \geq R_l, \tau < T_\lambda ) + \Law(B_\rho; \rho \geq R_l, \tau \geq T_\lambda ) \\
&= \Law(B_\tau; T_\lambda > \tau \geq T_l) + \Law(B_\rho; \rho \geq T_\lambda) \\
&= \Law(B_\tau; \tau \geq T_l) \\
&= \shadow{\nu}{\Law(B_{T_l}; \tau \geq T_l)} \\
&= \shadow{\nu}{\Law(B_{R_l}; \rho \geq R_l)}
\end{align*}
because $\{\rho \geq R_l \} = \{\tau \geq T_\lambda\} \cup \{T_\lambda > \tau \geq T_l\} = \{\tau \geq T_l\}$.
\end{proof}
\subsection{Robustness of LM-Embedding}
\begin{lemma} \label{lemma:SEPcompact}
Let $(\mathbb{P}_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion under $\mathbb{P}_n$, $(\nu_n)_{n \in \mathbb{N}}$ a sequence of probability measures on $\mathbb{R}$ and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of RST w.r.t.\ $\mathbb{P}_n$ that solve $\SEP(\Law_{\mathbb{P}_n}(B_0), \nu _n)$ for all $n \in \mathbb{N}$. If $(\mathbb{P}_n)$ converges weakly to $\mathbb{P}$ and $(\nu_n)_{n \in \mathbb{N}}$ converges to the probability measure $\nu$ under $\TO$, there exists a weakly convergent subsequence of $(\xi ^n)_{n \in \mathbb{N}}$. Moreover, the limit of every convergent subsequence is a RST w.r.t.\ $\mathbb{P}$ that solves $\SEP(\Law_\mathbb{P}(B_0),\nu)$.
\end{lemma}
\begin{proof}
Let $\varepsilon > 0$.
Since $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly, there exists a compact set $K_{\varepsilon} \subset \Omega$ such that $\mathbb{P}_n[K_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$.
Since $(\nu_n)_{n \in \mathbb{N}}$ converges in $\TO$, by Lemma \ref{lemma:T1Conv} there exists $\eta \in \MO$ such that $\int \varphi \de \nu _n \leq \int \varphi \de \eta$ for all $n \in \mathbb{N}$ and non-negative convex functions $\varphi$. Moreover, by the Theorem of de la Vallee-Poussin there exists a non-negative convex function $V \in C^2(\mathbb{R})$ with $V'' \geq C > 0$ such that $\int_\mathbb{R} V \de \eta < \infty$. For all $s \geq 0$ and $n \in \mathbb{N}$ we have
\begin{equation*}
\xi ^n [t \geq s] = \overline{\mathbb{P}}[\overline{\tau}^{\xi^n} \geq s] \leq \frac{\overline{\mathbb{E}}[\overline{\tau} ^{\xi^n}]}{s} \leq \frac{\overline{\mathbb{E}}[V(\overline{B}_{\overline{\tau} ^{\xi^n}})]}{s} \leq \frac{1}{Cs} \int_{\mathbb{R}} V \de \eta
\end{equation*}
where we used the notation of Lemma \ref{lemma:ReprRST}, Markov's inequality and It\^o's formula.
Hence, there exists $s_\varepsilon > 0$ such that $\xi^n[t \leq s_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$.
Then the mass of the compact set $K_\varepsilon \times [0,s_\varepsilon]$ under $\xi ^n$ is strictly greater than $1- 2\varepsilon$ for all $n \in \mathbb{N}$. Hence, the set $\{\xi ^n : n \in \mathbb{N} \}$ is tight.
By Prokhorov's theorem there exists a weakly convergent subsequence. We denote the limit by $\xi$.
Since the set of RST is closed under weak convergence (cf.\ \cite[Corollary 3.10]{BeCoHu17}), $\xi$ is a RST w.r.t.\ $\mathbb{P}$. Moreover, $(\omega,t) \mapsto \varphi(\omega_t)$ is a continuous and bounded function on $\Omega \times [0, \infty)$ for all $\varphi \in C_b(\mathbb{R})$, and therefore $\Law(B_\xi) = \nu$.
It remains to show that $(B_{\xi \land t})_{t \geq 0}$ is uniformly integrable. Since $|x| \1 _{|x| \geq K} \leq |x- K/2| + |x + K/2| - K$, we get
\begin{equation*}
\mathbb{E} \left[ |B_{\xi \land s}| \1 _{\{|B_{\xi \land s}| \geq K\}} \right] \leq U_{\Law(B_{\xi \land s})}(K/2) + U_{\Law(B_{\xi \land s})} (-K/2) - K
\end{equation*}
for all $s,K \geq 0$.
Moreover, since $\xi ^n$ converges weakly to $\xi$ and $g_m$ defined by $g_m(y) := \min\{|y|,m\}$ is a continuous and bounded function, we obtain for all $x \in \mathbb{R}$ with monotone and dominated convergence
\begin{align*}
U_{\Law(B_{\xi \land s})}(x) &= \sup_{m \in \mathbb{N}} \lim_{n \rightarrow \infty} \int _{\Omega \times [0, \infty)} g_m(\omega_{t \land s} - x) \de \xi ^n(\omega,t) \\
&\leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n} \land s} - x| \right] \leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n}} - x| \right] = \sup _{n \in \mathbb{N}} U_{\nu_n}(x) \leq U_{\eta}(x)
\end{align*}
where we used that $(B_{\xi ^n \land t})_{t \geq 0}$ is uniformly integrable for all $n \in \mathbb{N}$.
Thus, using the asymptotic behaviour of potential functions, we obtain
\begin{equation*}
\lim _{K \rightarrow \infty} \sup _{t \geq 0} \mathbb{E} \left[ |B_{\xi \land t}| \1 _{\{|B_{\xi \land t}| \geq K\}} \right] \leq \limsup _{K \rightarrow \infty} U_{\eta}\left(-K/2\right) + U_{\eta}(K/2) - K = 0.
\end{equation*}
Hence, the claim follows.
\end{proof}
\begin{lemma} \label{lemma:StabilityLM}
Let $(\nu_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\mathbb{R}$, $(\mathbb{P}_n)_{n \in \mathbb{N}}$ a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion with initial distribution $\mu_n$ under $\mathbb{P}_n$, and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of corresponding RST which are left-monotone solutions to $\SEP(\mu_n, \nu_n)$.
If $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly to $\mathbb{P}$ whose initial distribution $\mu$ is atomless and $(\nu_n)_{n \in \mathbb{N}}$ converges to $\nu$ in $\TO$, then the sequence $(\xi ^n)_{n \in \mathbb{N}}$ converges weakly to a RST $\xi$ w.r.t.\ $\mathbb{P}$ which is the left-monotone solution to $\SEP(\mu,\nu)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:SEPcompact}, any subsequence of $(\xi ^n)_{n \in \mathbb{N}}$ has itself a convergent subsequence and the limit is a solution to $\SEP(\mu,\nu)$. If we show that this limit is $(T_l^{lm})_{l \geq 0}$-shadow-residual, by uniqueness
(see \cite[Lemma 4.3]{BeHeTo17}), it has to be the unique left-monotone solution to $\SEP(\mu,\nu)$ and the claim follows.
For simplicity, we denote the convergent subsequence of a given subsequence again by $(\xi ^n)_{n \in \mathbb{N}}$ and the limit by $\xi$. Since $\xi^n$ is $(T_l ^{lm})$-shadow-residual, the probability measure $\Law(B_0,B_{\xi ^n})$ is the left-curtain coupling of $\mu_n$ and $\nu_n$. Indeed, as in Remark \ref{rem:CondLM} we have for all $l \geq 0$
\begin{equation} \label{eq:LmStability}
\begin{split}
\Law(B_{\xi^n}; B_0 \leq - \ln(l)) &= \Law(B_{\xi ^n}; \xi ^n \geq T_l ^{lm}) \\ &= \shadow{\nu _n}{\Law(B_0; \xi ^n \geq T_l ^{lm})}
= \shadow{\nu _n}{\Law(B_0; B_0 \leq - \ln(l))}
\end{split}
\end{equation}
because $\{t \geq T_l ^{lm}\} = \{T_l ^{lm} = 0\} = \{B_0 \leq - \ln(l) \}$ (where $-\ln(0) := - \infty$). As shown in \cite[Theorem 2.16]{Ju14} (and also as a consequence of stability of martingale optimal transport, \cite[Theorem 1.1]{BaPa19}), the left-curtain coupling is stable under weak convergence, i.e.\ the weak limit $\Law(B_0,B_\xi)$ of $(\Law(B_0,B_{\xi^n}))_{n \in \mathbb{N}}$ is the left-curtain coupling of $\mu$ and $\nu$. Thus, analogous to \eqref{eq:LmStability}, $\xi$ is $(T_l^{lm})_{l \geq 0}$-shadow-residual.
\end{proof}
\subsection{Application}
Fix $\mu \leqc \nu$, let $\tau ^r$ be the Root solution to $\SEP(\mu,\nu)$ and $(T_l^r)_{l \geq 0}$ the Root time-change.
Let $\lambda > 0$. We set $ \tilde B^{\lambda} := B \circ \theta _{T^{r} _{\lambda} \land \tau ^{r}}$. By the strong Markov property, $ \tilde B^{\lambda}$ is a Brownian motion and there exists a left-monotone solution $\sigma ^{\lambda}$ of $\SEP(\mathrm{Law}(\tilde B^{\lambda}_0),\nu)$. We define
\begin{align*}
\tau ^{\lambda} :=& \tau ^{r} \1 _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \1 _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \\
=& \tau^r \1_{\{\tau ^r < \lambda \}} + \sigma^\lambda \circ \theta _{\lambda} \1 _{\{\tau ^r \geq \lambda\}} .
\end{align*}
By Lemma \ref{lemma:Nesting}, $\tau ^\lambda$ is a solution to $\SEP(\mu,\nu)$ which is shadow residual w.r.t.\ the time-change $(T_l ^\lambda)_{l \geq 0}$ defined as
\begin{equation*}
T^{\lambda} _l :=
T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \1_{\{l \geq \lambda\}} = \begin{cases}
l & l < \lambda \\
\lambda & \lambda \leq l \leq \exp(-B_\lambda) + \lambda \\
+ \infty & l > \exp(-B_\lambda) + \lambda
\end{cases}.
\end{equation*}
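Indeed, the case distinction can be read off from the two time-changes separately: $T^{r}_{l \land \lambda} = l \land \lambda$, while the shifted left-monotone time-change only takes the values $0$ and $+\infty$; by the characterisation recalled in the proof of Lemma \ref{lemma:StabilityLM}, for $l > \lambda$,
\begin{equation*}
T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}} = \begin{cases}
0 & B_\lambda \leq - \ln(l - \lambda) \\
+ \infty & \text{otherwise,}
\end{cases}
\end{equation*}
and the condition $B_\lambda \leq - \ln(l - \lambda)$ is equivalent to $l \leq \exp(-B_\lambda) + \lambda$.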
Thus, by Theorem \ref{thm:MainEqui}, there exists a barrier $\mathcal{R}^\lambda$ such that
\begin{equation*}
\tau ^{\lambda} = \inf \{t \geq 0 : (X_t ^\lambda, B_t) \in \mathcal{R}^\lambda\}
\end{equation*}
where $X^{\lambda}$ is defined as
\begin{align*}
X_t ^\lambda := \sup \{l \geq 0 : T_l^\lambda \leq t\} = \begin{cases}
t & t < \lambda \\
\lambda + \exp(-B_\lambda) & t \geq \lambda \end{cases}.
\end{align*}
To complete the proof of Proposition \ref{prop:Interpolation}, it remains to show the convergence of $\tau ^\lambda$ to $\tau ^r$ and $\tau ^{lm}$ as randomized stopping times as $\lambda$ tends to $+\infty$ and $0$. This is covered by Lemma \ref{lemma:ConvToRoot} and Lemma \ref{lemma:ConvToLM}.
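As a purely illustrative aside (not used in any of the proofs), the defining properties of a solution to $\SEP(\mu,\nu)$ can be checked by simulation in the simplest case $\mu = \delta_0$ and $\nu = \tfrac{1}{2}(\delta_{-1} + \delta_{1})$, where the Root solution is the hitting time of $\{|x| \geq 1\}$. The following sketch (plain Python with NumPy; the step size, time horizon and sample size are arbitrary choices) checks $\Law(B_\tau) \approx \nu$ and $\mathbb{E}[\tau] \approx \operatorname{Var}(\nu) = 1$ by Monte Carlo.
\begin{verbatim}
# Monte Carlo sanity check for the embedding of nu = (d_{-1}+d_{+1})/2
# from mu = d_0 via the hitting time of {|x| >= 1} (the Root solution
# in this toy case).  Parameters below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_paths, dt, t_max = 20_000, 1e-3, 20.0
n_steps = int(t_max / dt)

b = np.zeros(n_paths)                 # current value of each path
tau = np.full(n_paths, np.nan)        # recorded stopping times
val = np.zeros(n_paths)               # recorded stopped values
alive = np.ones(n_paths, dtype=bool)  # paths not yet stopped

for k in range(1, n_steps + 1):
    b[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    just_hit = alive & (np.abs(b) >= 1.0)
    tau[just_hit] = k * dt
    val[just_hit] = np.sign(b[just_hit])  # overshoot ignored at this step size
    alive &= ~just_hit

hit = ~np.isnan(tau)
print("fraction stopped before t_max :", hit.mean())
print("P[B_tau = +1] (should be ~0.5):", (val[hit] > 0).mean())
print("E[tau]        (should be ~1.0):", tau[hit].mean())
\end{verbatim}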
\begin{lemma} \label{lemma:ConvToRoot}
The sequence $(\tau ^\lambda)_{\lambda > 0}$ converges a.s.\ to $\tau ^r$ as $\lambda$ tends to $+ \infty$. In particular, $\Law(B,\tau ^\lambda)$ converges weakly to $\Law(B,\tau ^r)$.
\end{lemma}
\begin{proof}
Since $\tau^r < + \infty$ and $T_\lambda^r \rightarrow \infty$, we have
\begin{equation*}
\lim _{\lambda \rightarrow + \infty} \tau ^\lambda = \lim _{\lambda \rightarrow + \infty} \left( \tau ^{r} \1 _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \1 _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \right) = \tau ^{r} \quad \mathbb{P}-\text{a.s.}
\end{equation*}
The weak convergence of $\Law(B,\tau ^\lambda)$ to $\Law(B,\tau ^r)$ follows immediately.
\end{proof}
\begin{lemma} \label{lemma:CompSupp}
For all $\varphi \in C_c(\Omega \times [0, \infty))$ we have
\begin{equation*}
\lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \1 _{\{\tau^r > 0\}}\right] = 0
\end{equation*}
where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$.
\end{lemma}
\begin{proof}
For all $\lambda > 0$ we define the map $\Theta ^\lambda$ on $\Omega \times [0,\infty)$ by
\begin{equation*}
\Theta ^\lambda : (\omega,t) \mapsto (\omega \circ \theta_\lambda, \max\{t-\lambda,0\}).
\end{equation*}
A compatible metric on the Polish space $\Omega \times [0, \infty)$ is given by
\begin{equation*}
d((\omega,t),(\omega',t')) := |t-t'| + \sum _{n \in \mathbb{N}} 2^{-n}\sup _{s \in [0,n]} |\omega_s - \omega'_s|
\end{equation*}
and under this metric $\Theta ^\lambda$ is $2$-Lipschitz continuous for all $\lambda \in (0,1)$. Moreover, since $\lim _{\lambda \rightarrow 0}\Theta ^\lambda(\omega,t) = (\omega,t)$ for all $(\omega,t) \in \Omega \times [0, \infty)$, $\Theta^\lambda$ converges uniformly on compact sets to the identity on $\Omega \times [0, \infty)$. Thus, we have
\begin{align*}
&\lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right] \\
& \quad = \lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\Theta^\lambda(B,\lambda + \tilde{\sigma} ^\lambda)) \right] = 0.
\end{align*}
By substituting the definition of $\tau^\lambda$, we obtain the estimate
\begin{align*}
&\mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \1 _{\{\tau^r > 0\}}\right] \\
&\quad \leq 2 ||\varphi||_\infty \mathbb{P}[0 <\tau ^r < \lambda] + \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right]
\end{align*}
and therefore the claim follows.
\end{proof}
\begin{lemma} \label{lemma:ConvToLM}
The sequence $(\Law(B,\tau ^\lambda))_{\lambda > 0}$ converges weakly to $\Law(B,\tau ^{lm})$ as $\lambda$ tends to $0$.
\end{lemma}
\begin{proof}
On the set $\{\tau ^r = 0\}$ we have $U_\mu(B_0) = u^r(0,B_0) = v(B_0) = U_\nu(B_0)$ and thus $\tau ^{lm} = 0 = \tau ^r$.
Hence, in conjunction with Lemma \ref{lemma:CompSupp} we obtain for all $\varphi \in C_c(\Omega \times [0, \infty))$
\begin{equation} \label{eq:ConvPHI}
\lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \right] = 0
\end{equation}
where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$.
Since both $(\Law(B,\tau ^\lambda))_{\lambda > 0}$ and $(\Law(\tilde{B}^\lambda,\tilde{\sigma}^\lambda))_{\lambda > 0}$ are sequences of solutions to $\SEP(\mu,\nu)$ and $\SEP(\Law(\tilde{B}_0^\lambda),\nu)$, both families are tight by Lemma \ref{lemma:SEPcompact}. Thus, \eqref{eq:ConvPHI} holds also for all $\varphi \in C_b(\Omega \times [0, \infty))$.
Finally, Lemma \ref{lemma:StabilityLM} shows that $\Law(\tilde{B}^\lambda,\tilde{\sigma}^\lambda)$ converges weakly to $\Law(B,\tau ^{lm})$ as $\lambda$ tends to $0$; together with \eqref{eq:ConvPHI}, this yields the claim.
\end{proof}
\normalem
\bibliographystyle{abbrv}
|
2,869,038,156,777 | arxiv | \section{Introduction}
Supermassive black holes have long been known to exist at the centers of large
galaxies (e.g.\ Lynden-Bell 1969; Wolfe \& Burbidge 1970; Sargent et al.\
1978). Intriguingly, scaling relations between their masses and several
global properties of the host spheroid (e.g.\ Kormendy \& Richstone 1995;
Magorrian et al.\ 1998; Ferrarese \& Merritt 2000; Gebhardt et al.\ 2000;
Graham et al.\ 2001; Marconi \& Hunt 2003) may be the result of feedback
mechanisms in which the central black hole regulates the growth of the
surrounding bulge, rather than vice-versa (e.g.\ Silk \& Rees 1998; Haehnelt,
Natarajan \& Rees 1998; de Lucia et al.\ 2006; Antonuccio-Delogu \& Silk
2010).
In early-type dwarf galaxies and the bulges of late-type galaxies, dense
nuclear star clusters appear to dominate at the expense of massive black holes
(Valluri et al.\ 2005; Ferrarese et al.\ 2006a; Wehner \& Harris 2006). The
existence of scaling relations between the luminosity and stellar mass of
these star clusters and their host spheroid (e.g.\ Graham \& Guzm\'an 2003;
Balcells et al.\ 2003, 2007; Grant et al.\ 2005) similarly suggests that a
physical mechanism may be controlling their growth, possibly based on some
regulating feedback process (e.g.\ King 2005; McLaughlin et al.\ 2006;
Hueyotl-Zahuantitla et al.\ 2010). Or perhaps instead some other activity prevails,
such as cluster inspiral (e.g.\ Tremaine, Ostriker \& Spitzer 1975; Bekki 2010,
Agarwal \& Milosavljevi{\'c} 2011), possibly coupled with gas dissipation and new star formation
(Hartmann et al.\ 2011).
Coupled with the above observational relations
is the observation that many nuclear star clusters in
intermediate-mass spheroids (of stellar mass $10^{8} < M_{\rm sph,*}/M_{\odot}
< 10^{10}$) harbour massive black holes themselves (e.g.\ Graham \& Driver
2007; Gonz{\'a}lez Delgado et al.\ 2008; Seth et al.\ 2008, 2010; Gallo et
al.\ 2010; Neumayer \& Walcher 2012). An attempt to quantify the coexistence of these two types of
galactic nuclei was provided by Graham \& Spitler (2009) who revealed how
their (i) mass ratio and (ii) combined mass relative to their host spheroid's
stellar mass, changed as a function of host spheroid stellar mass.
Such dual nuclei are exciting for a number of reasons, including UV/X-ray
flaring events as infalling stars are tidally disrupted by the black hole
(e.g.\ Komossa \& Merritt 2008; Lodato et al.\ 2008; Rosswog et al.\ 2008;
Maksym, Ulmer \& Eracleous 2010) and the increased expectation for the
discovery of gravitational radiation as stellar mass black holes and neutron
stars inspiral toward the central supermassive black hole of these dense,
compact star clusters (Mapelli et al.\ 2011).
If nucleated galaxies, i.e.\ those with nuclear star clusters, were
participants in an hierarchical universe (White \& Frenk 1991), then their
dense nuclei must have eventually been replaced by massive black holes as
they, the host galaxies, grew into massive elliptical galaxies. Bekki \&
Graham (2010) have argued that the gravitational scouring which ensues from a
coalescing binary supermassive black hole after a galaxy merger event
(Begelman, Blandford \& Rees 1980; Ebisuzaki, Makino \& Okumura 1991; Graham
2004; Merritt, Mikkola \& Szell 2007), must first be preceded by the
destruction of these nuclear star clusters. They have revealed that binary
supermassive black holes can effectively `heat' the newly-merged star
clusters, causing them to eventually evaporate into the host spheroid. Such a
scenario suggests a connection-of-sorts between nuclear star clusters and
massive black holes in intermediate mass spheroids. Other, perhaps yet
unthought of, processes may also be operating.
This Letter explores potential connections by expanding upon the association
between black hole mass and host galaxy velocity dispersion, the $M_{\rm
bh}$--$\sigma$ diagram (Ferrarese \& Merritt 2000; Gebhardt et al.\ 2000;
Graham et al.\ 2011), by including nuclear star clusters.
In section~\ref{Sec_light} we provide some insight into the expected relations
in the ($M_{\rm bh} + M_{\rm nc}$)--$\sigma$ diagram via reference to the
galaxy luminosity-(velocity dispersion) relation for dwarf and ordinary
elliptical galaxies (Davies et al.\ 1983) and the galaxy-(nuclear star
cluster) luminosity relation for spheroids (Graham \& Guzm\'an 2003; Balcells
et al.\ 2007).
We also build on the ($M_{\rm bh} + M_{\rm nc}$)--$\sigma$ diagram from Graham
et al.\ (2011) by identifying and including new galaxies that host both a
nuclear star cluster and a supermassive black hole (Section~\ref{Sec_data}).
We additionally include those galaxies from Ferrarese et al.\ (2006a) with
nuclear star cluster masses that populate the low-mass end of the diagram.
In Section~\ref{Sec-R_D} we present our findings, notably that the expected relation
$M_{\rm nc} \propto \sigma^2$ appears consistent with the data. This exponent
of 2 is dramatically different to the value of $4.27\pm0.61$ advocated
previously (Ferrarese et al.\ 2006a), and suggests that theories developed to
match the previous relation may need reconsideration.
Section~\ref{Sec_Predict} goes on to present an exciting and significantly new
prediction for the $M_{\rm bh}$--$M_{\rm sph}$ and $M_{\rm bh}$--luminosity relations
for spheroids
fainter than $M_B \sim -20.5$ mag, i.e.\ those
thought to have not formed from major, dissipationless, galaxy merger events
(e.g.\ Davies et al.\ 1983; Faber et al.\ 1997).
\section{Expectations}\label{Sec_light}
From pre-existing scaling relations it is possible
to predict the slope of the relation between nuclear cluster mass and host
spheroid velocity dispersion: the $M_{\rm nc}$--$\sigma$ relation. It
is also possible to predict a slope for the $M_{\rm bh}$--$\sigma$
relation at the high-mass end where nuclear clusters do not exist and dry
galaxy merging is thought to occur.
The luminosity $L$ of dwarf elliptical galaxies
(or more broadly elliptical galaxies without depleted cores)
is such that $L\propto \sigma^2$ (Davies et al.\ 1983; Held et al.\
1992), while for big elliptical galaxies (with $\sigma \ga 200$ km s$^{-1}$)
the exponent is known to have a value of 5 (Schechter 1980; Malumuth \&
Kirshner 1981). When including samples of intermediate-mass elliptical
galaxies (with $100 \la \sigma <$ 170-200 km s$^{-1}$) with the big
elliptical galaxies, the average
exponent has the more commonly known value of 3 to 4 (Faber \& Jackson 1976;
Tonry 1981).
Following Davies et al.'s (1983) identification of the transition in the
$L$--$\sigma$ relation at $M_B \approx -20.5$ B-mag ($\sigma \approx 200$ km
s$^{-1}$), where they noted that a number of other physical properties changed
behavior, Matkovi\'c \& Guzm\'an (2005, see also de Rijcke et al.\ 2005)
connected this transition with the onset of dry galaxy merging in the brighter
galaxies.
Provided there are no significant gravitational
ejections of supermassive black holes from massive galaxies
(e.g.\ Gualandris \& Merritt 2008), then at the high-mass end where
dry galaxy merging is thought to occur ---
involving galaxies with equal
$M_{\rm bh}/M_{\rm sph}$ ratios (H\"aring \& Rix 2004) ---
the combined supermassive black hole mass and the merged host galaxy
luminosity and mass, must increase in lock step. That is, the slope of the
$M_{\rm bh}$--$L$ relation must be equal to 1, as is
observed for samples dominated by luminous galaxies (Marconi \& Hunt 2003; Graham 2007).
Consequently, the slope of the $L$--$\sigma$ relation for galaxies built by
such dry merging (with $M_B \la -20.5$ B-mag and $\sigma \ga$ 200 km
s$^{-1}$) will therefore equal the slope of the $M_{\rm bh}$--$\sigma$ relation
over this same mass range. Given that $L \propto \sigma^5$, one has (the
prediction) that $M_{\rm bh} \propto \sigma^5$, which is what is observed for
massive ``core'' galaxies (Hu 2008; Graham et al.\ 2011; see also
Ferrarese \& Merritt 2000 and Merritt \& Ferrarese 2001).
At the low-mass end, Graham \& Guzm\'an (2003) have
revealed that the nuclear cluster luminosity,
and in turn stellar mass,
$M_{\rm nc}$, in dwarf elliptical galaxies scales with the galaxy luminosity
$L$ such that $M_{\rm nc} \propto L^{0.87\pm0.26}$. Given that $L \propto
\sigma^2$ in dwarf elliptical galaxies, one has that $M_{\rm nc} \propto
\sigma^{1.74\pm0.52}$, or, {\it roughly} that $M_{\rm nc} \propto \sigma^{2}$.
Another way to predict the outcome is to note that if the ratio
of $(M_{\rm bh} + M_{\rm nc})$ to host spheroid luminosity $L$ is
constant (Ferrarese et al.\ 2006a), then the bent $L$--$\sigma$ relation
(Davies et al.\ 1983) maps directly into a
bent ($M+M$)-$\sigma$ relation, with slopes of 2 and 5 at the low- and
high-mass end respectively. We note that this bent $(M+M)$--$\sigma$
relation has been predicted before
(e.g.\ Graham \& Driver 2007, their
section 3.2; Graham 2008b, their section~2.2.2) but curiously is at
odds with Ferrarese et al.\ (2006a) who reported a slope of
$\sim$4 for the $M_{\rm nc}$--$\sigma$ relation.
\section{Data}\label{Sec_data}
\begin{table*}
\caption{Extension of Graham \& Spitler's (2009) Table~1 for galaxies
with a direct supermassive black hole mass measurement (from the
compilation by Graham 2008b and Graham et al.\ 2011) {\it and} a
nuclear star cluster. All galaxies that are likely to have both a
supermassive black hole and a nuclear cluster, based upon their
'goldilocks' host spheroid stellar
mass (see Graham \& Spitler 2009) are included.}
\label{Tab1}
\begin{tabular}{@{}llccll@{}}
\hline
Galaxy & Type & Dist. & $M_{\rm bh}$ & Mag$_{\rm nc}$ & $ M_{\rm nc}$ \\
& & Mpc & $10^7 [M_{\odot}]$ & mag & $10^7 [M_{\odot}]$ \\
\hline
NGC~1300 & SBbc & 20.7 & $7.3^{+6.9}_{-3.5}$ & ... & 8.7$^A$ \\
NGC~2549 & SB0 & 12.3 & $1.4^{+0.2}_{-1.3}$ & $m_{F702W} = 17.6$$^B$ & 1.1 \\
NGC~3585 & S0 & 19.5 & $31^{+14}_{-6}$ & $m_{F555W} = 20.5$$^C$ & 0.4 \\
NGC~4026 & S0 & 13.2 & $18^{+6}_{-3}$ & $m_{F555W} = 18.4$$^C$ & 1.3 \\
\multicolumn{6}{c}{Upper limits on nuclear star cluster mass} \\
NGC~1316 & SB0 & 18.6 & $15.0^{+7.5}_{-8.0}$ & $m_V > 19.9$$^D$ & $< 0.8$ \\
NGC~2787 & SB0 & 7.3 & $4.0^{+0.4}_{-0.5}$ & $m_{F555W} > 17.25^{+0.17}_{-0.10}$$^E$ & $< 1.5$ \\
NGC~3227 & SB & 20.3 & $1.4^{+1.0}_{-0.6}$ & $m_H > 15.7\pm0.2^F$ & $< 2.2$ \\
NGC~3245 & S0 & 20.3 & $20^{+5}_{-5}$ & $m_{F547M} > 17.61^{+0.15}_{-0.11}$$^E$ & $< 8.4$ \\
NGC~3489 & SB0 & 11.7 & $0.58^{+0.08}_{-0.08}$ & $m_H > 12.7$$^G$ & $< 13$ \\
NGC~4459 & S0 & 15.7 & $6.8^{+1.3}_{-1.3}$ & $m_{F555W} > 17.40^{+0.24}_{-0.14}$$^E$ & $< 5.8$ \\
NGC~4596 & SB0 & 17.0 & $7.9^{+3.8}_{-3.3}$ & $m_{F606W} > 17.97^{+0.14}_{-0.08}$$^E$ & $< 4.0$ \\
\multicolumn{6}{c}{Unknown nuclear star cluster mass} \\
Circinus & Sb & 2.8 & $0.11^{+0.02}_{-0.02}$ & \multicolumn{2}{l}{unknown, dusty Sy2 nucleus$^H$} \\
IC~2560 & SBb & 40.7 & $0.44^{+0.44}_{-0.22}$ & \multicolumn{2}{l}{unknown, dusty Sy2 nucleus$^I$} \\
NGC~224 & Sb & 0.74 & $14^{+9}_{-3}$ & \multicolumn{2}{l}{two nuclear discs$^J$} \\
NGC~1068 & Sb & 15.2 & $0.84^{+0.03}_{-0.03}$ & \multicolumn{2}{l}{unknown, Sy2 nucleus$^K$} \\
NGC~3079 & SBcd & 20.7 & $0.24^{+0.24}_{-0.12}$ & \multicolumn{2}{l}{unknown, dusty Sy2 nucleus$^L$} \\
NGC~3393 & SBab & 55.2 & $3.4^{+0.2}_{-0.2}$ & \multicolumn{2}{l}{unknown, dusty Sy2 nucleus$^M$} \\
NGC~3998 & S0 & 13.7 & $22^{+19}_{-16}$ & \multicolumn{2}{l}{unknown, AGN dominates$^E$} \\
NGC~4258 & SBbc & 7.2 & $3.9^{+0.1}_{-0.1}$ & \multicolumn{2}{l}{unknown, Sy2 AGN dominates$^N$} \\
NGC~4261 & E2 & 30.8 & $52^{+10}_{-11}$ & \multicolumn{2}{l}{unknown, Sy3 AGN dominates$^E$} \\
NGC~4486a & E2 & 17.0 & $1.3^{+0.8}_{-0.8}$ & \multicolumn{2}{l}{nuclear stellar disc$^O$} \\
NGC~4945 & SBcd & 3.8 & $0.14^{+0.14}_{-0.07}$ & \multicolumn{2}{l}{Sy2 $+$ dusty nuclear starburst$^P$} \\
NGC~5128 & S0 & 3.8 & $4.5^{+1.7}_{-1.0}$ & \multicolumn{2}{l}{unknown, Sy2 AGN dominates$^Q$} \\
NGC~7582 & SBab & 22.0 & $5.5^{+2.6}_{-1.9}$ & \multicolumn{2}{l}{unknown, Sy AGN dominates$^R$} \\
\hline
\end{tabular}
\noindent
References:
$^A$ Atkinson et al.\ (2005, their Table~2, integrating their inner component to 10$r_b \approx 1\arcsec$);
$^B$ From our NC$+$S\'ersic$+$exponential analysis of the light profile in Rest et al.\ (2001), using $M/L_{F702W}=1.5$;
$^C$ From our NC$+$S\'ersic$+$exponential analysis of the light-profile in Lauer et al.\ (2005), using $M/L_{F555W}=2.0$;
$^D$ Lauer et al.\ (2005), NGC~1316 = Fornax A, AGN contamination, $M/L_V=2.5$ used here;
$^E$ Gonzalez-Delgado et al.\ (2008), $M/L=2.5$ used here, nuclear cluster masses are upper limits due to AGN contamination;
$^F$ Carollo et al.\ (2002), may have starburst plus Sy1.5 AGN contamination, $M/L_H=0.5$ used here;
$^G$ From our NC$+$S\'ersic$+$exponential analysis of this Sy2
galaxy's light-profile in Nowak et al.\ (2010; their Figure~9), using $M/L_H=0.56$;
$^H$ Prieto et al.\ 2004, Mu{\~n}oz-Mar{\'{\i}}n et al.\ (2007), Tristram et al.\ 2007;
$^I$ Peng et al.\ (2006), Mu{\~n}oz-Mar{\'{\i}}n et al.\ (2007);
$^J$ Peterson (1978);
$^K$ Davies et al.\ (2007, their Fig.22) uncalibrated light profile reveals a nuclear point source within
0.1-0.2 arcseconds, atop of the 1 arcsecond (70 pc) nuclear disc in NGC~1068.
$^L$ Cecil et al.\ (2001);
$^M$ Cooke et al.\ (2000);
$^N$ Pastorini et al.\ (2007);
$^O$ Kormendy et al.\ (2005), Ferrarese et al.\ (2006b: NGC~4486a = VCC~1327), Prugniel et al.\ (2011);
$^P$ Marconi et al.\ (2000);
$^Q$ Radomski et al.\ (2008);
$^R$ Bianchi et al.\ (2007), Wold \& Galliano (2006);
light-profile given by Rest et al.\ (2001), $M/L_{F702W}=1.5$ used here;
\end{table*}
The black hole masses for 64 galaxies have been taken from Graham
(2008b, his table~1) and Graham et al.\ (2011, their table~1). The
velocity dispersions have also been obtained from the tables in these papers, with
the exception that this Letter uses a host spheroid velocity
dispersion of 55 km s$^{-1}$ for M32 (Chilingarian 2011, in prep.).
The previously tabulated central velocity dispersion of 72 km s$^{-1}$
for this nearby galaxy is elevated by the stellar dynamics close to
the spatially well-resolved black hole.
As noted by Graham \& Spitler (2009), many of these galaxies also house
nuclear star clusters.
In the linear regression which follows, we do however exclude
NGC~4564 (whose nuclear star cluster mass is not yet available) and
NGC~1399 (whose nuclear star cluster is debatable) from Graham \& Spitler's
list.
In Table~\ref{Tab1} we expand the above list of 10 (=12-2) galaxies for which black
holes and nuclear star clusters coexist. We (i) provide masses for an
additional three galaxies (NGC~1300, NGC~2549 and NGC~3585, see
Figure~\ref{Fig1}) to give a total of 13,
(ii) update the mass of the nuclear star cluster in NGC~4026, and (iii)
tabulate upper limits on the star cluster masses for a further seven galaxies.
Also provided in Table~\ref{Tab1} are the names of galaxies whose spheroid
mass is such that they are good candidates to house dual nuclei.
In passing, it is noted that the presence of nuclear star clusters with a
different stellar population and thus a different stellar $M/L$ ratio to the
surrounding bulge (e.g.\ Lotz et al.\ 2004; C\^ot\'e et al.\ 2006; Paudel et
al.\ 2011; den Brok et al.\ 2011, in prep.) may result in errors to the
derivation of the supermassive black hole mass if one is not careful. We are
not, however, in a position to quantify this, and we take the quoted
supermassive black hole errors at face value. As discussed in Graham \&
Spitler (2008), the uncertainty on the nuclear star
cluster masses is likely constrained to within a factor of $\sim$2.
\begin{figure}
\includegraphics[angle=270,scale=0.36]{fig-1.ps}
\caption{
The magnitude of the nuclear star cluster is measured
relative to the inward extrapolation of the available outer galaxy light
distribution --- which has been modelled as the sum of two components: a
S\'ersic bulge plus an exponential disc. Residual profiles, and the root mean
square (rms) scatter $\Delta$, are shown in the lower panels.
The light profile data have come from the sources listed in Table~\ref{Tab1}.
}
\label{Fig1}
\end{figure}
The nuclear star cluster masses and host galaxy velocity dispersions shown in
Ferrarese et al.\ (2006a), for 29 galaxies with $\sigma \la$120 km
s$^{-1}$, have been included here to better populate the lower-mass end of our
($M_{\rm bh} + M_{\rm nc}$)-$\sigma$ diagram. From that study, the four
nuclear star clusters with masses $\ga 10^8 M_{\odot}$ (VCC 1913,
1146, 1630, 1619) are reported to have half-light radii of 0.32, 0.50, 0.60
and 0.71 arcseconds, respectively (Ferrarese et al.\ 2006b). All of the
remaining nuclei sizes are less than $0.^{\prime\prime}25$, i.e.\ less than 20
pc adopting their Virgo cluster distance of 16.5 Mpc. Ferrarese et al.\
(2006b) identified the first three of these four galaxies as hosting a small
scale nuclear disc, and they observed a very dusty nucleus in the lenticular
galaxy VCC~1619. Through application of their S\'ersic-galaxy $+$
single-nucleus model, the flux which they assigned to their ``nuclear star
clusters'' is greater than that acquired when separating nuclear discs and
nuclear star clusters (e.g.\ Balcells et al.\ 2007).
This explains the apparent
deviant nature of at least the first three of these four galaxies in
Figure~\ref{Fig2}b.
\section{Results and Discussion}\label{Sec-R_D}
Expanding upon the $(M_{\rm bh}+M_{\rm nc})$--$\sigma$ diagram from Graham et al.\ (2011,
their figure~8), especially at the
low-$\sigma$ end through the inclusion of the ($M_{\rm nc}, \sigma$)
data from Ferrarese et al.\ (2006a) proves to be rather revealing.
Figure~\ref{Fig2} appears to display two markedly different
slopes. While the slope at the high-$\sigma$ end is around 5 for the ``core''
galaxies (Ferrarese \& Merritt 2000; Hu 2008; Graham et al.\ 2011), the slope at
the low-$\sigma$ end is seen to be roughly consistent with a value of 2.
Given that the efficiency of feedback from star clusters
and massive black holes is different, it is probably preferable
to separate their masses when considering slopes in the $M$--$\sigma$ diagram.
Fitting the ordinary least squares bisector regression {\sc SLOPES} (Feigelson
\& Babu 1992) --- a code which is not sensitive to measurement uncertainties
--- to the (13+29) nuclear stellar masses and associated velocity dispersions
mentioned in the previous section gives a slope of 2.14$\pm$0.31.
One may rightly wonder, however, whether this slope has been lowered by the
inclusion, at the high-$\sigma$ end, of nuclear star clusters which have been
partly eroded by massive black holes --- if the scenario proposed by Bekki \&
Graham (2010) is correct. It is however the case that the four stellar nuclei with
masses $\ga 10^8 M_{\odot}$ do increase the measured slope. Removing these four
objects results in a slope of 1.78$\pm$0.24 (and an intercept at 70 km s$^{-1}$
of 6.83$\pm$0.08), in remarkable agreement with the expected value of
1.74$\pm$0.52 (see section~\ref{Sec_light}) based on a smaller independent
data set.
Using the bisector regression {\sc BCES} from Akritas \& Bershady (1996),
and assuming a 10 and 50 per cent uncertainty on the velocity dispersion
and the nuclear star cluster mass, respectively, gives a near identical
slope and intercept of 1.73$\pm$0.23 and 6.83$\pm$0.07. While varying the
uncertainty on the velocity dispersion by a factor of 2 has almost no effect
on the fit,
increasing the uncertainty on the nuclear star cluster mass to a factor of 2
yields the relation
\begin{equation}
\log \left[\frac{M_{\rm nc}}{M_{\odot}} \right] =
(1.57\pm0.24)\log \left[\frac{\sigma}{70\, {\rm km\, s}^{-1}}\right] + (6.83\pm0.07).
\end{equation}
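To make the fitting procedure explicit, the sketch below (Python with NumPy) computes an ordinary least squares bisector slope and the intercept at 70 km s$^{-1}$ in log--log space. It operates on synthetic data, not on the actual $(\sigma, M_{\rm nc})$ measurements, and is only a simplified stand-in for the {\sc SLOPES} and {\sc BCES} codes used above; the adopted true slope, intercept and scatter of the synthetic sample are arbitrary.
\begin{verbatim}
# Illustrative OLS-bisector fit in log-log space on synthetic data.
# The numbers below are NOT the measurements used in this paper.
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.uniform(30.0, 200.0, 40)                  # km/s, synthetic
log_mnc = 6.83 + 1.8 * np.log10(sigma / 70.0) \
          + rng.normal(0.0, 0.3, sigma.size)          # synthetic log M_nc

x = np.log10(sigma / 70.0)
y = log_mnc
x0, y0 = x - x.mean(), y - y.mean()

b1 = np.sum(x0 * y0) / np.sum(x0**2)   # OLS(Y|X) slope
b2 = np.sum(y0**2) / np.sum(x0 * y0)   # OLS(X|Y) slope, written as dy/dx
b_bis = (b1*b2 - 1.0 + np.sqrt((1.0 + b1**2)*(1.0 + b2**2))) / (b1 + b2)
a_bis = y.mean() - b_bis * x.mean()

print("bisector slope          :", round(b_bis, 2))
print("intercept at 70 km s^-1 :", round(a_bis, 2))
\end{verbatim}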
\begin{figure*}
\includegraphics[angle=270,scale=0.66]{fig-2.ps}
\caption{
Panel a) 64 gray squares define the $M_{\rm bh}$--$\sigma$ relation from
Graham et al.\ (2011), shown by the thin line, 13 black arrows show how points
will move if the nuclear star cluster mass $M_{\rm nc}$ is added, while
double-headed arrows are used for the 7 nuclear star clusters that only have an
upper limit to their mass. The 13 open triangles mark galaxies which may have
a nuclear star cluster that could move the points higher in the diagram (see Table~1).
A representative error bar is shown in the bottom right corner.
Panel b) The 13 galaxies with known nuclear cluster masses are now shown by only
one single gray square.
We have also now included, shown by the stars, 29 galaxies with nuclear cluster masses
from Ferrarese et al.\ (2006a, their figure~2b).
The two heavy black lines have a slope of 2.
}
\label{Fig2}
\end{figure*}
Figure~\ref{Fig2} suggests that nuclear star clusters do not clearly define
an offset parallel relation that is disconnected from the distribution of black
holes in the $M$--$\sigma$ diagram, as suggested by Ferrarese et al.\ (2006a)
who had found that $M_{\rm nc} \propto \sigma^{4.27\pm0.61}$.
Excluding what are likely to be nuclear stellar discs from four galaxies
studied by Ferrarese et al.\ (2006a; although see Prieto et al.\ 2004 and
Seth 2008b), while including an additional 13 nuclear
star clusters in galaxies with velocity dispersions over a much larger
baseline, reaching out to $\sim$200 km
s$^{-1}$, we have found a notably shallower $M_{\rm nc}$--$\sigma$ relation.
The previous relation had inspired some to adapt the momentum-conserving
arguments of Fabian (1999; see also King \& Pounds 2003 and Murray et al.\
2005) which had been used to explain why an $M_{\rm bh}$--$\sigma^4$ relation
might arise.
This nuclear cluster feedback mechanism involving stellar winds to produce an
$M_{\rm nc} \propto \sigma^4$ scaling relation may therefore require some
modification (McLaughlin et al.\ 2006; McQuillin \& McLaughlin 2012).
Relaxing the assumption of an isothermal sphere for the dark matter halo might
prove helpful. On the other hand, the results may be telling us that
(momentum) feedback is not relevant, which would be expected if the star
clusters were to have originated somewhere else and subsequently been
deposited into the spheroid, rather than coevolving there.
It is noted that the distribution of points defining the $(M_{\rm bh}+M_{\rm
nc})$--$\sigma$ relation seen in Figure~\ref{Fig2} may yet be shown to be
tracing an upper envelope at the low-$\sigma$ end. For example, non-nucleated
dwarf elliptical galaxies would reside below such an upper envelope if they do
not contain a supermassive black hole of sufficient mass (see also Batcheldor
2010 in regard to sample selection effects).
Finally,
an argument can be made for expecting a slope (or upper envelope)
at the low-$\sigma$ end of the $(M_{\rm bh}+M_{\rm nc})$--$\sigma$ diagram that
is actually closer to 1 than 2. While the data for galaxies with
$M_{\rm bh} > 5\times10^7 M_{\odot}$ to $2\times10^8 M_{\odot}$ is
roughly consistent with a constant $(M_{\rm bh}+M_{\rm nc})/M_{\rm sph}$ ratio
(Marconi \& Hunt 2003; H\"aring \& Rix 2004),
Graham \& Spitler (2009, see their figure~3) found that the $(M_{\rm
bh}+M_{\rm nc})/L$ ratio increases as one proceeds to lower
luminosities $L$ such that $L\propto (M_{\rm bh}+M_{\rm nc})^{5/3}$.
Subsequently, coupled with the relation $L\propto \sigma^2$, one has
that $(M_{\rm bh}+M_{\rm nc}) \propto \sigma^{6/5}$.
Additional data plus a more detailed modelling of each galaxy's individual
stellar components, including inner and outer nuclear discs,
will help to clarify this situation.
\subsection{Predictions for a bent $M_{\rm bh}$--$L$ and $M_{\rm bh}$--$M_{\rm
sph}$ relation}\label{Sec_Predict}
We know that for massive elliptical galaxies $L \propto \sigma^5$ (Schechter
1980; Malumuth \& Kirshner 1981) and $M_{\rm bh} \propto \sigma^5$ (Merritt \&
Ferrarese 2001; Hu 2008; Graham et al.\ 2011). Consistent with these
observations is the relation $M_{\rm bh} \propto L^{1.0}$ (Marconi \& Hunt
2003; Graham 2007) for galaxy samples dominated by massive elliptical
galaxies. One may then ask what about the lower-mass galaxies (with $M_B \ga
-20.5$ mag). As noted, these dwarf and intermediate-luminosity elliptical galaxies
have $L \propto \sigma^2$ (Davies et al.\ 1983; Matkovi\'c \& Guzm\'an 2005; de
Rijcke et al.\ 2005) while they also seem to follow the relation $M_{\rm bh}
\propto \sigma^5$ (Ferrarese \& Merritt 2000; Graham et al.\
2011).\footnote{The offset nature of barred / pseudobulge galaxies in the
$M_{\rm bh}$--$\sigma$ diagram (Graham 2008a; Hu 2008) appears to be an
unrelated phenomenon.} Consequently, one should find that $M_{\rm bh} \propto
L^{2.5}$ for elliptical galaxies with $M_B \ga -20.5$ mag ($M_{\rm bh} \la
5 \times 10^7$ -- $2\times10^8 M_{\odot}$). That is, the $M_{\rm bh}$--$L$
relation may be broken or curved, and the $M_{\rm bh}/L$ and $M_{\rm
bh}/M_{\rm sph}$ ratios may not be approximately constant values at these
lower masses.
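Written out, the two regimes follow from simple substitution of the $L$--$\sigma$ relations into $M_{\rm bh} \propto \sigma^5$:
\begin{equation*}
L \propto \sigma^{2} \ \Rightarrow \ M_{\rm bh} \propto L^{5/2}, \qquad
L \propto \sigma^{5} \ \Rightarrow \ M_{\rm bh} \propto L^{1.0}.
\end{equation*}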
This has nothing to do with pseudobulges nor the alleged divide between
elliptical and dwarf elliptical galaxies at $M_B = -18$ mag (see the review in
Graham 2012a).
Further support for the above suggestion stems from the observation
that the luminosity-(S\'ersic index) relation is linear (e.g.\ Graham \&
Guzm\'an 2003) while the $M_{\rm bh}$--(S\'ersic index) relation is curved or
broken (Graham \& Driver 2007). Consistency would require that the $M_{\rm
bh}$-luminosity relation be broken too.
Spheroids fainter than $M_B = -20.5$ mag are the dominant spheroid population in the
universe, and it is claimed here that past work on the
$M_{\rm bh}$--$M_{\rm sph}$ and $M_{\rm bh}$--$L$ relations has
been severely biased by the sample selection of luminous spheroids likely
built in `dry' merger events. As such, the current near-linear
$M_{\rm bh}$--$M_{\rm sph}$ and $M_{\rm bh}$--L relations
(e.g.\ Marconi \& Hunt 2003; H\"aring \& Rix 2004; Graham 2007) should
not be used to constrain the growth mechanism of supermassive black holes in
galaxies (beyond simple addition in `dry' merger events).
This prediction, with significant implications for galaxy formation if true,
will be investigated further in Graham (2012b).
\section{acknowledgment}
The author wishes to have it acknowledged that more than five months elapsed
between submitting his manuscript and receiving a referee letter.
This research was supported by Australian Research Council
grants DP110103509 and FT110100263.
Graham thanks the organisers of the conference ``Central
Massive Objects: The Stellar Nuclei-Black Hole Connection'', June
22-25, 2010, ESO Headquarters, Garching, Germany, where this work was
first presented.
|
2,869,038,156,778 | arxiv |
\section{Introduction}
It is now clear that Type Ia supernovae are not a homogeneous class of
objects. One can see differences in spectral features at specific
epochs (\cite{Ber64}, \cite{Bra87}, \cite{Phi_etal87},
\cite{HarWhe90}, \cite{Nug_etal95}, \cite{Fil97}) and in the overall
morphology of the light curves (\cite{Phi_etal87}, \cite{Phi93},
\cite{Sun95}), which had long been suspected by earlier workers
(\cite{Bar_etal73}, \cite{Rus74}, \cite{Psk77}, \cite{Bra81},
\cite{Psk84}). The modern data have shown us, however, that the class
of Type Ia supernovae can still be used to provide accurate {\it
relative} distances by applying correction factors to the observed
luminosity which are a simple function of the evolution of the light
curve near maximum light (\cite{Phi93}, \cite{Ham_etal95},
\cite{Rie_etal95}). The simple proof that these techniques work is the
reduction in the magnitude scatter of a Hubble diagram for distant
supernovae, where the scatter reduces from about 0.4mag to 0.14mag
(\cite{Ham_etal96b}, \cite{Rie_etal96}).
To use the very high accuracy of the zero-point of the distant
supernova Hubble diagram to measure an accurate {\it absolute}
distance scale requires the direct measurements of the distances to a
number of nearby galaxies which have had well-observed Type Ia
supernovae. One of the most accurate methods to measure absolute
distances to nearby galaxies is the use of the Cepheid
period-magnitude relationship calibrated relative to the Large
Magellanic Cloud (\cite{MadFre91}). With an independent measurement of
the distance to the LMC the observed Hubble diagram of distant
supernovae will directly yield the Hubble constant. The nearby
calibrating galaxies must have three rather trivial properties: the
galaxy must be young enough to form classical Cepheids; the galaxy
must have hosted a reasonably ``normal'' type Ia supernova; and the
supernova light curve must have been reasonably well-measured.
There are only a handful of such supernova host galaxies within the
light grasp of HST, where the limiting distance modulus for measuring
Cepheids light curves is about 32.0. Six supernovae, SNe 1895B,
1937C, 1960F, 1972E, 1981B, and 1990N have been calibrated to date by
Saha, Sandage and collaborators (see \cite{Sah_etal96} for the most
recent paper in this series).
We have pointed out (\cite{Ham_etal96b}) that a few of the nearby
supernovae are not ideal as calibrators. The light curves for SNe
1895B and 1960F are very poor with ill-defined maxima. SN 1937C was
well observed, but the transformation from the 60 year-old films to
modern photometric bandpasses, while carefully calibrated
(\cite{PieJac95}), remains controversial in some circles
(\cite{Sch96}; rebuttal in \cite{JacPie96}). Both 1937C and 1972E
clearly had ``slower'' evolution near maximum light, which we have
shown is indicative of intrinsically brighter supernovae
(\cite{Ham_etal96a}); however, this latter point has been contested
(\cite{TamSan95}, \cite{San_etal96}). SN 1989B had very high reddening
($E(B-V) \sim 0.4$; \cite{Wel_etal94}). Only SNe 1981B and 1990N have
evidently both uncontroversial light curves and Cepheid distances.
If we relax the requirement that the Cepheids must be measured in the
{\it same} galaxy as the supernova and rely on group or cluster
associations between galaxies, a number of other calibrators become
available. Cepheid distances to NGC 3368 (M96) (\cite{Tan_etal95}) and
NGC 3351 (\cite{Gra_etal97}) have been measured with HST data.
\cite{San_etal96} associate the M96 group with a larger Leo Group
(also called the Leo I cloud) which includes the compact M66 (Leo
Triplet) group. However, M66, which hosted SN 1989B, is some 8\arcdeg\
away from the Cepheid host galaxies and one must question whether the whole
Leo group is at the same distance. HST observations to determine a
Cepheid distance to M66 are planned for HST Cycle 7 by Saha, Sandage,
and collaborators. There have also been two well studied Type Ia
supernovae in the Fornax cluster: SN 1980N (in NGC 1316) and SN 1992A
(in NGC 1380). \cite{Sil_etal96} have measured a Cepheid distance to
the peculiar spiral NGC 1365 thought to be a member of Fornax. Once
again, the physical association of the supernova host galaxy and the
Cepheid host galaxy is a point of some controversy. Such ambiguities
lead us to prefer Type Ia supernova absolute magnitude calibrations
based on galaxies which have both primary calibrators such as Cepheids
and supernova.
SN 1990N was discovered significantly before maximum in the SBb(r)
galaxy NGC 4639 by E.~Thouvenot at the Observatory of the C\^ote
d'Azur on 22 June 1990 (\cite{Mau90}; all dates referenced as UT).
\cite{Pol90} measured an astrometric position for this supernova of
(RA,dec,equinox)= (15$^h$ 18$^m$ 52\fs92, -7\arcdeg 11\arcmin
43\farcs2, 1950). \cite{KirLei90} classified it as a Type Ia supernova
on the basis of a spectrum obtained on the 26 June
1990. \cite{Lei_etal91} presented preliminary light curves based on
CTIO CCD data (which is reanalyzed in the present paper). Spectral
modeling has been published by \cite{Jef_etal92}, \cite{Shi_etal92},
\cite{Yam_etal92}, and \cite{Fis_etal97}. Recently, \cite{San_etal96}
and \cite{Sah_etal96} have obtained the distance to the parent galaxy
by measuring the periods and magnitudes of 20 Cepheid variable stars
with the Hubble Space Telescope. This object is therefore a key
template object in establishing the value of $H_0$ based on the
Cepheid-supernova distance scale.
SN 1991T has been one of the most extensively studied Type Ia
supernovae. It was discovered well before maximum in the Sb(s)II
galaxy NGC 4527 by S. Knight and independent observers
(\cite{Waa_etal91}) on 13 April 1991. An astrometric position for
this supernova by R. H. McNaught is given in the previous reference as
(RA,dec,equinox)= (12$^h$ 31$^m$ 36\fs91, +2\arcdeg 56\arcmin
28\farcs3, 2000).
Early optical spectral observations reported by \cite{LaFGol91},
\cite{Kir91}, and \cite{PhiHam91} showed that SN 1991T was a peculiar
Type Ia event which motivated a number of theoretical studies to model
the spectral evolution (\cite{Jef_etal92}, \cite{Rui_etal92},
\cite{Spy_etal92}, \cite{Yam_etal92}, \cite{Maz_etal95},
\cite{Mei_etal96}). Optical photometry has been published by
\cite{Phi_etal92}, \cite{For_etal93}, and \cite{Sch_etal94} which
showed that SN 1991T had a very slow rate of evolution through
maximum. Due to the excellent temporal coverage of the light curve,
this supernova has been used as template example of a slow supernova
(\cite{Ham_etal96c}, \cite{Rie_etal96}). It is expected that the Saha
and Sandage group will obtain a Cepheid distance to NGC 4527 using
data taken with HST in Cycle 7.
When accurate modern light curves for several nearby supernovae became
available some years ago, subtle differences between Type Ia supernova
light curves became apparent. CCD photometry showed that there is a
real spread in the peak luminosity and that some of the objects evolve
through maximum light more slowly than others. \cite{Phi93} presented
evidence that the rate of the decline after maximum is correlated with
the luminosity at maximum and that more luminous objects have a slower
decline rates. \cite{Ham_etal95}, \cite{Rie_etal95},
\cite{Ham_etal96b}, and \cite{Rie_etal96} have found that the scatter
in the Hubble diagram of Type Ia supernovae decreases significantly
when corrections for the peak luminosity -- decline rate relation are
introduced. \cite{TamSan95} argue that when samples are restricted to
``normal'' objects (by eliminating events like SN 1991T) there is no
need to correct for a peak luminosity -- decline rate effect.
However, the Hamuy studies find that by ignoring this effect, the
estimate of the Hubble constant can be biased too low by up to 15\%.
The intent of this paper is to present accurate light curves of the
two nearby supernovae SNe 1990N and 1991T which are important
calibrators in the distance scale. We have already used these light
curves in our work on the Hubble constant (\cite{Ham_etal96b}. In
Section 2 of this paper we present the observations and reduction of
the optical photometric data obtained at CTIO. The light curves as
well as color curves for both supernovae are shown in Section 3. A
final discussion is found in Section 4.
\section{Observations}
The optical observations of SNe 1990N and 1991T were obtained using
the 0.9m and the Blanco 4m telescopes at CTIO. SN 1990N was
observed from June 1990 to March 1992 and SN 1991T was observed from
April 1991 to June 1992. The observations were made using Texas
Instrument and Tektronix CCDs (except for the night of the 19 March
1990 when a Thomson detector was used) and facility $UBV(RI)_{KC}$
filters in the Johnson--Kron--Cousin photometric system (\cite{Joh63},
\cite{Kro53}, \cite{Cou76}. The observation logs are given in Table
\ref{t1} and \ref{t2}. The detector name, listed in the final
column of these two tables, combines the manufacturer name and a
running number assigned by the CTIO CCD lab. We have assumed that each
different CCD listed in this table (along with the filter set) has a
unique set of color terms that must be derived from observations.
We made observations of the supernovae under varied photometric
conditions, including very non-photometric weather with cloud
extinction up to a few magnitudes. It is well established
(\cite{Ser70}, \cite{Wal_etal70}, \cite{Ols83}) that clouds are quite
grey, which allows us to use local standards (in the same CCD frame as
the supernova) and averaged color terms for a specific CCD measured on
photometric nights.
For accurate photometry on non-photometric nights, it is necessary to
define a precise local photometric sequence of stars. We measured
photometric sequences on 13 photometric nights in the CCD field around
NGC 4639 and NGC 4527 referenced to the Landolt and Graham standard
stars (\cite{Lan72}, \cite{Gra82}, \cite{Lan92}). Extinction
coefficients, color terms and zero points for the transformations to
the standard $UBV(RI)_{KC}$ system were derived for each night
following the method described by \cite{Har_etal81}. Typical values
of the extinction coefficients were $k_{U}=0.50$, $k_{B}=0.32$,
$k_{V}=0.20$, $k_{R}=0.14$ and $k_{I}=0.08$ in units of mag
(airmass)$^{-1}$.
\footnote{These extinction values are higher than normal due to the
effects of the Mt.~Pinatubo eruption which occurred on JD 2448422. See
\cite{GroGoc92}.} We measured $UBV(RI)_{KC}$ sequences for a total of
15 stars for SN 1990N and 9 stars for SN 1991T using digital aperture
photometry with an aperture diameter of 14\arcsec. The photometric
sequences are identified in Figures \ref{f1} and \ref{f2}, and the
photometry is given in Tables \ref{t3} and \ref{t4}. In these tables
we list the number of observing nights (n) and the total number of
observations (m).
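As an illustration of the transformation to the standard system described above, the following sketch (Python with NumPy) solves for a zero point, an extinction coefficient and a colour term in a single band by least squares. The tabulated numbers are placeholders rather than our actual standard-star measurements, and the sign convention adopted is only one of several in common use.
\begin{verbatim}
# Minimal sketch of a single-band photometric calibration fit.
# Assumed convention:  v_inst - V_std = zp + k_V*airmass + c_V*(B-V)
# The arrays are placeholders, not the standard-star data of this paper.
import numpy as np

v_inst    = np.array([15.21, 14.87, 16.02, 15.55, 14.60, 15.90])
V_std     = np.array([14.95, 14.63, 15.70, 15.28, 14.35, 15.62])
airmass   = np.array([1.10, 1.35, 1.62, 1.20, 1.48, 1.05])
b_minus_v = np.array([0.55, 0.92, 0.31, 0.70, 1.10, 0.45])

# least-squares solution for (zp, k_V, c_V)
A = np.column_stack([np.ones_like(airmass), airmass, b_minus_v])
coeffs, *_ = np.linalg.lstsq(A, v_inst - V_std, rcond=None)
zp, k_V, c_V = coeffs
print("zero point :", round(zp, 3))
print("k_V        :", round(k_V, 3), "mag/airmass")
print("c_V        :", round(c_V, 3))

# applying the solution to a programme star observed on the same night
V_cal = v_inst[0] - zp - k_V * airmass[0] - c_V * b_minus_v[0]
print("calibrated V of first star:", round(V_cal, 3))
\end{verbatim}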
Star 2 in our local sequence around SN 1991T is also a sequence star
(number 2) listed by \cite{For_etal93}. The magnitude differences for
this star in the sense of this study {\it minus} \cite{For_etal93} are
$\Delta(VRI) = (0.00,-0.02,-0.03)$. For the three SN 1991T local
standards in common with \cite{Sch_etal94}, we find the mean
differences in the sense of this study {\it minus} that of
\cite{Sch_etal94} are $\Delta(BVRI) = (-0.03\pm0.01, -0.04\pm0.01,
-0.02\pm0.02, +0.12\pm0.08)$ where the errors quoted are the mean
errors. These mean differences are consistent with the photometric
errors quoted in \cite{Sch_etal94}, which dominate the comparison of
the two magnitude systems. Due to the larger number of measurements on
independent photometric nights, we are confident that sequences given
in Tables \ref{t3} and \ref{t4} are the most accurate available.
To determine the supernova magnitudes we subtracted late-time images
of the parent galaxies at the location of the supernovae using the
technique described by \cite{Ham_etal94}. For these subtractions deep
``master images'' of NGC 4639 and NGC 4527 were obtained at the
beginning of 1994, which corresponds to 1300 and 1010 days after
maximum light for SNe 1990N and 1991T. A simple extrapolation of the
late-time decline rates (see the $\gamma$ parameter in Table~\ref{t8})
to these dates yields $B$ magnitudes of $\sim33$ (SN 1990N) and
$\sim28$ (SN 1991T). However, there are two factors which could make
the late-time magnitudes in the master images significantly brighter
than this extrapolation.
The first factor is the presence of minor radioactive nuclides and the
efficiency of positron energy deposition from the radioactive decays.
A Type Ia explosion is predicted to synthesize about 0.5M\sun\ of
$^{56}$Ni, and smaller amounts of $^{56}$Co, $^{57}$Co, $^{44}$Ti, and
$^{22}$Na (see model W7 in \cite{Nom_etal84}). \cite{Woo_etal89}
provide energy deposition rates for these nuclides. A complication
arises in predicting the late-time light curve after day 500 in how to
handle the energy deposition from the positron production
(\cite{Arn79}). The positrons can add energy into the supernova nebula
both from their kinetic energies and annihilations. The efficiency of
this process is poorly understood in the low-density environment of
the supernova nebula at late-time. If we make the rather extreme
assumption of complete kinetic energy deposition and positron
annihilation into gamma rays, we can use the deposition rates given by
\cite{Woo_etal89} and the model W7 abundances by \cite{Nom_etal84} to
predict upper limits to the supernova luminosity at late time. The
effect of full energy deposition from positrons and the existence of
long-lived radioactive nuclides such as $^{44}$Ti, and $^{22}$Na tends
to flatten out the light curve past day 1000. Using a column depth of
400 g cm$^{-2}$ at t=$10^6$ s as suggested by \cite{Woo_etal89} for a
Type Ia supernova, we predict $V$ magnitudes of 29.6 and 25.4 for SNe
1990N and 1991T at the epoch of the master image. For no positron
energy deposition, the predicted magnitudes are about 3 magnitudes
fainter. In either case, the predicted magnitudes in the master images
are so faint as to have no effect on the measured photometry. However,
if there is significant overproduction of $^{57}$Co or $^{44}$Ti
relative to model W7, the late-time magnitudes could be much brighter
and affect the magnitudes measured by image subtraction.
The second factor that could affect the magnitudes measured from the
subtracted images is the presence of a light echo. In the late-time
images of SN 1991T, the location of the supernova has been found to be
contaminated by a faint echo of SN 1991T at maximum light
(\cite{Sch_etal94}). We will return to this minor complication in
Section 3.
We measured differential photometry of the supernovae on each CCD
frame using aperture photometry when the supernova was bright, or
using the point spread function (psf) fitting program DAOPHOT
(\cite{Ste87}) when the supernova was faint. Averaged color terms
chosen to match the CCD/filter setup of instruments for each observing
night were adopted from a database of coefficients at CTIO. For SN
1991T near maximum, our CCD exposures were very short and the local
standards were poorly exposed. In this case we used the sharp core of
NGC 4527 as a $BV$ ``standard'' for the nights of 26, 28, 29, and 30
April, and 1 May 1991. An aperture radius of 2.7\arcsec\ was chosen to
maximize the signal-to-noise ratio for the photometry of the core. The
core photometry is listed in Table \ref{t4}.
Besides the error given by the Poisson statistics of the number of
counts in the supernova aperture or psf, $\sigma_{phot}$, there are
other error sources such as those due to the transformation of the
instrumental magnitudes into the standard system and CCD flat
fielding. To get a sense of the magnitude of these errors we selected
many frames with a sufficient number of bright stars (with negligible
$\sigma_{phot}$ error). For each frame we calculated the standard
deviation of the difference between a given measurement and the
standard magnitudes listed in Tables \ref{t3} and \ref{t4}. This
standard deviation ($\sigma_{rms}$) is an empirical estimate of the
average error in a {\it single} observation of a stellar object in any
CCD frame when referenced to the local photometric sequence. The
measured standard deviations $\sigma_{rms}$ were (0.026, 0.017,
0.017, 0.015, 0.017) magnitudes in $UBVRI$. The value of
$\sigma_{rms}$ derived for both supernovae in each filter agreed
within 0.002 magnitudes.
The final error in the individual magnitudes of the supernovae was
calculated as the quadratic sum of the empirical error in a single
observation $\sigma_{rms}$ and the photon statistical error,
$\sigma_{phot}$. The error $\sigma_{rms}$ was the dominant component
of the errors in the early part of the photometry, while
$\sigma_{phot}$ became more important when the supernova dimmed.
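This error combination can be sketched in a few lines of Python (the numbers below are placeholders, not the tabulated errors):
\begin{verbatim}
import numpy as np

sigma_rms  = 0.017                          # empirical single-observation error
sigma_phot = np.array([0.005, 0.02, 0.08])  # Poisson errors, growing as the SN dims

sigma_total = np.sqrt(sigma_rms**2 + sigma_phot**2)
# Early on sigma_rms dominates; at late times sigma_phot takes over.
\end{verbatim}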
\section{Results}
\subsection{Light Curves}
We present $UBV(RI)_{KC}$ photometry for SNe 1990N and 1991T in
Tables \ref{t5} and \ref{t6}, and plot the data in Figures \ref{f3}
and \ref{f4}. To find the time and magnitude of maximum light for
both supernovae we fit the data around the peak with a third-order
polynomial. For SN 1990N the first observation was acquired 11 days
before $B_{max}$, and the last observation was made 607 days after
$B_{max}$ in just the $V$ band. The $B$ maximum was reached on JD
2448082.7 $\pm$ 0.5 with $B_{max} = 12.76 \pm 0.03$ and a $B-V$ color
of 0.03 magnitudes. For SN 1991T the data begin 12 days before
$B_{max}$ and end 401 days after maximum. We derive $B_{max} = 11.70
\pm 0.02$ at JD 2448375.7 $\pm$ 0.5 with a $B-V$ color of 0.17. In
Table \ref{t7} we summarize the maxima of the light curves in the
different bands for both supernovae. We find that the $B$ maximum
occurs before the $V$ maximum; in particular, the time difference
between the $B$ and $V$ maxima is $1.5 \pm 0.7$ days for SN 1990N and
$2.6 \pm 0.7$ days for SN 1991T, in agreement with the result of
\cite{Lei88} who found a difference of $2.5 \pm 0.5$ days.
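As an illustration of the peak-fitting procedure, the sketch below fits a cubic to points near maximum and locates its extremum analytically; the data are hypothetical and merely stand in for the observations of Tables \ref{t5} and \ref{t6}.
\begin{verbatim}
import numpy as np

# Hypothetical (JD, B) pairs within about a week of maximum light.
jd  = np.array([-6., -4., -2., 0., 2., 4., 6.]) + 2448082.0
mag = np.array([13.10, 12.90, 12.79, 12.76, 12.79, 12.88, 13.02])

c    = np.polyfit(jd - jd[0], mag, 3)       # third-order polynomial fit
stat = np.roots(np.polyder(c))              # stationary points of the cubic
stat = stat[np.isreal(stat)].real
t_max = jd[0] + stat[np.argmin(np.polyval(c, stat))]
b_max = np.polyval(c, t_max - jd[0])        # epoch and magnitude of B maximum
\end{verbatim}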
Comparisons of our $BV$ photometry with previous data published for SNe
1990N and 1991T are shown in Figures \ref{f5} and \ref{f6}. The
agreement between the photometry presented in this paper and the
results published by \cite{Lei_etal91} for SN 1990N and by
\cite{Phi_etal92} for SN 1991T is not surprising, since those studies
are based on a subsample of the same optical data analyzed here. However, a
small systematic difference between the different data sets is
clear. Our photometry near maximum is generally dimmer, although the
discrepancy is less than 0.1 mag for SN 1990N and even less for SN
1991T. The preliminary photometric results of these earlier papers,
which were based on only a single night of photometric calibration,
should be superseded by the photometric data given in Tables \ref{t5}
and \ref{t6}.
For SN 1991T there is independent photometry published by
\cite{For_etal93} which we plot in Figure \ref{f6}. If we interpolate
our data to the dates of the \cite{For_etal93} data using a spline
fit, we find the following differences in the sense of this work {\it
minus} Ford {\it et al.}: $V, 0.08 \pm 0.01$ ; $R, 0.04 \pm 0.02$ ;
and $I, 0.02 \pm 0.02 $. The quoted errors are the errors in the mean
based on the interpolation to the 12 dates in the Ford {\it et al.}
study. A similar systematic offset in the $V$ magnitude was noted by
Ford {\it et al.} with respect to the \cite{Phi_etal92} reductions of
the 1991T data. These mean differences between careful photometric
studies indicate the level in systematic errors that can be
encountered even in bright supernova photometry.
It is now well established that there is not a unique light curve for
all Type Ia supernovae. As was suggested by \cite{Psk77} and
\cite{Bra81}, supernovae can be discriminated by the rate of decline
after maximum. Pskovskii (1977, 1984) defined the parameter $\beta$ as
the characteristic decline rate during the fast-decline phase of the
$B$ supernova light curve, and the parameter $\gamma$ as the rate
during the slow-decline phase (see \cite{Phi_etal87} for an
unambiguous description of these parameters). \cite{Phi93} introduced
the parameter $\Delta m_{15}$, defined as the decline in magnitude
during the 15 days after $B$ maximum. The evidence from nearby
supernovae (\cite{Phi93}, \cite{Ham_etal96c}) and the scatter in the
observed Hubble diagram
(\cite{Maz_etal94}, \cite{Ham_etal95}, \cite{Ham_etal96b}) clearly show
that the brighter supernovae decline more slowly (small $\Delta
m_{15}$).
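For reference, $\Delta m_{15}$ can be read off a sampled $B$ light curve by simple interpolation; a minimal sketch (assuming the curve brackets day $+15$) is:
\begin{verbatim}
import numpy as np

def delta_m15(t, b_mag, t_bmax, b_max):
    """Decline in B during the 15 days after B maximum.
    t, b_mag: sampled light curve (days, magnitudes), assumed monotonic in t."""
    b_15 = np.interp(t_bmax + 15.0, t, b_mag)   # linear interpolation at day +15
    return b_15 - b_max
\end{verbatim}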
In Table \ref{t8} we list the evolutionary parameters of the $B$ light
curves for our two supernovae. We also list the values for the
\cite{Lei88} template $B$ curve. This multicolor template, which was
formed from a large number of supernova light curves, provides a
useful fiducial light curve and can be considered a ``typical'' light
curve which can be compared to other observations. We calculated the
Pskovskii $\beta$ and $\gamma$ parameters for the $B$ curves of
SNe 1990N and 1991T using a linear least-squares fit with the data
weighted using the photometric errors as quoted in Tables \ref{t5} and
\ref{t6}. The range of days used in the fitting for each supernova are
indicated in Table \ref{t8}.
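A hedged sketch of such an error-weighted linear fit (using the NumPy convention that the weights are $1/\sigma$) is:
\begin{verbatim}
import numpy as np

def decline_rate(t, mag, sigma):
    """Slope of a weighted linear fit to (t, mag), returned in mag per 100 days,
    over whatever range of epochs t (days since B maximum) is passed in."""
    slope, intercept = np.polyfit(t, mag, 1, w=1.0 / sigma)
    return 100.0 * slope
\end{verbatim}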
Table \ref{t8} shows that both the $\beta(B)$ and $\Delta m_{15}$ values
are much smaller for SN 1991T than those for the Leibundgut
template. \cite{Phi93} has shown that this ``slow'' supernova was
intrinsically very bright. In fact, SN 1991T is one of the slowest
supernovae ever found and has been used as a representative template
for slow events (\cite{Ham_etal95}, \cite{Ham_etal96c}). SN 1990N, on
the other hand, is quite similar to the Leibundgut template, and is
therefore similar to the typical Type Ia event.
In Figures \ref{f7} and \ref{f8} we plot the first 120 days of $BV$
photometry for the two supernovae along with the Leibundgut templates.
The $BV$ templates have been shifted to match the epoch of $B$ maxima
and the peak magnitudes given in Table \ref{t7} (with the appropriate
time delay between $B$ and $V$ maximum given above). The results given
in the preceding paragraph can now be clearly seen in these figures.
SN 1990N follows the template closely while SN 1991T declines from
maximum light more slowly. SN 1991T also begins its exponential
decline significantly earlier and remains at higher relative
brightness when compared to the template. Visually, the ``knee'' in
the light curve around 30 days past maximum occurs earlier in this
supernova. SN 1991T also rises to maximum light more slowly than SN
1990N.
In Figure \ref{f12} we plot the late-time photometry of SN 1991T from
this study and \cite{Sch_etal94} along with the predicted trend of the
late-time evolution based on the $\gamma(B)$ and $\gamma(V)$ fits to
our data. The fact that the light curve past day 400 levels off has
been shown by \cite{Sch_etal94} to be due to a light echo with $BV
\sim (21.3,21.4)$. Recall that in our work we subtracted a late-time
image of the region taken around JD2449380 around the supernova to
remove the galaxian contribution to the background under the psf. By
doing this however, we also automatically correct for the echo
contamination. This assumption is valid provided that the echo
magnitude did not change during the period between the supernova
observations and the late-time image, and that the light curve of the
supernova did not level off for other reasons, such as the
overproduction of $^{44}$Ti or $^{22}$Na. Under these assumptions, the
photometry of SN 1991T in Table \ref{t6} and Figure \ref{f12} should
be free of any echo contamination. Indeed, the small differences
between our last points and those of Schmidt {\it et al.} at JD
2448750 shown in Figure \ref{f12} are consistent with the echo
magnitude cited above.
\subsection{Color Curves}
The $B-V, B-R, B-I, V-R, V-I, R-I$ color curves for SNe 1990N and
1991T through day 100 are shown in Figures \ref{f9} and \ref{f10}
respectively. The temporal axis was shifted so that the $B$ maximum
corresponds to $t=0$ for both supernovae. The redder color of SN 1991T
with respect to SN 1990N is evident. The presence of redshifted Na
absorption lines in the spectrum of SN 1991T and the location of the
supernova in one of the arms of NGC 4527 suggest that this object was
obscured by dust in its parent galaxy. Strong Ca and Na interstellar
absorption lines at the radial velocity of NGC 4527 were observed by
\cite{WheSmi91} and \cite{MeyRot91}. \cite{Rui_etal92} estimated an
excess $E(B-V) \sim 0.3$ assuming a relationship between the
equivalent width of the line Na I D and $E(B-V)$. On the other hand,
\cite{Phi_etal92} found an excess of 0.13 magnitudes assuming an
intrinsic color $(B-V)$ of zero during maximum. The foreground
$E(B-V)$ reddening is $0.00 \pm 0.015$ according to \cite{BurHei94}.
SN 1990N did not show absorption lines in a low-dispersion spectrum
(\cite{Lei_etal91}), and its location in the outskirts of NGC 4639
suggests that this object is less reddened than SN 1991T.
\cite{Sah_etal96} estimate the mean extinction of the Cepheids as
$E(V-I)=0.04\pm0.06$ based on the difference in distance moduli from
Cepheid $VI$ P-L relations. The foreground $E(B-V)$ reddening is
$0.012 \pm 0.015$ according to \cite{BurHei94}. It is unfortunate that
no high-dispersion spectrum of this bright object was made. The
$(B-V)$ color of 0.03 is consistent with an intrinsic color at maximum
of $\sim -0.1$ to $0.1$ magnitudes for other Type Ia supernovae with low
reddenings (\cite{Ham_etal91}, \cite{SanTam93}, \cite{Ham_etal95}) and
suggests $E(B-V) \lesssim 0.15$. \cite{Lir96} has shown that the
color evolution in $BV$ from days 32-92 during the nebular phase is
extremely uniform among Type Ia supernovae. In a future paper we will
use this fact to calibrate the intrinsic colors of SNe Ia at maximum
which, in turn, should allow more precise estimates to be made of the
host galaxy reddening.
Differences in the color evolution of the two supernovae are better
appreciated in the color-color plot shown in Figure \ref{f11}. Time
along the light curve is indicated by labeling the points at
approximately -10, 0, 10 and 20 days from the maximum. The figure also
shows the reddening vector for a galactic extinction law
(\cite{SavMat79}). For $t>10$ days, the curves are parallel to the
reddening vector. However, the data from $t=-10$ to $t=10$ show that
the color curve of SN 1991T cannot be matched to that of 1990N by a
simple dereddening vector.
\section{Concluding Remarks}
SNe 1990N and 1991T are important supernovae. They were close enough
that distances to the host galaxies can (or will) be measured by
direct techniques with the HST. The light curves are especially well
determined over the full evolution, and in particular, the evolution
before maximum is well covered. The light curves of these supernovae
have become standard templates used in the study of more distant
supernovae.
The photometric data presented in this paper show that SN 1990N was a
typical Type Ia event in that the light curves are well fitted by the
template curve determined by \cite{Lei88}. It also falls in the
middle of the range of $\Delta m_{15}$ types defined by \cite{Phi93}.
The spectral evolution of SN 1990N has also been classified as similar
to other prototypical Type Ia supernovae, although the earliest
observations initially led it to be classified as a peculiar object
(\cite{Lei_etal91}).
The preliminary reductions of the SN 1990N data in \cite{Lei_etal91}
have been used by \cite{San_etal96} to estimate a peak absolute
magnitude for this supernova. The peak magnitudes cited by Sandage et
al. of $(B,V)=(12.70,12.61)$ are $\sim 0.07$ mag brighter than the
more precise results given in Table \ref{t7}. Such a small magnitude
difference will have little effect on the measurement of the Hubble
constant since $\delta{H_0}/H_0 \approx
0.46\delta{m}$. \cite{Ham_etal96b}, \cite{San_etal96} and
\cite{Rie_etal96} have used the light curve of this supernova as one
of the fundamental calibrators of the absolute magnitudes of Type Ia
supernovae. These absolute magnitudes coupled with the observed
Hubble diagram from the Cal\'an/Tololo survey (\cite{Ham_etal96b})
have yielded $H_0 \sim 65$ km s$^{-1}$ Mpc$^{-1}$.
Because of its peculiar nature, SN 1991T has been studied intensively.
The peculiarities of this supernova include pre-maximum spectra
dominated by iron-group features, a very small $\Delta m_{15}$ value,
and a visual luminosity larger than other typical Type Ia supernovae,
although the derived absolute magnitudes depend strongly on the
different extinction assumed for the supernova and the distance to the
host galaxy NGC 4527 (\cite{Fil_etal92}, \cite{Rui_etal92},
\cite{Phi_etal92}). The results of this paper confirm the slow
evolutionary rate near maximum and also show that the color curve is
significantly different from more normal Type Ia supernovae.
This is not to say that SN 1991T is unique, as new events of this
``slow-class'' have been found, such as SNe 1991ag
(\cite{Ham_etal95}), 1992bc (\cite{Maz_etal94}), 1995ac
(\cite{Gar_etal96}) or 1997br (\cite{Qia_etal97b}). The evidence
suggests that the decline rate of these supernovae is just the slow
end of the peak luminosity -- decline rate relation for Type Ia
supernovae and that this correlation could be also extended to a
spectroscopic sequence (\cite{Nug_etal95}). \cite{Ham_etal96c} and
\cite{Gar_etal96} however, have pointed out that the intrinsic
luminosity, spectral features, and colors at maximum light are not a
simple function of the light curve shape (as measured by $\Delta
m_{15}$) for this bright class of supernovae. For instance, among
supernovae with similar small values of $\Delta m_{15}$, SNe 1991T,
1995ac, and 1997br had very weak Si II 6355\AA\ at maximum light
(\cite{FilLeo95}, \cite{Qia_etal97a}) while 1992bc had the typical
deep spectral features at maximum light common to most Type Ia events
(\cite{Maz_etal94}). Conversely, SN 1995bd had a spectrum similar to
1991T at maximum light but its light curve was well fit by the
``faster'' Leibundgut template (\cite{Gar_etal96}). It is clearly
important to obtain more examples of this class of bright Type Ia
supernovae to sort out this issue.
\acknowledgments JM and MH acknowledge support by C\'atedra
Presidencial de Ciencias 1996-1997. We would like to thank the Space
Telescope Science Institute for access to the Digitized Sky Survey. We
thank Peter Garnavich, Eric Olsen, Brian Schmidt, and Gordon Walker
for helpful correspondence. This research has made extensive use of
the Canadian Astronomy Data Center (Dominion Astrophysical
Observatory, Herzberg Institute of Astrophysics), and the NASA
Astrophysics Data System Abstract Service. We would also like to
thank Brian Marsden and Daniel Green at the IAU Central Bureau for
Astronomical Telegrams for their valuable notification service which
allows observers to start observing supernovae within 24 hours of
discovery.
\clearpage
\section*{Nomenclature}
\addcontentsline{toc}{section}{Nomenclature}
\subsection{Sets and numbers}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{${\mathcal N}_p$, ${\mathcal N}_q$}]
\item[${\mathcal N}$, $N$] Set and number of buses.
\item[${\mathcal L}$, $L$] Set and number of power lines.
\item[${\mathcal M}$, $M$] Set and number of measurements.
\item[${\mathcal N}_p$, ${\mathcal N}_q$] Sets of active and reactive power injection measurements.
\item[$\mathcal{D}$] Set of all dual SDP certificates.
\end{IEEEdescription}
\subsection{Input signals and constants}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$\mathbf{Y}$, $\mathbf{Y}_f$, $\mathbf{Y}_t$}]
\item[$\mathbf{v}$, $\bm{\ell}$] $N$-dimensional complex vectors of nodal voltages (state of the system) and current injections.
\item[$\mathbf{p}$, $\mathbf{q}$] $N$-dimensional real vectors of net injected active and reactive powers.
\item[$\bm{\ell}_{f}$, $\bm{\ell}_{t}$] $L$-dimensional complex vectors of current injections at the \emph{from} and \emph{to} ends of all branches.
\item[$\mathbf{Y}$, $\mathbf{Y}_f$, $\mathbf{Y}_t$] Matrices of nodal admittance, \emph{from} branch admittance, and \emph{to} branch admittance.
\item[$\mathbf{Y}_{l,p_{f}}$, $\mathbf{Y}_{l,p_{t}}$] Coefficient matrices corresponding to active power flow measurements at the \emph{from} and \emph{to} ends over the $l$-th branch.
\item[$\mathbf{Y}_{l,q_{f}}$, $\mathbf{Y}_{l,q_{t}}$] Coefficient matrices corresponding to reactive power flow measurements at the \emph{from} and \emph{to} ends over the $l$-th branch.
\item[$\mathbf{M}_0$] Designed coefficient matrix in the objective.
\item[$\mathbf{M}_j$] Coefficient matrix corresponding to the $j$-th measurement.
\item[$\mathbf{z}$] $M$-dimensional real vector collecting all measurements.
\item[$|v_k|$, $\measuredangle v_{k}$] Voltage magnitude and angle at the $k$-th bus.
\item[$\measuredangle y_{st}$] Angle of the branch $(s,t)$ line admittance.
\item[$\eta_j$, $\sigma_j$] Additive noise and positive weight of the $j$-th measurement.
\item[$\rho$] Positive weight trading off the data fitting cost and the designed linear regularizer.
\item[$\zeta$] Defined root-mean-square estimation error of the obtained optimal SDP solution.
\end{IEEEdescription}
\subsection{Variables and functions}
\begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$f_{\mathrm{WLAV}}(\cdot)$z}]
\item[$\mathbf{X}$, $\mathbf{H}$] $N \times N$ primal and dual matrix variables.
\item[$\boldsymbol{\mu}$] $M$-dimensional real vector of Lagrange multipliers.
\item[$\boldsymbol{\nu}$] $M$-dimensional real vector of slack variables.
\item[$f_{\mathrm{WLAV}}(\cdot)$] Weighted least absolute value cost.
\item[$f_{\mathrm{WLS}}(\cdot)$] Weighted least squares cost.
\end{IEEEdescription}
\section{Introduction}
An electrical grid infrastructure is operated for delivering electricity from
power generators to consumers via interconnected transmission and distribution networks.
Accurately determining the operating point and estimating the underlying state of the system are of paramount importance for the reliable and economic operation of power networks.
Power flow analysis and power system state estimation play indispensable roles in the planning and monitoring of the power grid. The solutions of these two problems are used for many optimal resource allocation problems such as unit commitment, optimal power flow (OPF), security-constrained OPF, and network reconfiguration \cite{Wollenberg13,GG13}.
\subsection{Power Flow Analysis} \vspace{0mm}
The power flow (PF) problem is a numerical analysis of the steady-state electrical power flows,
which serves as a necessary prerequisite for future system planning.
Specifically, having measured the voltage magnitudes and injected active/reactive powers at certain buses,
the PF problem aims to find the unknown voltage magnitude and phase angle at each bus of a power network.
Using the obtained voltage phasors and the network admittances, line power flows can then be determined for the entire system.
The calculation of power flows is essentially equivalent to solving a set of quadratic equations obeying the laws of physics.
Solving a system of nonlinear polynomial equations is NP-hard in general.
B\'{e}zout's theorem asserts that a well-behaved system can have exponentially many solutions \cite{Hartshorne77}.
Upper bounds on the number of PF solutions have been analyzed in the recent work \cite{MolzahnACC16} and the references therein.
When it comes to the feasibility of AC power flows, it is known that this problem is NP-hard for both transmission and distribution networks \cite{Bienstock15,Lehmann16}.
For solving the PF problem, many iterative methods such as the Newton-Raphson method and Gauss-Seidel algorithms have been extensively studied over the last few decades \cite{Bergen00}.
The Newton-Raphson method features quadratic convergence whenever the initial point is sufficiently close to the solution \cite{Tinney67,Stott74}.
Nevertheless, a fundamental drawback of various Newton-based algorithms is that there is no convergence guarantee in general. By leveraging advanced techniques in complex analysis and algebraic geometry,
sophisticated tools have been developed for solving PF, including holomorphic embedding load flow
and numerical polynomial homotopy continuation \cite{Trias12,Li03}.
However, these approaches involve costly computations, and are generally not suitable for large-scale power systems.
Using the theory of monotone operators and moment relaxations, the papers \cite{DJcdc15} and \cite{DJ15} identify
a ``monotonicity domain'', within which it is possible to efficiently find the PF solutions or certify their non-existence.
A review on recent advances in computational methods for the PF equations can be found in \cite{Mehta15a}.
Facing the inherent challenge of non-convexity, convex relaxation techniques have been recently developed for finding the PF solutions \cite{Madani15CDC}.
More specifically, a class of convex programs is proposed to solve the PF problem in the case where the solution belongs to a recovery region that contains voltage vectors with small angles.
The proposed convex programs are in the form of semidefinite programming (SDP), where a convex objective
is designed as a surrogate of the rank-one constraint to guarantee the exactness of the SDP relaxation.
\subsection{Power System State Estimation}
Closely related to the PF problem, the power system state estimation (PSSE) problem plays a key role for grid monitoring.
System measurements are acquired through the supervisory control and data acquisition (SCADA) systems,
as well as increasingly pervasive phasor measurement units (PMUs).
Given these noisy measurements, the PSSE task aims at estimating the complex voltage at each bus,
and determining the system's operating conditions.
The PSSE is traditionally formulated as a nonlinear least-squares (LS) problem,
which is commonly solved by the Gauss-Newton algorithm in practice \cite{Monticelli12,Abur04}.
The algorithm is based on a sequence of linear approximations of the nonlinear residuals.
A descent direction is obtained at each iteration by minimizing the sum of squares of the linearized residuals.
However, the Gauss-Newton algorithm has no guaranteed convergence in general.
Furthermore, a linear search must be carefully carried out for the damped Gauss-Newton method.
The widely adopted Levenberg--Marquardt algorithm finds only a local optimum of the nonlinear LS problem,
and may still be slow for large residual or highly nonlinear problems \cite{Bjorck96, Mascarenhas14}.
For a linear regression model, the classic Gauss-Markov theorem states that
if the additive noises are uncorrelated with mean zero and homoscedastic with finite variance,
then the ordinary least squares estimator (LSE) of the unknown parameters is the best linear unbiased estimator (BLUE)
that yields the least variance estimates. The generalized LSE should be applied when the noise covariance matrix is positive definite \cite{Bjorck96}. The work \cite{Zyskind69} shows that even when the noise covariance matrix is singular, the BLUE can be found by utilizing
its pseudo-inverse in the generalized normal equations.
Analytic solutions of the BLUE and the minimum variances of the estimates are available for the linear model.
In addition, minimum variance unbiased estimator (MVUE) and Bayesian-based estimators are studied
in \cite{Amini14} and \cite{Amini15}. It is well known that when the linear measurements are normally distributed, the LSE coincides with the maximum-likelihood estimator (MLE).
However, LSE for the PSSE problem may not possess these attractive properties due to the inherently nonlinear measurements. There are several issues involved from both optimization and statistical perspectives:
\begin{itemize}
\item The problem of nonlinear LS estimation is generally nonconvex, which can have multiple local solutions.
Hence, finding a globally optimal solution is challenging.
\item Newton-based iterative algorithms are sensitive to the initialization and lack a guaranteed convergence.
They may converge to a stationary point. It is nevertheless not easy to interpret that point,
and quantify its distance relative to the true unknown state of the system.
\item Even if a global solution can be obtained, the nonlinear LSE may not correspond to the MVUE.
When the noises are not from the exponential family of distributions, the LSE is different from the MLE in general.
\item The LSE is vulnerable to the presence of outliers, primarily because it squares all residuals uniformly, which gives data with
large residuals a significant influence on the fitted model.
\end{itemize}
To deal with bad data, the weighted least absolute value (WLAV) function is proposed as the data fitting cost in \cite{Irving78, Kotiuga82},
for which efficient algorithms are developed in \cite{Singh94,Abur91}.
The work \cite{Celik92} presents linear transformations to mitigate the deteriorating effect of ``leverage points'' on the WLAV estimator.
Robust or distributed PSSE has also been developed in the papers \cite{Irving08, Kekatos13, Conejo07, Minot16}.
The state estimation problem with line flow measurements using an iterative algorithm is studied in \cite{Dopazo70} and \cite{Dopazo70_2},
where complex power flows over all transmission lines and at least one voltage phasor are assumed to be measured to achieve the necessary redundancy for the solution of the problem. The performance of these selected measurements and the proposed algorithm tested on the Ontario hydro power system are reported in \cite{Porretta73}. Heuristic optimization techniques are also utilized for PSSE in \cite{Naka03,Lee08}.
Intensive studies of the SDP relaxation technique for solving fundamental problems in power networks have been springing up due to the pioneering papers \cite{Bai08}, \cite{lavaei2011_1} and \cite{LL2012_1}.
The work \cite{LL2012_1} develops an SDP relaxation for finding a global minimum of the OPF problem.
A sufficient and necessary condition is provided to guarantee a zero duality gap, which is satisfied by several benchmark systems.
From the perspective of the physics of power systems, the follow-up papers \cite{SLa12} and \cite{sojoudi2014exactness} develop theoretical results to support the success of the
SDP relaxation in handling the non-convexity of OPF.
The papers \cite{madani2013convex} and \cite{madani2014promises} develop a graph-theoretic SDP framework
for finding a near-global solution whenever the SDP relaxation fails to find a global minimum.
Recent advances in the convex relaxation of the OPF problem are summarized in the tutorial papers \cite{Low2014_1} and \cite{Low2014_2}.
The paper \cite{Zhu11} initializes the idea of solving the PSSE problem via the SDP relaxation.
When the SDP solution is not rank one, its principal eigenvector is used to recover approximate voltage phasors.
The work \cite{Weng12} suggests generating a ``good'' initial point from the SDP optimal solution
to improve the performance of Newton's method, while a nuclear norm regularizer is used to promote a low-rank solution in \cite{Weng15}.
Distributed or online PSSE using the SDP relaxation can be found in \cite{Weng13,Zhu14,KimGG14}.
However, in the literature there is a lack of theoretical analysis on
the quality of the SDP optimal solution for estimating the complex voltages.
Hence, to the best of our knowledge, this is still an intriguing open problem.
The aforementioned grand challenges of the PSSE problem motivate us to revisit the design of a high-performance estimator with finite measurements.
The novelty and main contributions of the present work are outlined in the ensuing subsection.
\subsection{Contributions}
In this paper, we start with a PF problem that can be regarded as the noiseless counterpart of PSSE.
In contrast to the standard setup with only nodal measurements at the PV, PQ and slack buses,
one objective of this work is to investigate the effect of branch flow measurements on reducing the computational complexity of the PF problem.
Motivated by the work \cite{Madani15CDC},
we contrive a convex optimization framework for the PF problem using SDP and second-order cone programming (SOCP) relaxations.
It is shown that the proposed conic relaxations are both always exact if: (i) the set of measurements includes the nodal voltage magnitude at each bus and line active power flows over a spanning tree of the power network, and (ii) the line phase voltage differences are not too large (e.g., less than $90^{\circ}$ for lossless networks).
By building upon the proposed convexification framework for the PF problem, we develop a penalized convex program for solving the PSSE problem.
In addition to an $\ell_1$ norm penalty that is robust to outliers in the measurements,
the objective function of the penalized convex problem features a linear regularization term
whose coefficient matrix can be systematically designed according to the meter placements.
We present a theoretical result regarding the quality of the optimal solution of the convex program.
It is shown that the obtained optimal solution has a dominant rank-one matrix component,
which is formed by lifting the vector of true system state. The distance between the
solution of the penalized convex problem and the correct rank-one component is quantified as a function of the noise level.
An upper bound of the tail probability of this distance is further derived,
which also implies the correlation between the quality of the estimation and the number of measurements.
The focus of this paper is mainly on the scenario where the measurements include nodal voltage magnitudes and branch active power flows.
However, the developed mathematical framework is rather general and could be adopted to study the PSSE problem with other types of measurements.
\subsection{Notations}
Boldface lower (upper) case letters represent column vectors (matrices), and calligraphic letters stand for sets.
The symbols $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real and complex numbers, respectively.
$\mathbb{R}^{N}$ and $\mathbb{C}^{N}$ denote the spaces of $N$-dimensional real and complex vectors, respectively.
$\mathbb{S}^N$ and $\mathbb{H}^N$ stand for the spaces of $N\times N$ complex symmetric and Hermitian matrices, respectively.
The symbols $(\cdot)^{\top}$ and $(\cdot)^{*}$ denote the transpose and conjugate transpose of a vector/matrix.
${\rm Re}(\cdot)$, ${\rm Im}(\cdot)$, ${\mathrm {rank}}(\cdot)$, $\mathrm{Tr}(\cdot)$, and $\mathrm{null}(\cdot)$ denote the real part,
imaginary part, rank, trace, and null space of a given scalar or matrix.
$\|\mathbf{a}\|_2$, $\|\mathbf{A}\|_F$, and $\nuclearnorm{\mathbf{A}}$ denote the Euclidean norm of the vector $\mathbf{a}$,
the Frobenius norm and the nuclear norm of the matrix $\mathbf{A}$, respectively.
The relation $\mathbf{X} \succeq \mathbf{0}$ means that the matrix $\mathbf{X}$ is Hermitian positive semidefinite.
The $(i,j)$ entry of $\mathbf{X}$ is given by $X_{i,j}$. $\mathbf{I}_{N}$ denotes the $N\times N$ identity matrix.
The symbol ${\mathrm {diag}}(\mathbf{x})$ denotes a diagonal matrix whose diagonal entries are given by the vector $\mathbf{x}$, while
${\mathrm {diag}}(\mathbf{X})$ forms a column vector by extracting the diagonal entries of the matrix $\mathbf{X}$.
The imaginary unit is denoted by $\mathsf{j}$.
The expectation operator and the probability measure are denoted by $\mathbb{E(\cdot)}$ and $\mathbb{P}(\cdot)$, respectively.
The notations $\measuredangle x$ and $\lvert x\rvert$ denote the angle and magnitude of a complex number $x$.
The notation $\mathbf{X}[\mathcal{S}_1,\mathcal{S}_2]$ denotes the submatrix of $\mathbf{X}$ whose rows and columns are
chosen from the given index sets $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively.
\section{Preliminaries}
\subsection{System Modeling}\label{sec:systmodel}
Consider an electric power network represented by a graph ${\mathcal G} = ({\mathcal N},{\mathcal L})$,
where ${\mathcal N} := \{1,\ldots,N\}$ and ${\mathcal L}:= \{1,\ldots,L\}$ denote the sets of buses and branches, respectively.
Let $v_k \in \mathbb{C}$ denote the nodal complex voltage at bus $k\in\mathcal N$,
whose magnitude and phase angle are given as $|v_k|$ and $\measuredangle v_k$.
The net injected complex power at bus $k$ is denoted as $s_k=p_k+q_k\mathsf{j}$.
Define $s_{lf}=p_{lf}+q_{lf}\mathsf{j}$ and $s_{lt}=p_{lt}+q_{lt}\mathsf{j}$
as the complex power injections entering the line $l\in {\mathcal L}$ through the \emph{from} and \emph{to} ends of the branch.
Note that the currents $i_{l,f}$ and $i_{l,t}$ at the two ends of branch $l$ may not add up to zero due to the presence of transformers and shunt capacitors.
Denote the admittance of each branch $(s,t)$ of the network as $y_{st}$.
The Ohm's law dictates that
\begin{align}
\bm{\ell} = \mathbf{Y}\mathbf{v},\quad \bm{\ell}_{f} = \mathbf{Y}_{f}\mathbf{v},\quad \mathrm{and} \quad \bm{\ell}_{t} = \mathbf{Y}_{t}\mathbf{v},
\end{align}
where $\mathbf{Y} = \mathbf{G} + \mathsf{j}\mathbf{B} \in \mathbb{S}^{N}$ is the nodal admittance matrix of the power network, whose real and imaginary parts
are the conductance matrix $\mathbf{G}$ and susceptance matrix $\mathbf{B}$, respectively.
Furthermore, $\mathbf{Y}_{f}\in \mathbb{C}^{L\times N}$ and $ \mathbf{Y}_{t} \in \mathbb{C}^{L\times N}$
represent the \emph{from} and \emph{to} branch admittance matrices.
The injected complex power can thus be expressed as $\mathbf{p} + \mathbf{q}\mathsf{j} = {\mathrm {diag}}(\mathbf{v}\bv^{*}\mathbf{Y}^{*})$.
Let $\{\mathbf{e}_1,\ldots,\mathbf{e}_N\}$ denote the canonical vectors in $\mathbb{R}^N$. Define
\begin{equation}\label{nodalM}
\begin{aligned}
\mathbf{E}_{k} &:= \mathbf{e}_k \mathbf{e}_k^{\top},\quad \mathbf{Y}_{k,p} := \frac{1}{2}(\mathbf{Y}^{*}\mathbf{E}_{k}+\mathbf{E}_{k}\mathbf{Y}),\\
\mathbf{Y}_{k,q} &:= \frac{\mathsf{j}}{2}(\mathbf{E}_{k}\mathbf{Y}-\mathbf{Y}^{*}\mathbf{E}_{k}).
\end{aligned}
\end{equation}
For each $k\in {\mathcal N}$, the quantities $|v_k|^2$, $p_k$ and $q_k$ can be written as
\begin{equation}\label{nodalQTY}
\hspace{-1mm}
|v_k|^2 = \mathrm{Tr}(\mathbf{E}_{k}\mathbf{v}\bv^{*}),\
p_k = \mathrm{Tr}(\mathbf{Y}_{k,p}\mathbf{v}\bv^{*}),\
q_k = \mathrm{Tr}(\mathbf{Y}_{k,q}\mathbf{v}\bv^{*}).
\end{equation}
Similarly, the branch active and reactive powers for each line $l\in {\mathcal L}$ can be expressed as
\begin{equation}\label{branchQTY}
\begin{aligned}
p_{l,f} &= \mathrm{Tr}(\mathbf{Y}_{l,p_{f}}\mathbf{v}\bv^{*}),\quad
p_{l,t} = \mathrm{Tr}(\mathbf{Y}_{l,p_{t}}\mathbf{v}\bv^{*}) \\
q_{l,f} &= \mathrm{Tr}(\mathbf{Y}_{l,q_{f}}\mathbf{v}\bv^{*}),\quad
q_{l,t} = \mathrm{Tr}(\mathbf{Y}_{l,q_{t}}\mathbf{v}\bv^{*}),
\end{aligned}
\end{equation}
where the coefficient matrices $\mathbf{Y}_{l,p_{f}},\mathbf{Y}_{l,p_{t}},\mathbf{Y}_{l,q_{f}},\mathbf{Y}_{l,q_{t}} \in \mathbb{H}^N$ are defined
over the $l$-th branch from node $i$ to node $j$ as
\begin{subequations}\label{branchM}
\begin{align}
\mathbf{Y}_{l,p_{f}} &:= \frac{1}{2}(\mathbf{Y}^{*}_f\mathbf{d}_l\mathbf{e}_{i}^{\top}+\mathbf{e}_{i}\mathbf{d}_l^{\top}\mathbf{Y}_f) \label{Ylpf} \\
\mathbf{Y}_{l,p_{t}} &:= \frac{1}{2}(\mathbf{Y}^{*}_t\mathbf{d}_l\mathbf{e}_{j}^{\top}+\mathbf{e}_{j}\mathbf{d}_l^{\top}\mathbf{Y}_t) \\
\mathbf{Y}_{l,q_{f}} &:= \frac{\mathsf{j}}{2}(\mathbf{e}_{i}\mathbf{d}_l^{\top}\mathbf{Y}_f- \mathbf{Y}^{*}_f\mathbf{d}_l\mathbf{e}_{i}^{\top}) \\
\mathbf{Y}_{l,q_{t}} &:= \frac{\mathsf{j}}{2}(\mathbf{e}_{j}\mathbf{d}_l^{\top}\mathbf{Y}_t-\mathbf{Y}^{*}_t\mathbf{d}_l\mathbf{e}_{j}^{\top}),
\end{align}
\end{subequations}
where $\{\mathbf{d}_1,\ldots,\mathbf{d}_L\}$ is the set of canonical vectors in $\mathbb{R}^{L}$.
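As an illustration, the coefficient matrices above can be assembled numerically from the admittance matrices; the NumPy sketch below mirrors \eqref{nodalM} and \eqref{Ylpf}, uses 0-based indices, and is not part of the formulation itself.
\begin{verbatim}
import numpy as np

def nodal_matrices(Y, k):
    """E_k, Y_{k,p}, Y_{k,q} for bus k (0-based), following the definitions above."""
    N = Y.shape[0]
    e = np.zeros((N, 1)); e[k] = 1.0
    E = e @ e.T
    Y_p = 0.5  * (Y.conj().T @ E + E @ Y)
    Y_q = 0.5j * (E @ Y - Y.conj().T @ E)
    return E, Y_p, Y_q

def branch_pf_matrix(Yf, l, i):
    """Y_{l,p_f} for branch l whose 'from' bus is i (both 0-based)."""
    L, N = Yf.shape
    d = np.zeros((L, 1)); d[l] = 1.0
    e = np.zeros((N, 1)); e[i] = 1.0
    return 0.5 * (Yf.conj().T @ d @ e.T + e @ d.T @ Yf)
\end{verbatim}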
So far, nodal and line measurements of interest have been expressed as quadratic functions of the complex voltage $\mathbf{v}$.
The PF and PSSE problems will be formulated next.
\vspace{-2mm}
\subsection{Convex Relaxation of Power Flow Equations}\label{sec:probform}
The task of the PSSE problem is to estimate the complex voltage vector $\mathbf{v}$ based on $M$ real measurements:
\begin{align}
z_j = \mathbf{v}^{*}\mathbf{M}_j\mathbf{v} + \eta_j, \quad \forall j \in {\mathcal M}:=\{1,2,\ldots,M\},
\end{align}
where $\{z_j\}_{j \in {\mathcal M}}$ are the known measurements, $\{\eta_j\}_{j \in {\mathcal M}}$ are the possible measurement noises with known statistical information, and $\{\mathbf{M}_j\}_{j \in {\mathcal M}}$ are
arbitrary measurement matrices that could be any subset of the Hermitian matrices defined in \eqref{nodalM} and \eqref{branchM}.
The PF problem is a noiseless version of the PSSE problem.
More specifically, given a total of $M$ noiseless specifications $z_j$ for $j=1,2,\ldots,M$, the goal of PF
is to find the nodal complex voltage vector $\mathbf{v}$ satisfying all quadratic measurement equations, i.e.,
\begin{subequations}\label{pfp}
\begin{align}
\mathrm{find}\quad &\mathbf{v} \in \mathbb{C}^N \\
\mathrm{subject~to}\quad &\mathbf{v}^{*}\mathbf{M}_j\mathbf{v} = z_j,\quad\forall j\in {\mathcal M}.
\end{align}
\end{subequations}
After setting the phase of the voltage at the slack bus to zero, the problem reduces to $M$ power flow equations with $2N-1$ unknown real parameters. The classical PF problem corresponds to the case $M=2N-1$, where the measurements are specified at the PV, PQ, and slack buses such that:
\begin{itemize}
\item
For each PV (generator) bus $k$, the active power $p_k$ and the voltage magnitude $|v_k|$ are given.
\item
For each PQ (load) bus $k$, the active power $p_k$ and the reactive power $q_k$ are given.
\item For the slack (reference) bus, the voltage magnitude $|v_{\mathrm{ref}}|$ and the phase angle $\measuredangle v_{\mathrm{ref}}$ are given.
\end{itemize}
Instead of solving the feasibility problem \eqref{pfp} to obtain the voltage vector $\mathbf{v}$,
consider the optimization problem
\begin{subequations}\label{PFP2}
\begin{align}
\quad \mini_{\mathbf{X} \in \mathbb{H}^N, \mathbf{v} \in \mathbb{C}^N} \quad &\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \\
\mathrm{subject~to}\quad &\mathrm{Tr}(\mathbf{M}_j\mathbf{X}) = z_j,\quad\forall j\in {\mathcal M} \\%\label{PF-SDPP:meq}\\
\quad & \mathbf{X} = \mathbf{v}\bv^{*},
\end{align}
\end{subequations}
where its objective function is to be designed later.
Note that the constraint $\mathbf{X} = \mathbf{v}\bv^{*}$ can be equivalently replaced by the two conditions $\mathbf{X} \succeq \mathbf{0}$ and ${\mathrm {rank}}(\mathbf{X}) = 1$.
The SDP relaxation of~\eqref{PFP2} is obtained by dropping the rank-one constraint as
\begin{subequations}\label{PF-SDPP}
\begin{align}
\mini_{\mathbf{X} \in \mathbb{H}^N}\quad &\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \label{PF-SDPP:obj} \\
\mathrm{subject~to}\quad &\mathrm{Tr}(\mathbf{M}_j\mathbf{X}) = z_j,\quad\forall j\in {\mathcal M} \label{PF-SDPP:meq} \\
\quad & \mathbf{X} \succeq \mathbf{0}.\label{PF-SDPP:cone}
\end{align}
\end{subequations}
This relaxation correctly solves \eqref{PFP2} if and only if it has a unique rank-1 solution $\mathbf{X}^{\text{opt}}$, in which case $\mathbf{v}$ can be recovered via the decomposition $\mathbf{X}^{\text{opt}}=\mathbf{v} \mathbf{v}^{*}$.
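For concreteness, a minimal sketch of \eqref{PF-SDPP} in CVXPY is given below, assuming the matrices $\mathbf{M}_j$ and measurements $z_j$ are available as NumPy arrays and that an SDP-capable solver such as SCS is installed; the recovery step simply takes the leading eigenpair of the solution.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def sdp_power_flow(M0, M_list, z):
    N = M0.shape[0]
    X = cp.Variable((N, N), hermitian=True)
    cons = [X >> 0] + [cp.real(cp.trace(Mj @ X)) == zj
                       for Mj, zj in zip(M_list, z)]
    cp.Problem(cp.Minimize(cp.real(cp.trace(M0 @ X))), cons).solve()
    # If X is (numerically) rank one, recover v from its top eigenpair.
    w, U = np.linalg.eigh(X.value)
    v = np.sqrt(max(w[-1], 0.0)) * U[:, -1]
    return X.value, v
\end{verbatim}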
The dual of \eqref{PF-SDPP} can be obtained as
\begin{subequations}\label{PF-SDPD}
\begin{align}
\maxi_{\bm{\mu} \in \mathbb{R}^M}\quad &-\mathbf{z}^{\top}\bm{\mu} \\
\mathrm{subject~to}\quad
\mathbf{H}(\boldsymbol{\mu}) \succeq \mathbf{0}, \label{PF-SDPD:PSD}
\end{align}
\end{subequations}
where the vector $\mathbf{z}:=[z_1,\ldots,z_M]^{\top}$ collects all the available measurements,
$\bm{\mu} = [\mu_1,\ldots,\mu_M]^{\top} $ is the Lagrangian multiplier vector associated with
the linear equality constraints \eqref{PF-SDPP:meq}, and the dual matrix function $\mathbf{H}:\mathbb{R}^M\to\mathbb{H}^N$ is defined as
\begin{align}
\mathbf{H}(\boldsymbol{\mu}):=\mathbf{M}_0 + \sum_{j = 1}^M \mu_j\mathbf{M}_j.\label{Hdef}
\end{align}
If strong duality holds while the primal and dual problems both attain their solutions, then every pair of optimal primal-dual solutions $(\mathbf{X}^{\mathrm{opt}},\boldsymbol{\mu}^{\mathrm{opt}})$ satisfies the relation $\mathbf{H}(\boldsymbol{\mu}^{\mathrm{opt}}) \mathbf{X}^{\mathrm{opt}}= \mathbf{0}$, due to the complementary slackness.
Hence, if ${\mathrm {rank}}(\mathbf{H}(\boldsymbol{\mu}^{\mathrm{opt}})) = N-1$ holds, then we have the inequality ${\mathrm {rank}}(\mathbf{X}^{\mathrm{opt}}) \leq 1$ such that the SDP relaxation can recover a solution of the PF problem.
\begin{definition}[SDP recovery]
It is said that the SDP relaxation problem \eqref{PF-SDPP} recovers the voltage vector $\mathbf{v} \in \mathbb{C}^N$ if $\mathbf{X}=\mathbf{v}\bv^{*}$ is the unique solution of \eqref{PF-SDPP} for some input $\mathbf{z} \in \mathbb{R}^M$.
\end{definition}
\begin{definition}[Dual certificate]
\label{dual_cer_def}
A vector $\boldsymbol{\mu}\in\mathbb{R}^M$ is regarded as a dual SDP certificate for the voltage vector $\mathbf{v}\in\mathbb{C}^N$ if it satisfies the following three properties:
\begin{align}\label{dual_cer}
\!\!\!\!
\mathbf{H}(\boldsymbol{\mu})\succeq \mathbf{0},\quad
\mathbf{H}(\boldsymbol{\mu})\mathbf{v}= \mathbf{0},\quad
\mathrm{rank}( \mathbf{H}(\boldsymbol{\mu}) )= N-1.\!\!
\end{align}
Denote the set of all dual SDP certificates for the voltage vector $\mathbf{v}$ as $\mathcal{D}(\mathbf{v})$.\vspace{-2mm}
\end{definition}
The SDP problem \eqref{PF-SDPP} can be further relaxed by replacing the high-order positive semidefinite constraint \eqref{PF-SDPP:cone} with second-order conic constraints on $2\times 2$ principal sub-matrices of $\mathbf{X}$ corresponding to certain lines of the network. This yields the SOCP relaxation:
\begin{subequations}\label{PF-SOCP}
\begin{align}
\mini_{\mathbf{X} \in \mathbb{H}^N}\quad &\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \label{PF-SOCP:obj} \\
\mathrm{subject~to}\quad &\mathrm{Tr}(\mathbf{M}_j\mathbf{X}) = z_j,\, &&\forall j\in {\mathcal M} \label{PF-SOCP:meq} \\
\quad &
\begin{bmatrix}
X_{s,s} & X_{s,t}\\
X_{t,s} &X_{t,t}
\end{bmatrix}
\succeq \mathbf{0}, &&\forall(s,t) \in \overline{\mathcal L},\label{PF-SOCP:cone}
\end{align}
\end{subequations}
where $\overline {\mathcal L}$ denotes the set of those edges of the network graph for which
the corresponding entry of $\mathbf{M}_{j}$ is nonzero for at least one index $j \in \{0,1,\ldots,M\}$.
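Relative to the SDP sketch given earlier, only the conic constraint changes; a hedged CVXPY fragment for \eqref{PF-SOCP:cone} (0-based indices, with \texttt{X} the Hermitian matrix variable and \texttt{L\_bar} the edge list $\overline{\mathcal L}$ from that sketch) is:
\begin{verbatim}
import cvxpy as cp

# 2x2 principal-submatrix constraints replacing the full N x N PSD cone,
# one block per edge (s, t) of L_bar (0-based bus indices).
cons = []
for (s, t) in L_bar:
    blk = cp.bmat([[X[s, s], X[s, t]],
                   [X[t, s], X[t, t]]])
    cons.append(0.5 * (blk + blk.H) >> 0)   # PSD constraint on the Hermitian part
\end{verbatim}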
\begin{definition}[SOCP recovery]
It is said that the SOCP relaxation problem \eqref{PF-SOCP} recovers the voltage vector $\mathbf{v} \in \mathbb{C}^N$ if there is some input $\mathbf{z} \in \mathbb{R}^M$ such that, for every solution $\mathbf{X}^{\mathrm{opt}}$ of \eqref{PF-SOCP}, those entries of the matrix $\mathbf{X}^{\mathrm{opt}}-\mathbf{v}\bv^{*}$ on the diagonal or corresponding to the members of $\overline{\mathcal L}$ are all equal to zero.
\end{definition}
\begin{figure}[t]
\centering
{\includegraphics[width=0.24\textwidth]{angleCond2.eps}}
\caption{The demonstration of the angle conditions \eqref{angleMy} and \eqref{angleV}. The acceptable
regions for the voltage phase difference $\measuredangle v_{s}-\measuredangle v_{t}$ (blue open half-space)
and the entry $M_{0;st}$ (yellow open half-space) are shown relative to the branch admittance $y_{st}$ (red dot). }
\label{fig:angleCond}
\end{figure}
\vspace{-2mm}
\section{Exact Recovery of Power Flow Solution}
The objective of this section is to show that the SDP problem~\eqref{PF-SDPP} is exact and the correct complex voltage vector $\mathbf{v}$ can be recovered for a class of nodal and branch noiseless measurements.
Let ${\mathcal G}^{\prime} = ({\mathcal N},{\mathcal L}^{\prime})$ denote an arbitrary subgraph of $\mathcal G$ that contains a spanning tree of $\mathcal G$.
Throughout the rest of this section, we assume that the available measurements consist of:
(i) voltage magnitudes at all buses, and
(ii) active power flow at the ``from'' end of each line of ${\mathcal G}^{\prime}$. Note that whenever the SDP relaxation is exact for this set of measurements, it remains exact if more measurements are available. Please refer to Corollary~\ref{coro:nodalM} and Remark~\ref{rem:r2} for more details.
The SDP relaxation of \eqref{PFP2} can be expressed as
\begin{subequations}\label{PF-SDPP-specM}
\begin{align}
\mini_{\mathbf{X} \in\mathbb{H}^{N}}\quad &\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \label{PF-SDPP-specM:obj}\\
\mathrm{subject~to}\quad &X_{k,k} = |v_k|^2, &&\forall k\in {\mathcal N} \label{PF-SDPP-specM:meq_Node} \\
&\mathrm{Tr}(\mathbf{Y}_{l,p_f}\mathbf{X})=p_{l,f}, &&\forall l\in {\mathcal L}^{\prime}
\label{PF-SDPP-specM:meq_Branch}\\
&\mathbf{X}\succeq \mathbf{0}.
\end{align}
\end{subequations}
Moreover, the SOCP relaxation of \eqref{PFP2} can be written as:
\begin{subequations}\label{PF-SOCP-specM}
\begin{align}
\mini_{\mathbf{X} \in\mathbb{H}^{N}}\quad &\mathrm{Tr}(\mathbf{M}_0\mathbf{X})\label{PF-SOCP-specM:obj}\\
\mathrm{subject~to}\quad &X_{k,k} = |v_k|^2, &&\forall k\in {\mathcal N} \label{PF-SOCP-specM:meq_Node}\\
&\mathrm{Tr}(\mathbf{Y}_{l,p_f}\mathbf{X})=p_{l,f}, &&\forall l\in {\mathcal L}^{\prime}\label{PF-SOCP-specM:meq_Branch}\\
\quad &
\begin{bmatrix}
X_{s,s} & X_{s,t}\\
X_{t,s} &X_{t,t}
\end{bmatrix}
\succeq \mathbf{0}, &&\forall(s,t) \in {\mathcal L}^{\prime}.\label{PF-SOCP-specM:cone}
\end{align}
\end{subequations}
\begin{definition}[Sparsity graph]
Given a Hermitian matrix $\mathbf{W}\in\mathbb H^N$,
the sparsity graph of $\mathbf{W}$, denoted by $\mathscr{G}(\mathbf{W})$, is a simple undirected graph with the vertex set $\{1,2,\ldots,N\}$ such that every two distinct vertices $i$ and $j$ are connected to each other if and only if the $(i,j)$ entry of $\mathbf{W}$ is nonzero.
\end{definition}
\begin{assumption}\label{asmp1}
The edge set of $\mathscr{G}(\mathbf{M}_0)$ coincides with ${\mathcal L}^{\prime}$ and in addition,
\begin{align} \label{angleMy}
-180^{\circ}<\measuredangle M_{0;st}-\measuredangle y_{st}<0,\quad \forall (s,t)\in{\mathcal L}^{\prime},
\end{align}
where $M_{0;st}$ denotes the $(s,t)$ entry of $\mathbf{M}_0$.
Moreover, the solution $\mathbf{v}$ being sought satisfies the relations
\begin{subequations}
\label{angleV}
\begin{align}
\hspace{-0.2cm}0<(\measuredangle v_{s}-\measuredangle v_{t})-\measuredangle y_{st}<180^{\circ},\quad \forall (s,t)\in{\mathcal L}^{\prime} \label{angleV1}\\
\hspace{-0.2cm}(\measuredangle v_{s}-\measuredangle v_{t})-\measuredangle M_{0;st}\neq 0 \, \ \mathrm{or}\, \ 180^{\circ},\quad \forall (s,t)\in{\mathcal L}^{\prime}. \label{angleV12}
\end{align}
\end{subequations}
\end{assumption}
To reduce power losses, real-world transmission systems feature low R/X ratios (the ratio of line resistance to reactance).
The angle of the line admittance $\measuredangle y_{st}$ is therefore close to $-90^{\circ}$ \cite[Sec. 3.7]{Weedy12}.
Meanwhile, since the transferred real power is proportional to its corresponding voltage angle difference,
the number $|\measuredangle v_{s}-\measuredangle v_{t}|$ is typically small due to thermal and stability limits \cite{GG13,Andersson08}.
Hence, the angle condition \eqref{angleV1} is expected to hold. For lossless networks, \eqref{angleV1} requires each line voltage angle difference to be between $-90^{\circ}$ and $90^{\circ}$, which is a very practical assumption. The acceptable regions for $\measuredangle v_{s}-\measuredangle v_{t}$
and $M_{0;st}$ are shown in Figure \ref{fig:angleCond}. It can be observed that one convenient choice for the matrix $\mathbf{M}_{0}$ is to select its entries $M_{0;st}$ as complex numbers with negative real and imaginary parts.
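One such design can be sketched directly in code; the scaling below is arbitrary, and the entries are placed only on the measured branches so that the edge set of $\mathscr{G}(\mathbf{M}_0)$ coincides with ${\mathcal L}^{\prime}$.
\begin{verbatim}
import numpy as np

def design_M0(N, tree_edges):
    """A simple admissible M0: entries with negative real and imaginary parts
    on the measured branches (s, t), Hermitian overall; 0-based indices.
    With line admittance angles near -90 deg, the -135 deg entries below keep
    angle(M0_st) - angle(y_st) inside (-180 deg, 0)."""
    M0 = np.zeros((N, N), dtype=complex)
    for (s, t) in tree_edges:
        M0[s, t] = -1.0 - 1.0j
        M0[t, s] = np.conj(M0[s, t])
    return M0
\end{verbatim}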
\begin{lemma}
\label{lem:dualcertf}
Under Assumption~\ref{asmp1}, there exists a dual SDP certificate for the voltage vector $\mathbf{v} \in \mathbb{C}^N$.
\end{lemma}
\begin{proof}
The proof is provided in Appendix~\ref{appendix:dualcertf}.
\end{proof}
\begin{theorem}
\label{thm:tightrelax}
Under Assumption~\ref{asmp1}, the SDP relaxation \eqref{PF-SDPP-specM} and the SOCP relaxation~\eqref{PF-SOCP-specM} both recover the voltage vector $\mathbf{v} \in \mathbb{C}^N$.
\end{theorem}
\begin{proof}
The proof is provided in Appendix~\ref{appendix:tightrelax}.~\end{proof}
To be able to recover a large set of voltage vectors, Theorem~\ref{thm:tightrelax} implies that there are infinitely many choices for the objective function of the SDP relaxation, namely all matrices $\mathbf{M}_0$ satisfying Assumption~\ref{asmp1}. Now, consider the case with extra nodal measurements
\begin{subequations}\label{nodalPQ}
\begin{align}
\mathrm{Tr}(\mathbf{Y}_{k,p}\mathbf{X}) &= p_k, \quad\ \forall k\in {\mathcal N}_p \\
\mathrm{Tr}(\mathbf{Y}_{k^{\prime},q}\mathbf{X}) &= q_{k^\prime}, \quad \forall k^{\prime}\in {\mathcal N}_q.
\end{align}
\end{subequations}
The next corollary shows that the property of the exact relaxation is preserved in the presence of
these arbitrary extra power injection measurements. As will be studied later in the paper, the availability of extra measurements seems unnecessary for the PF problem, but is instrumental in recovering the state of the system in the noisy setup.
\begin{corollary}
\label{coro:nodalM}
Under Assumption \ref{asmp1}, the SDP relaxation \eqref{PF-SDPP-specM} and the SOCP relaxation~\eqref{PF-SOCP-specM} with the
additional constraints of power injection measurements \eqref{nodalPQ}
both recover the voltage vector $\mathbf{v} \in \mathbb{C}^N$.
\begin{proof}
With extra nodal power measurements, $\mathbf{X}=\mathbf{v}\mathbf{v}^{\ast}$ still remains feasible for both problems. Therefore the corollary comes as a direct result of Theorem~\ref{thm:tightrelax}.
\end{proof}
\end{corollary}
\begin{figure}[t]
\centering
\includegraphics[width=0.27\textwidth]{3bus.eps}
\caption{A 3-bus power network with the voltage magnitude measurements $|v_1|$, $|v_2|$ and $|v_3|$,
as well as the branch active power measurements $p_{12}$ and $p_{23}$.}
\label{fig:3bus}
\vspace{-0.4cm}
\end{figure}
\vspace{-2mm}
\subsection{Effect of Reactive Power Branch Measurements}
In the preceding section, the exactness of the SDP and SOCP relaxations was studied in the case where branch active power flows are measured. In what follows, it will be shown that reactive power line flows do not offer the same benefits as active power measurements. Assume that, instead of the active power flow, the reactive power flow at the ``from'' end of each branch of ${\mathcal G}^{\prime}$ is measured.
In this case, Theorem~\ref{thm:tightrelax} still holds if the conditions provided in Assumption~\ref{asmp1} are replaced by:
\begin{subequations}
\begin{align}
\mathrm{Re}(M_{0;st}y^{\ast}_{st})\neq 0 \quad \mathrm{and} \quad
\mathrm{Im}(v_sv^{\ast}_t M^{\ast}_{0;st}) &\neq 0 \\
\mathrm{Re}(v_sv^{\ast}_t y^{\ast}_{st})\mathrm{Re}(M_{0;st}y^{\ast}_{st}) & \leq 0. \label{recond2Q}
\end{align}
\end{subequations}
In contrast to the case with the measurements of $p_{l,f}$, the following two different scenarios must be considered for \eqref{recond2Q}:
\begin{itemize}
\item[(i):] if $90^{\circ}<(\measuredangle v_{s}-\measuredangle v_{t})-\measuredangle y_{st}\leq 180^{\circ}$, then $\mathrm{Re}(v_sv^{\ast}_t y^{\ast}_{st})<0$
and $\mathrm{Re}(M_{0;st}y^{\ast}_{st})>0$, which imply that
\begin{align}
-90^{\circ} \leq \measuredangle M_{0;st}-\measuredangle y_{st}\leq 90^{\circ},
\end{align}
\item[(ii):] if $0 \leq (\measuredangle v_{s}-\measuredangle v_{t})-\measuredangle y_{st} < 90^{\circ} $, then $\mathrm{Re}(v_sv^{\ast}_t y^{\ast}_{st})>0$
and $\mathrm{Re}(M_{0;st}y^{\ast}_{st})<0$, which imply that
\begin{align}
90^{\circ} \leq \measuredangle M_{0;st}-\measuredangle y_{st} \leq 270^{\circ}.
\end{align}
\end{itemize}
As a result, $\measuredangle M_{0;st}$ must belong to one of the two complementary intervals
$[\measuredangle y_{st}+90^{\circ}, \measuredangle y_{st}+270^{\circ}]$ and $[\measuredangle y_{st}-90^{\circ}, \measuredangle y_{st}+90^{\circ}]$,
depending on the value of $\measuredangle v_{s}-\measuredangle v_{t}$. Therefore, it is impossible to design the matrix $\mathbf{M}_{0}$ in advance
without knowing the phase angle difference $\measuredangle v_{s}-\measuredangle v_{t}$.
\begin{remark} \label{rem:r2}
We assume that available measurements include the voltage magnitude at each bus and active line flows over at least a spanning tree of the power network.
Such an assumption is realistic in practical power systems since these two types of measurements are typically provided by the SCADA system with little incremental cost \cite{Korres2011},
while also used for conventional static state estimation algorithms \cite{Phadke08}.
Another source of voltage magnitude measurements comes from the increasing usage of PMUs.
Moreover, the selection of line power flow measurements features several advantages \cite{Dopazo70,Porretta73}:
\begin{itemize}
\item The spanning tree line flow measurements ensure the network observability \cite{Abur99,WuKK06}.
\item The line flow measurements can be directly used for monitoring, which is of practical importance.
\item Measurements at both ends of lines are very effective in detecting and identifying incorrect data.
\item The numerical computation is fast and stable, while the results are less sensitive to measurement errors.
\end{itemize}
Nevertheless, the above assumption on the types of measurements is not essential for the validity of the proposed convexification framework. In other words, this framework can be deployed for arbitrary measurements, but we study its performance under the above assumption. It is worth stressing that, similar to the aforementioned PF problem, additional measurements such as nodal power injections
can be readily incorporated in our framework for PSSE.
\end{remark}
\subsection{Three-Bus Example}
Consider the 3-bus power system shown in Figure~\ref{fig:3bus}.
Suppose that the measured signals consist of the two active power line flows $p_{12}$ and $p_{23}$, as well as the nodal voltage squared magnitudes
$|v_1|^2$, $|v_2|^2$ and $|v_3|^2$. Theorem~\ref{thm:tightrelax} states that the SDP and SOCP relaxation problems~\eqref{PF-SDPP-specM} and \eqref{PF-SOCP-specM} are both able to find the unknown voltage vector $\mathbf{v}$, using an appropriately designed coefficient matrix $\mathbf{M}_0$. It turns out that $\mathbf{v}$ can also be found through a direct calculation. More precisely, one can write
\begin{subequations}\label{eq:ptheta}
\begin{align}
p_{12} &= {\rm Re}(v_1(v_1-v_2)^{*}y_{12}^{*})= |v_1|^2{\rm Re}(y_{12}) \notag \\
&- |v_1||v_2||y_{12}|\cos(\measuredangle v_{1}-\measuredangle v_{2}-\measuredangle y_{12}) \\
p_{23} &= {\rm Re}(v_2(v_2-v_3)^{*}y_{23}^{*}) = |v_2|^2{\rm Re}(y_{23})\notag \\
& - |v_2||v_3||y_{23}|\cos(\measuredangle v_{2}-\measuredangle v_{3}-\measuredangle y_{23}),
\end{align}
\end{subequations}
which yields that
\begin{subequations}\label{eq:vtheta}
\begin{align}
\measuredangle v_{1}-\measuredangle v_{2}&= \pm\arccos\left(\frac{|v_1|^2{\rm Re}(y_{12})-p_{12}}{|v_1||v_2||y_{12}|}\right) +\measuredangle y_{12} \\
\measuredangle v_{2}-\measuredangle v_{3} &= \pm\arccos\left(\frac{|v_2|^2{\rm Re}(y_{23})-p_{23}}{|v_2||v_3||y_{23}|}\right)+ \measuredangle y_{23}.
\end{align}
\end{subequations}
Each phase difference $\measuredangle v_{1}-\measuredangle v_{2}$ or $\measuredangle v_{2}-\measuredangle v_{3}$ can have two possible solutions,
but only one of them satisfies the angle condition \eqref{angleV1}. Hence, all complex voltages can be readily recovered.
This argument applies to general power networks. In other words, without resorting to the relaxed problems~\eqref{PF-SDPP-specM} and \eqref{PF-SOCP-specM},
the PF problem considered in this paper can be directly solved by the calculation of phase angles.
However, once the measurements are noisy, the equations \eqref{eq:vtheta} cannot be used because the exact values of the quantities $p_{12}$, $p_{23}$, $|v_1|^2$, $|v_2|^2$ and $|v_3|^2$ are no longer available since they are corrupted by noise.
In contrast, the proposed SDP and SOCP relaxations work in both noiseless and noisy cases. This will be elaborated in the next section.
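To make the direct calculation above concrete, the following sketch carries it out numerically for the 3-bus example; the admittances and the true voltages are hypothetical placeholders chosen only for illustration, and the measurements are taken to be noiseless.
\begin{verbatim}
import numpy as np

# Hypothetical series admittances and true voltages (placeholders).
y12, y23 = 1.0 - 5.0j, 0.8 - 4.0j
v = np.array([1.02, 1.00 * np.exp(-0.05j), 0.98 * np.exp(-0.09j)])

# Noiseless measurements: squared magnitudes and two active line flows.
Vm2 = np.abs(v) ** 2
p12 = np.real(v[0] * np.conj(v[0] - v[1]) * np.conj(y12))
p23 = np.real(v[1] * np.conj(v[1] - v[2]) * np.conj(y23))

def angle_diff_candidates(p, Vi2, Vj2, y):
    # cos(th_i - th_j - ang(y)) = (Vi2*Re(y) - p) / (|v_i||v_j||y|)
    c = (Vi2 * np.real(y) - p) / (np.sqrt(Vi2 * Vj2) * abs(y))
    a = np.arccos(np.clip(c, -1.0, 1.0))
    return np.angle(y) + a, np.angle(y) - a   # two candidates

# Only one candidate in each pair satisfies the angle condition.
print(angle_diff_candidates(p12, Vm2[0], Vm2[1], y12),
      np.angle(v[0]) - np.angle(v[1]))
print(angle_diff_candidates(p23, Vm2[1], Vm2[2], y23),
      np.angle(v[1]) - np.angle(v[2]))
\end{verbatim}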
As a byproduct of the discussion made above, one can obtain the following result.
\begin{corollary}
\label{cor:c2}
The PF problem has a unique solution satisfying Assumption~\ref{asmp1}. Moreover, this solution can be recovered using the SDP relaxation \eqref{PF-SDPP-specM} and the SOCP relaxation~\eqref{PF-SOCP-specM}.
\end{corollary}
\vspace{-2mm}
\section{Convexification of State Estimation Problem}
Consider the PSSE as a generalization of the PF problem, where the measurements are subject to noise. As explained in Corollary~\ref{cor:c2}, the unknown solution $\bold v$ is unique under Assumption~\ref{asmp1}.
To find this solution, consider the optimization problem:
\begin{subequations}\label{prob:PSSE2}
\begin{align}
\mini_{\mathbf{v} \in \mathbb{C}^N,\, \bm{\nu} \in \mathbb{R}^M}\quad &f(\bm{\nu}) \\
\mathrm{subject~to}\quad &z_j - \mathbf{v}^{*}\mathbf{M}_j\mathbf{v} = \nu_j,\quad \forall j\in {\mathcal M}, \label{PSSE2:constraint}
\end{align}
\end{subequations}
where $\bm{\nu}:=[\nu_1,\ldots,\nu_M]^{\top}$ and the function $f(\cdot)$ quantifies the estimation criterion.
Common choices of $f(\cdot)$ are the weighted $\ell_1$ and $\ell_2$ norm functions:
\begin{align}
f_{\mathrm{WLAV}}(\bm{\nu}) & = \frac{|\nu_1|}{\sigma_1} + \frac{|\nu_2|}{\sigma_2} + \cdots + \frac{|\nu_M|}{\sigma_M}\\
f_{\mathrm{WLS}}(\bm{\nu}) & = \frac{\nu_1^2}{\sigma_1^2} + \frac{\nu_2^2}{\sigma_2^2} + \cdots + \frac{\nu_M^2}{\sigma_M^2},
\end{align}
where $\sigma_1,...,\sigma_M$ are positive constants.
\begin{remark}
The above functions correspond to the weighted least absolute value (WLAV) and weighted least square (WLS) estimators, which arise as the maximum likelihood estimators when the noises have a Laplace or normal distribution, respectively. Note that possible outliers in the measurements are better modeled by the Laplace distribution, which features heavier tails than the normal distribution. Consequently, the WLAV estimator is more robust to outliers. In contrast, the non-robustness of the WLS estimator is primarily attributed to the squared residuals, through which outliers with large residuals exert a disproportionate influence and skew the regression.
\end{remark}
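As a small numerical illustration of this robustness argument, the sketch below (with hypothetical residuals and weights) shows how a single gross outlier inflates the WLS cost quadratically while affecting the WLAV cost only linearly.
\begin{verbatim}
import numpy as np

def f_wlav(nu, sigma):          # weighted least absolute value
    return np.sum(np.abs(nu) / sigma)

def f_wls(nu, sigma):           # weighted least squares
    return np.sum(nu ** 2 / sigma ** 2)

sigma = np.full(5, 0.01)        # hypothetical weights
nu_clean = np.full(5, 0.01)     # small residuals
nu_bad = nu_clean.copy()
nu_bad[0] = 0.5                 # one gross outlier

for f in (f_wlav, f_wls):
    print(f.__name__, f(nu_clean, sigma), f(nu_bad, sigma))
\end{verbatim}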
Due to the inherent quadratic relationship between the voltage vector $\mathbf{v}$ and the measured quantities $\{|v_i|^2,\mathbf{p},\mathbf{q},\mathbf{p}_l,\mathbf{q}_l\}$, the
quadratic equality constraints \eqref{PSSE2:constraint} make the problem \eqref{prob:PSSE2}
non-convex and NP-hard in general. To remedy this drawback, consider the penalized SDP relaxation
\begin{subequations}\label{PSSE-SDPP}
\begin{align}
\mini_{\mathbf{X} \in \mathbb{H}^N, \bm{\nu} \in \mathbb{R}^M}\quad & \rho f(\bm{\nu})+\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \\
\mathrm{subject~to}\quad &\mathrm{Tr}(\mathbf{M}_j\mathbf{X})+\nu_j = z_j,\quad \forall j\in {\mathcal M} \\
\quad & \mathbf{X} \succeq \mathbf{0},
\end{align}
\end{subequations}
where $\rho>0$ is a pre-selected coefficient that balances the data fitting cost $f(\bm{\nu})$ with the convexification term
$\mathrm{Tr}(\mathbf{M}_0\mathbf{X})$. The latter term is inherited from the SDP relaxation for the PF problem to deal with the non-convexity of the power flow equations. Similarly, a penalized SOCP relaxation problem can be derived as
\begin{subequations}\label{PSSE-SOCP}
\begin{align}
\mini_{\mathbf{X} \in \mathbb{H}^N, \bm{\nu} \in \mathbb{R}^M}\quad &\rho f(\bm{\nu})+\mathrm{Tr}(\mathbf{M}_0\mathbf{X}) \label{PSSE-SOCP:obj} \\
\mathrm{subject~to}\quad &\mathrm{Tr}(\mathbf{M}_j\mathbf{X})+\nu_j = z_j,\, &&\forall\, j\in {\mathcal M} \label{PSSE-SOCP:meq} \\
\quad &
\begin{bmatrix}
X_{s,s} & X_{s,t}\\
X_{t,s} &X_{t,t}
\end{bmatrix}
\succeq \mathbf{0}, &&\forall~(s,t) \in \overline{\mathcal L},\label{PSSE-SOCP:cone}
\end{align}
\end{subequations}
where $\overline {\mathcal L}$ denotes the set of edges of the network graph for which
the corresponding entry of $\mathbf{M}_{j}$ is nonzero for at least one index $j \in \{0,1,\ldots,M\}$.
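For illustration, a minimal sketch of the penalized SDP relaxation \eqref{PSSE-SDPP} with the WLAV data-fitting term is given below; it assumes the Python package \texttt{cvxpy} together with an SDP-capable solver, and it takes the matrices $\mathbf{M}_0$, $\mathbf{M}_j$, the measurements $z_j$ and the weights $\sigma_j$ as given inputs.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def penalized_sdp_psse(M0, M_list, z, sigma, rho):
    """Sketch of the penalized SDP relaxation with the WLAV term.

    M0     : (N, N) Hermitian penalty matrix
    M_list : list of (N, N) Hermitian measurement matrices M_j
    z      : measured values z_j
    sigma  : positive weights sigma_j
    rho    : penalty coefficient
    """
    N = M0.shape[0]
    X = cp.Variable((N, N), hermitian=True)
    nu = cp.Variable(len(z))
    constraints = [X >> 0]
    constraints += [cp.real(cp.trace(Mj @ X)) + nu[j] == z[j]
                    for j, Mj in enumerate(M_list)]
    wlav = cp.sum(cp.multiply(1.0 / np.asarray(sigma), cp.abs(nu)))
    objective = cp.Minimize(rho * wlav + cp.real(cp.trace(M0 @ X)))
    cp.Problem(objective, constraints).solve()
    return X.value, nu.value
\end{verbatim}
Replacing the single constraint $\mathbf{X}\succeq\mathbf{0}$ by the $2\times 2$ principal submatrix constraints of \eqref{PSSE-SOCP:cone} turns the same sketch into the penalized SOCP relaxation.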
Based on the results derived earlier for the PF problem,
we will next develop strong theoretical results on the estimation error for the PSSE.
\subsection{Bounded Estimation Error}
In this subsection, we assume that the function $f(\bm{\nu})$ corresponds to the WLAV estimator, and that the available measurements consist of the voltage magnitudes at all buses and the active power flow at the ``from'' end of each line of ${\mathcal G}^{\prime}$. The results to be presented next hold true in the presence of extra power measurements (see Remark~\ref{rem:r2}).
The penalized problem~\eqref{PSSE-SDPP} can be expressed as
\begin{align}\label{PSSE-SDPP2}
\min_{\mathbf{X} \succeq \mathbf{0}}\, \mathrm{Tr}(\mathbf{M}_0\mathbf{X})\!+\!\rho \sum_{j=1}^M \sigma_j^{-1}\left|\mathrm{Tr}\left(\mathbf{M}_j(\mathbf{X}\!-\!\mathbf{v}\bv^{*})\right)\!-\!\eta_j\right|.
\end{align}
We aim to show that the solution of the penalized relaxation estimates the true solution of PSSE, where the estimation error is a function of the noise power. Define $\boldsymbol{\eta}$ as the vector of the noise values $\eta_1,..,\eta_M$.
\begin{theorem}\label{thm:rmse}
Suppose that Assumption \ref{asmp1} holds. Consider an arbitrary dual SDP certificate $\hat{\boldsymbol{\mu}}\in\mathcal{D}(\mathbf{v})$, where $\bf v$ is the unique solution of the PSSE problem. Let $(\mathbf{X}^{\mathrm{opt}},\boldsymbol{\nu}^{\mathrm{opt}})$ denote an optimal solution of the penalized convex program \eqref{PSSE-SDPP} with $f(\bm{\nu})=f_{\mathrm{WLAV}}(\bm{\nu})$ and a coefficient $\rho$ satisfying the inequality
\begin{align}\label{eq:rho}
\rho \geq \max_{j\in{\mathcal M}} |\sigma_j\hat{\mu}_j|.
\end{align} There exists a scalar $\beta >0$ such that
\begin{align} \label{rmse:Xopt}
\zeta := \frac{\|\mathbf{X}^{\mathrm{opt}}\! -\! \beta\mathbf{v}\mathbf{v}^{*}\|_F}{\sqrt{N\times \mathrm{Tr}({\mathbf{X}}^{\mathrm{opt}})}}
\leq 2\sqrt{\frac{\rho \!\times\! f_{\mathrm{WLAV}}(\boldsymbol{\eta})}{N\lambda}},
\end{align}
where $\lambda$ is the second smallest eigenvalue of the matrix $\mathbf{H}(\hat{\boldsymbol{\mu}})$.
\end{theorem}
\begin{proof}
The proof is provided in Appendix~\ref{appendix:rmse}.\end{proof}
Note that the numerator of $\zeta$ quantifies the distance between the optimal solution of the penalized convex program and the true PSSE solution.
The denominator of $\zeta$ is expected to be around $N$ since
$\mathrm{Tr}({\mathbf{X}}^{\mathrm{opt}})\simeq N$ in the noiseless scenario.
Hence, the quantity $\zeta$ can be regarded as a root-mean-square estimation error.
Theorem~\ref{thm:rmse} establishes an upper bound for the estimation error as a function of the noise power
$f_{\mathrm{WLAV}}(\boldsymbol{\eta})$. In particular, the error is zero if $\boldsymbol{\eta}=0$.
This theorem provides an upper bound on the estimation error without using any statistical information of the random vector $\boldsymbol{\eta}$.
In what follows, the upper bound will be further studied for Gaussian random variables.
To this end, define $\kappa$ as $\frac{M}{N}$. If $M$ were equal to the number of lines in the network, $\kappa$ would lie between 1.5 and 2 for most real-world power systems \cite{Chow13}.
\begin{corollary}\label{coro:probbound}
Suppose that the noise $\boldsymbol{\eta}$ is a zero-mean Gaussian vector with the covariance matrix $\bm{\Sigma} = {\mathrm {diag}}(\sigma_1^2,...,\sigma_M^2)$.
Under the assumptions of Theorem~\ref{thm:rmse}, the tail probability of the estimation error $\zeta$ is upper bounded as
\begin{align}
\mathbb{P}(\zeta>t) \leq \mathrm{e}^{-\gamma M}
\end{align}
for every $t>0$, where $\gamma = \frac{t^4\lambda^2}{32\kappa^2\rho^2}-\ln2$.
\end{corollary}
\begin{proof}
The proof is given in Appendix \ref{appendix:probbound}.
\end{proof}
Recall that the measurements used for solving the PSSE problem include one active power flow per line of the subgraph ${\mathcal G}^{\prime}$. The graph ${\mathcal G}^{\prime}$ could be as small as a spanning tree of $\mathcal G$ or as large as the entire graph $\mathcal G$. Although the results developed in this paper apply in all of these cases, the number of measurements can vary significantly with the choice of ${\mathcal G}^{\prime}$. A natural question is how the number of measurements affects the estimation error. To address this question, note that if some measurements are known to be corrupted by high levels of noise, it would be preferable to discard those bad measurements. To rule out this scenario, assume that the two sets of measurements to be compared have similar noise levels. We aim to show that the set with the higher cardinality leads to a smaller estimation error bound.
\begin{definition} \label{def:dd1} Define $\omega ({\mathcal G}^{\prime})$ as the minimum of $2\sqrt{\frac{\rho}{N\lambda}}$ over all dual SDP certificates $\hat{\boldsymbol{\mu}}\in\mathcal{D}(\mathbf{v})$, where $\rho= \max_{j\in{\mathcal M}} |\sigma_j\hat{\mu}_j|$ and $\lambda$ denotes the second smallest eigenvalue of $\mathbf{H}(\hat{\boldsymbol{\mu}})$.
\end{definition}
In light of Theorem~\ref{thm:rmse},
the estimation error $\zeta$ satisfies the inequality
\begin{equation}
\zeta\leq \omega ({\mathcal G}^{\prime}) \sqrt{f_{\mathrm{WLAV}}(\boldsymbol{\eta})}
\end{equation}
if an optimal coefficient $\rho$ is used in the penalized convex problem. The term $ \sqrt{f_{\mathrm{WLAV}}(\boldsymbol{\eta})}$ is related to the noise power. If this term is kept constant, then the estimation error is a function of $\omega ({\mathcal G}^{\prime})$. Hence, it is desirable to analyze $\omega ({\mathcal G}^{\prime})$.
\begin{theorem}\label{thm:measurment} Consider two choices of the graph ${\mathcal G}^{\prime}$, denoted as $\mathcal G^{\prime}_1$ and $\mathcal G^{\prime}_2$, such that $\mathcal G^{\prime}_1$ is a subgraph of $\mathcal G^{\prime}_2$. Then, the relation
\begin{equation}
\omega (\mathcal G^{\prime}_2)\leq \omega (\mathcal G^{\prime}_1)
\end{equation}
holds.
\end{theorem}
\begin{proof}
The proof follows from the fact that the feasible set of the dual certificate $\hat{\boldsymbol{\mu}}$ for the case $\mathcal G^{\prime}=\mathcal G^{\prime}_1$ is contained in the feasible set of $\hat{\boldsymbol{\mu}}$ for $\mathcal G^{\prime}=\mathcal G^{\prime}_2$.
\end{proof}
The penalized convex program \eqref{PSSE-SDPP} may have a non-rank-1 solution in the noisy case.
Whenever the optimal solution $\mathbf{X}^{\text{opt}}$ is not rank 1, an estimated voltage vector $\hat{\mathbf{v}}$ can be obtained using a rank-1 approximation method, such as the following algorithm borrowed from \cite{madani2014promises}:
\begin{itemize}
\item [i)] Set the voltage magnitudes via the equations
\begin{align}\label{anglerecovery}
|\hat{v}_k| = \sqrt{\mathbf{X}_{k,k}^{\text{opt}}}, \quad k=1,2,\ldots,N.
\end{align}
\item [ii)] Set the voltage angles via the convex program
\begin{subequations}\label{magrecovery}
\begin{align}
\measuredangle \hat{\mathbf{v}} = &\argmin_{\measuredangle \mathbf{v} \in [-\pi, \pi]^N} \sum_{(s,t)\in {\mathcal L}}
|\measuredangle \mathbf{X}_{s,t}^{\text{opt}}- \measuredangle v_{s} + \measuredangle v_{t}| \\
&\mathrm{subject~to} \quad \measuredangle v_{\text{ref}} = 0.
\end{align}
\end{subequations}
\end{itemize}
Note that $\hat{\mathbf{v}}$ is the true solution of the PSSE problem if $\mathbf{X}^{\text{opt}}$ has rank 1.
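A minimal sketch of this recovery is given below; step (i) is implemented exactly, while the $\ell_1$ angle-recovery program of step (ii) is replaced, for simplicity, by a least-squares surrogate over the angle differences, so the sketch is an approximation of the cited procedure rather than a faithful implementation.
\begin{verbatim}
import numpy as np

def recover_voltage(X_opt, edges, ref=0):
    """Approximate rank-1 recovery from an optimal matrix X_opt.

    edges : pairs (s, t) whose entries X_opt[s, t] are used
    ref   : index of the reference bus (angle fixed to zero)
    """
    N = X_opt.shape[0]
    vmag = np.sqrt(np.real(np.diag(X_opt)))          # step (i)

    # Step (ii), simplified: fit theta_s - theta_t to angle(X_opt[s, t])
    # in the least-squares sense, with theta_ref = 0.
    A = np.zeros((len(edges) + 1, N))
    b = np.zeros(len(edges) + 1)
    for r, (s, t) in enumerate(edges):
        A[r, s], A[r, t] = 1.0, -1.0
        b[r] = np.angle(X_opt[s, t])
    A[-1, ref] = 1.0
    theta = np.linalg.lstsq(A, b, rcond=None)[0]
    return vmag * np.exp(1j * theta)
\end{verbatim}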
\subsection{Reduction of Computational Complexity}
Due to the presence of the positive semidefinite constraint $\mathbf{X} \succeq \mathbf{0}$,
solving the conic problems \eqref{PF-SDPP} and \eqref{PSSE-SDPP} is computationally expensive or
even prohibitive for large-scale power systems.
In this subsection, we deploy a graph-theoretic approach to replace
the complicating constraint $\mathbf{X} \succeq \mathbf{0}$
with a set of small-sized SDP or SOCP constraints.
\begin{definition}
The sparsity graph of the problem \eqref{PF-SDPP} or \eqref{PSSE-SDPP} is defined as the
union of the sparsity graphs of the coefficient matrices $\mathbf{M}_j$ for $j = 0,1,\ldots,M$.
In other words, the sparsity graph of \eqref{PF-SDPP} or \eqref{PSSE-SDPP} denoted as $\tilde{\mathcal G}=(\mathcal N,\tilde{\mathcal L})$
is a simple undirected graph with $N$ vertices,
which has an edge between every two distinct vertices $s$ and $t$ if and only if
the $(s,t)$ entry $\mathbf{M}_{j;st}$ of $\mathbf{M}_{j}$ is nonzero for some $j \in \{0,1,\ldots,M\}$.
\end{definition}
\begin{definition}[Tree decomposition]
A tree decomposition of $\tilde{\mathcal G}$ is a 2-tuple $({\mathcal B}, {\mathcal T})$,
where ${\mathcal B} = \{{\mathcal B}_1,\ldots,{\mathcal B}_Q\}$ is a collection of subsets of ${\mathcal N}$ and ${\mathcal T}$ is a tree whose nodes (called \emph{bags}) are the subsets ${\mathcal B}_r$ and satisfy the following properties:
\begin{itemize}
\item Vertex coverage: Each vertex of $\tilde{\mathcal G}$ is a member of at least one node of ${\mathcal T}$, i.e., ${\mathcal N} = {\mathcal B}_1 \cup \cdots \cup {\mathcal B}_Q$.
\item Edge coverage: For every edge $(s, t)$ in $\tilde{\mathcal G}$, there is a bag ${\mathcal B}_r$ that contains both ends $s$ and $t$.
\item Running intersection: For every two bags ${\mathcal B}_i$ and ${\mathcal B}_j$ in ${\mathcal T}$, every node on the path connecting ${\mathcal B}_i$ and ${\mathcal B}_j$ contains ${\mathcal B}_i \cap {\mathcal B}_j$.
In other words, all nodes of ${\mathcal T}$ that contain a common vertex of $\tilde{\mathcal G}$ should form a subtree.
\end{itemize}
\end{definition}
\begin{theorem}\label{chor_them_1}
The optimal objective values of the SDP problems \eqref{PF-SDPP} and \eqref{PSSE-SDPP} do not change if
their constraint $\mathbf{X} \succeq \mathbf{0}$ is replaced by the set of constraints
\begin{align}\label{decomSDP}
\mathbf{X}[{\mathcal B}_r,{\mathcal B}_r] \succeq \mathbf{0}, \quad \forall r\in\{ 1,2,\ldots,Q\}.
\end{align}
\end{theorem}
\begin{proof}
This theorem is a direct consequence of the matrix completion theorem and chordal extension \cite{Grone1984}.
\end{proof}
As a by-product of Theorem~\ref{chor_them_1}, all off-diagonal entries of $\mathbf{X}$ that do not appear in the submatrices $\mathbf{X}[{\mathcal B}_r,{\mathcal B}_r]$ are redundant and could be eliminated from the SDP relaxations. This significantly reduces the computational complexity for sparse power systems. As an example, consider the case where the sparsity graph $\tilde{\mathcal G}$ is acyclic. Then, $\tilde{\mathcal G}$ has a tree decomposition
such that each bag contains only two connected vertices of $\tilde{\mathcal G}$.
Hence, the decomposed constraints \eqref{decomSDP}
boil down to positive semidefinite constraints on a set of $2 \times 2$ submatrices of $\mathbf{X}$.
This special case is formalized below.
\begin{corollary}
Suppose that the sparsity graph $\tilde{\mathcal G}$ is a spanning tree of $\mathcal{G}$. Then, the optimal objective value of the penalized SDP problem \eqref{PSSE-SDPP} is equal to the optimal objective value of the penalized SOCP problem \eqref{PSSE-SOCP}.
\end{corollary}
It can be readily shown that the number of scalar optimization variables associated with the SOCP relaxation \eqref{PSSE-SOCP} (after eliminating redundant variables) is $\mathcal{O}(N)$
as opposed to $\mathcal{O}(N^2)$ for the SDP relaxation \eqref{PSSE-SDPP}.
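As an illustration of how the bags in \eqref{decomSDP} can be obtained in practice, the sketch below computes the elimination cliques of a greedy minimum-degree elimination ordering of the sparsity graph; after discarding bags contained in larger ones, each remaining bag ${\mathcal B}_r$ yields one small constraint $\mathbf{X}[{\mathcal B}_r,{\mathcal B}_r]\succeq\mathbf{0}$. This is only one standard heuristic for building a tree decomposition of a chordal extension, not the unique choice.
\begin{verbatim}
def elimination_bags(n, edges):
    """Bags from a greedy minimum-degree elimination ordering.

    n     : number of vertices, labelled 0..n-1
    edges : pairs (s, t) of the sparsity graph
    """
    adj = {v: set() for v in range(n)}
    for s, t in edges:
        if s != t:
            adj[s].add(t)
            adj[t].add(s)
    bags, remaining = [], set(range(n))
    while remaining:
        # vertex of minimum degree in the remaining graph
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        nbrs = adj[v] & remaining
        bags.append({v} | nbrs)
        for a in nbrs:                   # fill-in edges among neighbours
            adj[a] |= nbrs - {a}
        remaining.remove(v)
    # discard bags strictly contained in another bag
    return [b for b in bags if not any(b < c for c in bags)]
\end{verbatim}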
\section{Numerical Tests}\label{sec:test}
In this section, numerical results are presented to verify the performance of the proposed convexification techniques for
the PSSE problem. The tests are conducted on several benchmark power systems \cite{Josz16}, where the admittance matrices and the
underlying system states are obtained from \texttt{MATPOWER} \cite{matpower}.
Unless otherwise stated, the available measurements are assumed to be: (i) voltage magnitudes at all buses, and
(ii) one active power flow per line of a spanning tree of the network.
The tree is obtained by the function \texttt{graphminspantree} in the Matlab bioinformatics toolbox \cite{matlabMST}.
We first compare the proposed SOCP relaxation \eqref{PSSE-SOCP} with the conventional WLS estimator (by using the Matpower function \texttt{run\_se} with flat start) for estimating the true complex voltage vector $\mathbf{v}$. The performance metric is the root-mean-square error (RMSE) of the estimated voltage $\hat{\mathbf{v}}$, which is defined as $\xi(\hat{\mathbf{v}}):=\|\hat{\mathbf{v}}-\mathbf{v}\|_2/\sqrt{N}$.
The simulation results tested on the IEEE 57-bus and 118-bus systems are shown in Figures \ref{WLS_Newton}(a) and \ref{WLS_Newton}(b), respectively.
In each case, the measurements, namely the voltage magnitudes at all buses and the active power flows at both ends of all lines, are corrupted by 100 randomly generated realizations of noise. The zero-mean Gaussian noises have 0.002 and 0.001 per unit standard deviations for squared voltage magnitudes and line flows, respectively. In addition, $20\%$ of randomly chosen line flow measurements are generated as bad data,
which are contaminated by adding zero-mean Gaussian noises with 0.1 per unit standard deviation.
The coefficient matrix $\mathbf{M}_0$ is chosen as a real symmetric matrix with negative values at entries corresponding to the line flow measurements and zero elsewhere. The penalty weight is set to $\rho=1$ for all test cases.
Clearly, the penalized SOCP method significantly outperforms the conventional Newton-based WLS estimator.
Furthermore, we evaluate the effect of different types of measurements and scaling of load demand on the performance of PSSE. The simulation results are shown in Figure \ref{WLS_Newton_2}. In Figure \ref{WLS_Newton_2}(a), voltage measurements are only given at the reference and load (PQ) buses.
In addition to the active power flows at both ends of all lines, reactive power flows are available at ``to'' ends of half of the lines. Despite the fact that our assumption on voltage measurements does not hold in this case,
the proposed approach still has much smaller RMSEs. Similarly, performance gains are observed in Figure \ref{WLS_Newton_2}(b), where all fixed loads are scaled up 10\%.
\begin{figure}[t]
\centering
\hspace{-0.7cm}\subfloat[\label{WLS_Newton_57}]{\includegraphics[width = 7.7cm]{WLS_Newton_57.eps}}\\
\hspace{-0.7cm}\subfloat[\label{WLS_Newton_118}]{\includegraphics[width = 7.8cm]{WLS_Newton_118.eps}}
\caption{The RMSEs of the estimated voltages obtained by the penalized SOCP method and the WLS-based Newton method: (a) IEEE 57-bus system, (b) IEEE 118-bus system.}\label{WLS_Newton}
\end{figure}
\begin{figure}[t]
\centering
\hspace{-0.7cm}\subfloat[\label{WLS_Newton_partV}]{\includegraphics[width = 7.7cm]{fig1a.eps}}\\
\hspace{-0.7cm}\subfloat[\label{WLS_Newton_scaleload}]{\includegraphics[width = 7.7cm]{fig1b.eps}}
\caption{The RMSEs of the estimated voltages obtained by the penalized SOCP method and the WLS-based Newton method for the IEEE 57-bus system: (a) voltage magnitude measurements are not available at generator (PV) buses, (b) active and reactive power of all loads are scaled up 10\%.}\label{WLS_Newton_2}
\end{figure}
The numerical results for the penalized SDP relaxation problem \eqref{PSSE-SDPP} on several benchmark systems are shown in Tables \ref{tab:perf1} and \ref{tab:perf2}. The following quantities, which appear in \eqref{rmse:Xopt}, are reported for each case:
\begin{itemize}
\item $\zeta$: the RMSE of the obtained optimal SDP solution $\mathbf{X}^{\text{opt}}$.
\item $\zeta^{\text{max}}$: the upper bound of $\zeta$.
\item Other relevant quantities $\beta$, $\lambda$, $f_{\text{WLAV}}$ and $\rho^{\text{min}}$.
\end{itemize}
In this test, for each squared voltage magnitude $\{|v_k|^2\}_{k\in {\mathcal N}}$, the standard deviation of the zero-mean Gaussian noise is set to $c$ times the corresponding noiseless value, where $c>0$ is a pre-selected scalar quantifying the noise level. Likewise, the standard deviations for nodal and branch active/reactive power measurements are set to $1.5c$ and $2c$ times the corresponding noiseless values, respectively.
The entries of matrix $\mathbf{M}_0$ are set as $\mathbf{M}_{0;st} = -\mathbf{B}_{st}$ for all
$(s,t)\in{\mathcal L}^{\prime}$, and $\mathbf{M}_{0;ii} = \sum_{j=1}^N|\mathbf{B}_{i,j}|$ for $i=1,2,\ldots,N$.
The penalty weight is set to $\rho^{\text{min}}:= \max_{j\in{\mathcal M}} |\sigma_j\hat{\mu}_j|$ as given in \eqref{eq:rho}.
\begin{table}[t]
\centering
\caption{Performance of the penalized SDP \eqref{PSSE-SDPP} with the noise level $c=0.01$.}\label{tab:perf1}
\begin{tabular}{|p{7.4mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{7.5mm}|p{6.5mm}|}
\hline
\text{Cases} &$\xi(\hat{\mathbf{v}})$ &$\zeta$ &$\zeta^{\text{max}}$ &$\beta$ &$\lambda$ &$f_{\text{WLAV}}$ &$\rho^{\text{min}}$ \\ \hline
\text{9-bus} & 0.0111 & 0.0145 & 0.1535 & 0.9972 & 1.3417 & 14.768 & 0.0048 \\
\text{14-bus} & 0.0057 & 0.0078 & 0.2859 & 1.0005 & 0.3812 & 20.509 & 0.0053 \\
\text{30-bus} & 0.0060 & 0.0084 & 0.3728 & 0.9997 & 0.1094 & 51.479 & 0.0022 \\
\text{39-bus} & 0.0077 & 0.0083 & 0.8397 & 1.0009 & 0.7438 & 62.558 & 0.0817 \\
\text{57-bus} & 0.0092 & 0.0102 & 0.8364 & 1.0013 & 0.0912 & 88.434 & 0.0103 \\
\text{118-bus} & 0.0057 & 0.0079 & 1.2585 & 0.9992 & 0.0878 & 179.509 & 0.0228 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Performance of the penalized SDP \eqref{PSSE-SDPP} with the noise level $c=0.1$.}\label{tab:perf2}
\begin{tabular}{|p{7.4mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{6.5mm}|p{7.5mm}|p{6.5mm}|}
\hline
\text{Cases} &$\xi(\hat{\mathbf{v}})$ &$\zeta$ &$\zeta^{\text{max}}$ &$\beta$ &$\lambda$ &$f_{\text{WLAV}}$ &$\rho^{\text{min}}$ \\ \hline
\text{9-bus} & 0.0357 & 0.0462 & 0.4237 & 0.9779 & 1.3417 & 11.250 & 0.0482 \\
\text{14-bus} & 0.0418 & 0.0537 & 0.8119 & 0.9682 & 0.3812 & 16.536 & 0.0532 \\
\text{30-bus} & 0.0297 & 0.0405 & 1.1734 & 0.9882 & 0.1094 & 50.993 & 0.0222 \\
\text{39-bus} & 0.0485 & 0.0676 & 2.4315 & 0.9840 & 0.7438 & 52.462 & 0.8173 \\
\text{57-bus} & 0.0907 & 0.1028 & 2.6937 & 1.0393 & 0.0912 & 91.724 & 0.1028 \\
\text{118-bus} & 0.0559 & 0.0743 & 4.0302 & 0.9871 & 0.0878 & 184.093 & 0.2284 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{The average RMSEs of the estimated voltage vector $\hat{\mathbf{v}}$ obtained by
the penalized SDP \eqref{PSSE-SDPP} for six different objective functions with the noise level $c=0.1$.}\label{tab:4obj}
\begin{tabular}{|p{8.2mm}|p{6.5mm} p{6.5mm}|p{6.5mm} p{6.5mm}|p{6.5mm} p{6.5mm}| }
\cline{1-7}
\multirow{2}{*}{Methods} & \multicolumn{2}{c|}{$\rho f(\bm{\nu})+\mathrm{Tr}(\mathbf{M}_0\mathbf{X})$} & \multicolumn{2}{c|}{ $\rho f(\bm{\nu})+\nuclearnorm{\mathbf{X}}$} & \multicolumn{2}{c|}{$ \rho f(\bm{\nu})$} \\
\cline{2-7}
& \text{WLAV} & \text{WLS} & \text{WLAV} & \text{WLS} & \text{WLAV} & \text{WLS} \\
\cline{1-7}
\text{9-bus} & 0.0648 & 0.1293 & 1.2744 & 1.1483 & 1.1619 & 1.1633 \\
\text{14-bus} & 0.1307 & 0.1784 & 1.1320 & 1.3871 & 1.4233 & 1.4215 \\
\text{30-bus} & 0.2055 & 0.2543 & 1.4236 & 1.4306 & 1.4269 & 1.4268 \\
\text{39-bus} & 0.1324 & 0.1239 & 1.1317 & 1.3135 & 1.2764 & 1.2757 \\
\text{57-bus} & 0.2343 & 0.2809 & 1.2981 & 1.3004 & 1.3235 & 1.3098 \\
\text{118-bus} & 0.1136 & 0.1641 & 1.3620 & 1.3272 & 1.3445 & 1.3577 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Simulation times of the penalized conic relaxations with
$f_{\text{WLAV}}(\boldsymbol{\nu})$ and the noise level $c=0.1$ (the unit is second).}\label{tab:simuTime}
\begin{tabular}{|l|c|c|}
\hline
\text{Cases} &\text{Solver time} &\text{Total time} \\ \hline
\text{9-bus} & 0.89 & 1.58 \\
\text{14-bus} & 1.23 & 2.54 \\
\text{30-bus} & 1.33 & 3.21 \\
\text{39-bus} & 1.56 & 3.28 \\
\text{57-bus} & 1.97 & 4.09 \\
\text{118-bus} & 2.38 & 5.63 \\
\text{1354-bus} & 4.55 & 9.48 \\
\text{2869-bus} & 13.17 & 24.44 \\
\text{9241-bus} & 58.00 & 109.14 \\
\hline
\end{tabular}
\end{table}
For all test cases, it can be observed that the obtained optimal solutions of the penalized SDP method yield good estimates of the complex voltages featuring small RMSEs $\xi(\hat{\mathbf{v}})$ and $\zeta$. These two error metrics are roughly on the same order as the corresponding noise levels.
Furthermore, the value of $\zeta^{\text{max}}$ is calculated using the quantities $\rho$ and $\lambda$. As expected, this is a legitimate upper bound on $\zeta$ that corroborates our theoretical results in Theorem \ref{thm:rmse}. The tightness of this upper bound depends on the second smallest eigenvalue of the dual matrix $\mathbf{H}(\hat{\bm{\mu}})$, which is a function of the
true state $\mathbf{v}$ and the matrix $\mathbf{M}_0$. The discrepancy between $\zeta$ and $\zeta^{\text{max}}$ is rooted in the fact that $\zeta$ corresponds to our realization of noise, but $\zeta^{\text{max}}$ works for all realizations of the noise independent of its statistical properties.
Moreover, the value of the scaling factor $\beta$ (see \eqref{rmse:Xopt}) is always very close to 1 for all scenarios.
This implies that the optimal SDP solution $\mathbf{X}^{\text{opt}}$
is close to the true lifted state $\mathbf{v}\bv^{*}$ without scaling this rank-one matrix.
To further show the merit of the proposed penalized SDP framework,
we compare the performance of the convex problem \eqref{PSSE-SDPP} against two other estimation techniques.
To this end, consider three convex programs that are obtained from \eqref{PSSE-SDPP} by changing its objective to:
(i) $\rho f(\bm{\nu})+\mathrm{Tr}(\mathbf{M}_0\mathbf{X})$, (ii) $\rho f(\bm{\nu})+\nuclearnorm{\mathbf{X}}$ (see \cite{Weng15} and \cite{Kim15}),
and (iii) $\rho f(\bm{\nu})$ (see \cite{Zhu11,Zhu14,Weng12,Weng13}).
Each of these methods is tested for both WLAV and WLS functions.
Furthermore, $10\%$ of the measurements are generated as bad data to show the robustness of WLAV compared with WLS.
These bad data are simulated by adding uniformly distributed random numbers (over the interval $[0,2]$) to the original measurements.
Table \ref{tab:4obj} reports the RMSE $\xi(\hat{\mathbf{v}})$ averaged over 50 Monte-Carlo simulations for each test case,
where the parameter $\rho$ is set to $0.1$.
The penalized SDP method proposed in this work clearly outperforms the other techniques.
To show the scalability of the proposed approaches, we conduct simulations on large-scale systems by solving the penalized SDP or SOCP relaxations.
Figure \ref{fig_2} shows the effect of additional measurements on reducing the estimation error.
In Figures \ref{fig_2}(a) and \ref{fig_2}(b), the RMSEs of the
estimated voltage vectors $\hat{\mathbf{v}}$ are depicted for four different objective functions
with respect to the percentage of nodes having measured active power injections.
The measurements are under two samples of the noise $\boldsymbol{\eta}$ corresponding to $c=0.01$ and $c=0.02$.
It can be observed that the quality of the estimation improves with the increase of nodal active power measurements.
Even in the case when the number of measurements is limited and close to the number of unknown parameters, the proposed approach can still produce good estimates. In contrast, the methods with no penalty yield very high errors that are out of the plot ranges.
\begin{figure*}
\centering
\subfloat[\label{fig_2a}]{ \includegraphics[width =0.33\textwidth]{fig2.eps}}
\subfloat[\label{fig_2b}]{ \includegraphics[width =0.33\textwidth]{fig3.eps}}
\subfloat[\label{fig_2c}]{\includegraphics[width =0.33\textwidth]{pegase9241.eps}}
\caption{The RMSEs of the estimated voltages obtained with additional nodal power measurements: (a) $c=0.01$ and (b) $c=0.02$. Both are tested on PEGASE 1354-bus system using the penalized SDP; (c) $c=0.01$ with PEGASE 9241-bus system using the penalized SOCP.}
\label{fig_2}\vspace{-5mm}
\end{figure*}
In Figure \ref{fig_2}(c) for all four curves, it is assumed that the voltage magnitudes at all buses and active power flows in one direction for all branches are measured. Moreover, different percentages of nodes are chosen at which nodal active and reactive power measurements are made simultaneously.
The noise level is set to $c=0.01$ and the weight is $\rho = 5$. It can be observed that the quality of the estimation improves by increasing the number of additional measurements. The RMSE value at each data point is the average over 10 Monte-Carlo simulations for different noise realizations and choices of nodes with measured power injections.
Finally, Table \ref{tab:simuTime} lists the simulation time of the proposed conic relaxations. The total time is obtained by the command \texttt{cvx\_cputime}, which includes both \texttt{CVX} modeling time and solver time \cite{cvx}. For all benchmark systems from 9-bus to 118-bus, the SDP relaxation problem is solved by \texttt{SDPT3 4.0} \cite{sdpt3}.
The simulation time is obtained by averaging over 50 Monte-Carlo simulations, which are tested on a macOS system with 2.7GHz Intel Core i5 and 8GB memory. For the last three large-scale test cases, we utilize the SOCP relaxation with the solver~\texttt{MOSEK 7.0} \cite{mosek}. The simulation time corresponds to a single run, which is tested on a Windows system with 2.20GHz CPU and 12GB RAM. Clearly, it only takes a few seconds for each case (except the last one) to yield an optimal solution. Even for the large-scale 9241-bus network, the solver time for the proposed SOCP is less than 1 minute, which is fairly practical in real-world applications.
\section{Conclusions}\label{sec:Conclusions}
In this paper, a convex optimization framework is developed for solving the non-convex PF and PSSE problems.
To efficiently solve these two problems, the quadratic power flow equations are lifted into a
higher-dimensional space, which enables their formulation as linear functions of a rank-one positive semidefinite matrix variable. By meticulously designing an objective function, the PF feasibility problem is converted into a non-convex optimization problem and then relaxed to a convex program. The performance of the proposed convexification is studied in the case where the set of measurements includes: (i) nodal voltage magnitudes, and (ii) one active power flow per line for a spanning tree of the power network. It is shown that the designed convex problem finds the correct solution of the PF problem as long as the voltage angle differences across the lines of the network are not too large.
This result along with the proposed framework is then extended to the PSSE problem.
Aside from the well-designed objective function for dealing with the non-convexity of PF,
a data fitting penalty based on the weighted least absolute value is included
to account for the noisy measurements. This leads to a penalized conic optimization scheme.
The distance between the optimal solution of the proposed convex problem and the unknown state of the system is quantified in terms of the noise level,
which decays as the number of measurements increases.
Extensive numerical results tested on benchmark systems corroborate our theoretical analysis.
Moreover, compared with the conventional WLS-based Newton's method as well as other convex programs with different regularizers, the proposed approaches have significant performance gains in terms of the RMSE of the estimated voltages.
\section{Introduction}
Let $G=(N,E)$ be a simple undirected graph with the set $N$ of $n$ nodes
and the set $E$ of $m$ edges. The length of a shortest path between two
nodes $i$ and $j$ in $G$ is denoted by $dist_G(i,j)$, whereas
$d_G:=\max_{i,j\in N}dist_G(i,j)$ is the {\em diameter} of $G$. For a
nonempty subset of nodes $S\subseteq N$, $G[S]$ denotes the subgraph
$(S,E(S))$ of $G$ induced by $S$ on $G$, where $E(S)$ are edges of $E$
with both end nodes in $S$. If every pair of nodes $i,j\in S$ is
connected in $G[S]$ by at least one path with at most $k$ edges, in
other words, $d_{G[S]}$ is at most $k$, then $S$ is called a {\em
$k$-club} of $G$.
The {\em maximum $k$-club problem}, M$k$CP, consists in finding a
maximum cardinality $k$-club in $G$. We denote the cardinality of a
maximum $k$-club in $G$ by $\omega_k(G)$, referred to also as the {\em
$k$-club number} of $G$. A $k$-club is regarded as diameter-based
relaxation of {\em clique}~\cite{veremyev2012identifying}. Recall, a
clique $C$ in $G$ is a subset of $N$ such that the subgraph $G[C]$ of
$G$ is complete, i.e., $d_{G[C]}=1$. Hence, for $k=1$, the definition of
$k$-club is equivalent to that of clique.
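For illustration, the defining condition $d_{G[S]}\leq k$ can be checked directly by running a breadth-first search inside the induced subgraph $G[S]$ from every node of $S$; a minimal sketch (with the graph given by adjacency sets) is shown below.
\begin{verbatim}
from collections import deque

def is_k_club(adj, S, k):
    """Check whether S is a k-club of the graph given by adjacency sets.

    adj : dict mapping each node to the set of its neighbours
    S   : candidate set of nodes
    k   : bound on the diameter of the induced subgraph G[S]
    """
    S = set(S)
    for source in S:
        dist = {source: 0}
        queue = deque([source])
        while queue:                      # BFS restricted to G[S]
            u = queue.popleft()
            for w in adj[u] & S:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if any(v not in dist or dist[v] > k for v in S):
            return False
    return True
\end{verbatim}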
The notion of $k$-club was introduced in social network
analysis~\cite{mokken1979cliques} as an alternative way to model tightly
linked groups of actors (e.g., people, companies, web communities or
sites), referred to as {\em cohesive
subgroups}~\cite{scott2000social}. In those groups every member is
related to all other members either directly or via other
members. Although cliques are useful for modeling high-density
communities~\cite{bomze1999maximum}, they appear to be too restrictive
to represent real-life groups where rarely all members are connected
directly. Here, the idea of $k$-club can be used instead to model
low-diameter clusters in graphs. It finds its application in graph-based
data mining in social, biological, financial, and communication
networks~\cite{almeida2012integer,balasundaram2005novel,shahinpour2013algorithms}.
\paragraph{Related work.} M$k$CP is computationally
challenging. Bourjolly et al.~\cite{bourjolly2002exact} established the
NP-hardness of M$k$CP, even for fixed $k>1$, and proposed an
\mbox{exact} branch-and-bound algorithm for it. Balasundaram et
al.~\cite{balasundaram2005novel} showed that M$k$CP remains NP-hard even
when restricted to graphs of fixed diameter. Unlike cliques, the
$k$-club model is of nonhereditary nature~\cite{mokken1979cliques},
meaning that every subset of a $k$-club is not necessarily a $k$-club
itself. An important manifestation of this property is the
intractability of testing maximality of $k$-clubs, demonstrated by
Mahdavi Pajouh and Balasundaram~\cite{pajouh2012oninclusionwise}. They
also developed a branch-and-bound technique $\mathsf{B\&B}$ for M$k$CP
using the $k$-coloring number as an upper bound. For fixed $k\geq 2$,
Asahiro et al.~\cite{asahiro2010approx} proved that M$k$CP is
inapproximable within a factor of $n^{\frac{1}{2}-\epsilon}$ for any
$\epsilon > 0$, unless P=NP. M$k$CP is fixed-parameter tractable when
parameterized by solution size as shown by Schäfer et
al.~\cite{schaefer2012param}.
In~\cite{hartung2012parametrized,hartung2013onstructural}, Hartung et
al. gave recently a systematic classification of the complexity of M2CP
with respect to several structural graph parameters like, e.g., feedback
edge set size, as well as a new well-performing parameterized algorithm
for M2CP. Moreover, Schäfer~\cite{schaefer2009exact} demonstrated that
M$2$CP on bipartite graphs can be solved in $O(n^5)$ time, whereas
M$k$CP on trees and interval graphs needs $O(nk^2)$ and $O(n^2)$ time,
respectively. Chang et al.~\cite{chang2013finding} proved recently that
M$k$CP can be solved exactly in $O^*(1.62^n)$ time, where $O^*$ hides
factors polynomial in $n$. The first polyhedral results for $2$-club
polytope were given in~\cite{balasundaram2005novel}. While M$k$CP has a
compact Boolean integer programming (BIP) formulation for $k=2$, the
formulations proposed for $k\geq 3$
in~\cite{balasundaram2005novel,carvalho2011upper} need exponentially
many variables. Alternative BIP formulations for $k=3$ were explored by
Almeida and Carvahlo~\cite{almeida2012integer}. The first
polynomial-size BIP formulation for a general $k$ using $O(kn^2)$
variables and constraints was given by Veremyev and
Boginski~\cite{veremyev2012identifying}. Further, Chang et
al.~\cite{chang2013finding} implemented a branch-and-bound algorithm for
M$k$CP using a new heuristic IDROP for finding initial lower
bounds. Finally, Shahinpour and Butenko~\cite{shahinpour2013algorithms}
presented a well-performing exact branch-and-bound method for M$k$CP
using variable neighborhood search for lower bounding.
\paragraph{Our contribution.} In this paper, we present a new exact
approach for M$k$CP. To this end, we give first in Section~\ref{s:model}
two propositional-logic-based formulations of M$k$CP. In both cases, we
encode M$k$CP on graph $G$ as an instance of the PARTIAL MAX-SAT
problem~\cite{cha1997local} of some propositional formula in conjunctive
normal form with a mandatory part of clauses that must be satisfied for
the solution to be reasonable, and a second part of clauses of length 1
(1-clauses), such that a truth assignment must satisfy as many of them
as possible. In an optimal solution of such a PARTIAL MAX-SAT instance,
the number of satisfied 1-clauses is than equal to $\omega_k(G)$, and
the Boolean variables assigned by the truth assignment to $1$ indicate
the nodes of $G$ included in an optimal $k$-club of $G$. Our first
satisfiability-based formulation of M$k$CP needs $O(n^{k-1})$ variables
and clauses, whereas for the second one $O(kn^2)$ variables and
$O(kn^3)$ clauses suffice.
According to the experimental evaluation for $k\in \{2,3,4\}$ (i.e., for
typical values of $k$ from the
literature~\cite{almeida2012integer,balasundaram2005novel,shahinpour2013algorithms})
given in Section~\ref{s:evaluation}, our exact methods $\mathsf{SatMC1}$
and $\mathsf{SatMC2}$ for M$k$CP incorporating the encodings from
Section~\ref{s:model}, when compared with a straightforward exact
BIP-based approach using the problem formulation described
in~\cite{almeida2012integer,veremyev2012identifying}, as well as with
two well-performing specialized exact branch-and-bound methods
$\mathsf{VNS}$~\cite{shahinpour2013algorithms} and
$\mathsf{B\&B}$~\cite{pajouh2012oninclusionwise}, demonstrate clearly
their practical strength by outperforming the other three methods
considerably. Also, they offer a simple yet effective alternative for
finding good-quality approximate solutions for M$k$CP, as the numerical
results for our both methods show. Finally, in
Section~\ref{s:conclusion} we conclude our work and state some open
questions.
\section{Satisfiability-based Formulation of M$k$CP}
\label{s:model}
\paragraph{Preliminaries.} Let CNF denote the set of propositional
formulas in conjunctive normal form over a set $V$ of Boolean
variables. Each variable $x\in V$ induces a positive literal (variable
$x$), or a negative literal (negated variable $\overline{x}$). Each
formula $C\in$ CNF is regarded as a {\em set} of its clauses. Similarly,
a clause is considered as a {\em set} of its literals. A clause is
termed a {\em $k$-clause}, for some integer $k>0$, if it contains
exactly $k$ literals. We denote by $V(C)$ the set of variables occurring
in formula $C$. The satisfiability problem (SAT) asks whether formula
$C$ is~\emph{satisfiable}, i.e., whether there is a truth assignment $t
: V(C) \rightarrow \{0, 1\}$ setting at least one literal in each clause
of $C$ to 1, where for every $x\in V$ it holds that
$t(x)=1-t(\overline{x})$. Given a formula $C \in$ CNF, the optimization
version MAX-SAT searches for a truth assignment $t$ satisfying as many
clauses of $C$ as possible, whereas in its PARTIAL variant some clauses
(called {\em hard}) must be satisfied.
\paragraph{Our Method.} We only consider simple undirected graphs
$G=(N,E)$ with $N =\{1,...,n\}$ and $m:=|E|$. Each node is referred to
by its number. Let $A:=(a_{ij})$ be the adjacency matrix of $G$, where
the values $a_{ij}$'s are regarded as constant truth values 0 and 1,
such that $a_{ij}=1$ iff an edge $\{i,j\}\in E$, for $1\leq i,j \leq n$.
We are now ready to give our two PARTIAL MAX-SAT formulations of M$k$CP
on graph $G$ for an integer $k>1$. For this purpose, we define for every
node $i\in N$ a Boolean variable $x_i$, such that $x_i=1$ if and only if
$i$ belongs to a specific $k$-club of $G$. We proceed next in two
steps. We define first a CNF formula $C_S$ ensuring the optimality,
i.e., the maximum cardinality, of a solution $S\subseteq N$ to M$k$CP on
$G$. In the second step, we show the construction of two CNF formulas,
$C_{H}$ and $D_{H}$, for the first and the second formulation,
respectively, both consisting only of hard clauses and ensuring the
correctness of a solution $S$ to M$k$CP, i.e., $S$ is a $k$-club in
$G$. The unions $C_S\cup C_{H}$ and $C_S\cup D_{H}$ will give finally
the first and the second PARTIAL MAX-SAT encoding of M$k$CP on $G$,
respectively.
The formula $C_S$ consists of 1-clauses solely and is defined as
follows:
$$
C_S := \{\{x_1\}, ..., \{x_n\}\}.
$$
Now we construct $C_{H}$ for the first encoding as follows:
$$
C_{H} := \bigcup_{i=1}^{n-1}\bigcup_{j\in \{i+1,...,n\}|a_{ij}=0}
\{C_{ij}\},\quad\mbox{ where }\quad
C_{ij} := \{\overline{x}_i,\overline{x}_j\}\cup \bigcup_{l=1}^{k-1}
C^l_{ij}
$$
and
$$
C^l_{ij} :=
\bigcup_{r_1\in N_*}\bigcup_{r_2\in N_1}...\bigcup_{r_l\in N_{l-1}}
\{x_{r_1}\wedge x_{r_2} \wedge ...\wedge x_{r_l}\;|\; a_{ir_1}\wedge
a_{r_1r_2} \wedge ... \wedge a_{r_lj}=1\},
$$
where $N_*:=N\setminus\{i,j\}$ and $N_p:=N_*\setminus \{r_1,...,r_p\}$,
for $p=1,...,l-1$. Note that each conjunction $x_{r_1}\wedge x_{r_2}
\wedge ...\wedge x_{r_l}$ in $C_{ij}^l$ together with the nodes $i,j$
corresponds to a path of length $l+1$ from $i$ to $j$ in $G$. Due to
$N_*$ and $N_p$ in the definition of $C_{ij}^l$, no paths with cycles
can be generated, which may result in a tighter encoding. However, for
the correctness of $C_H$, these restrictions of $N$ are not necessary.
To finish our construction, $C^l_{ij}$, for $l\geq 2$, has to be
transformed into a clause. For this, we replace each occurrence of
$x_{r_1}\wedge x_{r_2} \wedge ...\wedge x_{r_l}$ in $C^l_{ij}$, for all
$i,j\in N$, with a new Boolean variable $y_{r_1...r_l}$ and define $l+1$
additional clauses
$$
\{\overline{y}_{r_1...r_l},x_{r_1}\},\{\overline{y}_{r_1...r_l},x_{r_2}\},
...,\{\overline{y}_{r_1...r_l},x_{r_l}\}, \{\overline{x}_{r_1},
\overline{x}_{r_2},..., \overline{x}_{r_l}, y_{r_1...r_l}\},
$$
expressing after some elementary transformations the logical equivalence
$$
x_{r_1}\wedge x_{r_2} \wedge ...\wedge x_{r_l} \leftrightarrow
y_{r_1...r_l}.
$$
Clearly, for $k=2$, we need $O(n)$ variables and $O(n^{2})$
clauses. However, for $k>2$, $C_{H}$ requires, in consequence of the
transformation of $C_{ij}^l$ into a clause, $O(n^{k-1})$ variables and
clauses. Note that $C_{ij}$ is generated only if $a_{ij}=0$.
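To make the construction concrete for the simplest case $k=2$, where no auxiliary variables are needed, the sketch below writes the instance $C_S\cup C_{H}$ in the weighted DIMACS (WCNF) format commonly accepted by PARTIAL MAX-SAT solvers; variable $i$ represents $x_i$, and the details of the output format are stated here as an assumption rather than taken from our implementation.
\begin{verbatim}
def write_wcnf_k2(n, adj, path):
    """Emit the PARTIAL MAX-SAT encoding of M2CP in WCNF format.

    n    : number of nodes, labelled 1..n (variable i represents x_i)
    adj  : dict mapping each node to the set of its neighbours
    path : name of the output file
    """
    hard = []
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            if j in adj[i]:
                continue                      # C_ij only needed if a_ij = 0
            common = sorted(adj[i] & adj[j])  # paths i-r-j of length 2
            hard.append([-i, -j] + common)
    soft = [[i] for i in range(1, n + 1)]     # C_S: one 1-clause per node
    top = len(soft) + 1                       # weight marking hard clauses
    with open(path, "w") as f:
        f.write(f"p wcnf {n} {len(hard) + len(soft)} {top}\n")
        for cl in hard:
            f.write(f"{top} " + " ".join(map(str, cl)) + " 0\n")
        for cl in soft:
            f.write("1 " + " ".join(map(str, cl)) + " 0\n")
\end{verbatim}
The number of soft clauses satisfied in an optimal solution of this instance then equals $\omega_2(G)$, and the variables assigned 1 identify a maximum 2-club.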
Let $t$ be a truth assignment satisfying $C_H$, and $S$ the nodes
selected by $t$, i.e., $S=\{i\in N\,|\,t(x_i)=1\}$. For the correctness
of $C_{H}$, it suffices to examine which conditions hold in $G[S]$ whenever a
pair of distinct nodes $i,j\in N$ belongs to $S$, i.e., $t(x_i)=
t(x_j)=1$. Obviously, only the case $a_{ij}=0$ needs to be considered. In
that case, $C_{ij}$ can be satisfied if and only if there exists at
least one conjunction $x_{r_1} \wedge ...\wedge x_{r_l}$ in $C_{ij}^l$
(or, equivalently, at least one variable $y_{r_1...r_l}$ together with
the corresponding clauses after the transformation of $C^l_{ij}$ in a
clause given above), for some $l\in\{1,...,k-1\}$, satisfied by $t$,
i.e., $t(x_{r_1})=...=t(x_{r_l})=1$, implying that nodes $r_1,...,r_l$
belong to $S$, too. Consequently, the nodes $i$ and $j$ are connected in
$G[S]$ via $l$ many nodes $r_1,...,r_l$ on a path from $i$ to $j$ in
$G[S]$. Thus, $d_{G[S]}\leq k$ holds for $S$ specified by $t$ and, by
the definition of $k$-club, we conclude that $S$ is a $k$-club in $G$ if
and only if $t$ satisfies $C_H$.
Finally, observe that solving M$k$CP on $G$ is equivalent in terms of
propositional calculus to determining a truth assignment satisfying
$C_{H}$ and maximizing the number $\tau$ of satisfied clauses in
$C_S$. Clearly, $\tau$ corresponds to $\omega_k(G)$, thus completing the
description of our first satisfiability-based formulation of
M$k$CP. Note that this formulation works trivially for $k=1$.
\begin{theorem}
Let $G$ be a simple undirected graph, $k$ some positive integer, and
$t: V(C_{S}\cup C_{H})\rightarrow \{0,1\}$ a truth assignment
satisfying $C_{H}$ and maximizing the number $\tau$ of satisfied
clauses in $C_S$. Then $\tau$ is equal to $\omega_k(G)$. Moreover, for
$k>2$, $C_{S}\cup C_{H}$ contains $O(n^{k-1})$ Boolean variables and
clauses.
\end{theorem}
The formulation above is, in the worst case, of size exponential in $k$ and
requires explicit enumeration of all paths of length at most $k$ between
all pairs of nodes. Nevertheless, for typical values of
$k$~\cite{almeida2012integer,balasundaram2005novel,shahinpour2013algorithms},
$C_{H}$ is of reasonable size. For $k=2$, we need only $n$ variables and
$n(n+1)/2-m$ clauses (mostly 2-clauses for sparse graphs). For $k=3$, at
most $n+m$ variables and $n(n+1)/2+2m$ clauses suffice.
For $k>3$, our second formulation of M$k$CP, $C_S\cup D_{H}$, is
substantially smaller than the first one, as we shall show it now by
constructing the CNF formula $D_{H}$. For this, we introduce for every
pair of distinct nodes $i,j\in N$ and $l=2,...,k$ a new Boolean variable
$v_{ij}^l$, such that $v_{ij}^l=1$ if and only if there exists at least
one path of length at most $l$ from node $i$ to node $j$ in the subgraph
$G[S]$ induced by the nodes of a $k$-club $S$ of $G$. Initially, for
$l=2$, we can write
$$
v_{ij}^2 \leftrightarrow x_i\wedge x_j \wedge \left(\bigvee_{r=1}^n
a_{ir}\wedge a_{rj} \wedge x_r\right),
$$
which after some elementary transformations is equivalent to the CNF
formula
$$
D_{ij}^2\hspace{-2pt}:=\hspace{-2pt}\left\{
\{\overline{v}_{ij}^2,x_i\},
\{\overline{v}_{ij}^2,x_j\},
\{\overline{v}_{ij}^2, x_{r_1}, ..., x_{r_p}\},
\{\overline{x}_i,\overline{x}_j,v_{ij}^2,\overline{x}_{r_1}\},...,
\{\overline{x}_i,\overline{x}_j,v_{ij}^2,\overline{x}_{r_p}\}
\right\}
$$
where the nodes $r_1,...,r_p\in\{r\in N\,|\, a_{ir}\wedge
a_{rj}=1\}$. If no such node exists, then we set $D_{ij}^2:= \{
\{ \overline{v}_{ij}^2\}\}$.
For $l\geq 3$, $v_{ij}^l$ can be defined recursively as
$$
v_{ij}^l \leftrightarrow x_i \wedge \left(\bigvee_{r=1}^{n}
a_{ir}\wedge v_{rj}^{l-1}\right),
$$
which again after some transformations is equivalent to the CNF formula
$$
D_{ij}^l:=\left\{
\{\overline{v}_{ij}^l,x_i\},\{\overline{v}_{ij}^l, v^{l-1}_{r_1j}, ...,
v^{l-1}_{r_pj}\},
\{\overline{x}_i,v_{ij}^l,\overline{v}_{r_1j}^{l-1}\},...,
\{\overline{x}_i,v_{ij}^l,\overline{v}_{r_pj}^{l-1}\}
\right\},
$$
where $r_1,...,r_p\in \{r\in N\,|\,a_{ir}=1\}$. If no such node
exists, then $D_{ij}^l := \{ \{ \overline{v}_{ij}^l\}\}$.
Finally, we define
$$
D_{H}:= \bigcup_{i\in N}\bigcup_{j\in N\setminus \{i\}|a_{ij}=0} D_{ij},
\quad \mbox{ where }\quad
D_{ij}:= \left\{\{\overline{x}_i,\overline{x}_j,
v_{ij}^2,...,v_{ij}^k\}\right\} \cup \bigcup_{l=2}^kD_{ij}^l.
$$
Observe first that $D_{ij}$ has to be generated only if
$a_{ij}=0$. Moreover, to encode $D_{H}$ for $k>1$, we need $O((k-1)n^2)$
variables and $O((k-1)n^3)$ clauses. Thus, the encoding size remains
polynomial in the input size. Now, similarly to $C_{ij}$, for every pair
of distinct nodes $i,j\in N$, the existence of a satisfying truth
assignment $t$ for $D_{ij}$ with $t(x_i)=t(x_j)=1$ implies that
$dist_{G[S]}(i,j)\leq k$ for $S\subseteq N$ specified by $t$. Hence, $S$
is a $k$-club in $G$ if and only if $t$ satisfies $D_H$.
Finally, note that solving M$k$CP on $G$ is equivalent to determining a
truth assignment satisfying $D_{H}$ and maximizing the number of
satisfied clauses in $C_S$, completing the description of our second
PARTIAL MAX-SAT formulation of M$k$CP. Obviously, this formulation works
fine also for $k=1$.
\begin{theorem}
Let $G$ be a simple undirected graph, $k$ some positive integer, and
$t: V(C_{S}\cup D_{H})\rightarrow \{0,1\}$ a truth assignment
satisfying $D_{H}$ and maximizing the number $\tau$ of satisfied
clauses in $C_S$. Then $\tau$ is equal to $\omega_k(G)$. Moreover, for
$k>1$, $C_{S}\cup D_{H}$ contains $O(kn^2)$ Boolean variables and
$O(kn^3)$ clauses.
\end{theorem}
\section{Comparative Evaluation}
\label{s:evaluation}
\paragraph{Experimental Setup.} The goal of our experiments was to
evaluate, for typical values of $k\in \{2,3,4\}$ from the
literature~\cite{almeida2012integer,balasundaram2005novel,shahinpour2013algorithms},
the performance of two exact methods for M$k$CP, $\mathsf{SatMC1}$ and
$\mathsf{SatMC2}$, implemented in C++ according to the first and the
second encoding from Section~\ref{s:model}, respectively. We tested our
methods against a BIP-based approach $\mathsf{IPMC}$ using M$k$CP
formulations from~\cite{almeida2012integer,veremyev2012identifying}, and
two state-of-the-art exact methods: the hybrid algorithm for M$k$CP
from~\cite{shahinpour2013algorithms}, denoted here by $\mathsf{VNS}$,
and the branch-and-bound technique
$\mathsf{B\&B}$~\cite{pajouh2012oninclusionwise}. To make the study
better comparable with the previous results, we use the same C++
implementations of $\mathsf{VNS}$ and $\mathsf{B\&B}$ as the ones being
tested in~\cite{shahinpour2013algorithms}.
For solving the PARTIAL MAX-SAT instances produced by $\mathsf{SatMC1}$
and $\mathsf{SatMC2}$, we applied a complete MAX-SAT solver clasp
2.1.3~\cite{gebser2007conflict}, an example of a modern SAT solver. It
extends the backtrack search procedure DPLL~\cite{Handbook}, commonly
used for SAT-solving, with efficient conflict-driven clause learning
(CDCL), lazy data structures, deletion polices for learned clauses, and
periodical restarts of the search procedure, among others. For more
details on the key techniques of DPLL- and CDCL-based SAT-solving, we
refer to~\cite{Handbook}. In $\mathsf{IPMC}$, for solving BIP-instances
we use CPLEX 12.1~\cite{cplex121}. All tests were run on a machine with
Intel Xeon E5410 2.33 GHz processor running a 64-bit Linux 3.2.51 with
32GB RAM. All programs (solvers) were run with default call parameters
in a single-threaded mode with only one CPU core permitted.
There were two sets of graph instances being tested. The first set
contained 12 connected simple graphs from the 10th DIMACS Implementation
Challenge~\cite{dimacs2012}. We used them to test the methods on some
real-life networks ranging from small and dense ones to large and sparse
ones (see Table~\ref{table:res_dimacs_stat}). The graphs of the second
set were generated randomly by the algorithm proposed by Gendreau et
al.~\cite{gendreau1993solving}. This
\begin{table}
\centering
\caption{Statistics on DIMACS instances. Here, $n$ and $m$ give the number of
nodes and edges, $d$ the edge density, and $\omega_2$, $\omega_3$,
$\omega_4$ the club numbers computed by $\mathsf{SatMC\{1,2\}}$.}
\begin{tabular}{lrrrrrr}
\toprule
\multirow{1}{*}{Instance} &
\multirow{1}{27pt}{\centering $n$} &
\multirow{1}{31pt}{\centering $m$} &
\multirow{1}{31pt}{\centering $d$} &
\multirow{1}{23pt}{\centering $\omega_2$} &
\multirow{1}{23pt}{\centering $\omega_3$} &
\multirow{1}{23pt}{\centering $\omega_4$} \\
\midrule
adjnoun & 112 & 425 & 0.0684 & 50& 82& 107\\%112 &5903&&527&7148\\
football & 115 & 613 & 0.0935 & 16& 58& 115\\%115 &6057&&728&7896\\
jazz & 198 & 2742 & 0.1406 & 103& 174& 192\\%198 &16959&&2933&25164\\
celegansm & 453 & 2025 & 0.0198 & 238& 371& 432\\%453 &100806&&2427&106728\\
email & 1133 & 5451 & 0.0085 & 72& 212& 651\\%1133 &636960&&6431&652854\\
polblogs & 1490 & 16715&0.0151&352&776& 1127\\%1490&1094080&&18063&1143799\\
add20 & 2395 & 7462 & 0.0022 & 124& 671&1454\\%2395 &2861748&&9017&2881614\\
data & 2851 & 15093 & 0.0037 & 18& 32&52 \\%2851 &4050433&&17944&4095712\\
3elt & 4720 & 13722 & 0.0012 & 10& 16&27 \\%4720 &11127838&&18442&11169004\\
add32 & 4960 & 9462 & 0.0008 &32 & 99&268\\
hep-th & 8361 & 15751 &0.0006&51&120&344\\%8361&34941590&&20704&34978619\\
whitaker3&9800&28989&0.0006 & 9&15&23\\
\bottomrule
\end{tabular}
\label{table:res_dimacs_stat}
\end{table}
generalization of the classical uniform random graph generator has
earlier been used for testing new methods for
M$k$CP~\cite{almeida2012integer,bourjolly2002exact,pajouh2012oninclusionwise,shahinpour2013algorithms}.
The edge density of the graphs produced by this method was controlled by
two parameters $a$ and $b$ ($0\leq a \leq b \leq 1$). The {\em expected}
edge density $D$ is $(a+b)/2$, and the node degree variance (NDV)
increases with the increase in $b-a$. In our tests, we used {\em
connected} graphs with $n=100, 150,$ and 200 and $D=0.035, 0.05, 0.1,
0.15$, and $0.2$, i.e., from the range of challenging instances
according to~\cite{almeida2012integer,bourjolly2002exact}. For each
graph size $n$ and density $D$, we generated 10 samples with minimum NDV
($a=b=D$) and 10 samples with maximum NDV ($a=0$, $b=2D$), denoted in
the following by min and max, respectively. As
in~\cite{almeida2012integer} and in contrast
to~\cite{pajouh2012oninclusionwise,shahinpour2013algorithms}, we decided
to reject samples with more than one component since for them the number
of nodes would be misleading. It would correspond mostly to the
cardinality of the largest component. Though, the maximum $k$-club may
or may not be located in the largest or the most dense component as
indicated already in~\cite{pajouh2012oninclusionwise}.
The running time limit for solving each instance tested was set to 3600
seconds for each method. If an instance could not be solved to
optimality within that time, the computation was terminated, the
best solution computed so far (i.e., a lower bound for the optimum) as
well as the upper bound were recorded, and an optimality gap,
(upper bound $-$ best solution size)/(upper bound), was
reported. The CNF formulas generated by our methods for the DIMACS
instances included up to 3.5 million variables and 50 million
clauses. $\mathsf{SatMC2}$ required in most cases, and in particular for
$k\in\{2,3\}$, up to 10 times more variables than $\mathsf{SatMC1}$
did. For sparse graphs, the number of clauses required by both methods
was similar. The time $\mathsf{SatMC\{1,2\}}$ needed for the generation
of CNF formulas is {\it included} in the running times given below.
\paragraph{Results for real-life graphs.}
Tables~\ref{table:res_dimacs_k2},~\ref{table:res_dimacs_k3},
and~\ref{table:res_dimacs_k4} show the running times (in seconds) and
the optimality gaps for the DIMACS instances solved by $\mathsf{IPMC,
B\&B, VNS}$, and $\mathsf{SatMC\{1,2\}}$ for $k\in\{2,3,4\}$,
respectively.
\begin{table}
\centering
\caption{Computational results of solving M$k$CP for $k=2$ on DIMACS
instances.}
\begin{tabular}{lrp{1pt}rrp{1pt}rrp{1pt}rrp{1pt}rr}
\toprule
\multirow{2}{*}{Instance} &
\multirow{1}{40pt}{\centering $\mathsf{IPMC}$} &
&
\multicolumn{2}{c}{$\mathsf{B\&B}$} &
&
\multicolumn{2}{c}{$\mathsf{VNS}$} &
&
\multicolumn{2}{c}{$\mathsf{SatMC1}$} &
&
\multicolumn{2}{c}{$\mathsf{SatMC2}$}\\
\cmidrule{2-2}\cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-11}
\cmidrule{13-14}
&time (s) && time (s) &\hspace*{2pt} gap && time (s)&\hspace*{2pt}
gap && time (s) &\hspace*{2pt} gap&& time (s) &\hspace*{2pt} gap\\
\midrule
adjnoun & 0.28 && 0.16 &0 &&0.52 &0 && 0.01&0&&0.02&0\\
football & 8.91 &&0.74 &0 && 0.76 &0 && 0.02&0&&0.03&0\\
jazz & 3.98 &&10.1 &0&& 26.8 & 0 && 0.06&0&&0.21&0\\
celegansm & 39.7 &&26.3 &0&& 29.5&0 && 0.43&0&&2.3&0\\
email & $>$3600 &&39.2&0&&39.6 & 0 && 16.1&0&&24.1&0\\
polblogs & 74.9&&1351&0&&1359 & 0 && 46.9&0&&90.7&0\\
add20 & 63.9& &126.6&0&&141.6 &0 && 108.2&0&&129.9&0\\
data & $>$3600&&$>$3600&0.18&&$>$3600&0.22&&28.8&0&&36.3&0\\
3elt & $>$3600 &&$>$3600&0.29 &&$>$3600&0.29&&105.9&0&&87.3&0\\
add32 & $>$3600 &&59.5&0&&136.5 & 0 && 80.1&0&&50.2&0\\
hep-th & $>$3600 &&199.2 &0&&242.5 &0&&624.1&0&&273.3&0\\
whitaker3& $>$3600&&$>$3600&0.36&&$>$3600&0.36&&1091&0&&1045&0\\
\bottomrule
\end{tabular}
\label{table:res_dimacs_k2}
\end{table}
For $\mathsf{IPMC}$ no optimality gaps are reported since for the
unsolved instances the method could not find any feasible solution, nor
give upper bounds for the solution in the given time limit.
\begin{table}[t]
\centering
\caption{Computational results of solving M$k$CP for $k=3$ on DIMACS
instances.}
\begin{tabular}{lrp{1pt}rrp{1pt}rrp{1pt}rrp{1pt}rr}
\toprule
\multirow{2}{*}{Instance} &
\multirow{1}{40pt}{\centering $\mathsf{IPMC}$} &
&
\multicolumn{2}{c}{$\mathsf{B\&B}$} &
&
\multicolumn{2}{c}{$\mathsf{VNS}$} &
&
\multicolumn{2}{c}{$\mathsf{SatMC1}$}&
&
\multicolumn{2}{c}{$\mathsf{SatMC2}$}\\
\cmidrule{2-2}\cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-11}
\cmidrule{13-14}
&time (s) && time (s) &\hspace*{2pt} gap && time (s)&\hspace*{2pt}
gap && time (s) &\hspace*{2pt} gap&& time (s) &\hspace*{2pt} gap\\
\midrule
adjnoun &1.74&& 1.99& 0&& 8.25&0&& 0.02&0&&0.19&0\\
football &86.6&& 12.9& 0&& 12.5&0&& 0.32&0&&0.68&0\\
jazz &22.4&& 9.8& 0&& 829.9&0 && 1.05&0&&5.21&0\\
celegansm &46.1&& 155.3& 0&& $>$3600&0.16&& 1.06&0&&26.9&0\\
email &$>$3600&&$>$3600&0.21&& $>$3600& 0.17&& $>$3600&0.07&&$>$3600&0.07\\
polblogs&$>$3600&&$>$3600&0.01&&$>$3600&0.01&&$>$3600&0.01&&$>$3600&0\\
add20 &$>$3600&& $>$3600& 0.02&& $>$3600& 0.02&& 160.8&0&&106.4&0\\
data &$>$3600&& $>$3600& 0.32&& $>$3600& 0.22&& 72.5&0&&84.1&0\\
3elt &$>$3600&& $>$3600& 0.46&& $>$3600& 0.33&& 124.1&0&&133.6&0\\
add32 &$>$3600&&487.5 & 0&& 527.6&0&& 48.5&0&&62.5&0\\
hep-th &$>$3600&&$>$3600&0.12&&$>$3600&0.12&&925.7&0&&1336&0\\
whitaker3 &$>$3600&& $>$3600& 0.46&& $>$3600&0.41&&1372&0&&1413&0\\
\bottomrule
\end{tabular}
\label{table:res_dimacs_k3}
\end{table}
For $k=2$, $\mathsf{SatMC\{1,2\}}$ were the only methods that solved
all instances optimally within the time limit. In all but three cases
(add20, polblogs, and hep-th), their running times were significantly
better than those of the other methods. Here, $\mathsf{SatMC1}$ was
faster than $\mathsf{SatMC2}$ on medium-size dense instances, whereas
the latter was better on large sparse graphs. $\mathsf{B\&B}$ and
$\mathsf{VNS}$ performed similarly, solving small and medium-size
instances efficiently.
For $k\in\{3,4\}$, $\mathsf{SatMC1}$, followed by $\mathsf{SatMC2}$, was
the best method regarding the running times, the number of solved
\begin{table}
\centering
\caption{Computational results of solving M$k$CP for $k=4$ on DIMACS
instances.}
\begin{tabular}{lrp{1pt}rrp{1pt}rrp{1pt}rrp{1pt}rr}
\toprule
\multirow{2}{*}{Instance} &
\multirow{1}{40pt}{\centering $\mathsf{IPMC}$} &
&
\multicolumn{2}{c}{$\mathsf{B\&B}$} &
&
\multicolumn{2}{c}{$\mathsf{VNS}$} &
&
\multicolumn{2}{c}{$\mathsf{SatMC1}$}&
&
\multicolumn{2}{c}{$\mathsf{SatMC2}$}\\
\cmidrule{2-2}\cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-11}
\cmidrule{13-14}
&time (s) && time (s) &\hspace*{2pt} gap && time (s)&\hspace*{2pt}
gap && time (s) &\hspace*{2pt} gap&& time (s) &\hspace*{2pt} gap\\
\midrule
adjnoun &2.18&& 0.86& 0&& 0.94&0&& 0.34&0&&0.49&0\\
football &177.4&& 0.01& 0&& 0.01&0&& 0.22&0&&0.92&0\\
jazz &492.4&& 8.87& 0&& 9.11&0 &&137.9 &0&&21.8&0\\
celegansm &$>$3600&& 186.1& 0&& 194.5&0&& 45.3&0&&95.1&0\\
email &$>$3600&&$>$3600&0.03&& $>$3600& 1&& $>$3600&0.61&&$>$3600&0.51\\
polblogs&$>$3600&&$>$3600&1&&$>$3600&1&& $>$3600&0.37&&$>$3600&0.15\\
add20 &$>$3600&& $>$3600& 1&& $>$3600& 1&&514.8&0&&594.1&0\\
data &$>$3600&& $>$3600& 0.31&& $>$3600& 0.31&& 112.5&0&&151.4&0\\
3elt &$>$3600&& $>$3600& 0.54&& $>$3600& 1&& 171.1&0&&186.7&0\\
add32 &$>$3600&&3322 & 0&& $>$3600&1&& 66.1&0&&71.2&0\\
hep-th &$>$3600&&$>$3600&1&&$>$3600&1&&$>$3600&0&&$>$3600&0\\
whitaker3 &$>$3600&& $>$3600& 0.5&& $>$3600&1&&1563&0&&1687&0\\
\bottomrule
\end{tabular}
\label{table:res_dimacs_k4}
\end{table}
instances, and the optimality gaps. Only one large instance (hep-th for
$k=4$) and two medium-size instances (email and polblogs) could not be
solved optimally by our methods; nevertheless, good approximate
solutions were found. When comparing the two branch-and-bound
techniques, $\mathsf{B\&B}$ solved the same number of instances as
$\mathsf{VNS}$, but was faster. However, the latter delivered better
approximations for $k=3$. Interestingly, for $\mathsf{SatMC\{1,2\}}$,
the running times required for M3CP on the medium-size graphs add20,
data, 3elt, and add32 were longer than, but still comparable to, those
needed for M2CP on those instances. $\mathsf{IPMC}$ showed the worst
performance across all values of $k$ tested here.
\begin{table}[t]
\centering
\caption{Computational results of solving M$k$CP for $k=2$ on random
instances. The number of unsolved instances is
indicated in brackets.}
\begin{tabular}{ccccrrp{2pt}rrp{2pt}rrp{2pt}rr}
\toprule
\multirow{2}{*}{\centering $D$} &
\multirow{2}{21pt}{\centering $n$} &
\multirow{2}{20pt}{\centering NDV} &
\multirow{2}{26pt}{\centering $\overline{\omega}_2$} &
\multicolumn{2}{c}{$\mathsf{IPMC}$} &&
\multicolumn{2}{c}{$\mathsf{VNS}$} &&
\multicolumn{2}{c}{$\mathsf{SatMC1}$} &&
\multicolumn{2}{c}{$\mathsf{SatMC2}$} \\
\cmidrule{5-6}\cmidrule{8-9}\cmidrule{11-12}\cmidrule{14-15}
&&&&time (s) &\hspace*{1pt} gap && time (s) &\hspace*{1pt} gap &&
time (s)&\hspace*{1pt} gap&&time (s) &\hspace*{1pt} gap\\
\midrule
\multirow{6}{*}{0.10}& \multirow{2}{*}{100} &
min & 21 & 18.4 &0&& 1.66 &0&&0.06 &0 &&0.08&0\\
& & max & 24.5 & 32.5 &0&& 2.51 &0&&0.07 &0 &&0.09&0\\
& \multirow{2}{*}{150} &
min & 26.6 & 3436(9) &0.26&& 41.1 &0&&1.64 &0&&1.84&0\\
& & max & 32.8 & 3600(10) &0.21&& 61.6 &0&&3.77 &0&&3.84&0\\
& \multirow{2}{*}{200} &
min & 33.6 & 3600(10)&1.37&& 1018&0&&30.1 &0 &&31.5&0\\
& & max &43.1 & 3600(10)&0.74&& 2051(3)&0.05&&204.5 &0 &&176.6 &0\\
\cmidrule{2-15}
\multirow{6}{*}{0.15}& \multirow{2}{*}{100} &
min & 32.3 & 922.1 &0&& 59.4 &0&&1.03 &0&&1.13 &0\\
& & max & 42.5 & 11.6 &0&& 15.6 &0&&0.25 &0&&0.29 &0\\
& \multirow{2}{*}{150} &
min & 53.3 & 3600(10) &0.53&&3600(10) &0.26&&539.8 &0 &&575.2 &0\\
& & max & 80.8 & 545.9(1) &0.4&& 429 &0&&23.3 &0&&24.2 &0\\
& \multirow{2}{*}{200} &
min & 79.3&3600(10)&1.88&&3600(10)&0.43&&3600(10) &0.05&&3600(10) & 0.04\\
& & max & 124.4 & 107.9&0&& 2005(3)&0.01&&1530(3) &0.01&&1591(3) &0.01\\
\cmidrule{2-15}
\multirow{6}{*}{0.20}& \multirow{2}{*}{100} &
min & 65 & 7.74 &0&& 61.3 &0&&0.99 &0&&1.09 &0\\
& & max & 68.9 & 0.51 &0&& 11.3 &0&&0.08 &0&&0.16 &0\\
& \multirow{2}{*}{150} &
min & 129.8 & 1.48 &0&& 111 &0&&0.73 &0&&1.11 &0\\
& & max & 122.7 & 1.35 &0&& 50.2 &0&&0.67&0&&0.92 &0\\
& \multirow{2}{*}{200} &
min & 192.4 & 3.34&0&& 113&0&&0.13 &0&&0.52 &0\\
& & max & 176.8& 3.66&0&& 111&0&&0.51 &0&&1.53 &0\\
\bottomrule
\end{tabular}
\label{table:res_random_k2}
\end{table}
\paragraph{Results for random graphs.}
Tables~\ref{table:res_random_k2} and~\ref{table:res_random_k3} present
the results of solving M$k$CP on random graphs. We restrict the results
to $k\in\{2,3\}$. Since $\mathsf{B\&B}$ and $\mathsf{VNS}$ performed
similarly on the random graphs tested, we provide only the results of
$\mathsf{VNS}$, which had the better overall performance. Average running
times and average optimality gaps, both computed across the 10 graph
samples of a given $D$, $n$, and NDV, are reported. Additionally, for
each instance category, i.e., for a given $D$, $n$, and NDV, we provide
average 2- and 3-club numbers $\overline{\omega}_2$ and
$\overline{\omega}_3$, computed for each category from the 10 (optimum)
values of $\omega_2$ and $\omega_3$ found by our methods. Notably,
computing $\omega_2$ and $\omega_3$ took more than one hour for only 13
of the 360 graph samples in total. Finally, the average optimality
gap for $\mathsf{IPMC}$ was calculated from the gap values returned by
the integer routine of CPLEX.
For $k=2$ and average densities $D=0.10$ and $0.20$,
$\mathsf{SatMC\{1,2\}}$ found optimal solutions for every instance,
while their running times were far shorter than those of the other
methods. For graphs of $D=0.15$, reportedly the hardest
ones~\cite{pajouh2012oninclusionwise}, our methods obtained optimal
solutions except for 13 test samples of size $n=200$, for which,
however, competitive approximate solutions were found. When
comparing the performance for densities $>0.10$, $\mathsf{SatMC\{1,2\}}$
always solved instances with maximum NDV faster than those of the same
size and density but with minimum NDV. This could not be observed for
the other methods. However, for $D=0.15$ and all methods, the instances
with minimum NDV turned out to be much harder than those with maximum
NDV. $\mathsf{IPMC}$ was the slowest method on graphs of $D=0.10$
and $0.15$, but it performed exceptionally well for $D=0.20$, taking
third place by beating $\mathsf{VNS}$.
For $k=3$, our methods were able to solve all instances to optimality,
while the average running times required were considerably shorter
than those of the other methods. This also held for instances of the
challenging densities $0.035$ and $0.05$ according
to~\cite{almeida2012integer,bourjolly2002exact}. $\mathsf{VNS}$
exhibited the third-best performance, solving all but 12 instances
optimally. However, for $D=0.10$, it was beaten by $\mathsf{IPMC}$. For
both values of $k$, $\mathsf{SatMC1}$ performed slightly better than
$\mathsf{SatMC2}$, primarily due to the smaller size of its CNF encodings.
Finally, for a given $k, n$, and NDV, as the density $D$ increased, the
average solution size found by all methods increased, too, while the
\begin{table}[t]
\centering
\caption{Computational results of solving M$k$CP for $k=3$ on random
instances. The number of unsolved instances is
indicated in brackets.}
\begin{tabular}{ccccrrp{2pt}rrp{2pt}rrp{2pt}rr}
\toprule
\multirow{2}{*}{\centering $D$} &
\multirow{2}{21pt}{\centering $n$} &
\multirow{2}{20pt}{\centering NDV} &
\multirow{2}{26pt}{\centering $\overline{\omega}_3$} &
\multicolumn{2}{c}{$\mathsf{IPMC}$} &&
\multicolumn{2}{c}{$\mathsf{VNS}$} &&
\multicolumn{2}{c}{$\mathsf{SatMC1}$} &&
\multicolumn{2}{c}{$\mathsf{SatMC2}$}\\
\cmidrule{5-6}\cmidrule{8-9}\cmidrule{11-12}\cmidrule{14-15}
&&&&time (s) &\hspace*{2pt}gap && time (s) &\hspace*{2pt}gap &&
time (s)&\hspace*{2pt}gap&&time (s) &\hspace*{2pt}gap\\
\midrule
\multirow{6}{*}{0.035}& \multirow{2}{*}{100} &
min & 24 & 1.91&0 && 1.13& 0&&0.02 &0 &&0.05 &0 \\
& & max & 27.1& 1.94&0 && 1.31& 0&&0.01 &0 &&0.06 &0 \\
& \multirow{2}{*}{150} &
min & 31.6& 95.3& 0&& 7.49& 0&&0.18 &0 &&0.41 &0 \\
& & max & 33.7& 213.6&0 && 10.4&0 &&0.23 &0 &&0.49 &0 \\
& \multirow{2}{*}{200} &
min & 36.9& 2381(4)& 0.12&& 54.2&0 &&1.54 & 0&&2.91 & 0\\
& & max & 40.8& 3105(7)& 0.18&& 107.8&0 &&2.86 & 0&&5.08 & 0\\
\cmidrule{2-15}
\multirow{6}{*}{0.05}& \multirow{2}{*}{100} &
min & 30.4& 7.69& 0&& 3.25&0 &&0.04 & 0&&0.11 & 0\\
& & max & 33.9& 6.14& 0&& 3.18& 0&&0.04 &0 &&0.14 & 0\\
& \multirow{2}{*}{150} &
min & 43.9&2750(6) &0.06 &&130.2 &0 &&1.71 &0 &&2.96 & 0\\
& & max &56 &1138(2) &0.02 &&128.1 &0 &&2.05 &0 &&4.04 & 0\\
& \multirow{2}{*}{200} &
min & 55& 3600(10)&0.77 &&3578(9) & 0.19&&91.4 & 0 &&128.7 & 0\\
& & max & 84.3& 3168(8)&0.21&&2311(3)&0.06&&65.9 &0 &&117.1 & 0\\
\cmidrule{2-15}
\multirow{6}{*}{0.10}& \multirow{2}{*}{100} &
min & 82.8& 1.19&0 && 8.77& 0&&0.03 & 0&&0.31 & 0\\
& & max & 81.8& 1.36& 0&& 8.86& 0&&0.03 &0 &&0.31 & 0\\
& \multirow{2}{*}{150} &
min & 146& 10.1& 0&& 42.8& 0&&0.12 &0 &&1.51 & 0\\
& & max & 141.5& 13.2&0 && 46.8& 0&&0.12 & 0&&1.46 & 0\\
& \multirow{2}{*}{200} &
min & 199.2& 11.9& 0&& 105.4& 0&&0.33 & 0&&5.88 & 0\\
& & max & 196.5& 13.4&0 && 197.2&0 &&0.51 & 0&&6.32 & 0\\
\bottomrule
\end{tabular}
\label{table:res_random_k3}
\end{table}
average running time first increased up to a peak and then declined for
higher densities. This can be explained by the fact that sparse graphs
are easier to solve, because there are fewer possibilities to construct a
$k$-club, whereas for higher densities many of the problems become
trivial. More specifically, for $D\geq 0.20$ and $k=2$, and for $D\geq
0.10$ and $k=3$, the values of $\overline{\omega}_k$ approached $n$ and
the instances became easier to solve despite their growing sizes. The
peak average running time can be used to determine the challenging
densities for an algorithm for M$k$CP~\cite{pajouh2012oninclusionwise}.
\mbox{Table}~\ref{table:densities} gives those densities identified
empirically for all methods tested. The numerical results show that for
a given $k$, the challenging densities were the same for minimum and
maximum NDV instances, and decreased as $n$ increased. This effect was
most evident for $\mathsf{IPMC}$, followed by $\mathsf{B\&B}$ and
$\mathsf{VNS}$. $\mathsf{SatMC\{1,2\}}$ turned out to be the least affected,
clearly indicating their better robustness.
\begin{table}[t]
\centering
\caption{Challenging densities $D$ for solving M$k$CP on random
instances.}
\begin{tabular}{ccccp{2pt}ccc}
\toprule
\multirow{2}{45pt}{\centering Method}
&\multicolumn{3}{c}{$k=2$} && \multicolumn{3}{c}{$k=3$}\\
\cmidrule{2-4}\cmidrule{6-8}
&$n=100$\hspace*{8pt}&$n=150$\hspace*{8pt}&$n=200$&
&$n=100$\hspace*{8pt}&$n=150$\hspace*{8pt}&$n=200$\\
\midrule
$\mathsf{IPMC}$ & $0.15$ & $0.1, 0.15$ & 0.1, 0.15 && 0.05& 0.05&
0.035, 0.05\\
$\mathsf{VNS}$ & 0.15, 0.2 & 0.15 & 0.1, 0.15 && 0.1& 0.05& 0.05\\
$\mathsf{SatMC\{1,2\}}$ & 0.15 & 0.15 & 0.15 && 0.05& 0.05& 0.05\\
\bottomrule
\end{tabular}
\label{table:densities}
\end{table}
\section{Conclusion}
\label{s:conclusion}
In this paper, we presented two PARTIAL MAX-SAT formulations of M$k$CP
for a positive integer $k$. Using those encodings, we implemented two
exact methods for M$k$CP, $\mathsf{SatMC1}$ and $\mathsf{SatMC2}$, and
evaluated them experimentally for typical values of $k\in\{2,3,4\}$ both
on real-life as well as on random graphs. The computational study showed
that our approach outperforms other state-of-the-art algorithms
developed in recent years. In most cases it computed optimal solutions
much faster, and it found good approximate solutions whenever the
computation had to be terminated. Its short running times on small and
moderate-size instances clearly qualify it for use in interactive tools,
e.g., for clustering biological networks~\cite{balasundaram2005novel},
providing useful insights into substructures in those networks.
It would be of interest to adapt our ideas for solving other clique
relaxations like $k$-clique, $k$-plex, or $R$-robust
$k$-club~\cite{balasundaram2011clique,seidman1978graph,veremyev2012identifying}. Moreover,
for $k=2$, our approach could also be compared with the parameterized
algorithm of Hartung et al.~\cite{hartung2012parametrized}, and for a
general $k$ with the branch-and-bound method by Chang et
al.~\cite{chang2013finding}. One could also evaluate our methods for
solving M$k$CP for $k>4$ on power-law graphs from bioinformatics and
social web applications. Finally, it is an open question whether any of
the algorithmic ideas of modern CDCL SAT solving, from which our approach
clearly benefits, could successfully be extended to BIP.
\subsubsection*{Acknowledgments.}
The author would like to thank Shahram Shahinpour and Sergiy Butenko for
providing an implementation of their method
from~\cite{shahinpour2013algorithms}.
\bibliographystyle{splncs03}
\section{Introduction}\label{sec:introduction}
Diversity combining techniques are commonly used in modern wireless multi-antenna consumer devices such as smartphones, laptops and WiFi routers, to improve link reliability and energy efficiency. One of the most popular choices is maximal-ratio combining (MRC), which is known to achieve optimal performance in the absence of (multi-user) interference\cite{brennan03,simon05,goldsmith05}. In the interference-free case, MRC maximizes the post-combiner signal-to-noise ratio (SNR) by weighting the signals received at the different antennas (or equivalently, branches) according to the respective per-antenna SNRs, followed by the coherent summation of the weighted signals. Like other diversity combining schemes, MRC suffers substantial performance losses when practical non-idealities such as average reception-quality imbalance\cite{halpern77} and fading correlation\cite{aalo95} are taken into account. These performance losses are amplified further by interference, which has become a key issue with the denser usage of wireless devices, particularly in non-licensed spectrum, due both to the offloading of cellular traffic\cite{andrews_femto} and to the relentless increase of wireless consumer devices\cite{cisco13}. The main reason behind these losses is that the resulting interference is usually not equally strong across antennas because of uncorrelated or slightly correlated fading on the interferer-to-antenna links, thereby leading to additional reception-quality imbalance across the branches\cite{cui04}. Furthermore, this imbalance typically varies unpredictably fast and entails a complex correlation structure across antennas that depends upon various system parameters, such as the locations of the interferers and the fading gains.
Although information-theoretically suboptimal in the presence of interference, MRC is expected to remain a widespread diversity combining technique in the near future due to its maturity and low implementation costs compared to other competing techniques, e.g., interference-canceling combining schemes, which usually require a higher channel estimation effort. This motivates the study of the performance of MRC under a more realistic channel and interference model, which is the main focus of this paper.
\subsection{Related Work and Motivation}
The impact of interference on the performance of MRC was first studied assuming \textit{deterministic} interference power at all branches for both the equal as well as the unequal strength case\cite{cui04,cui99,aalo00}. Using the notion of outage probability, these works demonstrated that interference may severely degrade the expected performance depending on the number of interferers and their strength, especially for the case of unequal strengths. In a broader sense, the outage probability expressions derived in these works may be seen as \textit{conditional} on the interference statistics. Therefore, to evaluate the overall performance, one needs to average over the interference, which is challenging because interference depends upon various system parameters and often appears {\em random} to the receiver.
Recently, tools from stochastic geometry\cite{stoyan95} have been proposed for addressing this and other closely related challenges\cite{baccelli09a,baccelli09b,HaenggiBook,weber10,andrews11,tanbourgi13_3}. Using these tools, the performance of MRC in the presence of interference, modeled as a Poisson shot noise field, was studied in several works, mainly under two simplified interference correlation models: for instance, in\cite{sheng10,rajan10} the interference power was assumed statistically independent across the antennas, although it is correlated as the interference terms at the different antennas originate from the same source of randomness, i.e., from the same set of transmitters. This type of correlation is often neglected in the literature\cite{ganti09}, which results in significantly overestimating the true diversity. On the other hand,\cite{hunter08} assumed the same interference strength at all antennas, which corresponds to modeling the interference power as being fully correlated across the branches. This, in turn, underestimates the true diversity as the de-correlation effect of the channel fading is ignored. The importance of properly modeling interference correlation was highlighted in\cite{chopra11,chopra12,ganti09_1}. In\cite{chopra11,chopra12}, the interference properties measured at a multi-antenna receiver were analyzed within the continuum between complete independence and full correlation of the interference. In\cite{ganti09_1}, the second-order statistics of the interference and of outage events were characterized. This led, for example, to an exact performance evaluation of the simple retransmission scheme\cite{Haenggi14twc}, selection combining\cite{haenggi12_1} as well as cooperative relaying\cite{tanbourgi_13_1,crismani13}.
Another frequently made assumption in the literature\cite{cui04,zhang07,ahn09,direnzo13} is that the MRC combining weights do not depend on the interference-plus-noise power experienced at each antenna, i.e., they are proportional only to the fading gains of the desired link. Such an MRC model may be seen as \emph{interference-blind} and is suboptimal when the interference-plus-noise power varies across antennas. In slight contrast, the MRC combining weights in\cite{chopra11,chopra13} were assumed to be additionally inversely proportional to the interferer density corresponding to the interference field seen by each antenna. Since the interferer density is proportional to the mean interference power\cite{HaenggiBook}, this form of MRC essentially performs an adaptation to the long-term effects of the interference. The authors showed that such a long-term adaptation yields some improvements when interference is correlated across antennas.
When the {\it current} per-antenna interference-plus-noise powers in one transmission period are known to the receiver, e.g., through estimation within the channel training period\cite{benedict67,pauluzzi00}, they can be taken into account when computing the MRC weights, thereby following the MRC approach of \cite{brennan03}. In\cite{tanbourgi13_2}, and in contrast to all previous works, the performance under spatial interference correlation of such an \emph{interference-aware} MRC receiver model was recently analyzed assuming Rayleigh fading channels and the absence of receiver noise. For the practical dual-branch case, the exact distribution of the post-combiner signal-to-interference-plus-noise ratio ($\mathtt{SINR}$) was derived, while bounds were proposed for the case of more than two branches.
\subsection{Contributions and Outcomes}
In this work, we extend the findings obtained in\cite{tanbourgi13_2} for interference-aware MRC by considering Nakagami fading and receiver noise, and discuss related design aspects with emphasis on the effect of spatial interference correlation. Similar to\cite{tanbourgi13_2}, we assume an {\it isotropic} interference model\cite{chopra12,chopra13}, i.e., each antenna sees interference from the same set of interferers, which results in interference correlation across antennas. Our main contributions and insights are summarized below.
\textit{Success probability for dual-branch MRC:} The main result of this paper is Theorem~\ref{thm:cov_prob} in Section~\ref{sec:cov_prob}, which gives an analytical expression for the exact success probability (1-outage probability) for a dual-branch MRC receiver under spatially-correlated interference, receiver noise and independent Nakagami fading. Importantly, the Nakagami fading parameter does not have to be identical for the desired and the interfering links, whereas the parameter for the desired links is restricted to integers. We show how previous results from the literature are special cases of Theorem~\ref{thm:cov_prob}. For the low outage probability regime, we derive a tractable closed-form expression for the main result later in Section~\ref{sec:asym}.
\textit{Comparison with simpler correlation models:} In Section~\ref{sec:simple_models}, we use the main result to study the accuracy loss associated with simpler correlation models frequently used due to their analytical tractability. It is shown that ignoring interference correlation across the branches results in a considerably optimistic performance characterization of MRC, particularly for large Nakagami fading parameters (small channel variability). The picture changes when assuming an identical interference level across the branches; here, the available diversity is underestimated, which yields a slightly pessimistic performance characterization. The resulting success probability gap, however, rapidly decreases with the Nakagami fading parameter of the interfering links and becomes no greater than about $10\%$ depending on the path loss exponent. This intuitive trend eventually yields an asymptotic equivalence between the full-correlation and the exact model, which is mathematically established in Section~\ref{sec:simple_models}. One important insight is that the simpler full-correlation model can be used whenever the interfering links undergo a strong path loss and/or poor scattering.
\begin{figure}[t]
\centering
\includegraphics[width=0.420\textwidth]{figure1.pdf}
\caption{Illustration of the underlying scenario for the example $N=2$. The considered dual-antenna receiver is located at the origin. The desired transmitter is located $d$ meters away. The considered receiver experiences interference from surrounding interferers.
} \label{fig:illustration}
\end{figure}
\textit{Efficient method for semi-numerical evaluation of the result:} In Section~\ref{sec:diff}, we propose and discuss a methodology for efficient and robust semi-numerical evaluation of the result of Theorem~\ref{thm:cov_prob}. We mainly make use of Fa\`{a} di Bruno's formula, followed by a method for numerical differentiation based on Chebyshev polynomial approximation. Although immaterial to the theoretical framework, the ideas presented in this section are helpful for applying and reproducing our theoretical results using numerical software.
\textit{Comparison with other diversity combining techniques:} Using the main result for the dual-branch case, we compare the performance of MRC to other widely-known diversity combining schemes under the influence of spatial interference correlation in Section~\ref{sec:other_div_com}. We find that minimum mean square error (MMSE) combining, which does not treat interference as white noise, yields a linear diversity-gain increase with the path loss exponent compared to MRC. For small path loss exponents, there is almost no benefit from estimating and rejecting interference using MMSE, as MRC, although sub-optimal, achieves almost the same diversity gain. The benefit of MRC over selection combining (SC) in terms of diversity gain is in general smaller than in the interference-free case, and monotonically decreases with the path loss exponent. For typical path loss exponents, the performance of MRC is about $1$ dB higher than for SC. Interestingly, when the path loss exponent tends to two, the gain of MRC over SC becomes equal to the corresponding value for the interference-free case.
{\bf Notation:} We use sans-serif-style letters ($\mathsf{z}$) and serif-style letters ($z$) for denoting random variables and their realizations or variables, respectively. We define $(z)^{+}~\raisebox{-0.03cm}{$\triangleq$}~\max\{0,z\}$.
\section{System Model}\label{sec:notation}
We consider an $N$-antenna receiver communicating with a desired transmitter at an arbitrary distance $d$.\footnote{Although the main result captures only the dual-antenna case, it will be useful in the later discussions to generalize the model to $N$ antennas.} The transmitted signal received at the $N$ antennas is corrupted by noise and interference caused by other transmitters. The locations $\{\mathsf{x}_{i}\}_{i=0}^{\infty}$ of these interfering transmitters are modeled by a stationary planar Poisson point process (PPP) $\Phi~\raisebox{-0.03cm}{$\triangleq$}~\{\mathsf{x}_{i}\}_{i=0}^{\infty}\subset\mathbb{R}^2$ of density $\lambda$. The PPP model is widely-accepted for studying multiple kinds of networks, see for instance\cite{baccelli09b,andrews11,blas12}.
More complex interference geometries, e.g., with carrier-sensing at the nodes, can be incorporated with acceptable effort using Poisson-like models, cf.\cite{baccelli09b,hunter10,tanbourgi12}. Such modifications are beyond the scope of this contribution.
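For reproducibility, realizations of the interferer locations can be generated by restricting the stationary PPP to a sufficiently large finite observation window around the receiver. The following Python sketch illustrates this standard construction; the window radius, density, and seed are illustrative assumptions and not the exact settings used in our numerical examples.
\begin{verbatim}
import numpy as np

def sample_ppp_disk(lam, radius, rng):
    """Homogeneous PPP of density lam, restricted to a disk."""
    n = rng.poisson(lam * np.pi * radius**2)     # number of interferers
    r = radius * np.sqrt(rng.uniform(size=n))    # radii (area-uniform)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)  # angles
    return r * np.cos(phi), r * np.sin(phi)

rng = np.random.default_rng(0)
x, y = sample_ppp_disk(lam=1e-3, radius=500.0, rng=rng)
\end{verbatim}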
\newcounter{mycounter}
\begin{figure*}[!t]
\normalsize
\setcounter{mycounter}{\value{equation}}
\setcounter{equation}{3}
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MRC}}}&=&\sum\limits_{k=0}^{m_{\text{\textnormal{D}}}-1}\frac{(-1)^{k+m_{\text{\textnormal{D}}}}}{k!\,\Gamma(m_{\text{\textnormal{D}}})}\int_{0}^{\infty}\frac{\partial^{k}\partial^{m_{\text{\textnormal{D}}}}}{z\,\partial s^{k}\partial t^{m_{\text{\textnormal{D}}}}}\left[\exp\left(-\frac{(T-z)^{+}sm_{\text{\textnormal{D}}}}{\mathtt{SNR}}-\frac{ztm_{\text{\textnormal{D}}}}{\mathtt{SNR}}-\pi\lambda\mathcal{A}(z,s,t)\right)\right]_{\substack{s=1\\ t=1}}\mathrm dz\label{eq:pc_theorem}\IEEEeqnarraynumspace
\end{IEEEeqnarray}
\hrulefill
\begin{subnumcases}{\label{eq:cal_A}\mathcal{A}(z,s,t) =}
s^{2/\alpha}(T-z)^{2/\alpha}\,d^{2}\,\Gamma(1-2/\alpha)\left(\tfrac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\Gamma(2/\alpha+2m_{\text{\textnormal{I}}})\notag\\
\qquad\quad\times\,{}_{2}\mathbf{F}_{1}\left(-2/\alpha,m_{\text{\textnormal{I}}},2m_{\text{\textnormal{I}}},1-\frac{zt}{(T-z)s}\right),\quad 0\leq z< T\\
(zt)^{2/\alpha}\,d^{2}\,\Gamma(1-2/\alpha)\left(\tfrac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\frac{\Gamma(2/\alpha+m_{\text{\textnormal{I}}})}{\Gamma(m_{\text{\textnormal{I}}})},\quad z\geq T
\end{subnumcases}
\hrulefill
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MRC}}}^{\alpha=4,m=1}&=&-\int_{0}^{\infty}z^{-1}\exp\left(-\frac{(T-z)^{+}}{\mathtt{SNR}}\right)\frac{\partial}{\partial t}\left[\exp\left(-\frac{zt}{\mathtt{SNR}}-\frac{\lambda\pi^2}{2}\frac{\left((T-z)^{+}\right)^{3/2}-(zt)^{3/2}}{(T-z)^{+}-zt}\right)\right]_{t=1}\,\mathrm dz\IEEEeqnarraynumspace\label{eq:special_case1}
\end{IEEEeqnarray}
\setcounter{equation}{\value{mycounter}}
\hrulefill
\vspace*{3pt}
\end{figure*}
Due to the stationarity of $\Phi$ the interference statistics are location-invariant\cite{stoyan95}. Thus, we can place the considered receiver in the origin $o\in\mathbb{R}^2$ without loss of generality. The path loss between a given transmitter at $x\in\mathbb{R}^2$ and the considered receiver is given by $\|x\|^{-\alpha}$, where $\alpha>2$ is the path loss exponent. We denote by $\mathsf{g}_{n}$ the channel fading (power) gain between the desired transmitter and the $n\th$ antenna of the considered receiver. Similarly, the set of channel fading gains of the interfering channels to the $n\th$ antenna is defined as $\mathbf{h}_{n}~\raisebox{-0.03cm}{$\triangleq$}~\{\mathsf{h}_{n,i}\}_{i=0}^{\infty}$, where $\mathsf{h}_ {n,i}$ denotes the fading gain of the channel between the $i\th$ interferer to the $n\th$ antenna of the considered receiver. We consider independent Nakagami fading across all channels, which corresponds to assuming that all fading gains independently follow a Gamma distribution having probability density function
\begin{align}
f_{\mathsf{y}}(y) = \frac{m^m y^{m-1}}{\Gamma(m)} \exp \left(-m y \right),\quad y\geq0,
\end{align}
with shape $m$ and scale $1/m$, where $m$ is the Nakagami fading parameter\cite{goldsmith05}. To preserve generality, we allow for non-identical fading between the desired and the interfering links, i.e., desired and interference signals undergo Nakagami fading with possibly unequal Nakagami parameter. In what follows, the $\mathsf{g}_{n}$ are associated with Nakagami parameter $m_{\text{\textnormal{D}}}$, while the $\mathsf{h}_{n,i}$ are associated with Nakagami parameter $m_{\text{\textnormal{I}}}$. Importantly, we require $m_{\text{\textnormal{D}}}$ to be integer-valued. The corresponding tail probability of $\mathsf{g}_{n}$ (similarly, $\mathsf{h}_{n,i}$) is given by $\mathbb{P}(\mathsf{g}_{n}>g)=Q(m_{\text{\textnormal{D}}},m_{\text{\textnormal{D}}} g)$ for $n=1,\ldots,N$, where $Q(a,x)~\raisebox{-0.03cm}{$\triangleq$}~\Gamma(a,x)/\Gamma(a)$ is the regularized upper incomplete Gamma function \cite{olver10}. It is easy to check that $\mathbb{E}[\mathsf{g}_{n}] = 1$, and $\mathsf{g}_{n} \rightarrow 1$ almost surely as $m_{\text{\textnormal{D}}} \rightarrow \infty$. The same holds for $\mathsf{h}_{n,i}$ for all $n=1,\ldots,N$ and $i\in\mathbb{N}$. Possible extensions toward general fading distributions can be incorporated in the model, e.g., using ideas from \cite{keeler13,dhillon13}. We assume the same fixed transmit power for all nodes and a slotted medium access with a slot duration smaller than or equal to the channel coherence time, and leave possible extensions for future work. Fig.~\ref{fig:illustration} illustrates the considered scenario.
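In simulations, the Nakagami power fading gains can be drawn directly from the above Gamma distribution; note the shape/scale convention (shape $m$, scale $1/m$), which guarantees unit mean. A short, purely illustrative Python sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = 2.0                                   # Nakagami fading parameter
g = rng.gamma(shape=m, scale=1.0 / m, size=100000)
print(g.mean(), g.var())                  # mean close to 1, variance close to 1/m
\end{verbatim}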
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{General notation used throughout this work}
\label{tab:notation}
\centering
\small
\begin{tabular}{c|p{6.8cm}}
\hline
\bfseries{Notation} & \hspace{2.4cm}\bfseries{Description}\\
\hline
$N$ & Number of receive antennas (branches)\\
\hline
$d$ & Distance between considered receiver and desired transmitter\\
\hline
$\alpha$ & Path loss exponent\\
\hline
$\mathsf{g}_{n}$ & Power fading gain between desired transmitter and $n\th$ antenna of the considered receiver\\
\hline
$\mathsf{h}_{n,i};\mathbf{h}_{n}$ & Power fading gain between the $i\th$ interferer and $n\th$ antenna of the considered receiver; set $\{\mathsf{h}_{n,i}\}_{i=1}^{\infty}$ of all interferer channel gains to the $n\th$ antenna of the considered receiver\\
\hline
$m_{\text{\textnormal{D}}};m_{\text{\textnormal{I}}}$ & Nakagami fading parameter on the desired links; and on the interfering links\\
\hline
$\Phi;\lambda$ & Interferer locations modeled as PPP; spatial density of interferers\\
\hline
$\mathsf{I}_{n}$ & Current interference power at $n\th$ antenna (branch)\\
\hline
$\mathtt{SNR}$ & Average SNR at the considered receiver\\
\hline
$\sinr_{\text{\textnormal{MRC}}}$ & Post-combiner $\mathtt{SINR}$ for MRC\\
\hline
$T$ & $\mathtt{SINR}$ threshold\\
\hline
$\mathtt{P}_{\text{\textnormal{MRC}}}$ & Success probability for an MRC receiver\\
\hline
\end{tabular}
\end{table}
We assume that the receiver is interference-aware, i.e., it can not only perfectly estimate the instantaneous fading gain of the desired link but also the current interference-plus-noise power within one slot. By\cite{brennan03}, the MRC weight in the $n\th$ branch is proportional to the fading amplitude gain of the desired link and inversely proportional to the current interference-plus-noise power at the $n\th$ antenna, see Appendix~\ref{sec:mrc_model} for details. The post-combiner $\mathtt{SINR}$ for MRC then takes the form
\begin{IEEEeqnarray}{rCl}
\sinr_{\text{\textnormal{MRC}}}~\raisebox{-0.03cm}{$\triangleq$}~\frac{\mathsf{g}_{1}}{\mathsf{I}_{1}+\mathtt{SNR}^{-1}}+\ldots+\frac{\mathsf{g}_{N}}{\mathsf{I}_{N}+\mathtt{SNR}^{-1}},\label{eq:sir_general}\IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $\mathsf{I}_{n}~\raisebox{-0.03cm}{$\triangleq$}~ d^{\alpha}\sum_{\mathsf{x}_{i}\in\Phi}\mathsf{h}_{n,i}\|\mathsf{x}_{i}\|^{-\alpha}$ is the interference power experienced at the $n\th$ antenna normalized by $d^{-\alpha}$ and $\mathtt{SNR}$ is the average signal-to-noise ratio. $\mathsf{I}_{n}$ is understood as the instantaneous interference power averaged over the interferer symbols within one transmission slot, and hence corresponds to the current variance of the aggregate interference signal at the $n\th$ antenna, see Appendix~\ref{sec:mrc_model} for details. Due to the slotted medium access, we can assume that $\mathsf{I}_{n}$ remains constant for the duration of one slot. It can be shown that $\mathsf{I}_{n}<\infty$ almost surely for all $n\in[1,\ldots,N]$ when $\alpha>2$\cite{HaenggiBook}. Note that, although the fading gains $\mathbf{h}_{1},\ldots,\mathbf{h}_{N}$ are independently distributed, the $\mathsf{I}_{1},\ldots,\mathsf{I}_{N}$ and hence the individual $\mathtt{SINR}$s on different branches are correlated since the interference terms originate from the same set of interferers, i.e., from the point process $\Phi$. The distribution of \eqref{eq:sir_general} can, in general, be obtained using the joint density of the interference amplitudes derived in\cite{chopra12} for the case of isotropic interference, i.e., averaging the conditional $\mathtt{SINR}$ distribution over the interference statistics. However, this approach is analytically involved since (i) the joint density cannot be given in closed-form and (ii) the sum of non-identical gamma random variables must be considered. Table~\ref{tab:notation} summarizes the notation used in this work.
\section{Success Probability of Dual-Branch MRC}\label{sec:cov_prob}
In this section, the performance of MRC receivers under the setting described in Section~\ref{sec:notation} is studied. We use the success probability as the performance metric, which is defined as
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MRC}}}~\raisebox{-0.03cm}{$\triangleq$}~\mathbb{P}\left(\sinr_{\text{\textnormal{MRC}}}\geq T\right)
\end{IEEEeqnarray}
for a modulation- and coding-specific $\mathtt{SINR}$-threshold $T>0$. The $\mathtt{P}_{\text{\textnormal{MRC}}}$ can be seen as the complementary cumulative distribution function of the $\sinr_{\text{\textnormal{MRC}}}$ or as 1-outage probability.
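Note that $\mathtt{P}_{\text{\textnormal{MRC}}}$ can always be estimated by Monte Carlo simulation of \eqref{eq:sir_general}. As a hedged illustration (not the exact simulation setup used for the figures below), a self-contained Python sketch for the dual-branch case is given next; the observation window, number of trials, and all parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np

def p_mrc_mc(lam, alpha, d, m_d, m_i, snr, T, n_ant=2,
             radius=500.0, trials=20000, seed=0):
    """Monte Carlo estimate of P(SINR_MRC >= T) for the isotropic model."""
    rng = np.random.default_rng(seed)
    succ = 0
    for _ in range(trials):
        # one common set of interferer locations (PPP in a disk)
        n = rng.poisson(lam * np.pi * radius**2)
        r = radius * np.sqrt(rng.uniform(size=n))
        sinr = 0.0
        for _ in range(n_ant):
            h = rng.gamma(m_i, 1.0 / m_i, size=n)   # i.i.d. interferer fading
            I = d**alpha * np.sum(h * r**(-alpha))  # normalized interference
            g = rng.gamma(m_d, 1.0 / m_d)           # desired-link fading
            sinr += g / (I + 1.0 / snr)
        succ += (sinr >= T)
    return succ / trials

# illustrative call; snr is in linear scale (1.0 corresponds to 0 dB)
print(p_mrc_mc(lam=1e-3, alpha=4.0, d=10.0, m_d=1, m_i=1, snr=1.0, T=1.0))
\end{verbatim}
Observe that the interferer locations are drawn once per trial and shared by both branches, while the fading gains are drawn independently per branch; this is precisely what induces the spatial interference correlation discussed above.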
The number of antennas mounted on practical wireless devices typically remains small due to space limitations and complexity constraints, e.g., smartphones, WiFi routers, thereby often not exceeding $N=2$ antennas. For this special case, the following key result characterizes the resulting performance in terms of success probability.
\begin{theorem}[Success probability of dual-branch MRC] \label{thm:cov_prob} The success probability for dual-branch MRC ($N=2$) under the described setting is given by \eqref{eq:pc_theorem}
at the top of the page.
\end{theorem}
\begin{IEEEproof}
See Appendix~\ref{proof:cov_prob}.
\end{IEEEproof}
\setcounter{equation}{5}
The function ${}_{2}\mathbf{F}_{1}(a,b,c;z)~\raisebox{-0.03cm}{$\triangleq$}~{}_{2}{F}_{1}(a,b,c;z)/\Gamma(c)$ is known as the \textit{regularized} Gaussian hypergeometric function\cite{olver10} and is implemented in most numerical software programs. A method for efficient and robust semi-numerical evaluation of the success probability result of Theorem~\ref{thm:cov_prob} is presented and discussed in Section~\ref{sec:diff}.
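For reproducibility, the regularized function can be evaluated with standard numerical libraries by dividing the ordinary Gauss hypergeometric function by $\Gamma(c)$. A minimal Python sketch using SciPy (the chosen arguments are arbitrary):
\begin{verbatim}
from scipy.special import hyp2f1, gamma

def hyp2f1_regularized(a, b, c, z):
    """Regularized Gauss hypergeometric function 2F1(a,b;c;z)/Gamma(c)."""
    return hyp2f1(a, b, c, z) / gamma(c)

# example call with arguments of the kind appearing in A(z,s,t)
print(hyp2f1_regularized(-0.5, 1.0, 2.0, 0.3))
\end{verbatim}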
\begin{remark}
The integral in \eqref{eq:pc_theorem} over $[0,\infty)$ can be split into two integrals with limits $[0,T)$ and $[T,\infty)$ to get rid of the $(\cdot)^{+}$ function and to exploit the fact that the integrand of the upper integral becomes zero for all $s$-derivatives.
\end{remark}
Making use of the functional relation ${}_{2}F_{1}(-1/2,1,2,z)=\tfrac{2}{3z}\left(1-(1-z)^{3/2}\right)$, the result in Theorem~\ref{thm:cov_prob} can be further simplified in the case of Rayleigh fading and a path loss exponent $\alpha=4$.
\begin{corollary}[Special case: $\alpha=4$, Rayleigh fading links] When $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m=1$ (Rayleigh fading) and $\alpha=4$, the success probability under the described setting for dual-branch MRC ($N=2$) reduces to \eqref{eq:special_case1} at the top of the page.
\end{corollary}\setcounter{equation}{6}
Similar simplifications that express \eqref{eq:pc_theorem} through elementary functions can be obtained by invoking functional identities of the Gaussian hypergeometric function for suitable $\alpha$ and $m_{\text{\textnormal{I}}}$\cite{abramowitz64,olver10}.
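The functional relation used above is also easily verified numerically, e.g., with the following short Python check (the grid values are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

z = np.linspace(0.05, 0.95, 10)
lhs = hyp2f1(-0.5, 1.0, 2.0, z)
rhs = 2.0 / (3.0 * z) * (1.0 - (1.0 - z) ** 1.5)
print(np.max(np.abs(lhs - rhs)))   # numerically zero
\end{verbatim}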
\begin{remark} Letting $\mathtt{SNR}\to\infty$ in \eqref{eq:special_case1} and differentiating with respect to $t$, we recover the result from \cite{tanbourgi13_2}.
\end{remark}
Figure~\ref{fig:pc} shows the success probability $\mathtt{P}_{\text{\textnormal{MRC}}}$ over $T$ for different $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m$ (identical Nakagami fading). It can be seen that the result from Theorem~\ref{thm:cov_prob} perfectly matches the simulation results. Furthermore, increasing the Nakagami fading parameter has two effects on $\mathtt{P}_{\text{\textnormal{MRC}}}$: for not too small values of $\mathtt{P}_{\text{\textnormal{MRC}}}$, decreasing channel variability ($m\uparrow$) improves transmission reliability, whereas for (non-practical) small values of $\mathtt{P}_{\text{\textnormal{MRC}}}$ this trend is reversed. Interestingly, all curves seem to intersect at one unique point (in this example around $T=2.3$ dB).
From the general result of Theorem~\ref{thm:cov_prob}, one can derive the success probability under pure interference-limited and pure noise-limited performance.
\begin{corollary}[Interference vs. noise]
The success probability $\lim_{\mathtt{SNR}\to\infty}\mathtt{P}_{\text{\textnormal{MRC}}}$ in the interference-limited regime is obtained by letting $\mathtt{SNR}\to\infty$ in \eqref{eq:pc_theorem}. Similarly, the success probability $\lim_{\lambda\to0}\mathtt{P}_{\text{\textnormal{MRC}}}$ in the noise-limited case can be recovered by letting $\lambda\to0$ in \eqref{eq:pc_theorem}, yielding $\mathtt{P}_{\text{\textnormal{MRC}}}=Q(2m_{\text{\textnormal{D}}},m_{\text{\textnormal{D}}} T/\mathtt{SNR})$.
\end{corollary}
\begin{IEEEproof}
By the dominated convergence theorem, we can interchange limit and integration in both cases. For the noise-limited case, we further note that
\begin{IEEEeqnarray}{rCl}
&&\sum\limits_{k=0}^{m_{\text{\textnormal{D}}}-1}\hspace{-.08cm}\frac{(-1)^{k+m_{\text{\textnormal{D}}}}}{k!\,\Gamma(m_{\text{\textnormal{D}}})}\hspace{-.05cm}\int_{0}^{\infty}\hspace{-.3cm}\frac{\partial^{k}\partial^{m_{\text{\textnormal{D}}}}}{z\,\partial s^{k}\partial t^{m_{\text{\textnormal{D}}}}}\hspace{-.1cm}\left[\exp\hspace{-.05cm}\left(-\frac{s\psi_{1}}{\mathtt{SNR}}-\frac{t\psi_{2}}{\mathtt{SNR}}\right)\right]_{\substack{s=1\\ t=1}}\hspace{-.15cm}\mathrm dz\IEEEnonumber\\
&&=\int_{0}^{\infty}\hspace{-.18cm}\left(\frac{m_{\text{\textnormal{D}}}}{\mathtt{SNR}}\right)^{\hspace{-.07cm}m_{\text{\textnormal{D}}}}\hspace{-.07cm}\frac{z^{m_{\text{\textnormal{D}}}-1}e^{-\frac{zm_{\text{\textnormal{D}}}}{\mathtt{SNR}}}}{\Gamma(m_{\text{\textnormal{D}}})}\,Q\hspace{-.1cm}\left(m_{\text{\textnormal{D}}},\frac{m_{\text{\textnormal{D}}}}{\mathtt{SNR}}(T-z)^{+}\right)\,\mathrm dz\IEEEnonumber\\
&&=\mathbb{E}_{\mathsf{g}_{2}\mathtt{SNR}}\big[\mathbb{P}_{\mathsf{g}_{1}\mathtt{SNR}}\left(\mathsf{g}_{1}\mathtt{SNR}+\mathsf{g}_{2}\mathtt{SNR}\geq T\left\lvert\right.\mathsf{g}_{2}\mathtt{SNR}\right)\big]\IEEEnonumber\\
&&=Q\left(2m_{\text{\textnormal{D}}},\tfrac{m_{\text{\textnormal{D}}} T}{\mathtt{SNR}}\right)
\end{IEEEeqnarray}
which concludes the proof.
\end{IEEEproof}
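Since $Q(a,x)$ is the regularized upper incomplete Gamma function, the noise-limited expression can be evaluated directly with standard special-function routines. A minimal Python sketch (parameter values are illustrative):
\begin{verbatim}
from scipy.special import gammaincc   # regularized upper incomplete Gamma

def p_mrc_noise_limited(m_d, T, snr):
    """P_MRC = Q(2 m_D, m_D T / SNR) in the noise-limited regime."""
    return gammaincc(2 * m_d, m_d * T / snr)

print(p_mrc_noise_limited(m_d=2, T=1.0, snr=10.0))
\end{verbatim}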
Another special case of interest is when the channel variability becomes very small, i.e., $1/m_{\text{\textnormal{D}}},1/m_{\text{\textnormal{I}}}\to0$, eventually leading to the pure path loss model. However, taking the limit $m_{\text{\textnormal{D}}},m_{\text{\textnormal{I}}}\to\infty$ directly in \eqref{eq:pc_theorem} is analytically involved.
\begin{remark}[Success Probability as $m_{\text{\textnormal{D}}},m_{\text{\textnormal{I}}}\to\infty$]
Since $\mathsf{g}_{n} \rightarrow 1$ and $\mathsf{h}_{n} \rightarrow 1$ as $m_{\text{\textnormal{D}}},m_{\text{\textnormal{I}}} \rightarrow \infty$, the $\sinr_{\text{\textnormal{MRC}}}$ of an $N$-branch receiver becomes $\tfrac{N}{\mathtt{SNR}^{-1}+ \mathsf{I}}$, with $\mathsf{I}=d^{\alpha}\sum_{\mathsf{x}_{i}\in\Phi}\|\mathsf{x}_{i}\|^{-\alpha}$, which is the same as the $\mathtt{SINR}$ of a single-branch receiver with an $N$-fold received power increase. The corresponding $\mathtt{P}_{\text{\textnormal{MRC}}}$ can be characterized, e.g., by Laplace inversion\cite{baccelli09a} or by the dominant-interferer bounding technique\cite{weber10}. For the case of $\alpha=4$, a closed-form solution can be found in\cite{HaenggiBook}.
\end{remark}
\section{Comparison with Simpler Correlation Models}\label{sec:simple_models}
For analytical tractability, it is frequently assumed in the literature that the interference power across different branches is either equally-strong or statistically independent. Certainly, such simplifications may lead to an accuracy loss as the true interference correlation structure is distorted. Using the exact model derived in Section~\ref{sec:cov_prob}, this accuracy loss is studied next.\vspace{-.2cm}
\subsection{Full-Correlation Model}
In the full-correlation model, the current interference power is assumed equally strong across the branches, i.e., $\mathsf{I}_{n}\equiv\mathsf{I}_{m}$ for $m,n\in[1,\ldots,N]$, see for instance\cite{hunter08,crismani13}. This assumption effectively ignores the additional variability in the per-branch $\mathtt{SINR}$s resulting from the de-correlation effect of the fading on the interfering links.
\begin{figure}[t]
\centering
\includegraphics[width=.48\textwidth]{figure2.pdf}
\caption{$\mathtt{P}_{\text{\textnormal{MRC}}}$ vs. $T$ for different $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m$ (identical Nakagami fading). Parameters are: $\lambda=10^{-3}$, $\alpha=4$, $d=10$, $\mathtt{SNR}=0$ dB. Marks represent simulation results.}\label{fig:pc}
\end{figure}
\begin{definition}[Full-correlation (FC) model]\label{def:fc}
In the FC model, the interference terms $\mathsf{I}_{n}$ at the $N$ branches are assumed to be equal, i.e., $\mathsf{h}_{m,i}\equiv\mathsf{h}_{n,i}$ for all $m,n\in[1,\ldots,N]$ and $i\in\mathbb{N}$. The corresponding post-combiner $\mathtt{SINR}$ is $\mathtt{SINR}^\text{\textnormal{FC}}_{\text{\textnormal{MRC}}}$.
\end{definition}
Hence, in the FC model the post-combiner $\mathtt{SINR}$ becomes
\begin{IEEEeqnarray}{rCl}
\mathtt{SINR}^\text{\textnormal{FC}}_{\text{\textnormal{MRC}}}=\frac{\sum_{n=1}^{N}\mathsf{g}_{n}}{\mathsf{I}+\mathtt{SNR}^{-1}}.
\end{IEEEeqnarray}
The next result gives the success probability $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}$ in the FC model for arbitrary $N\geq1$.
\begin{proposition}[Success probability $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}$ for FC model]\label{prop:cov_prob_fc}
The success probability for $N$-branch MRC in the FC model is
\begin{IEEEeqnarray}{rCl}
\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}&=&\hspace{-.15cm}\sum\limits_{k=0}^{Nm_{\text{\textnormal{D}}}-1}\hspace{-.1cm}\frac{(-1)^{k}}{k!}\,\frac{\partial^k}{\partial s^{k}}\left[\exp\left(-\frac{sm_{\text{\textnormal{D}}} T}{\mathtt{SNR}}-\lambda\pi d^{2}s^{2/\alpha}\right.\right.\IEEEnonumber\\
&&\left.\left.\hspace{-.15cm}\times T^{2/\alpha}\Gamma(1-2/\alpha)\frac{\Gamma(2/\alpha+m_{\text{\textnormal{I}}})}{\Gamma(m_{\text{\textnormal{I}}})}\left(\tfrac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\right)\right]_{s=1}\hspace{-.3cm}.\IEEEeqnarraynumspace
\end{IEEEeqnarray}
\end{proposition}
\begin{IEEEproof}
We first note that $\sum_{n=1}^{N}\mathsf{g}_{n}$ is Gamma distributed with shape parameter $Nm_{\text{\textnormal{D}}}$ and scale parameter $1/m_{\text{\textnormal{D}}}$\cite{feller71}. Applying a similar technique as in the proof of Theorem~\ref{thm:cov_prob}, we obtain
\begin{IEEEeqnarray}{rCl}
\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}&=&\mathbb{E}_{\mathsf{I}}\big[Q\left(Nm_{\text{\textnormal{D}}},m_{\text{\textnormal{D}}} T(\mathsf{I}+\mathtt{SNR}^{-1})\right)\big]\IEEEnonumber\\
&=&\sum\limits_{k=0}^{Nm_{\text{\textnormal{D}}}-1}\frac{(-1)^{k}}{k!}\,\frac{\partial^{k}}{\partial s^{k}}\big[\mathcal{L}_{\mathsf{Y}}(s)\big]_{s=1},
\end{IEEEeqnarray}
where $\mathsf{Y}~\raisebox{-0.03cm}{$\triangleq$}~ m_{\text{\textnormal{D}}} T\,(\mathsf{I}+\mathtt{SNR}^{-1})$. Finally, the Laplace transform $\mathcal{L}_{\mathsf{Y}}(s)$ is computed using the probability generating functional (PGFL) of a PPP \cite{stoyan95}.
\end{IEEEproof}
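For the special case $N=2$ and $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=1$ (Rayleigh fading on all links), the sum in Proposition~\ref{prop:cov_prob_fc} contains only the zeroth and first derivative, which can be taken in closed form: writing the bracketed term as $f(s)=\exp(-as-bs^{2/\alpha})$, one obtains $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}=f(1)\,(1+a+\tfrac{2}{\alpha}b)$. The following Python sketch implements this special case; the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def p_fc_rayleigh_n2(lam, alpha, d, snr, T):
    """P_FC for N = 2 and m_D = m_I = 1 (Rayleigh fading)."""
    a = T / snr
    b = (lam * np.pi * d**2 * T**(2.0 / alpha)
         * gamma(1.0 - 2.0 / alpha) * gamma(1.0 + 2.0 / alpha))
    f1 = np.exp(-a - b)                   # f(s) evaluated at s = 1
    # f(1) - f'(1), with f'(1) = -(a + (2/alpha) b) f(1)
    return f1 * (1.0 + a + (2.0 / alpha) * b)

print(p_fc_rayleigh_n2(lam=1e-3, alpha=4.0, d=10.0, snr=1.0, T=1.0))
\end{verbatim}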
\begin{figure*}[!t]
\centerline{\subfloat[Success Probability]{\includegraphics[width=.48\textwidth]{figure3.pdf}
\label{fig:comp_exact_fc_nc}}
\hfil
\subfloat[FC Outage Probability Deviation]{\includegraphics[width=0.48\textwidth]{figure4.pdf}
\label{fig:dev_mrc_fc}}}
\caption{(a) Success probability vs. $\mathtt{SINR}$-threshold $T$ for different $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m$. Marks represent simulation results. Parameters are: $\lambda=10^{-3}$, $\alpha=4$, $d=10$, $\mathtt{SNR}=0$ dB. (b) Outage probability deviation of FC model vs. $\mathtt{SINR}$-threshold $T$ for different $m_{\text{\textnormal{D}}}$, $m_{\text{\textnormal{I}}}$, and $\alpha$. Parameters are $\lambda=10^{-3}$, $d=10$, $\mathtt{SNR}=10(4-\alpha)$.}
\vspace*{4pt}
\end{figure*}
\subsection{No-Correlation Model}
In contrast to modeling the interference terms $\mathsf{I}_{n}$ as being (fully) correlated, one can also assume statistical independence among them. Then, \eqref{eq:sir_general} reduces to a sum over i.i.d. random variables. Note that this no-correlation model overestimates the true diversity.
\begin{definition}[No-correlation (NC) model]\label{def:nc}
In the NC model, the interference terms $\mathsf{I}_{n}$ at the $N$ branches are assumed to be statistically independent, i.e., $\mathbb{P}\left(\{\mathsf{I}_{n}\in A\}\cap\{\mathsf{I}_{m}\in B\}\right)=\mathbb{P}\left(\mathsf{I}_{n}\in A\right)\,\mathbb{P}\left(\mathsf{I}_{m}\in B\right)$ for all $m,n\in[1,\ldots,N]$ and all Borel sets $A,B$ on $\mathbb{R}^{+}_{0}$. The corresponding post-combiner $\mathtt{SINR}$ is denoted by $\mathtt{SINR}^\text{\textnormal{NC}}_{\text{\textnormal{MRC}}}$.
\end{definition}
Note that Definition~\ref{def:nc} implies that the interference experienced at each branch originates from a distinct interferer set $\{\mathsf{x}_{i}\}_{i=0}^{\infty}$. For $N>1$, one can in general (numerically) obtain the success probability $\mathtt{P}^{\text{\textnormal{NC}}}_{\text{\textnormal{MRC}}}$ by the Laplace inversion technique for sums of independent random variables, provided the Laplace transform of the per-antenna $\mathtt{SINR}$ is known.
\begin{proposition}[Success probability $\mathtt{P}^{\text{\textnormal{NC}}}_{\text{\textnormal{MRC}}}$ for NC model and $N=2$]\label{prop:cov_prob_nc}
The success probability for dual-branch MRC in the NC model has the same form as in \eqref{eq:pc_theorem} of Theorem~\ref{thm:cov_prob} with $\mathcal{A}(z,s,t)$ replaced by
\begin{IEEEeqnarray}{rCl}
\mathcal{B}(z,s,t) &=& \Gamma(1-2/\alpha)\,d^{2}\,\frac{\Gamma(2/\alpha+m_{\text{\textnormal{I}}})}{\Gamma(m_{\text{\textnormal{I}}})}\,\left(\frac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\IEEEnonumber\\
&&\times\left( \left(s\,(T-z)^{+}\right)^{2/\alpha}+(zt)^{2/\alpha}\right).\IEEEeqnarraynumspace\label{eq:prob_nc}
\end{IEEEeqnarray}
\end{proposition}
\begin{IEEEproof}
The proof is analogous to the proof of Theorem~\ref{thm:cov_prob} until step (a) in \eqref{eq:proof_thm1_step7}. Due to distinct interferer sets across the two branches, the expectation with respect to $\Phi$ in \eqref{eq:proof_thm1_step7} step (a) decomposes into the product
\begin{IEEEeqnarray}{rCl}
&&\mathbb{E}_{\Phi}\left[\prod\limits_{\mathsf{x}_{i}\in\Phi}\mathbb{E}_{\mathsf{h}_{1}}\Big[\exp\left(-s\psi_{1}d^{\alpha}\mathsf{h}_{1}\|\mathsf{x}_{i}\|^{-\alpha}\right)\Big]\right]\IEEEnonumber\\
&&\quad\times\mathbb{E}_{\Phi}\left[\prod\limits_{\mathsf{x}_{i}\in\Phi}\mathbb{E}_{\mathsf{h}_{2}}\Big[\exp\left(-t\psi_{2}d^{\alpha}\mathsf{h}_{2}\|\mathsf{x}_{i}\|^{-\alpha}\right)\Big]\right]\IEEEnonumber\\
&&\hspace{.2cm}\overset{\text{(a)}}{=}\exp\left(-\lambda\pi\int_{0}^{\infty}2r\,\left(2-\mathbb{E}_{\mathsf{h}_{1}}\left[e^{-s\psi_{1}d^{\alpha}\mathsf{h}_{1}r^{-\alpha}}\right]\right.\right.\IEEEnonumber\\
&&\left.\left.\qquad\qquad\qquad\qquad\qquad-\mathbb{E}_{\mathsf{h}_{2}}\left[e^{-t\psi_{2}d^{\alpha}\mathsf{h}_{2}r^{-\alpha}}\right]\right)\,\mathrm dr\right),\IEEEeqnarraynumspace\label{eq:nc_proof}
\end{IEEEeqnarray}
where (a) follows from the PGFL for PPPs\cite{stoyan95}. After evaluating the integral with respect to $r$ and using the fact that $\mathbb{E}[\mathsf{h}_{n}^{2/\alpha}]=m_{\text{\textnormal{I}}}^{-2/\alpha}\Gamma(2/\alpha+m_{\text{\textnormal{I}}})/\Gamma(m_{\text{\textnormal{I}}})$, \eqref{eq:nc_proof} becomes $\exp\left(-\lambda\pi\,\mathcal{B}(z,s,t)\right)$. Substituting this back into \eqref{eq:proof_thm1_step7} step (a) proves the result.
\end{IEEEproof}
Figure~\ref{fig:comp_exact_fc_nc} compares the success probability for the exact model against the success probability for the NC and FC correlation models introduced above. The simulation results (indicated by marks) confirm our theoretical expressions. It can be seen that the NC model is considerably optimistic for practically relevant $\mathtt{P}_{\text{\textnormal{MRC}}}$ values. Interestingly, the gap between $\mathtt{P}_{\text{\textnormal{MRC}}}$ and $\mathtt{P}^{\text{\textnormal{NC}}}_{\text{\textnormal{MRC}}}$ increases with the Nakagami parameter. This is due to the fact that the de-correlation effect of the channel fading is reduced as $m_{\text{\textnormal{I}}}$ increases which, in turn, increases the correlation across the per-antenna $\mathtt{SINR}$s. Ignoring correlation hence becomes even more inappropriate as the true diversity is strongly overestimated in this case.
In contrast, Fig.~\ref{fig:comp_exact_fc_nc} suggests that the FC model yields a closer approximate characterization of $\mathtt{P}_{\text{\textnormal{MRC}}}$; the gap between $\mathtt{P}_{\text{\textnormal{MRC}}}$ and $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}$ remains fairly small over a wide range of $T$. In\cite{tanbourgi13_2} it was shown for the case $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=1$ that the size of this gap depends on the path loss exponent $\alpha$ and ranges from $9\%$ for $\alpha=6$ to $27\%$ for $\alpha=2.5$. For larger Nakagami fading parameters the gap seems to vanish, as the $\mathtt{P}_{\text{\textnormal{MRC}}}$ and $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}$ lines become indistinguishable already for $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=4$. This observation motivates the following corollary.
\begin{corollary}[Asymptotic equivalence between exact and FC model]\label{col:asym_equiv}
The exact and the FC model become asymptotically equivalent in terms of success probability as $m_{\text{\textnormal{I}}}\to\infty$.
\end{corollary}
\begin{IEEEproof}
We first consider the Laplace transform of $\mathsf{H}$ in \eqref{eq:frac_mom_H} of Appendix~\ref{proof:cov_prob} as $m_{\text{\textnormal{I}}}\to\infty$. Since $\lim_{m_{\text{\textnormal{I}}}\to\infty}\mathcal{L}_{\mathsf{H}}(u)=\exp\left(-u\,(s\psi_{1}+t\psi_{2})\right)$, this implies that $\mathsf{H}$ converges in distribution to a degenerate random variable taking the value $s\psi_{1}+t\psi_{2}$. Since $\mathsf{H}$ is uniformly integrable for all $m_{\text{\textnormal{I}}}\geq1$, it then follows from \cite[Theorem~5.9]{gut05} that
\begin{IEEEeqnarray}{rCl}
\lim_{m_{\text{\textnormal{I}}}\to\infty}\mathbb{E}\left[\mathsf{H}^{2/\alpha}\right]=(s\psi_{1}+t\psi_{2})^{2/\alpha}.\label{eq:asym_tight1}\IEEEeqnarraynumspace
\end{IEEEeqnarray}
On the other hand, using the same approach as in the proof of Theorem~\ref{thm:cov_prob} until step (a) in \eqref{eq:proof_thm1_step7}, $\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}}$ can be written as
\begin{IEEEeqnarray}{rCl}
&&\sum\limits_{k=0}^{m_{\text{\textnormal{D}}}-1}\frac{(-1)^{k+m_{\text{\textnormal{D}}}}}{k!\,\Gamma(m_{\text{\textnormal{D}}})}\int_{0}^{\infty}\frac{\partial^{k}\partial^{m_{\text{\textnormal{D}}}}}{z\,\partial s^{k}\partial t^{m_{\text{\textnormal{D}}}}}\,\Bigg[\exp\left(-\frac{s\psi_{1}}{\mathtt{SNR}}-\frac{t\psi_{2}}{\mathtt{SNR}}\right)\IEEEnonumber\\
&&\,\times\mathbb{E}_{\Phi}\bigg[\prod\limits_{\mathsf{x}_{i}\in\Phi}\hspace{-.05cm}\mathbb{E}_{\mathsf{h}}\Big[\exp\left(-(s\psi_{1}+t\psi_{2})d^{\alpha}\mathsf{h}\|\mathsf{x}_{i}\|^{-\alpha}\right)\Big]\bigg]\Bigg]_{\substack{s=1\\ t=1}}\hspace{-.25cm}\mathrm dz,\IEEEeqnarraynumspace\label{eq:proof_equiv1}
\end{IEEEeqnarray}
where we have exploited the fact that $\mathsf{h}_{m,i}\equiv\mathsf{h}_{n,i}$ for all $m,n\in[1,\ldots,N]$ and $i\in\mathbb{N}$ by Definition~\ref{def:fc}. Using the PGFL for PPPs\cite{stoyan95}, the expectation with respect to $\Phi$ in \eqref{eq:proof_equiv1} can be computed as
\begin{IEEEeqnarray}{rCl}
&&\exp\left(-\lambda\pi\int_{0}^{\infty}2r\left(1-\mathbb{E}_{\mathsf{h}}\left[e^{-(s\psi_{1}+t\psi_{2})d^{\alpha}\mathsf{h}\|\mathsf{x}_{i}\|^{-\alpha}}\right]\right)\,\mathrm dr\right)\IEEEnonumber\\
&&\qquad=\exp\bigg(-\lambda\pi(s\psi_{1}+t\psi_{2})^{2/\alpha}d^{2}\IEEEnonumber\\
&&\qquad\qquad\qquad\quad\times\Gamma(1-2/\alpha)\frac{\Gamma(2/\alpha+m_{\text{\textnormal{I}}})}{m_{\text{\textnormal{I}}}^{2/\alpha}\Gamma(m_{\text{\textnormal{I}}})}\bigg)\IEEEeqnarraynumspace
\end{IEEEeqnarray}
and shown to converge to $\exp\left(-\lambda\pi(s\psi_{1}+t\psi_{2})^{2/\alpha}\,d^{2}\,\Gamma(1-2/\alpha)\right)$ as $m_{\text{\textnormal{I}}}\to\infty$. Combining this observation for the FC model with the fact that after substituting \eqref{eq:asym_tight1} into \eqref{eq:poisson_mean2} the same expression is obtained for the exact model, the asymptotic equivalence of the two models follows.
\end{IEEEproof}
\begin{figure*}[!t]
\normalsize
\setcounter{mycounter}{\value{equation}}
\setcounter{equation}{16}
\begin{subnumcases}{\label{eq:diff_A}\left.\frac{\partial^{n}\mathcal{A}(z,s,t)}{\partial t^{n}}\right\lvert_{t=1} =}
(-1)^{n} z^{2/\alpha}\,d^{2}\,\Gamma(1-2/\alpha)\,\left(\frac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\frac{(-2/\alpha)_{n}(m_{\text{\textnormal{I}}})_{n}}{\Gamma(2m_{\text{\textnormal{I}}}+n)}\,\Gamma(2/\alpha+2m_{\text{\textnormal{I}}})\notag\\
\qquad\qquad\quad\times{}_{2}{F}_{1}\left(-2/\alpha+n,m_{\text{\textnormal{I}}},2m_{\text{\textnormal{I}}}+n,1-\frac{(T-z)s}{z}\right),\quad 0\leq z< T\\
z^{2/\alpha}\,d^{2}\,\Gamma(1-2/\alpha)\left(\frac{m_{\text{\textnormal{D}}}}{m_{\text{\textnormal{I}}}}\right)^{2/\alpha}\frac{\Gamma(2/\alpha+m_{\text{\textnormal{I}}})}{\Gamma(m_{\text{\textnormal{I}}})}\,(2/\alpha-n+1)_{n},\quad z\geq T
\end{subnumcases}
\setcounter{equation}{\value{mycounter}}
\hrulefill
\setcounter{tmp_equation}{\value{equation}}
\setcounter{equation}{21}
\begin{IEEEeqnarray}{rCl}
C_k~\raisebox{-0.03cm}{$\triangleq$}~ \int_{0}^{1}u^{2/\alpha-1-k}\left(1-u\right)^{k}\,\hypergeombf{-\tfrac{2}{\alpha}+m_{\text{\textnormal{D}}}+k}{m_{\text{\textnormal{I}}}+k}{2m_{\text{\textnormal{I}}}+m_{\text{\textnormal{D}}}+k}{\tfrac{2u-1}{u}}\,\mathrm du\label{eq:asym_cp2}
\end{IEEEeqnarray}
\setcounter{equation}{\value{tmp_equation}}
\hrulefill
\vspace*{4pt}
\end{figure*}
Corollary~\ref{col:asym_equiv} is particularly useful for justifying the use of the FC model for scenarios in which the interfering links undergo poor scattering. The remaining accuracy loss with respect to the exact model can be further studied by looking at the outage probability deviation $\delta_{\text{FC}}~\raisebox{-0.03cm}{$\triangleq$}~(1-\mathtt{P}^{\text{\textnormal{FC}}}_{\text{\textnormal{MRC}}})/(1-\mathtt{P}_{\text{\textnormal{MRC}}})$.
Fig.~\ref{fig:dev_mrc_fc} illustrates the impact of $m_{\text{\textnormal{D}}}$, $m_{\text{\textnormal{I}}}$ and $\alpha$ on the deviation $\delta_{\text{FC}}$. In accordance with\cite{tanbourgi13_2}, the deviation decreases with $\alpha$ and/or $T$, which is due to the fact that the interference power becomes effectively dominated by a few nearby interferers only; with a smaller set of interferers the interference naturally becomes more correlated. Note that the deviation $\delta_{\text{FC}}$ becomes negative for sufficiently large $T$ (practically non-relevant low $\mathtt{P}_{\text{\textnormal{MRC}}}$ values). This observation for the FC model is consistent with the findings in\cite{tanbourgi13_2,crismani13}. Furthermore, it can be seen how non-identical Nakagami fading affects the deviation: similar to what was observed in Fig.~\ref{fig:comp_exact_fc_nc} for the case of identical Nakagami fading, the deviation decreases with smaller variability of the fading on the interfering links, i.e., as $m_{\text{\textnormal{I}}}$ increases.
Interestingly, this is not true for the fading on the desired links as the deviation increases with $m_{\text{\textnormal{D}}}$. This is due to the fact that for a smaller variability of fading on the desired links, the ``modeling error'' associated with the FC model becomes more salient. In this example, the additional deviation compared to the identical Nakagami case is about $5\%$ for $\alpha=5$. Hence, the FC model is inappropriate when fading variability on the desired links is smaller than on the interfering links, for instance when channel-inversion power control is used.
\section{Discussion}\label{sec:numerical}
In order to complement the theoretical work presented in the prior sections, we will discuss some related practical aspects next. First, a method for efficiently computing the result of Theorem~\ref{thm:cov_prob} is presented. Furthermore, we study the performance of dual-branch MRC in the low outage probability regime. Then, we compare the performance of MRC to other popular combining methods under a similar interference and fading setting. Finally, we also study the local throughput of dual-branch MRC receivers.
\subsection{Semi-Numerical Evaluation of Theorem~\ref{thm:cov_prob}}\label{sec:diff}
The mathematical form of \eqref{eq:pc_theorem} in Theorem~\ref{thm:cov_prob} involves two higher-order derivatives of a composite function, which renders an analytical calculation of $\mathtt{P}_{\text{\textnormal{MRC}}}$ complicated. To compute $\mathtt{P}_{\text{\textnormal{MRC}}}$ for a given set of parameters, one thus has to resort to numerical methods, for which several approaches exist in the literature. We next propose and discuss a methodology for the efficient and robust semi-numerical evaluation of \eqref{eq:pc_theorem}.
\textbf{Fa\`{a} di Bruno's formula and Bell polynomials for analytical $t$-differentiation:} High-order derivatives of general composite functions of the form $f(g(x))$ can be evaluated using the well-known Fa\`{a} di Bruno formula, see for instance\cite{bruno1857,olver10}. Whenever the outer function $f(\cdot)$ is an exponential function (as in our case), it is useful to rewrite Fa\`{a} di Bruno's formula using the notion of Bell polynomials\cite{johnson07}
\begin{IEEEeqnarray}{rCl}
\frac{\partial^{n}}{\partial x^{n}}f(g(x)) = f(g(x))\,B_{n}\left(g^{(1)}(x),\ldots,g^{(n)}(x)\right),\IEEEeqnarraynumspace\label{eq:faa_bell}
\end{IEEEeqnarray}
where $B_{n}\left(x_{1},\ldots,x_{n}\right)$ is the $n\th$ \textit{complete} Bell polynomial. The complete Bell polynomials can be efficiently obtained using a matrix determinant identity\cite{ivanoff58}. It remains to compute the derivatives of the inner function $g(x)$ up to order $n$. Transferred to our case, we thus need to compute the derivatives of the exponent in \eqref{eq:pc_theorem} up to order $m_{\text{\textnormal{D}}}$.
\begin{corollary}[$n\th$ $t$-derivative of $\mathcal{A}(z,s,t)$]
The $n\th$ $t$-derivative of $\mathcal{A}(z,s,t)$ evaluated at $t=1$ is given in \eqref{eq:diff_A} at the top of the next page, where $(a)_{n}~\raisebox{-0.03cm}{$\triangleq$}~\Gamma(a+n)/\Gamma(a)$ is the Pochhammer symbol\cite{olver10}.
\end{corollary}
Using the approach described above, the $t$-differentiation is computed analytically, i.e., without numerical difference methods. For the subsequent $s$-differentiation, however, Fa\`{a} di Bruno's formula may not be the best choice since the outer function is no longer an exponential function and the derivatives of the inner function are difficult to obtain. We therefore propose a different approach for the $s$-differentiation.
\textbf{Chebyshev interpolation method for numerical $s$-differentiation:} Before explaining this differentiation technique, we first note that the $\partial^{k}/\partial s^{k}$ operator in \eqref{eq:pc_theorem} can be moved outside the $z$-integration according to Leibniz's integration rule for improper integrals\cite{olver10}. This step comes with the advantage that the integral can first be computed numerically without having to worry about how the $s$-differentiation is performed. Interpreting the integration result as a function of $s$, say $V(s)$, we then propose to approximate this function using the Chebyshev interpolation method in an interval $[a,b]$, yielding the approximation\cite{press07}
\setcounter{equation}{17}
\begin{IEEEeqnarray}{rCl}
V(s)\approx\tilde V(s)~\raisebox{-0.03cm}{$\triangleq$}~-\frac{c_{0}}{2}+\sum\limits_{\ell=0}^{p-1}c_{\ell}\,T_{\ell}\left(\tfrac{s-(a+b)/2}{(b-a)/2}\right),\IEEEeqnarraynumspace\label{eq:cheby_ap}
\end{IEEEeqnarray}
where $s\in[a,b]$, $T_{\ell}(x)~\raisebox{-0.03cm}{$\triangleq$}~\cos(\ell\arccos x)$ is the $\ell\th$ Chebyshev polynomial of the first kind, $p$ is the number of sampling points, and \begin{IEEEeqnarray}{rCl}
c_{\ell}&=&\frac{2}{p}\sum\limits_{i=0}^{p-1}V\left(\tfrac{1}{2}(b-a)\cos\left[\tfrac{\pi}{p}(i+1/2)\right]+\tfrac{1}{2}(a+b)\right)\IEEEnonumber\\
&&\qquad\times\cos\left[\tfrac{\ell\pi}{p}(i+1/2)\right]\label{eq:cheby_nodes}
\end{IEEEeqnarray}
is the $\ell\th$ Chebyshev expansion coefficient. Differentiating $\tilde V(s)$ in \eqref{eq:cheby_ap} instead of $V(s)$ at the point $s=1$, we then obtain
\begin{IEEEeqnarray}{rCl}
\left.\frac{\partial^{k}V(s)}{\partial s^{k}}\right\lvert_{s=1}\hspace{-.15cm}&\approx&\left.\frac{\partial^{k}\tilde V(s)}{\partial s^{k}}\right\lvert_{s=1}\IEEEnonumber\\
&=&\sum\limits_{\ell=0}^{p-1}c_{\ell}\,\frac{\partial^{k}}{\partial s^{k}}\left[T_{\ell}\left(\tfrac{s-(a+b)/2}{(b-a)/2}\right)\right]_{s=1}\IEEEnonumber\\
&\overset{\text{(a)}}{=}&\left(\frac{2}{b-a}\right)^{k}\,\sum\limits_{\ell=k}^{p-1}c_{\ell}\,T_{\ell}^{(k)}\left(\tfrac{1-(a+b)/2}{(b-a)/2}\right),\IEEEeqnarraynumspace\label{eq:cheby_der}
\end{IEEEeqnarray}
where (a) follows from the fact that $\partial^{k}T_{\ell}(s)/\partial s^{k}=0$ when $\ell<k$ for all $s$. It is well known that the Chebyshev approximation is very close to the minimax polynomial, i.e., the polynomial approximation with the smallest maximum error. This is due to the fact that end-points are effectively avoided through projecting the function's domain onto the angular interval $[0,\pi]$, thereby achieving exponential convergence as $p$ increases\cite{press07}.
A step-by-step overview of the proposed methodology for evaluating \eqref{eq:pc_theorem} is depicted in Fig.~\ref{alg:cp}. All numerical results and figures in this work were obtained using this methodology.\vspace{.1cm}
{\it Some comments regarding the numerical recipe in Fig.~\ref{alg:cp}:}
\begin{itemize}
\item Line 2: We exploit the fact that the higher-order $s$-differentiation can be moved outside the integral. This is especially useful because the $z$-integration can be efficiently computed using powerful built-in numerical integration tools with a maximum-error criterion.
\item Line 6: We used $p=m_{\text{\textnormal{D}}}+5$ throughout this work, which was found to yield a good balance between complexity and accuracy. Furthermore, we set $a=0.8$ and $b=1.2$.
\item Lines 7--9: This ``for''-loop is the most time-consuming task and should be parallelized whenever allowed by the hardware and numerical software.
\item Line 18: When $\mathtt{SNR}<\infty$, the linear combination of $\mathtt{SNR}$-related term and $\mathcal{A}(z,s,t)$ in the exponent of \eqref{eq:pc_theorem} must be differentiated at $t=1$. The former has first-order derivative $zm_{\text{\textnormal{D}}}/\mathtt{SNR}$ and higher-order derivatives equal to zero.
\end{itemize}
\begin{figure}[t]
\small
\begin{spacing}{1.1}
\begin{boxedalgorithmic}
\Procedure {Evaluation of \eqref{eq:pc_theorem}}{}
\State{$w_{0},\ldots,w_{m_{\text{\textnormal{D}}}-1}\gets s\text{-\textsc{Diff}}(m_{\text{\textnormal{D}}})$}
\State{$\mathtt{P}_{\text{\textnormal{MRC}}}=\sum_{k=0}^{m_{\text{\textnormal{D}}}-1}(-1)^{k+m_{\text{\textnormal{D}}}}\frac{w_k}{k!\Gamma(m_{\text{\textnormal{D}}})}$}
\EndProcedure
\Statex
\Function{$s$-Diff}{$m_{\text{\textnormal{D}}}$}\Comment{$s$-derivatives up to order $m_{\text{\textnormal{D}}}-1$}
\State{$s\gets[a,\ldots,b]$}\Comment{Chebyshev points, $0<a<1<b$}
\ForP{$\ell\gets0,p-1$}
\State $V[\ell]\gets \int_0^{\infty}t\text{-\textsc{Diff}}(z,s[\ell])\,\tfrac{\mathrm dz}{z}$\Comment{Values at Chebyshev points}
\EndForP
\State{$c_{0},\ldots,c_{p-1}\gets\eqref{eq:cheby_nodes}$}\Comment{Get all Chebyshev coefficients}
\For{$k\gets0,m_{\text{\textnormal{D}}}-1$}
\State{$\partial^{k}\tilde V(s)/\partial s^{k}|_{s=1}\gets\eqref{eq:cheby_der}$}\Comment{Differentiate interpolant}
\EndFor
\EndFunction
\Statex
\Function{$t$-Diff}{$z,s$}\Comment{$m_{\text{\textnormal{D}}}$-th $t$-derivative for specific $z,s$}
\State{$f(x)\gets e^{x}$}
\State{$g^{(1)}(1),\ldots,g^{(m_{\text{\textnormal{D}}})}(1)\gets$ \eqref{eq:diff_A} }\Comment{Get inner $t$-derivatives} \State{$\frac{\partial^{m_{\text{\textnormal{D}}}}}{\partial t^{m_{\text{\textnormal{D}}}}}f(g(t))\gets$ \eqref{eq:faa_bell}}\Comment{Invoke Fa\`{a} di Bruno's formula}
\EndFunction
\end{boxedalgorithmic}
\caption{Numerical recipe for proposed semi-numerical evaluation of \eqref{eq:pc_theorem}.}\label{alg:cp}\vspace{-.3cm}
\end{spacing}
\end{figure}
\subsection{Asymptotic Analysis of Dual-Branch MRC}\label{sec:asym}
Practical communications systems typically operate at rather small outage probabilities in order to be energy-efficient. It is therefore interesting to study the performance of MRC in the small outage probability regime, i.e., when $\mathtt{P}_{\text{\textnormal{MRC}}}\to1$. A second motivation for such an asymptotic analysis is that the resulting asymptotic outage probability expression often follows a fairly simple law that can be characterized in closed-form. In this regard, it would be advantageous to obtain an asymptotic expression for $\mathtt{P}_{\text{\textnormal{MRC}}}$ in \eqref{eq:pc_theorem} that does no longer contain an improper-integral over two higher-order derivatives. In the following, we will consider the asymptotic performance of dual-branch MRC in the absence of receiver noise. A similar though more bulky expression can be derived also for the case with receiver noise, however, with no additional insights.
\begin{figure*}[!t]
\centerline{\subfloat[Asymptotic Outage Probability]{\includegraphics[width=0.48\textwidth]{figure5}
\label{fig:asym_ps}}
\hfil
\subfloat[Relative Outage Probability Reduction]{\includegraphics[width=0.49\textwidth]{figure6}
\label{fig:rel_gain_single_antenna}}}
\caption{(a) Outage probability of dual-branch MRC in the low outage regime for exact, FC, and NC model. ``Blind MRC'' corresponds to $1-\mathtt{P}_{\text{\textnormal{MRC}}}^{\text{blind}}(2)$ in \eqref{eq:chopra}. Parameters are: $\lambda=10^{-3}$, $d=10$, $\alpha=3.5$, $m_{\text{\textnormal{D}}}=4$, $m_{\text{\textnormal{I}}}=1.5$. No receiver noise. (b) Relative outage probability reduction $\Delta_{\text{MRC-SA}}$ when switching from single-antenna to dual-branch MRC. Nakagami parameters are $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m$ (symmetric case). No receiver noise.}
\end{figure*}
\begin{corollary}[Asymptotic $\mathtt{P}_{\text{\textnormal{MRC}}}$]\label{cor:asym_cp} In the absence of noise, the success probability for dual-branch MRC under the described setting becomes
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MRC}}}&\hspace{-.05cm}\sim&\hspace{-.05cm}1- \kappa\,T^{2/\alpha}\,\frac{\Gamma(m_{\text{\textnormal{D}}}-\tfrac{2}{\alpha})\Gamma(m_{\text{\textnormal{I}}}+\tfrac{2}{\alpha})}{\Gamma(m_{\text{\textnormal{I}}})\,\Gamma(m_{\text{\textnormal{D}}})}\IEEEnonumber\\
&&\hspace{-.45cm}+\tfrac{2}{\alpha}\kappa\,T^{2/\alpha}\frac{\Gamma(2m_{\text{\textnormal{I}}}+\tfrac{2}{\alpha})}{B(m_{\text{\textnormal{I}}},m_{\text{\textnormal{D}}})}\hspace{-.05cm}
\sum\limits_{k=0}^{m_{\text{\textnormal{D}}}-1}\hspace{-.12cm}\frac{\Gamma(-\tfrac{2}{\alpha}+m_{\text{\textnormal{D}}}+k)\,C_k}{B(m_{\text{\textnormal{I}}},k+1)(m_{\text{\textnormal{I}}}+k)}\IEEEeqnarraynumspace\label{eq:asym_cp1}
\end{IEEEeqnarray}
\setcounter{mycounter}{\value{equation}}
\setcounter{equation}{\value{mycounter}+1}
as $T\to0$, where $B(x,y)~\raisebox{-0.03cm}{$\triangleq$}~\tfrac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the Beta function\cite{olver10}, $\kappa~\raisebox{-0.03cm}{$\triangleq$}~\pi\lambda d^2(m_{\text{\textnormal{D}}}/m_{\text{\textnormal{I}}})^{2/\alpha}$ and $C_k$ is given by \eqref{eq:asym_cp2} at the top of the page.
\end{corollary}
Note that \eqref{eq:asym_cp1} is a closed-form expression, i.e., it contains neither an improper integral nor higher-order derivatives. The integral in \eqref{eq:asym_cp2} can be solved using standard numerical software. For the special case $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=1$ (Rayleigh fading model) and $\alpha=4$, we obtain $C_0=2 +2^{-3/2}\log(6 - 4\sqrt{2})-2^{-1/2}\log(2+\sqrt{2})\approx0.753$ and \eqref{eq:asym_cp1} then reduces to
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MRC}}}\sim1-\kappa\,T^{1/2}\frac{\pi}{2}\left(1-\frac{3}{4}\cdot 0.753\right)\quad\text{as }T\to0.
\end{IEEEeqnarray}
Figure~\ref{fig:asym_ps} shows the outage probability for the exact, NC, and FC model in the small outage regime for $m_{\text{\textnormal{D}}}=4$, $m_{\text{\textnormal{I}}}=1.5$ and $\alpha=3.5$. Also shown is the asymptotic expression from \eqref{eq:asym_cp1} of Corollary~\ref{cor:asym_cp}. For reference, we also included the asymptotic outage probability expression from\cite[(5.24)]{chopra11} for $N$-antenna MRC for the isotropic interference model
\begin{IEEEeqnarray}{rCl}
1-\mathtt{P}_{\text{\textnormal{MRC}}}^{\text{blind}}(N)\sim\kappa\,T^{2/\alpha}\frac{\Gamma(m_{\text{\textnormal{I}}}+\tfrac{2}{\alpha})\,\Gamma(Nm_{\text{\textnormal{D}}}-\tfrac{2}{\alpha})}{\Gamma(m_{\text{\textnormal{I}}})\,\Gamma(Nm_{\text{\textnormal{D}}})}.\IEEEeqnarraynumspace\label{eq:chopra}
\end{IEEEeqnarray}
We refer to \eqref{eq:chopra} as the asymptotic outage probability for {\it interference-blind} MRC, since in the isotropic interference model the MRC combining weights in\cite{chopra11} depend only on the fading gains of the desired link, cf.~\cite[Sec.~5.5.2]{chopra11}. First, it can be seen that the semi-numerical approach discussed in Section~\ref{sec:diff} accurately reflects the performance also in the low outage regime. Furthermore, the asymptotic expression in \eqref{eq:chopra} for interference-blind MRC corresponds to the outage probability for the FC model as $T\to0$. This is intuitively clear as the combining weights for interference-blind MRC do not take into account varying interference power across antennas; as a result, the combining is performed presuming identical interference power at all antennas, which corresponds to the FC model.
We further observe that the NC model cannot capture the true diversity order as the diversity that can be harvested is significantly overestimated. A similar insight was obtained in\cite{tanbourgi13_2} for the case of Rayleigh fading links.
\begin{remark}\label{rem:mrc_gain}
The first term in \eqref{eq:asym_cp1} corresponds to the asymptotic success probability for single-antenna receivers, which was derived in\cite{ganti11}. Hence, the second term in \eqref{eq:asym_cp1} characterizes the success probability gain due to dual-branch MRC.
\end{remark}
By Remark~\ref{rem:mrc_gain}, the outage probability for the above special case $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=1$ and $\alpha=4$ is hence reduced by $56.2$\% when switching from single-antenna to dual-branch MRC in the asymptotic regime. We next extend this observation to the case of different $m$ and $\alpha$. Fig.~\ref{fig:rel_gain_single_antenna} shows the relative reduction in outage probability in the asymptotic regime when switching from a single-antenna system to dual-branch MRC. The relative reduction is denoted by $\Delta_{\text{MRC-SA}}$ and can be obtained by making use of Remark~\ref{rem:mrc_gain}. As expected, decreasing the per-antenna $\mathtt{SINR}$ variance through increasing either the path loss exponent $\alpha$ or the Nakagami parameter $m$ reduces the relative improvement of MRC. For typical path loss exponents $3<\alpha<6$, the relative improvement is $20\%<\Delta_{\text{MRC-SA}}<40\%$ for large $m$, and $40\%<\Delta_{\text{MRC-SA}}<70\%$ for small $m$ (close to Rayleigh fading).
\subsection{Comparison with other Diversity Combining Techniques}\label{sec:other_div_com}
\begin{figure*}[!t]
\centerline{\subfloat[Success probability]{\includegraphics[width=0.474\textwidth]{figure7}
\label{fig:comp_pc_mrc_se_mmse}}
\hfil
\subfloat[Relative Diversity Gain]{\includegraphics[width=0.48\textwidth]{figure8}
\label{fig:div_gain_mrc_se_mmse}}}
\caption{(a) Success probability vs. $\mathtt{SINR}$-threshold $T$ for different $\alpha$. Parameters are: $\lambda=10^{-3}$, $m=1$, $d=15$, $\mathtt{SNR}=\infty$. (b) Relative diversity gains $\Delta_{\text{MRC-SC}}$ and $\Delta_{\text{MRC-MMSE}}$ vs. path loss exponent $\alpha$.}
\vspace*{4pt}
\end{figure*}
Besides MRC there also exist other diversity combining techniques, which differ in both performance and implementation complexity. The latter is generally dictated by the system design and hardware requirements, and hence does not change with the radio environment. This is, however, not true for the expected performance, as different sets of assumptions about the radio environment may lead to significantly different performance predictions. In order to better understand the performance-complexity trade-offs involved in diversity combining techniques, it is therefore essential to study them under more realistic model assumptions. In the following, we will compare the expected performance of MRC with two other popular schemes, namely SC and MMSE combining, under spatially correlated interference.
In SC, only the branch with the highest instantaneous individual $\mathtt{SINR}$ is selected. SC therefore has a lower complexity at the cost of a lower performance compared to MRC. In\cite{haenggi12_1}, the success probability $\mathtt{P}_{\text{\textnormal{SC}}}$ of SC under correlated interference without noise was derived for Rayleigh fading ($m=1$) as
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{SC}}}&=&\sum_{n=1}^{N}(-1)^{n+1}\binom{N}{n}\exp\left(-\Delta\,T^{2/\alpha}D_n(2/\alpha)\right),\IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $\Delta~\raisebox{-0.03cm}{$\triangleq$}~\lambda\tfrac{2\pi^2}{\alpha} d^2\csc(2\pi/\alpha)$ and $D_{n}(x)~\raisebox{-0.03cm}{$\triangleq$}~\prod_{i=1}^{n-1}(1+x/i)$ is the so-called diversity polynomial.
In MMSE combining, the combining weights are chosen so as to maximize the post-combiner $\mathtt{SINR}$ under knowledge of the interference autocorrelation matrix. The success probability $\mathtt{P}_{\text{\textnormal{MMSE}}}$ for MMSE combining under Rayleigh fading ($m=1$) was derived in\cite{ali10} as
\begin{IEEEeqnarray}{rCl}
\mathtt{P}_{\text{\textnormal{MMSE}}}&=&Q\left(N,\Delta T^{2/\alpha}+\frac{d^{\alpha}T}{\mathtt{SNR}}\right).
\end{IEEEeqnarray}
Note that similar expressions for SC and MMSE combining for the case of Nakagami fading are currently not available in the literature. Generalizing the SC and MMSE results to Nakagami fading is beyond the scope of this contribution and is left for possible future work.
Figure~\ref{fig:comp_pc_mrc_se_mmse} compares the success probability of MRC, SC and MMSE combining for $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=1$ (Rayleigh fading) and different $\alpha$. The performance of MRC is sandwiched by SC on the lower end and MMSE combining on the upper end as expected. Interestingly, the success probabilities for MRC and MMSE combining become similar as $\alpha$ decreases. This means that for small $\alpha$ almost no benefit can be harvested from estimating the interference and adapting the combining weights accordingly, compared to simply treating interference as white noise. However, such a trend is not observed for SC, where the horizontal width of the success probability gap varies by no more than about 1.2 dB over a wide range of $T$, independently of $\alpha$. These observations are further elucidated in Fig.~\ref{fig:div_gain_mrc_se_mmse}, which shows the relative diversity gains $\Delta_{\text{MRC-SC}}~\raisebox{-0.03cm}{$\triangleq$}~\mathbb{E}[\sinr_{\text{\textnormal{MRC}}}]/\mathbb{E}[\mathtt{SINR}_{\text{SC}}]$ and $\Delta_{\text{MRC-MMSE}}~\raisebox{-0.03cm}{$\triangleq$}~\mathbb{E}[\sinr_{\text{\textnormal{MRC}}}]/\mathbb{E}[\mathtt{SINR}_{\text{MMSE}}]$ over $\alpha$ for the respective combining methods. The expectations in $\Delta_{\text{MRC-SC}}$ and $\Delta_{\text{MRC-MMSE}}$ can be obtained using the relation $\mathbb{E}[\mathtt{SINR}]=\int_0^{\infty}\mathbb{P}(\mathtt{SINR}>T)\,\mathrm dT$.
It can be seen that $\Delta_{\text{MRC-MMSE}}$ (in dB) grows almost linearly in $\alpha$. Relative to SC, the diversity gain of MRC is roughly above 1~dB for practically relevant path loss exponents. This gain over SC, however, is always smaller than in the well-studied interference-free case. In the latter, the relative diversity gain for Rayleigh fading ($m=1$) can be written for arbitrary $N$ in terms of the harmonic series as $\Delta_{\text{MRC-SC}}^{\text{no Int.}}(N)~\raisebox{-0.03cm}{$\triangleq$}~ N\,(\sum_{n=1}^{N}1/n)^{-1}$\cite{goldsmith05}, which yields $\Delta_{\text{MRC-SC}}^{\text{no Int.}}(2)\approx1.249$ dB for the dual-branch case. The fact that $\Delta_{\text{MRC-SC}}<\Delta_{\text{MRC-SC}}^{\text{no Int.}}(N)$ for arbitrary $N$ and $\alpha>2$ can be easily verified using Jensen's inequality\cite{feller71}
\begin{IEEEeqnarray}{rCl}
\Delta_{\text{MRC-SC}}
&=&\frac{\mathbb{E}\left[\frac{\mathsf{g}_{1}}{\mathsf{I}_{1}}+\ldots+\frac{\mathsf{g}_{N}}{\mathsf{I}_{N}}\right]}{\mathbb{E}_{\mathsf{g}_{1}\ldots\mathsf{g}_{N}}\left[\mathbb{E}_{\mathsf{I}_{1}\ldots\mathsf{I}_{N}}\left[\max\left\{\frac{\mathsf{g}_{1}}{\mathsf{I}_{1}},\ldots,\frac{\mathsf{g}_{N}}{\mathsf{I}_{N}}\right\}\right]\right]}\IEEEeqnarraynumspace\IEEEnonumber\\
&\overset{\mathrm{(a)}}{\leq}&\frac{N\mathbb{E}\left[\frac{\mathsf{g}}{\mathsf{I}}\right]}{\mathbb{E}_{\mathsf{g}_{1}\ldots\mathsf{g}_{N}}\left[\max\left\{\mathbb{E}_{\mathsf{I}_{1}}\left[\frac{\mathsf{g}_{1}}{\mathsf{I}_{1}}\right],\ldots,\mathbb{E}_{\mathsf{I}_{N}}\left[\frac{\mathsf{g}_{N}}{\mathsf{I}_{N}}\right]\right\}\right]}\IEEEeqnarraynumspace\IEEEnonumber\\
&\overset{\mathrm{(b)}}{=}&\frac{\mathbb{E}_{\mathsf{I}}\left[\mathsf{I}^{-1}\right]\,N\,\mathbb{E}[\mathsf{g}]}{\mathbb{E}_{\mathsf{I}}\left[\mathsf{I}^{-1}\right]\,\mathbb{E}_{\mathsf{g}_{1}\ldots\mathsf{g}_{N}}\left[\max\left\{\mathsf{g}_{1},\ldots,\mathsf{g}_{N}\right\}\right]}\IEEEeqnarraynumspace\IEEEnonumber\\
&=&\Delta_{\text{MRC-SC}}^{\text{no Int.}}(N),
\end{IEEEeqnarray}
where (a) follows from the fact that $\mathsf{g}_{1}/\mathsf{I}_{1},\ldots,\mathsf{g}_{N}/\mathsf{I}_{N}$, and hence the $\max$ function, are convex in $\mathsf{I}_{1},\ldots,\mathsf{I}_{N}$, together with Jensen's inequality; (b) follows from the $\mathsf{I}_{n}$ being identically distributed. Note that the inequality in (a) applies not only to the Rayleigh fading case.
Interestingly, we have $\Delta_{\text{MRC-SC}}\to\Delta_{\text{MRC-SC}}^{\text{no Int.}}(2)\approx1.249$ dB as $\alpha\to2$. This can be explained by the fact that as $\alpha\to2$, the $\mathsf{I}_{n}$ degenerate to $\mathsf{I}_{n}\equiv\infty$ almost surely\cite{HaenggiBook}. For a degenerate random variable, Jensen's inequality becomes an equality.
\subsection{Transmission Capacity for Dual-Branch MRC Receivers}\label{sec:tc}
In decentralized wireless networks such as \textit{ad hoc} networks, it is desirable to know the maximum number of local transmissions that can take place simultaneously subject to a quality of service constraint. Such a local throughput metric was introduced in\cite{weber10} under the term \textit{transmission capacity}, which is defined as
\begin{IEEEeqnarray}{c}
c(\epsilon)~\raisebox{-0.03cm}{$\triangleq$}~ \lambda(\epsilon)\,(1-\epsilon),\quad 0\leq\epsilon\leq1,
\end{IEEEeqnarray}
where $\epsilon$ is the (target) outage probability and $\lambda(\epsilon)$ is the maximum allowable density of simultaneously active transmitters such that the success probability is at least $1-\epsilon$. We refer to\cite{weber10,HaenggiBook} for further elaborations on this metric. Since the success probability is in general monotonic in $\lambda$, $\lambda(\epsilon)$ can be obtained by (numerically) solving $\mathtt{P}_{\text{\textnormal{MRC}}}$ in \eqref{eq:pc_theorem} for $\lambda$, yielding the transmission capacity under dual-branch MRC.
Figure~\ref{fig:tc} shows the transmission capacity under dual-branch MRC for different $m$ (identical Nakagami fading). Consistent with the observations made in Section~\ref{sec:simple_models}, the FC and NC models yield a slightly pessimistic and a significantly optimistic result, respectively. Interestingly, while the accuracy loss in the NC model scales with the Nakagami fading parameter as expected, the transmission capacity gap between the FC and the exact models is fairly small even for $m=1$. The transmission capacity for the single-antenna case is also shown for reference; it was computed using \eqref{eq:prob_nc} and setting $N=1$. As can be seen, tremendous gains can be obtained when switching from single-antenna to dual-branch MRC. These gains increase with the Nakagami fading parameter.
\section{Conclusion and Future Work}
In this paper, we developed a theoretical framework to analyze the post-combiner $\mathtt{SINR}$ for MRC under interference-induced correlation, independent Nakagami channel fading and receiver noise. An exact expression for the success probability was derived in semi-closed form for the dual-branch case. Our analysis concretely demonstrated that ignoring interference correlation, and thereby overestimating the true diversity, may result in significantly misleading results, whereas assuming the same interference level at all antennas, and thereby underestimating the true diversity, provides reasonable results when the Nakagami fading parameter of the interfering links is greater than one and/or the path loss exponent is large. In such scenarios, the frequently used full-correlation model may hence be justified. It was also shown that exploiting knowledge of the interference through MMSE combining, rather than treating it as white noise, does not provide substantial diversity gains compared to MRC when the path loss exponent is small. Also, the gain of MRC over SC in terms of diversity gain is smaller when interference dominates noise, and this gain decays with the path loss exponent. It is important to mention that the net performance of MRC, e.g., the average rate, will also depend on the temporal correlation of the fading channel as well as of the interference. Since the locations of interferers are likely to not change significantly within consecutive transmission attempts, positive temporal interference correlation will affect the joint statistics of the $\mathtt{SINR}$ over time\cite{Haenggi14twc}.
\begin{figure}[t!]
\centering
\includegraphics[width=.48\textwidth]{figure9}
\caption{Transmission capacity $c(\epsilon)$ vs. target outage probability $\epsilon$ for different $m_{\text{\textnormal{D}}}=m_{\text{\textnormal{I}}}=m$ (identical Nakagami fading). Parameters are: $T=3$, $d=10$, $\alpha=4$, $\mathtt{SNR}=6$~dB. Marks represent simulation results.}\label{fig:tc}
\end{figure}
This work has numerous extensions. Since our analysis was limited to the dual-branch case, a useful research direction would be to extend this framework to more than two receive antennas. The approach used in this work, namely first conditioning on the $\mathtt{SINR}$ in one branch and applying elementary point process theory results, does not look promising for this purpose; hence, a different approach that similarly benefits from basic stochastic geometry tools would be required. Besides MRC, the performance of other combining techniques should be studied under spatially-correlated interference and fairly general fading channels, e.g., Nakagami channel fading. For instance, one concrete problem in this direction could be to study the performance of MMSE combining under correlation and Nakagami fading, using tools developed in~\cite{ali10} and this paper. In addition to correlation in space, the impact of temporal interference correlation\cite{Haenggi14twc} on diversity combining techniques may further be of interest to develop robust re-transmission and/or space-time coding schemes for multi-antenna systems. Another rich direction of work is to extend this framework to account for multiple antennas at the transmitter.
Arguably one of the most enigmatic phenomena in the history of quantum optics is the discovery of superradiance by Dicke \cite{Dicke:1954,Rehler:1971,Bonifacio:1971,Friedberg:1973,Agarwal:1974,Gross:1982,Skribanowitz:1973,Vrehen:1977,Kaiser:2016,Thiel:2007,Maser:2009,Blatt:2008,Monz:2011,Wiegner:2011,Scully:2009,Scully:2009a,Svidzinsky:2013,Bienaime:2013,Feng:2014,Scully:2015,Longo:2016,Svidzinsky:2016,Rohlsberger:2010,Rohlsberger:2013,Bhatti:2015,DeVoe:1996}.
A key requirement in Dicke's work is the initial preparation of an ensemble of two-level atoms in a special class of collective states, so-called symmetric Dicke states. The startling gist is that even though atoms in these states have no dipole moment, they radiate with an intensity which is enhanced by a factor of $N$ compared to $N$ independent atoms.
The preparation of such states has been a challenge since.
The first experiments on superradiance were done in atomic vapors \cite{Skribanowitz:1973,Vrehen:1977,Gross:1982}, where it was assumed that the fully excited system in the course of temporal evolution would at some time be found in a Dicke state leading to the emission of superradiant light \cite{Rehler:1971,Bonifacio:1971,Gross:1982}.
The same assumption led to the recent observation of subradiance \cite{Kaiser:2016}.
\begin{figure*}
\centering \includegraphics{1.eps}
\caption{\label{fig:cavity_setup} Basic properties of the system. (a) Sketch of the system consisting of two atoms (A) that are coupled to a single-mode cavity (C) and driven by a coherent laser (L) with Rabi frequency $\eta$. Intracavity photons can leak through the mirrors by cavity decay ($\kappa$) and be registered by a detector (D). Another possible dissipative process is spontaneous emission ($\gamma$) by the atoms. The inset shows a magnified section of the arrangements of the atoms: One atom (depicted left) is fixed at an anti-node of the cavity field, while the other atom (right) can be scanned along the cavity axis causing a relative phase shift $\phi_z$ between the radiation of the atoms. (b,c) Energy levels and transitions of the system for (b) in-phase and (c) out-of-phase radiation of the atoms. In the case of two atoms, the state space consists of manifolds of four Dicke states with different intracavity photon numbers: For a fixed cavity state $\ket{n}$, the unentangled two atom ground and two atom excited state $\ket{gg}$ and $\ket{ee}$, respectively, as well as the maximally entangled symmetric and anti-symmetric Dicke states $\ket{\pm}$. For clarity, we neither draw the transitions due to cavity decay nor the detuning. (d) Energy levels and transitions of the corresponding system containing a single atom.}
\end{figure*}
Many of the mysteries behind this effect have begun to unfold only recently.
In the seventies, it was realized that the superradiant emission results from the strong quantum correlations among the atoms being prepared in symmetric Dicke states \cite{Agarwal:1974,Gross:1982}. Only lately, it became clear that the $N$-fold radiative enhancement can be explained by the multiparticle entanglement of the Dicke states.
For example, studying superradiance in a chain of Dicke-entangled atoms on a lattice enables one to identify that multiple interfering quantum paths lead to the collective subradiant and superradiant behavior \cite{Nienhuis:1987,Wiegner:2011}.
The strong entanglement of the states can already be inferred from the simpler two-atom case and holds for almost all Dicke states of a multi-atom system.
With the current advances in quantum information science, we indeed understand the great difficulties in precisely preparing such highly entangled multiparticle states.
There are proposals for the generation of whole classes of Dicke states using projective measurements \cite{Thiel:2007,Bastin:2009,Maser:2009}, yet these schemes have very low success probability. Deterministic entanglement has been produced with about a dozen qubits in the form of $W$-states \cite{Blatt:2008,Monz:2011}, but we are still far away from the realization of $W$-states for an arbitrary number of qubits.
Calculations show that these states, which can be considered to be the analog to single excitation Dicke states with appropriate phase factors, also produce an enhancement by a factor of $N$ \cite{Scully:2009,Scully:2009a,Wiegner:2011}.
In comparably simpler systems, yet with a higher number of excitations, one can also study the quantum statistical aspects of the collective emission \cite{Bhatti:2015}.
However, down to the present day the generation and measurement of higher excited multi-particle entangled Dicke states are challenging, so mostly systems with no more than one excitation have been realized. In these systems, the dynamics is still quite complex, yet superradiance and also subradiance can be fruitfully explored \cite{Scully:2009,Scully:2009a,Svidzinsky:2013,Feng:2014,Scully:2015,Longo:2016,Svidzinsky:2016,Bienaime:2013,Wiegner:2011}.
Experiments on superradiance with single photon excited Dicke states were also reported for nuclear transitions \cite{Rohlsberger:2010,Rohlsberger:2013}. A recent work discusses preparation of a single photon subradiant state and its radiation characteristics for atoms in free space \cite{Scully:2015}. Even applications of superradiance are beginning to appear. Lately, a laser with a frequency linewidth less than that of a single-particle decoherence linewidth was realized \cite{Bohnet:2012} by using more than one million intracavity atoms and operating in a steady-state superradiant regime \cite{Meiser:2010a,Meiser:2010}.
Despite these advances, one is still faced with the difficulties in the optical domain which arise from the infinite number of modes in free space and interatomic effects like the dipole-dipole interaction \cite{Agarwal:1974,Friedberg:1973}. It is thus evident that one needs to work with systems which have fewer degrees of freedom and where a precise preparation of entangled states is possible. This brings us to work with single-mode cavities \cite{DeVoe:1996,DeVoe:1984} with few atoms. The current technological progress in atom trapping and the availability of well-characterized single-mode cavities is making this ideal situation more and more a reality. Several experiments in the last two years have been reported using such well-characterized systems \cite{Reimann:2015,Casabone:2015,Neuzner:2016}.
The experiments consist of two coherently driven atoms \cite{Reimann:2015,Neuzner:2016} or entangled ions \cite{Casabone:2015} coupled to a single-mode cavity as depicted in Fig. \ref{fig:cavity_setup}(a). This setup enables one to study collective behavior as a function of various atomic and cavity parameters, e.g., the precise location of the atoms.
In spite of this recent progress on superradiant and subradiant behavior \cite{Scully:2009,Scully:2009a,Svidzinsky:2013,Feng:2014,Scully:2015,Longo:2016,Svidzinsky:2016,Bienaime:2013,Wiegner:2011} and the surge of new classes of experiments \cite{Reimann:2015,Casabone:2015,Neuzner:2016,Bohnet:2012}, there is yet no report of atomic light emission beyond that of superradiance.
In this paper, we demonstrate that a two-atom system coupled to a single-mode cavity is capable of radiating up to several orders of magnitude more strongly than a corresponding system consisting of two uncorrelated atoms, thereby exceeding the free-space superradiant emission by far. We call this effect hyperradiance. Surprisingly, hyperradiance occurs in a regime which one usually considers to be non-ideal, namely when the two atoms radiate out of phase. Such non-ideal conditions are rather expected to suppress superradiance, and thus one is not inclined to expect in this regime an emission burst exceeding the one of superradiance.
Although the study that we present is in the context of atomic systems, the results should be applicable to other types of two-level systems like ions \cite{DeVoe:1996,Stute:2012,Casabone:2015}, superconducting qubits \cite{Wallraff:2004,Fink:2009,Mlynek:2014} and quantum dots \cite{Hennessy:2007,Faraon:2008,Miguel-Sanchez:2013,Rundquist:2014,Leymann:2015}. We thus expect that our findings stimulate a multitude of new experiments in various domains of physics.
\section{Methods}
\subsection{System}
The investigated system follows the experiments of \cite{Reimann:2015,Neuzner:2016} consisting of two atoms (A) coupled to a single-mode cavity (C) as shown in Fig. \ref{fig:cavity_setup}(a). A laser (L) oriented perpendicular to the cavity axis coherently drives the atoms. In this paper, we fix one atom at an antinode of the cavity field, while we vary the position of the other atom along the cavity axis inducing a relative phase shift between the radiation of the atoms. The atoms within the cavity are modeled as two-level systems with transition frequency $\omega_A$, driven by a laser field at frequency $\omega_L$, and couple to a single-mode of the cavity with frequency $\omega_C=2\pi c/\lambda_C$.
The $i$th atom is characterized by spin-half operators $S_i^+ = \ket{e}_i\bra{g}_i$, $S_i^- = (S_i^+)^\dagger$ and $S_i^z = (\ket{e}_i\bra{e}_i - \ket{g}_i\bra{g}_i)/2$.
Bosonic annihilation and creation operators $a$ and $a^\dagger$ describe the intracavity mode.
The dynamical behavior of the entire system can be treated in a master equation approach \cite{Agarwal:2012} and is governed by
\begin{equation}
\label{eq:master}
\frac{d}{dt} \rho = -\frac{i}{\hbar} \left[H_0 + H_I + H_L, \rho \right] + \mathcal{L}_\gamma \rho + \mathcal{L}_\kappa \rho \, ,
\end{equation}
where $\rho$ is the density operator of the atom-cavity system.
In the interaction frame rotating at the laser frequency, atoms and cavity are described by $H_0 = \hbar \Delta (S^z_1+S^z_2) + \hbar \delta a^\dagger a$. Here, $\Delta=\omega_A-\omega_L$ is the atom-laser detuning and $\delta=\omega_C-\omega_L$ the cavity-laser detuning.
The Tavis-Cummings interaction term of atom-cavity coupling is given by
$H_I = \hbar \sum_{i=1,2} g_i \left( S_i^+ a + S_i^- a^\dagger \right)$
and can be obtained by utilizing the dipole approximation and applying the rotating wave approximation \cite{Tavis:1968}.
The term $g_i=g\cos (2\pi z_i / \lambda_C)$ describes the position-dependent coupling strength between cavity and $i$th atom. The interatomic distance $\Delta z$ induces a phase shift $\phi_z=2\pi\Delta z / \lambda_C$ between the radiation emitted by the two atoms. Since $\phi_z$ can be chosen $\pmod {2\pi}$, separations of the atoms much larger than the cavity wavelength $\lambda_C$ can be achieved in order to avoid direct atom-atom interactions as in \cite{Goldstein:1997}. Observe that in our setup at $\phi_z=\pi/2 \, (3\pi/2)$, only one atom is coupled to the cavity.
The coherent pumping of the atoms is characterized by the Hamiltonian $H_L = \hbar \eta \sum_{i=1,2} \left( S_i^+ + S_i^- \right)$. Hereby, it is assumed that the pumping laser with Rabi frequency $\eta$ propagates perpendicular to the cavity axis. Neglecting possible interatomic displacements in $y$-direction leads to a homogeneous driving of the atoms. Varying pump rates due to spatial variation of the laser phase could be absorbed into effective coupling constants of the atoms \cite{Casabone:2015}.
For fixed atomic transition dipole moment, $\eta$ indicates the strength of the coherent pump.
Spontaneous emission of the atoms at rate $\gamma$ is taken into account by the term $\mathcal{L}_\gamma \rho = \gamma/2 \sum_{i=1,2} \left( 2 S_i^- \rho S_i^+ - S_i^+S_i^- \rho - \rho S_i^+ S_i^- \right)$, whereas cavity decay at rate $\kappa$ is considered by the Liouvillian $\mathcal{L}_\kappa \rho = \kappa/2 \left( 2a\rho a^\dagger -a^\dagger a \rho - \rho a^\dagger a \right)$.
In this letter, we neglect marginal dephasing effects, which, for example, become relevant in the case of quantum dots.
In order to work out the dynamical behavior of the atom-cavity system, we have to solve Eq. \eqref{eq:master}, which depends on many parameters. Whereas $\eta$, $\delta$, and $\Delta$ can be easily varied, $g$, $\kappa$, and $\gamma$ are intrinsic properties and depend on the design of the cavity and the atomic system used.
The specific dynamics very much depends on the cavity coupling and the cavity $Q$-factor. Thus, to keep our discussion fairly general, it becomes necessary to solve the master equation in full generality so that the behavior in different regimes can be studied. We therefore resort to numerical techniques based on QuTiP \cite{Johansson:2012} and ensured the numerical convergence of our results by considering different cutoffs of the photonic Hilbert space.
\subsection{Transitions}
To clarify the dynamical behavior of the system, we make use of the collective basis states $\ket{gg}$, $\ket{ee}$ and $\ket{\pm}$ to describe the atoms. The symmetric and anti-symmetric Dicke state $\ket{\pm}=D_\pm^\dagger \ket{gg}=(\ket{eg} \pm \ket{ge})/\sqrt{2}$ are created by the collective Dicke operators $D_\pm^\dagger=(S_1^+ \pm S_2^+)/\sqrt{2}$ \cite{Agarwal:2012}.
Rewriting interaction and pumping Hamiltonian in terms of the collective operators $D_\pm^\dagger$ \cite{Fernandez-Vidal:2007}, yields a clear picture of the occurring transitions as can be seen in Fig. \ref{fig:cavity_setup}(b) and (c). The pumping term is then given by $H_L=\hbar \sqrt{2}\eta(D_+^\dagger + D_+)$ and gives rise to the transitions $\ket{gg,n}\overset{\eta}{\rightarrow}\ket{+,n}\overset{\eta}{\rightarrow}\ket{ee,n}$ with $n$ being the number of photons in the cavity mode. Hence, only symmetric Dicke state $\ket{+}$ and doubly excited state $\ket{ee}$ are pumped.
The interaction term, on the other hand, couples the cavity to $\ket{+}$ or $\ket{-}$ depending on the interatomic phase $\phi_z$. It reads $H_I=H_+ + H_-$ with $H_\pm = \hbar g_\pm(\phi_z) (a D_\pm^\dagger+a^\dagger D_\pm)$ and $g_\pm(\phi_z)=g(1\pm \cos (\phi_z)) /\sqrt{2}$.
In the case of in-phase radiation of the atoms, $g_{-}(\phi_z=0)=0$ and the anti-symmetric Dicke state $\ket{-}$ is decoupled from the dynamics. Possible atom-cavity interactions then proceed via the states $\ket{ee,n}\overset{g_{+}}{\longleftrightarrow}\ket{+,n+1}\overset{g_{+}}{\longleftrightarrow}\ket{gg,n+2}$, see also Fig. \ref{fig:cavity_setup}(b).
For atoms radiating out of phase, however, $g_+(\phi_z=\pi)=0$ and the cavity only couples via $\ket{-}$, i.e. $\ket{ee,n}\overset{g_{-}}{\longleftrightarrow}\ket{-,n+1}\overset{g_{-}}{\longleftrightarrow}\ket{gg,n+2}$. Note that although only the symmetric Dicke state $\ket{+}$ is pumped by the applied coherent field, the photon number in the cavity is non-zero for an out of phase radiation of the atoms due to higher-order processes, which can populate the state $\ket{-}$.
These are direct cavity coupling $\ket{ee,n} \overset{g_{-}}{\rightarrow} \ket{-,n+1}$ and spontaneous emission $\ket{ee,n} \overset{\gamma}{\rightarrow} \ket{\pm,n} \overset{\gamma}{\rightarrow} \ket{gg,n}$, see also Fig. \ref{fig:cavity_setup}(c). Note that the latter process, of course, takes place for $\phi_z=0$ as well as $\phi_z=\pi$.
For a phase in between, both couplings are present as $g_-(\phi_z)$ and $g_+(\phi_z)$ will be nonzero. For the sake of completeness, we list the transitions due to cavity decay which read $\ket{.,n} \overset{\kappa}{\rightarrow} \ket{.,n-1}$ and are possible for all values of $\phi_z$.
\begin{figure*}
\centering \includegraphics{2.eps}
\caption{\label{fig:R_outofphase} Radiance witness $R$ for different regimes as a function of the interatomic phase $\phi_z$ and pumping rate $\eta$. The color encodes six different regimes of radiation, i.e., extremely subradiant (black), subradiant (blue), uncorrelated (light blue), enhanced (yellow), superradiant (orange), and hyperradiant (red). Dotted, dashed, and solid curves in the figures indicate the mean photon numbers $\braket{a^\dagger a}_2=0.01,0.1,1$, respectively.
(a) 3D plot and 2D surface map of the predominant hyperradiant area for $\gamma=\kappa$, $g=10\kappa$ and no detuning. Here, the superradiant and uncorrelated scattering area are very small and can hardly be seen.
(b,c) Results for bad and intermediate cavity with $\gamma=\kappa$, no detuning and (b) $g=0.1\kappa$, (c) $g=\kappa$.
(d,e) Influence of the detuning on hyperradiance with $\gamma=\kappa$, $g=10\kappa$ and (d) $\delta=\Delta=\kappa$, (e) $\delta=\Delta=10\kappa$.}
\end{figure*}
\subsection{Radiance witness $R$}
In the considered setup, it is natural to measure the emitted radiation at an external detector (D) placed along the cavity axis, see Fig. \ref{fig:cavity_setup}(a). As the pumping beam (L) is perpendicular to the cavity axis and thus will not contribute photons along the cavity axis, the registered mean photon number at the detector (D) is proportional to the corresponding intracavity quantity.
By performing a reference simulation of a single atom located at an antinode of the cavity field, we are thus able to quantify the radiant character of the two-atom system as a function of the correlations of the two atoms by use of a radiance witness
\begin{equation}\label{eq:superradiance-witness}
R:=\frac{\braket{a^\dagger a}_{2}-2 \braket{a^\dagger a}_{1}}{2 \braket{a^\dagger a}_{1}} \, ,
\end{equation}
involving the intracavity bosonic operators $a$ and $a^\dagger$.
Here, $\braket{a^\dagger a}_i$ is the steady-state mean photon number with $i=1,2$ atoms in the cavity. The factor $2$ arising in front of $\braket{a^\dagger a}_{1}$ results from the comparison of the coupled two-atom system to the system of two uncorrelated atoms, while the denominator in Eq. \eqref{eq:superradiance-witness} yields a normalization of $R$.
The witness $R$ is composed of experimental observables, i.e., photon numbers, which can be measured as in \cite{Reimann:2015}. A possible detection strategy for $R$ is, for instance, to scan the second atom from $\phi_z=\pi/2$ to $\phi_z=\theta$, thereby realizing the transition from effectively one atom coupled to the cavity to two atoms radiating in phase ($\theta=0$) or out of phase ($\theta=\pi$) into the cavity mode. By evaluating the experimental data according to Eq. \eqref{eq:superradiance-witness}, the radiance witness $R$ can be obtained.
$R=0$ reveals an uncorrelated scattering, where the scattering of the two atom-cavity system is simply the sum of two independent atoms in the cavity.
A value of $R$ different from zero thus indicates correlations between the atoms.
Negative or positive values of $R$ signal a suppressed or enhanced radiation of the two atom-cavity system, respectively.
$R=1$, in particular, implies that the radiation scales with the square of the number of atoms $\propto N^2$, which is called superradiance with respect to the free-space scenario \cite{Dicke:1954}.
Atoms confined to a cavity, however, feel a back-action of the cavity field which modifies their collective radiative behavior, allowing for a remarkable new possibility, $R>1$. In fact, we found regimes with $\braket{a^\dagger a}_2 > 50 \braket{a^\dagger a}_1$, yielding $R$ greater than $24$. In order to emphasize this phenomenon, we call the domain of $R>1$ hyperradiant.
The atomic correlation quantity $\braket{S^+S^-}$, on the other hand, can be used to obtain the sideway radiation of the atoms. Note that in the bad cavity regime, $R$ reduces to the definition in terms of atomic operators as in \cite{Meiser:2010a,Meiser:2010,Bohnet:2012} due to adiabatic elimination, while in good cavities with $g > \gamma$, the emission of photons into the cavity mode dominates over spontaneous emission into side modes. $R$ thus constitutes a very natural witness for the setup of Fig. \ref{fig:cavity_setup}(a).
\subsection{Semiclassical treatment}
Several phenomena of fundamental atom-light interaction can be fully analyzed within a semiclassical framework, even atoms coupled to a cavity in the weak atomic excitation limit. In a semiclassical approximation, one decouples the dynamics of atoms and cavity, i.e., $\braket{aS^z} \approx \braket{a}\braket{S^z}$ and assumes a vanishing atomic excitation leading to $\braket{S_i^z}\approx -1/2$. In steady state, one is able to deduce an analytical result for $\braket{a}$, which is proportional to the intracavity field. In terms of the parameters of the system, it reads
\begin{equation}\label{eq:classical-field}
\braket{a}=\frac{\eta}{g} \frac{N\mathcal{G}}{\frac{1}{g^2}\left(\frac{\gamma}{2}+i\Delta\right)\left(\frac{\kappa}{2}+i\delta\right)-N\mathcal{H}} \, ,
\end{equation}
where $N$ is the number of atoms inside the cavity. We further introduced the two collective coupling parameters, $\mathcal{H}=N^{-1}\sum_{i=1}^N \cos^2 [2\pi z_i/\lambda_C]$ along the cavity and $\mathcal{G}=N^{-1}\sum_{i=1}^N \cos [2\pi z_i/\lambda_C]$ for the incident beam \cite{Tanji-Suzuki:2011}, which involve the position-dependent atom-cavity couplings $g_i$.
In the investigated two-atom system, these can be written as a function of the interatomic phase only: $\mathcal{H}(\phi_z)=[1+\cos^2(\phi_z)]/2$ and $\mathcal{G}(\phi_z)=[1+\cos(\phi_z)]/2$. Equation \eqref{eq:classical-field} can also be derived in a classical framework, where the atoms are treated as radiating dipoles that couple to a non-quantized standing-wave optical resonator \cite{Tanji-Suzuki:2011}. Hereby, one exploits the condition that the intracavity field needs to match itself after a round trip in order to be sustained by the resonator.
Observe that the \mbox{(semi-)}classical intracavity field is proportional to $\mathcal{G}(\phi_z)$. For an out-of-phase configuration, it holds $\mathcal{G}(\phi_z=\pi)=0$ and thus semiclassical treatment predicts a vanishing intracavity field.
\section{Results and Discussion}
In what follows, we study the radiance witness $R$ of Eq. \eqref{eq:superradiance-witness} for the setup of Fig. \ref{fig:cavity_setup}(a) in a very broad regime of parameters. In Fig. \ref{fig:R_outofphase}, for example, we plot $R$ as a function of the interatomic phase $\phi_z$ and the pumping rate $\eta$, covering values of $\eta$ that are weak as well as strong compared to the atomic spontaneous emission rate $\gamma$. In all figures we set $\gamma=\kappa$, where $\kappa$ is the cavity decay rate, while the other parameters are varied from figure to figure.
We categorize the value range of $R$ into six different classes, which are depicted in unified colors: extremely subradiant ($R<-0.5$, black), subradiant ($-0.5<R<0$, blue), uncorrelated ($R=0$, light blue), enhanced ($0<R<1$, yellow), superradiant ($R=1$, orange) and hyperradiant ($1<R$, red) scattering. For the color scheme see also the color palette of Fig. \ref{fig:R_outofphase}. Dotted, dashed, and solid curves in the figures indicate mean photon numbers $\braket{a^\dagger a}_2=0.01,0.1,1$, respectively.
\begin{figure}
\centering \includegraphics{3.eps}
\caption{\label{fig:R_outofphase_profiles} Results for different cavities. Vertical cuts of the radiance witness at $\eta\approx0.5\kappa$ as a function of the interatomic phase $\phi_z$ for different types of cavities: blue (dashed) for a bad cavity corresponding to Fig. \ref{fig:R_outofphase}(b); green (dotdashed) for an intermediate cavity corresponding to Fig. \ref{fig:R_outofphase}(c); and black (bold) with highlighted hyperradiant area ($R>1$) for a good cavity corresponding to Fig. \ref{fig:R_outofphase}(a).}
\end{figure}
In good cavities and for atoms radiating out of phase the system can exhibit the phenomenon of hyperradiance, see Fig. \ref{fig:R_outofphase}(a). The radiation can distinctly exceed that of two atoms emitting in phase with otherwise identical parameters, thereby also surpassing the free-space limit $R=1$.
This is due to the synergy of two effects:
Higher-order processes can populate the doubly excited atomic Dicke state $\ket{ee}$, see Fig. \ref{fig:cavity_setup}(b) and (c). In the case of $\phi_z=\pi$, this leads to the emission of single photons into the cavity via the transition $\ket{ee,n}\overset{\gamma}{\rightarrow} \ket{-,n} \overset{g_{-}}{\rightarrow} \ket{gg,n+1}$ or even photon pairs via $\ket{ee,n}\overset{g_{-}}{\rightarrow} \ket{-,n+1} \overset{g_{-}}{\rightarrow} \ket{gg,n+2}$ producing superradiant or even hyperradiant light.
For $\phi_z=0$, however, cavity backaction prevents the excitation of the atoms. This is due to vacuum Rabi splittings \cite{Agarwal:1984,Zhu:1990,Thompson:1992,Tabuchi:2014,Abdurakhimov:2015} of the intracavity field, which for a driving laser on resonance lead to a suppressed excitation of the atoms. The latter can also be interpreted as a destructive quantum path interference \cite{Fernandez-Vidal:2007} between the laser-induced excitation $\ket{gg,n}\overset{\eta}{\rightarrow}\ket{+,n}$ and the cavity-induced excitation $\ket{gg,n+1}\overset{g_{+}}{\rightarrow}\ket{+,n}$, see Fig. \ref{fig:cavity_setup}(b), resulting in subradiant light. The interpretation holds true for uncorrelated atoms, where the interfering terms can be seen in Fig. \ref{fig:cavity_setup}(d) and read $\ket{g,n}\overset{\eta}{\rightarrow}\ket{e,n}$ and $\ket{g,n+1}\overset{g}{\rightarrow}\ket{e,n}$, respectively, which when superimposed yield little excitation of the atom. For two atoms radiating out of phase, however, this backaction is suppressed as the cavity couples to the anti-symmetric Dicke state $\ket{-}$ and thus the pathway from $\ket{gg,n+1}$ to $\ket{+,n}$ is not allowed. As a result, we observe hyperradiance.
Note that in contrast to the coherent light emitted by a laser, the hyperradiant light is (super-)bunched (as revealed by a second-order correlation function at zero time $g^{(2)}(0) > 1$) due to the emission of photon pairs in the out-of-phase configuration (see Fig. \ref{fig:cavity_setup}(c)). Moreover, lasing is commonly observed when atoms radiate in phase \cite{Meiser:2009}. Opposed to that, in the investigated system the atoms radiate out of phase in the hyperradiant regime.
\begin{figure}
\centering \includegraphics{4.eps}
\caption{\label{fig:R_inphase} Comparison of in-phase and out-of-phase radiation. Both figures constitute a plot of $R$ as a function of pumping rate $\eta$ and atom-cavity coupling $g$, where $g=0.1\kappa \rightarrow 10\kappa$ reflects the transition from bad to good cavities. Results are shown for $\gamma=\kappa$, no detuning and (a) atoms radiating in phase ($\phi_z=0$), (b) atoms radiating out of phase ($\phi_z=\pi$). For the clarification of the color code as well as dotted, dashed and solid line, see Fig. \ref{fig:R_outofphase}.}
\end{figure}
In intermediate and bad cavities with $g \lesssim \kappa$ the radiation of two atoms out of phase is, however, highly suppressed, see Fig. \ref{fig:R_outofphase}(b) and (c). At $\phi_z=\pi$, the cavity couples to the anti-symmetric Dicke state $\ket{-}$, which is often also called the dark state \cite{Casabone:2015}. When the atoms are driven well below saturation, the coherent laser only pumps the symmetric Dicke state $\ket{+}$ (bright state).
As a consequence, the cavity mode is almost empty due to destructive interference of the radiation emitted from the two atoms \cite{Fernandez-Vidal:2007}. The radiant character is extremely subradiant, with $R<-0.9$, i.e., $\braket{a^\dagger a}_2 < 0.2\, \braket{a^\dagger a}_1$.
By contrast, two atoms emitting in phase into an intermediate cavity change their radiant character at higher pumping $\eta$. Here, even at low pumping rates photons can be emitted into the cavity mode via the laser-pumped state $\ket{+}$.
In Fig. \ref{fig:R_outofphase}(c), for instance, for $\eta \lesssim \kappa$ the pumping strength is not sufficient to pump both atoms, leading to subradiant behavior. At higher $\eta$, both emitters at first scatter in an uncorrelated manner, before higher-order processes via $\ket{ee}$ reinforce the atom-cavity coupling, leading to enhanced radiation. If $\eta$ becomes too large, the already mentioned destructive quantum path interference takes place. This also occurs at high pumping rates in bad cavities, see Fig. \ref{fig:R_outofphase}(b), while at lower $\eta$ the in-phase radiation is mainly enhanced. In the limit of an extremely bad cavity, corresponding to a free-space setup, superradiant scattering is recovered ($R \rightarrow 1$) for an in-phase configuration.
The detuning can change the radiant behavior drastically, see Fig. \ref{fig:R_outofphase}(d) and (e). By comparing to the undetuned results of Fig. \ref{fig:R_outofphase}(a), we can infer that small detuning of the order of $\delta=\Delta=\kappa$ (d) weakens the hyperradiant behavior while in systems with stronger detuning of the order of $\delta=\Delta=10\kappa$ (e), no hyperradiance can be observed and the light is predominantly subradiant for atoms radiating out-of-phase.
In the experimental realization by Reimann et al. \cite{Reimann:2015}, the authors measure the intensity of the system in a regime where the radiation is suppressed independently of the interatomic phase. Using the parameters of \cite{Reimann:2015}, we observe a transition of the witness from $R=-0.37$ in the case of $\phi_z=0$ to $R=-1.00$ in the case of $\phi_z=\pi$. Thus, the system becomes extremely subradiant as the atoms tend to radiate out of phase.
In fact, one could guess that atoms radiating out-of-phase scatter predominantly subradiantly, as observed in all previously mentioned experiments \cite{Reimann:2015,Casabone:2015,Neuzner:2016}. This is the case in bad and intermediate cavities (see dashed and dot-dashed curve in Fig. \ref{fig:R_outofphase_profiles}), or at high detuning. Yet, when studying the behavior in a good cavity and zero detuning, we find that the number of photons within the system can become much larger than in the corresponding setup with uncorrelated atoms. The bold line of Fig. \ref{fig:R_outofphase_profiles} displays this tendency of $R$ for $\eta \approx 0.5\kappa$. Here, the transition of atoms radiating in phase to atoms radiating out of phase is accompanied by the transition from (extreme) subradiance to hyperradiance. In order to observe hyperradiance, the previous experiments \cite{Reimann:2015,Casabone:2015,Neuzner:2016} would need to adapt to the parameters of Figs. \ref{fig:R_outofphase}(a) and \ref{fig:R_inphase}(b).
A brief comparison of atoms located at anti-nodes of a cavity can be seen in Fig. \ref{fig:R_inphase}. Here, the radiation of two atoms radiating in-phase ($\phi_z=0$) and two atoms radiating out-of-phase ($\phi_z=\pi$) is compared over a wide range of coupling constants $g: 0.1\kappa \rightarrow 10\kappa$ reflecting the transition from a bad to a good cavity.
For an in-phase radiation of the atoms, see Fig. \ref{fig:R_inphase}(a), the radiant character hardly depends on the pumping rate as long as $\eta \lesssim \kappa$ but is determined by $g$: for $g\gtrsim 0.5\kappa$ ($g\lesssim 0.5\kappa$), the radiation is subradiant (enhanced) and at $g\approx 0.5\kappa$ uncorrelated.
Note that $g/\kappa \rightarrow 0$ reflects a free-space setting for which our calculations show that superradiant scattering is recovered, $R\rightarrow 1$.
In Fig. \ref{fig:R_inphase}(b), we compare these findings to atoms radiating out of phase in the same parameter range. While for atoms radiating in phase, the transition from bad to good cavities goes along with the transfer from superradiance or enhanced radiation to subradiance, the situation is reversed for atoms radiating out of phase. Here they radiate subradiantly in bad cavities, whereas their radiation in good cavities can exceed the superradiant limit distinctly, finally ending up in hyperradiance, which can be explained via quantum path interference.
\begin{figure}
\centering
\includegraphics{5.eps}
\caption{Comparison of classical and quantum mechanical treatment. The ratio $\left|\braket{a}\right|^2/\braket{a^\dagger a}$ comparing the classical intensity of the intracavity field with the quantum-mechanical mean photon number is shown as a function of the interatomic phase $\phi_z$ for $g=10\kappa$, $\gamma=\kappa$ and $\eta=0.1\kappa$.}
\label{fig:quantum}
\end{figure}
Interestingly, the classical treatment of the discussed setup predicts an intracavity field that vanishes in the case of an out-of-phase configuration, see Eq. \eqref{eq:classical-field} with $\mathcal{G}(\phi_z=\pi)=0$. One can quantify the deviation from the classical approach by considering the ratio $\left|\braket{a}\right|^2/\braket{a^\dagger a}$, which compares the classical intensity of the intracavity field with the quantum-mechanical mean photon number. A deviation from unity reveals quantum features displayed by the system. For the investigated two atom-cavity system, $\left|\braket{a}\right|^2/\braket{a^\dagger a}$ equals one for an in-phase configuration, but tends to zero as $\phi_z\rightarrow \pi$, see Fig. \ref{fig:quantum}.
A value of the ratio in Fig. \ref{fig:quantum} below one corresponds to the quantum theory predicting a higher intensity than the classical approach.
Therefore, the occurrence of hyperradiance even in the low pumping regime $\eta \approx 0.1 \kappa$ can only be explained in a full quantum-mechanical treatment, revealing the true quantum origin of the phenomenon of hyperradiance.
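The deviation can be illustrated with the same type of steady-state calculation; the sketch below again relies on an assumed phase convention and rate normalization and is not a substitute for the analytical treatment.
\begin{verbatim}
import numpy as np
import qutip as qt

kappa, gamma, g, eta, N = 1.0, 1.0, 10.0, 0.1, 12

def ratio_classical_quantum(phi_z):
    """|<a>|^2 / <a^dag a> in the steady state for two driven atoms."""
    a   = qt.tensor(qt.destroy(N), qt.qeye(2), qt.qeye(2))
    sm1 = qt.tensor(qt.qeye(N), qt.sigmam(), qt.qeye(2))
    sm2 = qt.tensor(qt.qeye(N), qt.qeye(2), qt.sigmam())
    g1, g2 = g, g * np.cos(phi_z)              # assumed phase convention
    H = (g1 * (a.dag() * sm1 + a * sm1.dag())
         + g2 * (a.dag() * sm2 + a * sm2.dag())
         + eta * (sm1.dag() + sm1 + sm2.dag() + sm2))
    c_ops = [np.sqrt(2 * kappa) * a,
             np.sqrt(2 * gamma) * sm1,
             np.sqrt(2 * gamma) * sm2]
    rho = qt.steadystate(H, c_ops)
    return abs(qt.expect(a, rho))**2 / qt.expect(a.dag() * a, rho)

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi_z = {phi:4.2f}   ratio = {ratio_classical_quantum(phi):.3f}")
\end{verbatim}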
\section{Conclusions}
In conclusion, we have demonstrated a new phenomenon in the collective behavior of coherently driven atoms, which we call hyperradiance. In this regime, the radiation of two atoms in a single-mode cavity coherently driven by an external laser can considerably exceed the free-space superradiant behavior. Hyperradiance occurs in good cavities and, surprisingly, for atoms radiating out of phase. The effect cannot be explained in a \mbox{(semi-)}classical treatment, revealing its true quantum origin. Moreover, by modifying merely the interatomic phase, crossovers from subradiance to hyperradiance can be observed.
Our results should stimulate new experiments examining the possibility of observing hyperradiance in this fundamental system, consisting of a cavity coupled to any kind of two-level system, such as atoms, ions, superconducting qubits or quantum dots.
\section*{Acknowledgments}
M.-O.P. gratefully acknowledges the hospitality at the Oklahoma State University.
The authors gratefully acknowledge funding by the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (DFG) in the framework of the German excellence initiative.
Some of the computing for this project was performed at the OSU High Performance Computing Center at Oklahoma State University supported in part through the National Science Foundation grant OCI–1126330.
\label{sec:intro}
In the Minimal Supersymmetric Standard
Model (MSSM)~\cite{mssm_1} two Higgs
doublets are required, giving rise to
five Higgs bosons: a charged scalar pair, \Hpm; two neutral scalars, \h
and \bigH; and a neutral pseudoscalar, \A.
Within this framework, the \h and \A are predicted to be the lightest Higgs
particles and, therefore, the most likely to be observed at LEP.
The two main production mechanisms are investigated in this letter:
\begin{align}
&\epemtohZ \label{eq:higgstrahlung} \\
&\epemtohA. \label{eq:pairproduction}
\end{align}
Process~(\ref{eq:higgstrahlung}) is very similar to the dominant
Standard Model Higgs production mechanism, for which L3 has set a
lower limit on the mass of the Higgs at
95.3\GeV~\cite{l3_sm_higgs_99_paper}. The production rate
for~(\ref{eq:higgstrahlung}) is, in general, smaller than that of the
Standard Model reaction, but this is compensated by the additional
pair-production process~(\ref{eq:pairproduction}).
Previous searches for the h and A bosons have been reported by L3~\cite{l3_1998_16} and
other
experiments~\cite{opal_4}.
In this letter, our sensitivity to these particles is extended by
including the data taken at $\rts=189\GeV$ and by scanning over a
larger MSSM parameter space.
\section*{Data and Monte Carlo Samples}
The data were collected using the L3
detector~\cite{l3_1990_1}
at LEP during 1998. The integrated luminosity is 176.4\pb at an
average center-of-mass energy of 188.7\GeV.
The signal cross sections and branching ratios are calculated using
the HZHA generator~\cite{hzha}. For the efficiency studies, Monte
Carlo samples of Higgs events are generated using
PYTHIA~\cite{pythia_1} and HZHA. For the
background studies, the following Monte Carlo programs are used:
PYTHIA (\epemtoqqg), KORALW~\cite{koralw_1} (\epemtoWW),
KORALZ~\cite{koralz} (\epemtotautau),
PHOJET~\cite{phojet_1} (\epemtoeeqq),
EXCALIBUR~\cite{excalibur} (\epemtoffff) and PYTHIA (\epemtoZZ and
$\epem\!\rightarrow\!\Z\epem$). The number of simulated background
events for the most important background channels is typically 100
times the number of collected data events. The Monte Carlo signals
are 300 times the number of events expected to be observed with these
luminosities.
The L3 detector response is simulated using the GEANT~3.15
program~\cite{geant}, which takes into account the effects of energy
loss, multiple scattering and showering in the detector. The GHEISHA
program~\cite{gheisha} is used to simulate hadronic interactions in
the detector.
\section*{Analysis Procedures}
The search for \hA and \hZ production is carried out within a constrained
MSSM assuming unification of the scalar fermion masses, the gaugino masses
and the trilinear Higgs-fermion couplings at the GUT scale.
This choice has little impact on the Higgs mass phenomenology but reduces
significantly the number of free parameters.
The universal scalar fermion mass $m_0$ and the gaugino mass parameter
$M_2$ are fixed to \mbox{1 TeV}. The Higgs mass parameter $\mu$ is set
to $-0.1\TeV{}$. Two extreme scenarios are considered
corresponding to maximal and minimal scalar top mixing as suggested in
\mbox{Reference~\cite{lep2_higgs}}. The minimal mixing scenario corresponds
to setting the trilinear Higgs-fermion coupling $A$ to zero. Maximal scalar
top mixing occurs at $A=\sqrt{6}\;\rm{TeV}$. A scan is then performed, in
each mixing scheme, over
the two remaining free parameters \mA and \tanb. For this search,
the minimum value of
\tanb considered has been decreased from 1.0 to 0.7 and the minimum \A
mass considered has been decreased from 30\GeV to 10\GeV with respect to
our previous publication. Values of \mA in the range $\mA<10$\GeV
have been previously excluded at LEP~\cite{opal_2}.
The two Higgs production mechanisms, \epemtohA and \epemtohZ, vary in
relative importance as a function of \tanb. The production of \hA is
dominant at high \tanb, while \hZ production is dominant at low \tanb.
The description of the \hZ analyses at $\rts=189\GeV$ of the decay
modes other than \hZtobbqq and \hZtobbtt can be found in
Reference~\cite{l3_sm_higgs_99_paper}. The analyses for \hZtobbqq and
\hZtobbtt(\tautau\qqbar) used in this letter have been optimized to
account for the analogous signatures in the \hA channel: \hAtobbbb and
\hAtobbtt.
For values of \mA less than 30\GeV, decays of the \h into a pair of \A bosons
become possible. The \A decays predominantly to b quarks and tau
leptons for most of the \tanb region probed. The \hZtobbqq analysis
has a significant cross-efficiency for the \hZtoAAff channel
and is used to search for this process.
Common search procedures are applied to both the \hA and \hZ
channels. First, a preselection is applied which significantly
reduces background while keeping high signal efficiency. This is
especially effective against background from the two-photon interaction,
which has a large
cross section at these LEP energies. Second, a final set of selection
cuts is chosen to distinguish signal from background. Once the final
selection has been applied, a discriminating variable as defined
in \mbox{Reference~\cite{l3_sm_higgs_99_paper,l3_1998_16}} is calculated
for each scan point in the $(\tanb,\mA)$ plane.
There is a significant overlap in the selection for \hA and \hZ in
both the channels involving either four jets, or two jets and two
taus. The confidence level calculation requires that all events be
uniquely assigned to a given channel. To this end, for events that
pass both the \hA and \hZ selections, a unique assignment is made based on the
reconstructed masses and the relative production rates at each scan
point.
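A minimal sketch of such an assignment is given below; weighting the mass-$\chi^2$ probability by the expected production rate is one plausible reading of the procedure, and all numbers are hypothetical placeholders rather than L3 values.
\begin{verbatim}
def assign_channel(p_mass_hA, p_mass_hZ, rate_hA, rate_hZ):
    """Illustrative assignment of an overlap event to the hA or hZ stream.

    p_mass_*: probability of the reconstructed-mass chi^2 under each hypothesis
    rate_*  : expected production rate (cross section x branching ratio x
              efficiency) at the scan point under study
    """
    weight_hA = rate_hA * p_mass_hA
    weight_hZ = rate_hZ * p_mass_hZ
    return "hA" if weight_hA >= weight_hZ else "hZ"

# Example: the dijet masses favour the hZ hypothesis, but the scan point
# sits at high tan(beta), where hA production dominates.
print(assign_channel(p_mass_hA=0.08, p_mass_hZ=0.35, rate_hA=4.1, rate_hZ=1.2))
\end{verbatim}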
\subsection*{The $\boldsymbol{\hAtobbbb}$\, and $\boldsymbol{\hZtobbqq}$\, Channels}
\label{sec:bbbb}
The signature of both the \hAtobbbb and \mbox{\hZtobbqq} decay modes
is four high-multiplicity hadronic jets and the presence of b hadrons.
The dominant backgrounds come from \qqbar production and hadronic
decays of \W pairs and \Z pairs. In the case of \hAtobbbb, the
identification of b hadrons plays an especially important role.
The analysis follows closely that of Reference~\cite{l3_1998_16}.
First, a high multiplicity hadronic preselection, common to both \hA
and \hZ, is applied which eliminates background from the two-photon
interaction. The
preselection is similar to the one used at $\rts=183\GeV$
and only minor changes are made to account for
the increased center-of-mass energy. Events passing the preselection
are then forced to have four jets using the DURHAM~\cite{DURHAM}
clustering algorithm, and a kinematic fit requiring four-momentum
conservation (4C) is performed.
Once the preselection has been satisfied, an optimization procedure
is applied on the Monte Carlo to choose
cuts on variables that maximize the separation between signal and
background. These optimized cuts serve mainly to reject the multi-jet
QCD background and are dependent on the topology being investigated:
\hA or \hZ. Selection cuts are placed on the maximum and minimum
dijet mass, minimum jet energy, maximum jet energy difference
and on \Ytf, the value of the DURHAM jet resolution parameter at which the event changes from a four-jet to a three-jet topology.
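For reference, \Ytf can be computed with the standard DURHAM distance $y_{ij}=2\min(E_i^2,E_j^2)(1-\cos\theta_{ij})/E_{\rm vis}^2$; the sketch below illustrates the clustering only, with the input four-vectors understood as hypothetical reconstructed objects rather than L3 data.
\begin{verbatim}
import numpy as np

def durham_y34(particles):
    """y34: the DURHAM y value at which the event passes from a
    four-jet to a three-jet configuration (E-scheme recombination).
    `particles` is a list of four-vectors (E, px, py, pz)."""
    jets = [np.asarray(p, dtype=float) for p in particles]
    evis2 = sum(j[0] for j in jets) ** 2
    transitions = {}
    while len(jets) > 1:
        best = None
        for i in range(len(jets)):
            for k in range(i + 1, len(jets)):
                pi, pk = jets[i][1:], jets[k][1:]
                costh = np.dot(pi, pk) / (np.linalg.norm(pi) * np.linalg.norm(pk))
                y = 2.0 * min(jets[i][0], jets[k][0]) ** 2 * (1.0 - costh) / evis2
                if best is None or y < best[0]:
                    best = (y, i, k)
        y, i, k = best
        transitions[len(jets)] = y        # merging n jets -> n-1 jets at this y
        jets[i] = jets[i] + jets[k]       # E-scheme recombination
        del jets[k]
    return transitions.get(4)
\end{verbatim}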
Values of the cuts for the \hA and \hZ analyses are shown in
Table~\ref{tab:cuts}. The number of observed and expected events from
Standard Model processes in
the $\rts=189\GeV$ data along with the signal efficiencies for the
preselection and selection cuts are shown in Table~\ref{tab:4jet_eff}.
Events passing the selection cuts are then classified in three
categories: 1) those that pass only the hA cuts; 2) those that pass only
the hZ cuts; and 3) those that pass both sets of cuts. Category 3) is
then split into two separate samples by choosing the most likely
hypothesis based on the relative production rate for hA and hZ
and the probability of the \mbox{mass $\chi^2$} as defined in
\mbox{Reference~\cite{l3_1998_16}}.
\begin{table}[htbp]
\begin{center}
\leavevmode
\begin{tabular}{lll} \hline
\multicolumn{1}{c}{Cut} & \multicolumn{1}{c}{\hA} & \multicolumn{1}{c}{\hZ} \\
\hline
Minimum dijet mass (GeV) & $>$ 15.7 & $>$ 25.3 \\
Maximum dijet mass (GeV) & $<$ 135.3 & $<$ 118.7 \\
Minimum jet energy (GeV) & $>$ 15.1 & $>$ 25.9 \\
Maximum $\Delta E_{\rm jet}$ (GeV) & $<$ 54.8 & $<$ 42.4 \\
\Ytf & $>$ 0.003 & $>$ 0.009 \\
Visible Energy (GeV) & $>$ 129.3 & $>$ 133.8 \\
Number of Tracks & $>$ 25 & $>$ 22 \\
\hline
\end{tabular}
\caption{
Selection cuts for the \hA and \hZ four-jet Higgs search
channels. In addition to those abbreviations defined in the text,
the symbol $\Delta E_{\rm jet}$ is the energy difference between any two jets
of the four-jet system.}
\label{tab:cuts}
\end{center}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{crrr}\hline
& \multicolumn{3}{c}{Number of Events} \\
Process & Preselection & \hA cuts & \hZ cuts \\
\hline
\epemtoeeqq & 7.1 & 0.7 & 0.6 \\
\epemtoqq & 758.0 & 203.7 & 57.4 \\
\epemtoWW & 1331.7 & 913.5 & 582.1 \\
\epemtoZZ & 76.0 & 47.5 & 37.6 \\
\hline
Total Expected & 2172.8 & 1165.4 & 677.7 \\
Data & 2141 \hspace*{5.5pt} & 1110 \hspace*{5.5pt} & 641 \hspace*{5.2pt} \\ \hline \hline
\hspace*{6pt}Efficiency \hAtobbbb $\rule{0pt}{12pt}$
& 91.5\% & 77.1\% & 43.6\% \\
Efficiency \hZtobbqq & 93.3\% & 78.2\% & 66.2\% \\
\hline
\end{tabular}
\caption{Number of events expected and observed in the
four-jets channels. The signal
efficiencies at $\rts=189\GeV$ are quoted for hA at $\mA=\mh=80\GeV$
and for hZ at $\mh=95\GeV$.}
\label{tab:4jet_eff}
\end{center}
\end{table}
In the final step, the analysis is optimized for four regions in the
\tanbmh plane near the limit of our discovery potential. For this,
the $\Btag$ variable (Figure~\ref{fig:last_cuts}a), the Higgs production angle with respect to the beam axis, $\Theta$ (Figure~\ref{fig:last_cuts}b), and the probability for the $\chi^2$ of the Higgs mass hypothesis (Figure~\ref{fig:last_cuts}c) are used.
The relative discriminating power of these
variables changes with the Higgs mass hypothesis. For this reason,
a cut optimization is performed at four points in the \tanbmh plane:
(2.7,95\GeV), (7.5,80\GeV), (20,80\GeV) and (50,80\GeV).
The final discriminating variable is the logarithm of the weighted
combination
of the probabilities of the $\Btag$ and
the mass $\chi^2$ to be consistent with background.
Distributions of the final discriminant for the
\hA search and the \hZ search are shown in
Figure~\ref{fig:final_plot}.
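A deliberately simplified sketch of such a variable is given below; the weights and probabilities are hypothetical, and the actual weighting is optimized separately at each of the four points in the \tanbmh plane.
\begin{verbatim}
import math

def final_discriminant(p_btag_bkg, p_mass_bkg, w_btag=0.5, w_mass=0.5):
    """Illustrative final variable: logarithm of a weighted combination of
    the probabilities for the b-tag and the mass chi^2 to be consistent
    with background (signal-like events give strongly negative values)."""
    return math.log10(w_btag * p_btag_bkg + w_mass * p_mass_bkg)

print(final_discriminant(p_btag_bkg=1.0e-3, p_mass_bkg=5.0e-2))
\end{verbatim}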
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.9\textwidth,bb=12 201 625 1018]{figure1_new.eps}
\caption{Distributions of the a) $\Btag$ and b) cosine of the Higgs
production angle $\Theta$ in the four-jets search.
The hatched histogram is the expected
\hA signal (multiplied by a factor of 50) for $\mh=80\GeV$ and $\tanb=50$.
Distribution c) is the logarithm of the probability of the mass $\chi^2$.
The hatched histogram is the expected \hZ signal (multiplied by a factor
of 10) for
$\mh=95\GeV$ and $\tanb=3$.}
\label{fig:last_cuts}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.9\textwidth,bb=8 200 628 1020]{figure2_new.eps}
\caption{Distributions of the final discriminant for the
category of events passing a) both \hA and \hZ cuts but classified as
\hA candidates and b) events passing only the set of cuts for \hA.
The hatched histogram is the \hA signal expectation
for $\mh=80\GeV$ and $\tanb=50$.
Distributions are plotted for c) the events passing both \hA and
\hZ cuts but classified as \hZ
and d) events passing only the \hZ selection.
The hatched histogram is the \hZ signal expectation for
$\mh=95\GeV$ and $\tanb=3$.}
\label{fig:final_plot}
\end{center}
\end{figure}
\subsection*{The $\boldsymbol{\hZtoAAff}$ Channel}
To investigate \h decays into \A-pairs in the region of very low
\tanb and low \mA, where this channel becomes dominant, the \hZ four
jet analysis described above is employed. The signature of this
process is at least four hadronic jets with very
high probabilities to contain b quarks. The preselection and
optimized cuts chosen for the four jet analysis are applied without
adjustment. The efficiency on
$\hZ\!\rightarrow\!\A\A\Z\!\rightarrow\!\bbbar\bbbar\qqbar$ is above
40\% over the region of interest.
The mass $\chi^2$ of the four-jet analysis is less effective in the six-jet topology; however, the $\Btag$ gives the final variable enough discriminating power to distinguish between signal and background.
\subsection*{The $\boldsymbol{\hA}\!\boldsymbol{\rightarrow}\!\boldsymbol{\bbbar}\pmb{\tau}^{\boldsymbol{+}}\pmb{\tau}^{\boldsymbol{\, -}}$, $\boldsymbol{\hZ}\!\boldsymbol{\rightarrow}\!\boldsymbol{\bbbar}\pmb{\tau}^{\boldsymbol{+}}\pmb{\tau}^{\boldsymbol{\, -}}$\, and $\boldsymbol{\hZ}\!\boldsymbol{\rightarrow}\!\pmb{\tau}^{\boldsymbol{+}}\pmb{\tau}^{\boldsymbol{\, -}}\boldsymbol{\qqbar}$ Channels}
\label{sec:bbtt}
The signatures of $\hAtobbtt$, \hZtobbtt or \hZtottqq
events\footnote{The $\hAtottbb$ is also considered.} are a pair of taus
accompanied by two hadronic jets. The main
background comes from W-pair decays containing taus. Two analyses are
optimized for the hZ and for the hA channels. The hZ analysis follows that
of the Standard Model Higgs search and is
described in detail in Reference~\cite{l3_sm_higgs_99_paper}. The
$\hAtobbtt$ selection is described in this letter. As
in the Standard Model Higgs search, two selections are performed, one based
on the tau identification (particle-based selection) and the other
relying more on the event kinematics (jet-based selection).
First a common preselection
is applied to
both analyses, then cuts specific to each analysis are chosen.
The major difference in the \hA selection from that
of the \hZ analysis is the need for greater sensitivity to
lower Higgs masses. To accomplish this, the cuts on opening angles of
the jet and tau pairs have been removed, and the invariant mass cuts on
the tau-tau
and jet-jet systems
have been relaxed.
To reject the increased background accepted by loosening
the selection, additional cuts are applied which exploit the
kinematics of the \hA events. A cut is placed on the ratio of the sum
of the energies of the tau decay products to the sum of the jet energies.
The magnitude of the missing momentum vector in the rest frame of the Higgs is restricted, since in this frame the taus are expected to be emitted back-to-back, resulting in a partial cancellation of their missing momentum vectors.
Finally, there is a requirement on the cosine of the production angle
of the Higgs boson with respect to the beam axis
similar to that in the four-jet \hA analysis. The selection
cuts chosen for both the particle- and jet-based selections are shown
in Table~\ref{tab:bbtt_cuts}. The number of events observed, the
number expected from background processes, and the signal efficiency
for the \hA and \hZ analyses, after combining the particle- and jet-based
selections, are shown in Table~\ref{tab:bbtt_eff}.
The final variable is the likelihood of the event to be \hA or \hZ
based on the $\Btag$ values for each hadronic jet, shown in
Figures~\ref{fig:last_vars_bbtt}a and~\ref{fig:last_vars_bbtt}b, and
the reconstructed invariant mass of either the jet or tau system,
shown in Figures~\ref{fig:last_vars_bbtt}c
and~\ref{fig:last_vars_bbtt}d, using the same technique as in the
Standard Model Higgs search.
Events which pass the \hA as well as the \hZ selection are classified as
either \hA or \hZ depending on the cross section weighted values of
these likelihoods. Examples of the final variable for the \hA search
at large values of \tanb and the \hZ search at low values of \tanb are
shown in Figure~\ref{fig:bbtt_final}.
\begin{table}[htbp]
\begin{center}
\leavevmode
\begin{tabular}{cll} \hline
Cut & \multicolumn{1}{c}{Particle-based selection} & \multicolumn{1}{c}{Jet-based selection} \\
\hline
Number of tracks & $\ge$ 5 & $\ge$ 5 \\
Number of clusters & $\ge$ 15 & $\ge$ 15 \\
$E_{\rm vis}/\sqrt{s}$ & $\ge$ 0.4, $\le$ 0.95 & $\ge$ 0.4, $\le$ 0.90\\
$E_{\rm e}$,$E_{\mu}$,$E_{\gamma}$ & $\le$ 40\GeV & $\le$ 40\GeV \\
ln\Ytf & $\ge$ -6 & $\ge$ -6 \\
$E^{\tau}$
& $\le$ 1 & $\le$ 1 \\
$m_{\tau\tau}$,$m_{qq}$ & $\ge$ 5\GeV,$\le$ 125\GeV
&$\ge$ 5\GeV,$\le$ 125\GeV \\
$\mid\cos\Theta\mid$ & $\le$ 0.8 & $\le$ 0.8 \\
$\mid p_{\rm miss}^* \mid$ & $\le$ 40\GeV & $\le$ 40\GeV \\
$\mid\cos(\Theta_{\rm miss})\mid$ & - & $\le$ 0.95 \\
$\alpha_{\mbox{\scriptsize \rm $\tau$-jet}}$ & - & $\ge$ 25$\,^\circ$ \\
\hline
\end{tabular}
\caption{
Selection cuts for particle-based and jet-based tau selections
in the $\hAtobbtt$ search channel. In addition to those abbreviations
defined in the text: $E_{\rm vis}$ is the visible energy;
$E_{\rm e},\;E_{\mu}$ and $E_{\gamma}$ are the electron, muon and photon
energies, respectively; $E^{\tau}$ is the
ratio of the sum of the energies of the tau decay products to the sum of the
jet energies; $m_{\tau\tau}$,$m_{qq}$ is the invariant mass of the
tau-tau and jet-jet systems, respectively; $\Theta$ is the
production angle of
the Higgs boson with respect to the beam axis; $p_{\rm miss}^*$ is the
magnitude of the missing momentum vector in the rest frame of the Higgs;
$\Theta_{\rm miss}$ is the angle of missing energy vector with respect to
the beam axis; and $\alpha_{\mbox{\scriptsize \rm $\tau$-jet}}$ is the angle
between a tau jet and the closest quark jet.}
\label{tab:bbtt_cuts}
\end{center}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{crr}\hline
& \multicolumn{2}{c}{Number of Events} \\
Process & \hA selection & \hZ selection \\
\hline
\epemtoqq & 2.3 & 2.3 \\
\epemtoWW & 11.3 & 11.2 \\
\epemtoZZ & 2.6 & 3.1 \\
$\mathrm{e^+e^-\rightarrow Ze^+e^-}$ & 0.4 & 0.5 \\
\hline
Total Expected & 16.6 & 17.1 \\
Data & 20 \hspace*{5.5pt} & 12 \hspace*{5.5pt} \\ \hline\hline
\hspace*{6pt}Efficiency \hAtobbtt \rule{0pt}{12pt}
& 35.2\% & 35.4\% \\
\hspace*{1pt}Efficiency \hZtobbtt & 21.1\% & 30.0\% \\
Efficiency \hZtottqq & 21.8\% & 29.8\% \\
\hline
\end{tabular}
\caption{Number of events expected and observed after selection
for the tau search channels. The signal
efficiencies at $\rts=189\GeV$ are quoted for hA at $\mA=\mh=80\GeV$ and
for hZ at $\mh=95\GeV$.}
\label{tab:bbtt_eff}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.9\textwidth,bb=11 199 622 1016]{figure3_new.eps}
\caption{The distributions for the $\hAtobbtt$
search channel of a) the $\Btag$ for hadronic jet 1 and
b) hadronic jet 2, c) the reconstructed mass for the hadronic
system, and d) the reconstructed mass for the tau-tau system.
The hatched histogram is the
$\hAtobbtt$ signal normalized for $\mh=80\GeV$ and $\tanb=50$.}
\label{fig:last_vars_bbtt}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.9\textwidth,bb=9 205 669 1019]{figure4_new.eps}
\caption{Distributions of the final variables for the a)
$\hAtobbtt$ for $\mh=80\GeV$ at $\tanb=50$,
b) the
\hZtobbtt for $\mh=95\GeV$ at $\tanb=3$ and c) the
\hZtottqq search for $\mh=95\GeV$ at $\tanb=3$.}
\label{fig:bbtt_final}
\end{center}
\end{figure}
\subsection*{Results}
No evidence of the production of the \h and \A bosons is observed in
the data. The excluded region of the MSSM parameter space
is evaluated by calculating the confidence level (\CL) that the
expected signal is absent in the observed data for the plane defined
by \tanbma. The \CL is calculated using the technique described
in References~\cite{l3_1997_18,new_method}. Bins of an analysis with a
signal-over-background ratio in the Monte Carlo of less than 0.05 are not considered in
the calculation of \CL. This cut is chosen to minimize the effect of
systematic errors on the average \CL as calculated from a large set of
Monte Carlo trials.
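The machinery of References~\cite{l3_1997_18,new_method} is not reproduced here; the following sketch merely illustrates the generic ingredients of such a calculation (a binned likelihood-ratio test statistic, toy Monte Carlo experiments and the signal-over-background requirement) with hypothetical signal and background expectations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def q(n_obs, s, b):
    """-2 ln[L(s+b)/L(b)] for Poisson bins (constant terms cancel)."""
    return -2.0 * np.sum(n_obs * np.log(1.0 + s / b) - s)

def confidence_levels(n_obs, s, b, n_toys=20000):
    keep = s / b > 0.05                       # drop bins with little signal
    s, b, n_obs = s[keep], b[keep], n_obs[keep]
    q_obs = q(n_obs, s, b)
    q_sb = np.array([q(rng.poisson(s + b), s, b) for _ in range(n_toys)])
    q_b  = np.array([q(rng.poisson(b),     s, b) for _ in range(n_toys)])
    cl_sb = np.mean(q_sb >= q_obs)            # CL_{s+b}
    cl_b  = np.mean(q_b  >= q_obs)            # CL_b
    return cl_sb, cl_b, cl_sb / cl_b          # the last entry is CL_s

# Hypothetical expectations in four bins of a final discriminant
s = np.array([0.02, 0.4, 1.1, 2.0])
b = np.array([5.0, 3.0, 1.5, 0.6])
n = np.array([6, 2, 1, 0])
print(confidence_levels(n, s, b))
\end{verbatim}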
Systematic errors on the signal and background are considered using
the same procedure as in the Standard Model Higgs
searches~\cite{l3_1997_18,l3_1998_11,l3_sm_higgs_99_paper}.
The overall systematic error is estimated to be 5\% on the number of signal events and 10\% on the number of background events.
to Monte Carlo statistics are completely
uncorrelated among the different bins of the individual channels and
have little effect on the final \CL calculation.
The data from the MSSM Higgs search using lower center-of-mass energies~\cite{l3_1998_16} are combined with the $\rts=189\GeV$ data.
Figure~\ref{fig:limit} shows the region of the $(\tanb,\mh)$ plane and
the \tanbma plane excluded by L3 for the maximal and minimal mixing
scenarios. On the plot, the 95\% \CL is shown as a solid line while
the expected median \CL is shown as a dashed line.
Table~\ref{tab:limit} lists the masses of the
\h and \A excluded at the 95\% \CL using the data at $\rts=189\GeV$
and lower center-of-mass energies for $\tanb=3$ and $\tanb=50$ as well
as the median and average exclusion and the probability to obtain a
higher limit. The probability to obtain a higher limit reaches a
maximum in the high \tanb region with an \mh mass of 68\GeV, where
there is an upward fluctuation in the data. The lowest value of \mh
excluded is at $\tanb=15.0$ for maximal mixing and the lowest value of
\mA is excluded at $\tanb=50.0$ for minimal mixing. An interesting
feature of these results is that the region of $0.8<\tanb<1.5$ is excluded
in the
MSSM, according to the current theoretical calculation of the maximum
Higgs mass allowed and for $m_{\rm top}$ equal to 175\GeV~\cite{hzha}.
However, recent two-loop
calculations~\cite{hollik98} seem to favor larger values of the
maximum allowed \mh in this region, which would change the excluded
band of \tanb.
For the MSSM parameters considered and assuming \tanb greater than
one, this results in lower mass limits at the 95\% \CL of
\begin{displaymath}
\mh > 77.1 \GeV, \;\;
\mA > 77.1 \GeV.
\end{displaymath}
\begin{table}[htbp]
\begin{center}
\leavevmode
\begin{tabular}{c|cc|cccc|c} \hline
& \multicolumn{6}{|c|}{Lower mass limits in\GeV at 95\% \CL} \\ \cline{2-7}
& \multicolumn{2}{|c|}{Observed} & \multicolumn{4}{|c|}{Expected} & \\
Mixing, \tanb & $\boldsymbol{\mh}$ & $\boldsymbol{\mA}$ & \mhavg & \mAavg & \mhmed & \mAmed & \CLb \\
\hline
minimal, 3 & {\bf 96.3} & {\bf 225.0} & 92.7 & 164.0 & 94.6 & 192.6 & 12\% \\
minimal, 50 & {\bf 77.1} & {\bf 77.1} & 78.2 & 78.2 & 80.0 & 80.0 & 80\% \\
maximal, 3 & {\bf 95.4} & {\bf 128.9} & 89.0 & 111.9 & 90.4 & 117.1 & 15\% \\
maximal, 50 & {\bf 77.5} & {\bf 77.6} & 78.9 & 79.0 & 81.4 & 81.5 & 77\% \\
\hline
\end{tabular}
\caption{Higgs mass limits in the MSSM from the data at
$\rts=130\GeV-189\GeV$. The
masses in boldface are the lower mass limits set at the
\mbox{95\% \CL} from the data. The
masses $< \kern -0.4em m \kern -0.4em >$ and $\overline{m}$
are respectively the average and median mass limits for the \h
and \A bosons as calculated from a large set of Monte Carlo
trials. Assuming there is no signal, \CLb is the probability to
obtain a mass limit on \mh larger than the one observed.
\label{tab:limit}}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.9\textwidth,bb=16 184 624 1038]{figure5_new.eps}
\caption{Exclusion plots of the Higgs mass versus \tanb at the
95\% \CL. In all plots the area shaded by diagonal lines is the
95\% exclusion, while the cross-hatched region is theoretically
disallowed. The grey region in plots a) and c)
corresponds to $\mA\!<\!10\GeV$ and has been previously excluded at
LEP~\protect\cite{opal_2}.
Plot a) is the
95\% \CL exclusion of \mh versus \tanb in the minimal mixing
scenario, and b) is the 95\% exclusion of \mA versus \tanb also for
minimal mixing. Plots c) and d) are the same for the maximal
mixing scenario.}
\label{fig:limit}
\end{center}
\end{figure}
\section*{Acknowledgements}
We acknowledge the efforts of the engineers and technicians who
have participated in the construction and maintenance of L3 and
express our gratitude to the CERN accelerator divisions for the superb
performance of LEP.
The discovery of high-temperature superconductivity in iron-based compounds with transition temperatures $T_C$ up to 55 K has attracted tremendous research interest over the last decade~\cite{JACS2008Kamihara,PRL2008Rotter,CPL2008Ren,RMP2011Stewart}. Similar to the phase diagram of the cuprates~\cite{Keimer2015}, the superconductivity in iron-based superconductors always emerges in close proximity to a state with antiferromagnetism~\cite{nature2008Cruz,NatPhys2011Basov,RevModPhys2012Scalapino,PRL2014Medici}. Since previous theoretical studies have ruled out phonon-mediated pairing~\cite{PRL2008Boeri}, it is widely believed that the superconductivity in iron-based superconductors is unconventional and has a magnetic origin.
Therefore, much effort has been devoted to revealing the origin of magnetism in iron-based superconductors. While the debate between the itinerant scenario~\cite{Mazin2008PRL,epl2008Dong,nature2010Mazin} and the localized picture~\cite{Yildirim2008PRL,Si2008PRL} remains unsettled, a compromise explanation arose in which the coupling of itinerant electrons and localized spins is taken into account~\cite{Johannes2009PRB,You2009PRB}. In spite of these open discussions, two consensuses have been reached. One is that density functional theory (DFT) calculations can qualitatively describe the magnetic properties of the parent states of both iron pnictides and iron chalcogenides, although the magnetic moments are always overestimated because the intermediate strength of electronic correlations in the open $d$ shell of the iron atom cannot be properly captured in the approximation of the functionals. The other is that, in contrast to the cuprates, the effect of electronic correlations is believed to come from the Hund's rule coupling~\cite{HauleNJP2009,YinNatPhys2011,NicolaPRB2013}, rather than from the on-site Coulomb repulsion. Accordingly, the metallic states of iron-based superconductors are called Hund's metals, where the Hund's rule of maximum multiplicity is supposed to be valid~\cite{GeorgesARCMP2013}.
However, the above consensuses have been seriously challenged since the discovery of CuFeAs. Magnetic susceptibility measurements showed that it is antiferromagnetic with a N\'eel temperature $T_N$ of around 9 K~\cite{JPSJ2014Thakur,PRB2018Li}. The antiferromagnetism was further demonstrated by neutron diffraction experiments~\cite{PRB2017Zou,PRB2017Kamusella}, where either an unusual {\it G}-type antiferromagnetic order or proximity to an antiferromagnetic instability was proposed. Though ferromagnetism was also reported in the literature~\cite{PRB2015Qian,thesis2009lv}, it was pointed out that the weak ferromagnetism probably comes from a ferromagnetic component of a canted antiferromagnetic state~\cite{PRB2017Zou,PRB2015Qian}. Therefore, while the type of antiferromagnetic order is still unclear, it seems conclusive experimentally that the ground state of CuFeAs is antiferromagnetic. Nonetheless, an early theoretical study based on DFT calculations suggested that this compound is a ferromagnet~\cite{WANG201638}, in stark contrast to the experimental observations~\cite{JPSJ2014Thakur,PRB2018Li,PRB2017Zou,PRB2017Kamusella}, which casts doubt on the existing consensus. Obviously, further studies based on DFT calculations are required to resolve the contradiction, to clarify the magnetic structure of CuFeAs, and to verify the role of Hund's coupling.
Here, the nature of the magnetism of CuFeAs is investigated by applying DFT calculations. We find that the ground-state magnetic structure of CuFeAs is controlled by the As height $h_{\text{As}}$ above the iron plane, similar to other iron-based superconductor parent compounds, and a critical height $h_{\text{c}}$ of 1.612 {\AA} is identified. If $h_{\text{As}}<h_{\text{c}}$, the ground state is a collinear antiferromagnetic (CAFM) state. On the contrary, when the As height is larger than $h_c$, the on-site Coulomb interaction must be included in order to correctly account for the observed antiferromagnetic state. It is found that bicollinear antiferromagnetic (BAFM) order gives the lowest total energy among the states we studied; it becomes even more favorable at intermediate values of the on-site Coulomb interaction after introducing Cu vacancies and shows weak ferrimagnetism, where the total magnetic moment turns out to be nonzero. The small magnetic moment per iron is ascribed to the violation of Hund's rule~\cite{[Due to the presence of tetragonal crystal field in iron-based compounds\text{,} together with the negligible strength of spin-orbit coupling in iron\text{,} the orbital angular momentum of $3d$ orbitals is completely quenched in CuFeAs. Therefore\text{,} the second and the third Hund's rules are irrelevant in CuFeAs\text{,} and throughout the paper\text{,} the Hund's rule refers to the first one]Hundsrule} where antiparallel orbital magnetic moments on each iron are present. Our results can be applied to fully understand the experimental results~\cite{JPSJ2014Thakur,PRB2018Li,PRB2017Zou,PRB2017Kamusella,PRB2015Qian}.
CuFeAs is isostructural to the 111-type iron-pnictide superconductor parent compounds LiFeAs~\cite{PhysRevB2008Tapp} and NaFeAs~\cite{PRB2009Li}. It is characterized by a large $h_{\text{As}}$ in comparison to other iron pnictides, with reported values varying from $h_{\text{Kam}}$=1.53 {\AA}~\cite{PRB2017Kamusella} to $h_{\text{Li}}$=1.57 {\AA}~\cite{PRB2018Li} and $h_{\text{Thakur}}$=1.74 {\AA}~\cite{JPSJ2014Thakur}, and finally to $h_{\text{Zou}}$=1.80 {\AA}~\cite{PRB2015Qian}. Moreover, it was reported to be nonstoichiometric~\cite{JPSJ2014Thakur,PRB2015Qian,PRB2017Zou,PRB2017Kamusella,PRB2018Li}, namely, Cu vacancies are always present.
Finally, like other iron-based superconductor compounds, CuFeAs is a material with a large Sommerfeld coefficient, indicating that electronic interactions are not negligible~\cite{PRB2015Qian}.
\section{method}
The DFT calculations were performed using the full-potential linearized augmented plane wave method as implemented in the Wien2k code~\cite{SCHWARZ200271}. We adopted the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof~\cite{prl1996Perdew} for the exchange-correlation potentials. In order to determine the magnetic ground state, we have studied nonmagnetic (NM), ferromagnetic (FM), and three distinct antiferromagnetic configurations, including N\'eel antiferromagnetic (NAFM), CAFM, and BAFM orders~\cite{prl2010Yin,[See also cartoons for these magnetic states in the appendix A]cartoon}. We employed $\sqrt{2}\times\sqrt{2}\times1$ and $2\times1\times{1}$ unit cells for the CAFM and BAFM states, and the primitive cell for the other states, respectively. The Brillouin zone integration is carried out with a k mesh of $24\times{24}\times{18}$ for the NM, FM, and NAFM states, $24\times{24}\times{20}$ for the CAFM phase, and $16\times32\times20$ for the BAFM phase, respectively. Furthermore, the influence of the on-site Coulomb interaction on the magnetic stability of CuFeAs was investigated within the GGA+U approach~\cite{Liechtenstein1995PRB}. The around mean-field double counting~\cite{prb1994Czy} is employed as CuFeAs is a correlated metal. Unless specified otherwise, the Hund's coupling $J=U/10$~\cite{JPSJ2010Miyake} was used throughout all our GGA+U calculations~\cite{[The explicit inclusion of on-site Coulomb repulsion $U$ and exchange parameter $J$ which favors the first Hund's rule and thus is so-called the Hund's coupling term allows us to investigate multi-orbital physics in CuFeAs. In contrast to the conventional understanding where maximum multiplicity should appear in the presence of the Hund's coupling\text{,} breakdown of the Hund's rule occurs in CuFeAs\text{,} resulting in an exotic BAFM state with small magnetic moment]methods}. The conclusions remain valid if $J$ is fixed at $U/4$. Here, the experimental lattice constants reported by Thakur {\it et al.}~\cite{JPSJ2014Thakur} were used; the conclusions are not altered when other experimental lattice constants are used. The $x$ ($y$) axis of the local coordinate system for each iron is along the closest Fe-Fe bond direction.
\begin{figure}[b]
\includegraphics[width=0.48\textwidth]{CuFeAs_GSE_H_GGA_v2.pdf}
\caption{(Color online) (a) The calculated total energies of the bicollinear antiferromagnetic (BAFM), collinear antiferromagnetic (CAFM), N\'eel antiferromagnetic (NAFM), ferromagnetic (FM) and nonmagnetic (NM) states as a function of the As height $h_{\text{As}}$. Here, the energies of the NM state were set to zero. (b) By mapping the energies of the ferromagnetic and the various antiferromagnetic states onto a Heisenberg model, the nearest-, next-nearest-, and third-nearest-neighbor exchange couplings $J_1$, $J_2$, $J_3$ were derived. The inset shows a cartoon of the $J_1$, $J_2$, $J_3$ exchange interactions.}
\label{CuFeAs_GSE_LDAU}
\end{figure}
\section{results}
Since it is well known that the magnetic properties of the parent states of iron-based superconductors are sensitive to the As height $h_{\text{As}}$ measured from the iron plane~\cite{PRL2009Yildirim,PRL2008Yin}, we first investigated the effect of $h_{\text{As}}$ on the ground-state magnetic structure by varying only the As $z$ positions and leaving all the other internal coordinates unchanged. Fig.~\ref{CuFeAs_GSE_LDAU} (a) shows the calculated total energies of different magnetic configurations as a function of As height within GGA. The energies of the NM state were set to zero. Starting from the As height $h_{\text{Kam}}$ of 1.53 {\AA}, the CAFM state gives the lowest total energy. When $h_{\text{As}}$ exceeds the critical height $h_c$ of 1.612 {\AA}, the FM state, rather than any of the AFM states, becomes the most stable. The appearance of a phase transition from a CAFM state to a FM one suggests that the magnetism in CuFeAs is strongly dependent on $h_{\text{As}}$, similar to the other parent compounds of iron-based superconductors~\cite{PRL2008Yin,PRL2009Yildirim,prl2010Moon}.
However, a remarkable conflict is found between the calculated results and the experimental data. As can be seen in Fig.~\ref{CuFeAs_GSE_LDAU} (a), where the various As heights reported in different experiments are indicated, at $h_{\text{As}}$ less than $h_{\text{c}}$ our theoretical calculations point to a CAFM ground state, which is mainly consistent with the experimental results where either long-range~\cite{PRB2018Li} or short-range~\cite{PRB2017Kamusella} AF order was observed, though the magnetic structure has not yet been determined experimentally. But, in the case of $h_{\text{As}}$ greater than $h_{c}$, while the theoretical ground state is strongly FM, it was inferred from experiments~\cite{JPSJ2014Thakur,PRB2015Qian,PRB2017Zou} that CuFeAs should be an antiferromagnet, probably with ferromagnetic components. The contradiction raises a great challenge to the existing theory~\cite{prl2010Moon,prl2010Yin}, which seems valid for the whole family of iron-based superconductors, including both iron pnictides and iron chalcogenides, where the various magnetic ground states can be correctly accounted for once the different anion heights are considered. In fact, the theory of anion height also holds for a sister compound of CuFeAs, CuFeSb, where DFT calculations~\cite{WANG201638} and experiments~\cite{PRB2012Qian,PRB2016Sirohi} both obtained a FM ground state, which can be attributed to a very high anion height of $>1.84$~\AA. Therefore, it is urgent to understand why CuFeAs, with an intermediate value of the anion height, is so extraordinary that DFT calculations cannot agree with the experimental findings even qualitatively.
So far, the magnetism of iron pnictides can be explained by both the local moment picture~\cite{Yildirim2008PRL,Si2008PRL} and the weak-coupling scenario~\cite{Mazin2008PRL,epl2008Dong,nature2010Mazin}. In the local moment picture, the magnetic ground state can be effectively described by the frustrated Heisenberg model $H=\sum_{ij}J_{ij}S_iS_j$, where $J_{ij}$ are the superexchange interactions between local Fe moments with spin $S_i$. From the above data based on the DFT calculations within GGA, the nearest-, next-nearest-, and next-next-nearest-neighbor exchange couplings $J_1$, $J_2$, and $J_3$ can be derived from the energy differences among the various magnetic states (a minimal sketch of this mapping is given below) and are summarized in Fig.~\ref{CuFeAs_GSE_LDAU} (b). As can be seen, $J_1$ remains FM and thus favors a FM order. It is strongly enhanced as a function of increasing $h_{\text{As}}$. While $J_2$ is AFM and plays a dominant role at small $h_{\text{As}}$, it is drastically reduced in the vicinity of $h_{c}$ and eventually turns FM at larger $h_{\text{As}}$. $J_3$ is also AFM and barely depends on $h_{\text{As}}$. If only the FM, NAFM, CAFM, and BAFM states are taken into account in the classical limit of the Heisenberg $J_1-J_2-J_3$ model, CAFM is energetically favorable over the other magnetic configurations when $J_2>-J_1/2$ and $J_2>2J_3$. These conditions are satisfied when $h_{\text{As}}$ is smaller than $h_{\text{c}}$, resulting in an agreement between our first-principles results and recent experimental observations~\cite{PRB2017Kamusella,PRB2018Li}. However, when $h_{\text{As}}$ is greater than $h_{\text{c}}$, the FM state becomes the ground state owing to $J_1<-2J_2$ and $J_1<-J_2-2J_3$. Since antiferromagnetism was observed in experiments~\cite{JPSJ2014Thakur,PRB2015Qian,PRB2017Zou} when $h_{\text{c}}<h_{\text{As}}<1.84$~\AA,
this indicates that the local moment picture fails to account for the magnetic ground state of CuFeAs in the intermediate region of anion height.
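A minimal sketch of this mapping is the following linear solve, assuming the standard classical correlation factors of the square-lattice $J_1$-$J_2$-$J_3$ model for the four collinear patterns; the input energies are placeholders and not the calculated values of Fig.~\ref{CuFeAs_GSE_LDAU}.
\begin{verbatim}
import numpy as np

# E = E0 + 2*(c1*J1 + c2*J2 + c3*J3)*S^2 per Fe, with classical correlation
# factors (c1, c2, c3): FM (1,1,1), NAFM (-1,1,1), CAFM (0,-1,1), BAFM (0,0,-1)
patterns = {"FM": (1, 1, 1), "NAFM": (-1, 1, 1),
            "CAFM": (0, -1, 1), "BAFM": (0, 0, -1)}

def exchange_couplings(energies, S=1.0):
    """Return (J1, J2, J3) extracted from four total energies per Fe."""
    names = list(patterns)
    A = np.array([[1.0] + [2.0 * c for c in patterns[n]] for n in names])
    e = np.array([energies[n] for n in names])
    _, j1, j2, j3 = np.linalg.solve(A, e)      # first unknown is E0
    return j1 / S**2, j2 / S**2, j3 / S**2

# Placeholder energies in meV per Fe (not the calculated values of this work)
print(exchange_couplings({"FM": -210.0, "NAFM": -180.0,
                          "CAFM": -230.0, "BAFM": -215.0}))
\end{verbatim}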
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{CuFeAs_SUS_submit.pdf}
\caption{(Color online) The orbitally-resolved Pauli susceptibilities of the Fe 3$d$ orbitals for As heights equal to 1.74 {\AA} (a) and 1.53 {\AA} (b) along the high-symmetry path $\Gamma-\text{X}-\text{M}-\Gamma$.}
\label{CuFeAs_SUS}
\end{figure}
In order to determine whether the magnetism can be understood from the weak-coupling limit, where Fermi surface nesting plays a role, we have calculated the orbitally resolved Pauli susceptibility $\chi^{\tau\tau}_{\tau\tau}(q)$, as defined in Refs.~\cite{Graser_2009,Ding2013PRB}, to quantify the nesting properties. Since the magnetism is mainly controlled by intraorbital particle-hole excitations, we only show in Fig.~\ref{CuFeAs_SUS} the intraorbital components of the Pauli susceptibility along the path $\Gamma-\text{X}-\text{M}-\Gamma$ in the Brillouin zone. It is found that the susceptibilities of the $d_{x^2-y^2}$ and $d_{z^2}$ orbitals are much smaller than those of the $d_{xy}$ and $d_{yz/xz}$ orbitals, suggesting that the magnetic instabilities in CuFeAs are mainly contributed by the latter three orbitals. As is well known, a prominent peak in the Pauli susceptibility denotes a tendency towards a certain long-range magnetically ordered state whose magnetic configuration is determined by the position of the peak in momentum space. However, in the intermediate region of anion height, for instance $h_{\text{Thakur}}$=1.74 {\AA} as shown in Fig.~\ref{CuFeAs_SUS} (a), neither the susceptibilities of the $d_{xy}$ orbital nor those of the $d_{xz/yz}$ orbitals show any pronounced peaks. On the contrary, the plateau appearing in the susceptibilities of the $d_{xy}$ orbital along the $\text{X}-\text{M}$ path may indicate that CuFeAs is highly magnetically frustrated owing to the competition among numerous instabilities.
For comparison, we have calculated the Pauli susceptibility at the low As height of 1.53 {\AA}, as shown in Fig.~\ref{CuFeAs_SUS} (b). In contrast to the featureless orbitally-resolved susceptibilities at $h_{\text{As}}=$ 1.74 {\AA}, the counterpart at low As height shows pronounced instabilities in the $d_{xy}$ orbital around the wave vector ($\pi,\pi$), indicating a strong tendency towards an antiferromagnetic state. This is consistent with our total-energy calculations and previous experimental observations~\cite{PRB2017Kamusella,PRB2018Li}. It suggests that the magnetism in CuFeAs in the $h_{\text{As}}<h_{c}$ case can be explained by both the Fermi surface nesting scenario~\cite{Mazin2008PRL,epl2008Dong,nature2010Mazin} and the local moment picture~\cite{Yildirim2008PRL,Si2008PRL}, similar to all the other iron pnictides. However, if $h_{c}<h_{\text{As}}<1.84$ \AA, neither the local moment picture nor the Fermi surface nesting scenario can be applied to understand the magnetism in CuFeAs.
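For orientation, the type of nesting analysis behind $\chi^{\tau\tau}_{\tau\tau}(q)$ can be illustrated for a single tight-binding band with constant matrix elements; the dispersion below is a generic square-lattice placeholder, not the multi-orbital band structure of CuFeAs.
\begin{verbatim}
import numpy as np

def lindhard_chi0(qx, qy, t=1.0, mu=0.0, nk=64, T=0.05):
    """Static chi0(q) = (1/N) sum_k [f(e_k)-f(e_{k+q})]/(e_{k+q}-e_k)
    for eps(k) = -2t(cos kx + cos ky) - mu on a square lattice."""
    k = 2.0 * np.pi * np.arange(nk) / nk
    kx, ky = np.meshgrid(k, k, indexing="ij")
    e_k  = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    e_kq = -2.0 * t * (np.cos(kx + qx) + np.cos(ky + qy)) - mu
    f = lambda e: 0.5 * (1.0 - np.tanh(e / (2.0 * T)))   # Fermi function
    num, den = f(e_k) - f(e_kq), e_kq - e_k
    safe = np.abs(den) > 1e-12
    chi = np.where(safe, num / np.where(safe, den, 1.0),
                   f(e_k) * (1.0 - f(e_k)) / T)          # degenerate limit
    return chi.mean()

# At half filling the nesting vector (pi, pi) gives a strongly enhanced chi0
for q in [(0.0, 0.0), (np.pi, 0.0), (np.pi, np.pi)]:
    print(q, round(lindhard_chi0(*q), 3))
\end{verbatim}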
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{Energy_CuFeAs_GGA_U-new.pdf}
\caption{(Color online) The effect of the on-site Coulomb interaction $U$ on the magnetic ordering in CuFeAs (a) and Cu$_{0.5}$FeAs (b). As $U$ increases, the phase transition from the FM state to the BAFM state occurs at around 4.1~eV for CuFeAs (a) and at about 3.5~eV for Cu$_{0.5}$FeAs (b), respectively.}
\label{CuFeAs_GSE_GGA_U}
\end{figure}
Considering the orbital degrees of freedom present in CuFeAs, the inclusion of on-site Coulomb interactions may strongly change the spin and charge populations among different orbitals. Therefore, in the following, we apply GGA+U~\cite{prb1994Czy} to allow for multi-orbital effects and to uncover the true magnetic ground state of CuFeAs. Fig.~\ref{CuFeAs_GSE_GGA_U} (a) shows the evolution of the total energies of different magnetic configurations as a function of $U$. At small $U$, the FM state gives the lowest total energy. As $U$ becomes large, the BAFM state becomes more stable than the other magnetic states. The FM-BAFM phase transition takes place at a critical point $U_c$ of around $4.1$~eV, which is slightly smaller than the on-site Coulomb interaction $U\approx4.5$ eV estimated from the constrained local density approximation and comparable to those in other iron pnictides~\cite{JPSJ2010Miyake}. This implies that the material is in the BAFM state and close to the phase boundary between the FM and BAFM states.
Furthermore, as was observed in experiments~\cite{PRB2015Qian,PRB2017Zou}, CuFeAs is nonstoichiometric, with the Cu sites being partially vacant. Cu vacancies cause heavy hole doping and may alter the crystal field splitting due to the deficiency of cations at certain positions. Here, the effect of Cu vacancies on the magnetic properties of the ground state was considered by simply removing one Cu from the primitive cell, leading to the chemical formula Cu$_{0.5}$FeAs. The total energies of the various magnetic configurations as a function of $U$ are displayed in Fig.~\ref{CuFeAs_GSE_GGA_U} (b). Results similar to those for the stoichiometric case are obtained, except that the critical value of $U$ is considerably suppressed from around $4.1$~eV to about $3.5$~eV, suggesting that the BAFM state is further stabilized when Cu vacancies are introduced.
\begin{figure*}[htbp]
\includegraphics[width=0.9\textwidth]{CuFeAs_Magnetic_Moment-new.pdf}
\caption{(Color online) (a) The averaged magnetic moment on Fe atoms for CuFeAs and Cu$_{0.5}$FeAs. The inset shows the total Fe magnetic moment for Cu$_{0.5}$FeAs in the BAFM state. The orbitally-resolved magnetic moments of the Fe 3$d$ orbitals for CuFeAs (b) and Cu$_{0.5}$FeAs (c). For Cu$_{0.5}$FeAs in the BAFM state (c), the orbital magnetic moments averaged over the irons of the two sublattices are different, leading to the formation of a ferrimagnet.}
\label{Moment_Occupancy}
\end{figure*}
To gain deeper insight into the effect of the on-site Coulomb interaction and Cu vacancies on the ground state, we have calculated the total and averaged magnetic moments on iron as well as the orbitally resolved magnetic moments of the 3$d$ orbitals on iron for both CuFeAs and Cu$_{0.5}$FeAs. Fig.~\ref{Moment_Occupancy} (a) displays the averaged Fe moments. It is found that the averaged Fe moment decreases as $U$ increases and exhibits a sharp drop at the critical point $U_c$. The presence of Cu vacancies gives rise to a decrease of the averaged Fe moments in the FM state but an increase in the BAFM state, indicating that the Cu vacancy energetically stabilizes the BAFM state and enhances its spin polarization. In addition, if Cu vacancies are present, the total magnetic moment of Fe is finite in the BAFM state, as depicted in the inset of Fig.~\ref{Moment_Occupancy} (a), which is consistent with the nonzero spontaneous magnetic moment observed in experiments~\cite{JPSJ2014Thakur,PRB2015Qian}.
Fig.~\ref{Moment_Occupancy} (b) and (c) show the averaged magnetic moments of the Fe 3$d$ orbitals for CuFeAs and Cu$_{0.5}$FeAs, respectively. The orbitally-resolved magnetic moments are obtained by constructing atomic projectors as implemented in the Wien2k code~\cite{SCHWARZ200271}. It is found that, while the magnetic moments of the $d_{xy}$ and $d_{yz/xz}$ orbitals, which dominate the states close to the Fermi level, change only slightly in the FM state, those of $d_{z^2}$ and $d_{x^2-y^2}$ decrease remarkably and finally become antiparallel to the magnetic moments of the $d_{xy}$ and $d_{yz/xz}$ orbitals. Such a breakdown of the Hund's rule was proposed as a possible origin of the low magnetization in LaFeAsO~\cite{Cricchio2010PRB,prl2010Bascones,prb2012Liu}, where DFT calculations~\cite{Cricchio2010PRB} suggested that opposite magnetization among different orbitals is stabilized against the Hund's rule by the formation of large multipoles of the spin density, while model calculations based on a five-band Hubbard model~\cite{prl2010Bascones} and a two-orbital Heisenberg model~\cite{prb2012Liu} found that the interorbital exchange interaction overrides the Hund's rule. From our DFT calculations and the corresponding derivation of tight-binding model parameters with Wannier90~\cite{MOSTOFI2008Arash}, we conclude that both magnetic multipoles~\cite{Cricchio2010PRB} and interorbital hoppings~\cite{prl2010Bascones}, which induce interorbital exchanges once the Hubbard interaction $U$ is included, play roles in the formation of antiparallel magnetic moments among the five 3$d$ orbitals on each iron in CuFeAs. Upon further increasing $U$, a phase transition from the FM state to a BAFM state occurs with a significant reduction of the magnetic moment in each orbital. However, the breakdown of the Hund's rule remains. The weak antiferromagnetism agrees well with the experimental results~\cite{JPSJ2014Thakur,PRB2017Zou,PRB2017Kamusella,PRB2018Li}. Note that in the presence of Cu vacancies, the orbital magnetic moments on the two magnetic sublattices deviate strongly from each other (Fig.~\ref{Moment_Occupancy} (c)), leading to a finite total magnetic moment as shown in the inset of Fig.~\ref{Moment_Occupancy} (a).
\section{discussions}
The above investigations show that the magnetic properties of CuFeAs vary from a CAFM state to a BAFM phase as a function of the experimental As height measured from the iron plane, where the critical height for the CAFM-BAFM transition is $h_{\mathrm{c}}\approx1.612$ \AA. At As heights of $h_{\mathrm{As}}>1.84$ \AA, the pnictogen-height driven FM phase is expected to be the most stable~\cite{PRB2012Qian,PRB2016Sirohi}. This can account for various experimental observations in CuFeAs, where all magnetic susceptibility measurements at low external magnetic fields~\cite{PRB2018Li,JPSJ2014Thakur,PRB2015Qian} suggest this material is in an antiferromagnetic state for $1.57$ \AA~$\le{}h_{\mathrm{As}}\le1.80$ \AA. At $h_{\mathrm{As}}=1.53$ \AA, short-range CAFM order was observed by M\"{o}ssbauer spectroscopy and muon spin resonance experiments~\cite{PRB2017Kamusella}, which is also qualitatively consistent with our results (shown in Fig.~\ref{CuFeAs_GSE_LDAU} (a)) since the quantum fluctuations are completely frozen in the DFT calculations. Given that the interlayer spin exchange hardly affects the magnetic structure in the Fe$_2$As$_2$ plane of CuFeAs, the {\it G}-type antiferromagnetic state, a three-dimensional NAFM state proposed in a recent study~\cite{PRB2017Zou}, cannot become the ground-state magnetic ordering because the energy of the NAFM state is higher than those of the other antiferromagnetic states we considered, as displayed in Fig.~\ref{CuFeAs_GSE_LDAU} (a) and Fig.~\ref{CuFeAs_GSE_GGA_U}. Besides, the agreement between our results and experiments indicates that the Coulomb interaction plays a key role in the description of the magnetism of the multi-orbital system CuFeAs, especially at intermediate As heights.
Moreover, the antiferromagnetic state with a nonvanishing total magnetic moment is found to be energetically stabilized when Cu vacancies are present, which explains the spontaneous magnetization in nonstoichiometric CuFeAs~\cite{JPSJ2014Thakur,PRB2015Qian}. Considering the weak ferrimagnetism, CuFeAs should be susceptible to external magnetic fields. This is the reason why the magnetization exhibits AFM-like behavior with decreasing temperature at low magnetic fields, but shows FM-like behavior at magnetic fields of $>$ 500 Oe due to the field-induced ferromagnetic component~\cite{PRB2018Li,JPSJ2014Thakur,PRB2015Qian}.
Finally, we found that CuFeAs is a unique compound in the family of iron pnictides which may be used to unveil the origin of the weak magnetism commonly present in the pnictides. In contrast to the CAFM state at lower As height and the FM state at higher As height, both of which can be understood from the itinerant electron picture~\cite{Mazin2008PRL,epl2008Dong,nature2010Mazin} or the localized spin scenario~\cite{Yildirim2008PRL,Si2008PRL}, the BAFM phase, which can account for various experimental observations in CuFeAs~\cite{JPSJ2014Thakur,PRB2015Qian,PRB2017Zou}, can only be explained by breakdown of Hund's rule. The antiparallel arrangement of magnetic moments in different orbitals on each iron atom has been proposed as a possible origin for the weak magnetism in LaFeAsO~\cite{Cricchio2010PRB,prl2010Bascones}, but this proposal unfortunately did not prevail, since it was regarded only as an alternative to the widely accepted theories based on the itinerant or localized scenarios~\cite{Mazin2008PRL,epl2008Dong,nature2010Mazin,Yildirim2008PRL,Si2008PRL}. However, CuFeAs may be the first counterexample which casts doubt on the applicability of the well-accepted theories and supports breakdown of Hund's rule as a unified picture for the weak magnetism observed experimentally in iron pnictides. It should be noted that violation of the first Hund's rule usually appears in the presence of two partly filled shells, as in cerium~\cite{Morgan1993JPC}, while there is only one partly filled shell in CuFeAs. Moreover, Hund's coupling is widely believed to dominate the correlated metallic behavior in iron pnictides~\cite{HauleNJP2009,YinNatPhys2011,NicolaPRB2013,GeorgesARCMP2013}, as frequently pointed out in dynamical mean-field theory (DMFT)~\cite{GeorgesRMP1996} or LDA+DMFT~\cite{Kotliar2006RMP} studies, where nonlocal correlations are totally ignored. If breakdown of Hund's rule is dominant, the intersite interorbital hybridizations and multipole interactions become important, which calls the concept of a Hund's metal into question and requires further investigations beyond local approximations and Hubbard interactions. Breakdown of Hund's rule may also provide a new route to form singlet Cooper pairs locally~\cite{Hoshino2017PRL}.
Note that CuFeAs is unique among iron pnictides in that it possesses the highest arsenic height of the iron arsenide compounds; for example, $h_{\mathrm{As}}\sim1.51$~\AA~for LiFeAs~\cite{Pitcher2008RSC}, $1.31$~\AA~for LaFeAsO~\cite{nature2008Cruz}, and $1.35$~\AA~for BaFe$_2$As$_2$~\cite{Huang2008PRL}. It is even higher than that of Fe$_{1.01}$Se, where $h_{\mathrm{Se}}\sim1.47$~\AA~\cite{McQueen2009PRB}. The height is comparable to $h_{\mathrm{Te}}\sim1.75$~\AA~of Fe$_{1.068}$Te, which also shows bicollinear antiferromagnetic order~\cite{Li2009PRB}, but it is smaller than $h_{\mathrm{Sb}}\sim1.84$~\AA~of CuFeSb, where ferromagnetism is observed~\cite{PRB2012Qian,PRB2016Sirohi}.
\section{conclusion}
In conclusion, we have investigated the magnetism of CuFeAs by applying DFT calculations. It is found that breakdown of Hund's rule occurs and is responsible for the exotic BAFM state in CuFeAs for As heights of 1.612 {\AA} $<h_{\text{As}}<$ 1.84 {\AA}. This novel phase lies between a CAFM state at $h<$ 1.612 {\AA} and an FM state at $h>$ 1.84 {\AA}. The presence of Cu vacancies favors the BAFM state and induces weak ferrimagnetism due to the symmetry breaking between magnetic sublattices. The Coulomb interaction is indispensable to correctly capture the ground state of CuFeAs. Our results fully account for the experimental observations and carry the important implication that breakdown of Hund's rule may provide a unified picture for the weak magnetism in iron pnictides.
\begin{acknowledgments}
This work is financially supported by the National Natural Science Foundation of China (Grant Nos. 11774258, 12004283) and Postgraduate Education Reform Project of Tongji University (Grant No. GH1905). Z. Y. Song acknowledges the financial support by China Postdoctoral Science Foundation (Grant No. 2019M651563).
\end{acknowledgments}
\section{Supplementary information}
\subsection{Topological classification and bulk topological invariants}
\label{sec:classification}
The Bloch Hamiltonian of our photonic crystal in the tight-binding limit with coupling between nearest-neighbor waveguides is
\begin{align}
h({\bf k}) &= c_{ext} h_{ext}({\bf k}) + c_{int} h_{int},
\label{eq:hamiltonian}
\end{align}
where $h_{ext}({\bf k})=\oplus_{i=1}^3 [\cos({\bf k} \cdot {\bf a}_i) \sigma_x + \sin({\bf k} \cdot {\bf a}_i) \sigma_y]$
is due to couplings between waveguides of neighboring unit cells and $h_{int}$ is a matrix with entries $[h_{int}]^{mn}=1$ for nearest-neighbor waveguides $m$, $n$ within the same unit cell, and 0 otherwise.
Here ${\bf a}_1 = (1,0)$, ${\bf a}_{2,3}=(\pm1/2, \sqrt{3}/2)$ are primitive lattice vectors, and the basis of states in the matrices are the six internal degrees of freedom in the unit cell (see Fig. \ref{Fig1}d for numbering).
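For concreteness, a minimal numerical sketch of how $h({\bf k})$ can be assembled is given below. The intra-cell connectivity used for $h_{int}$ (a six-site ring with consecutive numbering) and the pairing of waveguides within the three $2\times2$ blocks of $h_{ext}$ are assumptions made only for illustration, as are the coupling values; they should be matched to the actual waveguide numbering of Fig. \ref{Fig1}d.
\begin{verbatim}
import numpy as np

# Illustrative 6x6 Bloch Hamiltonian; the intra-cell connectivity (a ring)
# and the basis ordering of the three 2x2 blocks of h_ext are assumptions.
a_vecs = [np.array([1.0, 0.0]),
          np.array([0.5, np.sqrt(3)/2]),
          np.array([-0.5, np.sqrt(3)/2])]
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def h_bloch(k, c_ext, c_int):
    k = np.asarray(k, dtype=float)
    h_ext = np.zeros((6, 6), dtype=complex)
    for i, ai in enumerate(a_vecs):          # direct sum over the three a_i
        blk = np.cos(k @ ai) * sx + np.sin(k @ ai) * sy
        h_ext[2*i:2*i+2, 2*i:2*i+2] = blk
    h_int = np.zeros((6, 6), dtype=complex)  # nearest neighbours in the cell
    for m in range(6):
        h_int[m, (m + 1) % 6] = h_int[(m + 1) % 6, m] = 1.0
    return c_ext * h_ext + c_int * h_int

# Example: the six propagation constants (beta) at the Gamma point
print(np.linalg.eigvalsh(h_bloch([0.0, 0.0], c_ext=1.0, c_int=0.5)))
\end{verbatim}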
The existence of crystalline symmetries expands the topological classification beyond the 10-fold classification \cite{Altland1997}, which is built upon time-reversal, particle-hole, and chiral symmetries. In this section we construct the topological classification for crystals in class BDI \cite{Altland1997} with additional $C_6$ symmetry, as these are the symmetries of our tight-binding Hamiltonian \eqref{eq:hamiltonian}. We will then see that our crystalline structure transitions from a non-trivial class to the trivial class as we vary the ratio $L/s$ from $L/s<3$ to $L/s>3$. We begin by pointing out that in the BDI class, systems have TR and chiral symmetries
\begin{align}
\hat{T} h({\bf k}) \hat{T}^{-1} &= h(-{\bf k})\nonumber\\
\Pi h({\bf k}) \Pi^{-1} &= - h({\bf k}).
\label{eq:TR_PH_symmetries}
\end{align}
where the TR and chiral operators are $\hat{T} = K$ (where $K$ is complex conjugation) and $\Pi =\sigma_z \oplus -\sigma_z \oplus \sigma_z$. While TR symmetry is an intrinsic symmetry of photonic systems, chiral symmetry is specific to our lattice structure, and is only approximately preserved (up to exponentially small corrections from further-neighbor couplings within the same sublattice of the honeycomb lattice). TR and chiral symmetries imply the existence of an (approximate, for the same reason as above) particle-hole symmetry
\begin{align}
\Xi h({\bf k}) \Xi^\dagger = -h(-{\bf k}),
\label{eq:chiral_symmetry}
\end{align}
where $\Xi = \Pi \hat{T}$ is the particle-hole operator. We now consider $C_6$ symmetry,
\begin{align}
\hat{r}_6 h({\bf k}) \hat{r}_6^\dagger = h (R_6 {\bf k}),\;\; \hat{r}_6 = \left(\begin{array}{cccccc}
0&\sigma_0&0\\
0&0&\sigma_0\\
\sigma_x&0&0
\end{array}\right),
\label{eq:rotation_symmetry}
\end{align}
where $\hat{r}_6$ is the rotation operator acting on the internal degrees of freedom of the unit cell, which obeys $[\hat{r}_6, \hat{T}]=0$ and $\hat{r}_6^6=1$, and $R_6$ is the matrix that rotates the crystal momentum by $2\pi/6$ radians. This symmetry implies that the Brillouin zone has the hexagonal shape of Fig. \ref{fig:classification}a. The entire BZ can be generated by rotating the fundamental domain, shown by the shaded region in Fig. \ref{fig:classification}a, by multiples of $2\pi/6$ rad. In this BZ there are rotation invariant momenta (RIM) ${\bf k}^{(\alpha)}$ which map back to themselves upon a rotation by $2\pi/\alpha$ (that is, $R_\alpha {\bf k}^{(\alpha)} = {\bf k}^{(\alpha)}$, modulo a reciprocal lattice vector). In $C_6$ symmetric crystals, the RIM are ${\bf k}^{(6)}={\bf \Gamma}$; ${\bf k}^{(3)}={\bf K}$ and ${\bf K'}$; and ${\bf k}^{(2)}={\bf M}$, ${\bf M'}$, and ${\bf M''}$, as seen in Fig. \ref{fig:classification}a. Notice that since ${\bf \Gamma}$ is a 6-fold RIM, it is also a 3-fold and a 2-fold RIM.
The existence of the RIM implies, from \eqref{eq:rotation_symmetry}, that the Hamiltonian commutes with the rotation operator $\hat{r}_\alpha$ at RIM ${\bf k}^{(\alpha)}$,
\begin{align}
[\hat{r}_\alpha,h({\bf k}^{(\alpha)})]=0
\end{align}
Thus, the $\beta$-energy eigenstates at these points of the BZ, $\ket{u^n_{{\bf k}^{(\alpha)}}}$, i.e., the solutions to
\begin{align}
h({\bf k}^{(\alpha)}) \ket{u^n_{{\bf k}^{(\alpha)}}} &= \beta^n({{\bf k}^{(\alpha)}}) \ket{u^n_{{\bf k}^{(\alpha)}}}
\end{align}
are also eigenstates of the rotation operator,
\begin{align}
\hat{r}_\alpha \ket{u^n_{{\bf k}^{(\alpha)}}} &= r^n_\alpha \ket{u^n_{{\bf k}^{(\alpha)}}}.
\end{align}
This allows us to use the rotation eigenvalues $r^n_\alpha$ as labels for the rotation representation of the subspace of negative $\beta$ bands. This is useful since a difference in the group representations of the subspace of negative $\beta$ bands at two m-fold RIM of the BZ implies a non-trivial topology in the system. In particular, we compare the rotation representation at the momenta ${\bf M}$ and ${\bf K}$ with that at ${\bf \Gamma}$, following the construction in reference \cite{Benalcazar2014}, to build topological invariants in $C_6$-symmetric crystals. However, in addition to imposing restrictions on these invariants due to PH symmetry, as in \cite{Benalcazar2014}, we also impose those of TR symmetry.
Out of all the RIM in the $C_6$-symmetric BZ, we only compare ${\bf M}$ and ${\bf K}$ to ${\bf \Gamma}$ because $C_6$ symmetry identifies the rotation representation at ${\bf K}$ with that at ${\bf K'}$, and the rotation representation at ${\bf M}$ with those at ${\bf M'}$ and ${\bf M''}$, and thus these other points provide redundant topological information. At the 2-fold RIM $\bf M$ we have two rotation eigenvalues $M_1=1$ and $M_2=-1$, while at the 3-fold RIM $\bf K$ we have three rotation eigenvalues $K_1 = 1$, $K_2 = e^{i 2\pi/3}$, and $K_3 = e^{-i 2\pi/3}$ (see Fig. \ref{fig:classification}b). Additionally, at $\bf \Gamma$ we have $\Gamma^{(2)}_1=1$, $\Gamma^{(2)}_2=-1$, as well as $\Gamma^{(3)}_1=1$, $\Gamma^{(3)}_2=e^{i 2\pi/3}$, and $\Gamma^{(3)}_3=e^{-i 2\pi/3}$.
We therefore define the invariants
\begin{align}
[M_i]&=\# M_i - \# \Gamma^{(2)}_i\\
[K_j]&=\# K_j - \# \Gamma^{(3)}_j
\end{align}
for $i=1,2$ and $j=1,2,3$. Here $\# M_i$ is the number of states below the gap in the $\beta$ spectrum that have rotation eigenvalues $M_i$ at RIM ${\bf M}$, and similarly for $\# K_j$, $\# \Gamma^{(2)}_i$, and $\# \Gamma^{(3)}_j$. Out of these five topological invariants, however, some of them are redundant. Since the total number of occupied states is constant over the BZ, we have that
\begin{align}
\# M_1 + \# M_2 &= \# \Gamma^{(2)}_1 + \# \Gamma^{(2)}_2\nonumber\\
\# K_1 + \# K_2 + \# K_3 &= \# \Gamma^{(3)}_1 + \# \Gamma^{(3)}_2 + \# \Gamma^{(3)}_3\nonumber
\end{align}
or
\begin{align}
[M_1]+[M_2] = [K_1]+[K_2]+[K_3] = 0
\label{eq:invariant_restrictions_1}
\end{align}
Additionally, TR, PH, and chiral symmetries impose further restrictions on these rotation invariants. Since two of these symmetries imply the third one, we only need to consider restrictions due to two of them. We choose TR and chiral symmetries. The relations between rotation eigenvalues constrained by TR symmetry are due to the fact that the TR and rotation operators commute, $[\hat{r}_\alpha,\hat{T}]=0$, so it follows that
\begin{align}
\hat{r}_\alpha \hat{T} \ket{u^n_{{\bf k}^\alpha}} &= \hat{T} \hat{r}_\alpha \ket{u^n_{{\bf k}^\alpha}} \nonumber\\
&= \hat{T} r^n_{\bf k^\alpha} \ket{u^n_{{\bf k}^\alpha}} \nonumber \\
&= (r^n_{\bf k^\alpha})^* \hat{T} \ket{u^n_{{\bf k}^\alpha}},
\end{align}
where the asterisk stands for complex conjugation. Now, if $\ket{u^n_{\bf k}}$ is an eigenstate of $h({\bf k})$ with eigenvalue $\beta_n({\bf k})$, then $\hat{T} \ket{u^n_{\bf k}}$ is an eigenstate of $h(-{\bf k})$ with the same eigenvalue $\beta_n({\bf k})$ [c.f. \eqref{eq:TR_PH_symmetries}]. Thus, more directly we have
\begin{align}
\hat{r}_\alpha \hat{T} \ket{u^n_{{\bf k}^\alpha}} &= r^n_{-\bf k^\alpha} \hat{T} \ket{u^n_{{\bf k}^\alpha}}.
\end{align}
Comparing the last two expressions, we conclude that the rotation eigenvalues under TR symmetry obey
\begin{align}
r^n_{-\bf k^\alpha} = (r^m_{\bf k^\alpha})^*\;\; \mbox{for}\;\; \beta^n(-{\bf k}^\alpha) = \beta^m(\bf k^\alpha).
\end{align}
In particular, at time-reversal invariant momenta (TRIM) that are also RIM, $-{\bf k}^\alpha = {\bf k}^\alpha$ (up to a reciprocal lattice vector), if the $\beta$ eigenstates at ${\bf k}^\alpha$ are non-degenerate, the rotation eigenvalues are real, while if they are $\beta$-degenerate the rotation eigenvalues can also come in complex conjugate pairs. In the case of $C_6$-symmetric crystals, $\bf M$, $\bf M'$, and $\bf M''$ are both TRIM and RIM. Since they have eigenvalues of $\pm1$, TR symmetry does not impose restrictions on them. Regarding $\bf K$ and $\bf K'$, since $-{\bf K} = {\bf K'}$, the restriction above reads as
\begin{align}
\# K_1 &= \# K'_1 \nonumber\\
\# K_2 &= \# K'_3 \nonumber\\
\# K_3 &= \# K'_2, \nonumber
\end{align}
which, once added to the condition due to $C_6$ symmetry,
\begin{align}
\# K_j = \# K'_j,\nonumber
\end{align}
for $j=1,2,3$ leads to the relation between invariants,
\begin{align}
[K_2] = [K_3].
\label{eq:invariant_restrictions_2}
\end{align}
So, taking into account the relations between invariants in \eqref{eq:invariant_restrictions_1} and \eqref{eq:invariant_restrictions_2}, we see that only two invariants are necessary, since they determine the value of the remaining three under TR and $C_6$ symmetries. We take these invariants to be
\begin{align}
[M] &= \#M_1 - \# \Gamma^{(2)}_1\\
[K] &= \#K_1 - \# \Gamma^{(3)}_1.
\end{align}
The topological classes in TR invariant crystals with $C_6$ symmetry can then be specified by the two invariants above. The classification thus lies on a two-dimensional vector space specified by the vector
\begin{align}
\chi^{(6)} = ([M],[K]),
\end{align}
for $[M]$, $[K] \in \mathbb{Z}$.
Finally, we impose the constraints on the invariants due to chiral symmetry. Under this symmetry, if $\ket{u^n_{\bf k}}$ is an eigenstate of $h({\bf k})$ with eigenvalue $\beta_n({\bf k})$, then $\Pi \ket{u^n_{\bf k}}$ is an eigenstate of $h({\bf k})$ with eigenvalue $-\beta_n({\bf k})$ [c.f. \eqref{eq:chiral_symmetry}], i.e., $\ket{u^n_{\bf k}}$ and $\Pi \ket{u^n_{\bf k}}$ are partners on opposite sides of the $\beta$ spectrum (having opposite energies). Now let us consider what happens if $[\hat{r}_\alpha, \Pi]=0.$ In this case we have
\begin{align}
\hat{r}_\alpha \Pi \ket{u^n_{\bf k^\alpha}} &= \Pi \hat{r}_\alpha \ket{u^n_{\bf k^\alpha}}\nonumber\\
&= \Pi r^n_{\bf k^\alpha} \ket{u^n_{\bf k^\alpha}}\nonumber\\
&= r^n_{\bf k^\alpha} \Pi \ket{u^n_{\bf k^\alpha}}.
\end{align}
Thus, the rotation eigenvalues come in pairs, one on each side of the gap. Now, since $\hat{r}_\alpha$ is a constant operator (i.e. independent of the crystal momentum), its spectrum is the same at any $\alpha$-fold RIM. Thus, the total number of states over both negative and positive $\beta$ bands corresponding to a particular rotation eigenvalue also has to be constant across all the $\alpha$-fold RIM. Thus, if $[\hat{r}_\alpha, \Pi]=0$ we have $2 \# k^{(\alpha)}_i = 2 \#\Gamma^{(\alpha)}_i$, for all $i \in 1,\ldots, \alpha$ which leads to trivial invariants,
\begin{align}
[k^{(\alpha)}_i] = 0\;\; \mbox{if}\;\; [\hat{r}_\alpha, \Pi]=0
\end{align}
for $i \in 1,\ldots, \alpha$. In particular, our model has operators that obey
\begin{align}
[\hat{r}_2,\Pi] \neq 0,\;\;
[\hat{r}_3,\Pi] = 0\nonumber
\end{align}
and we verify that it has $[K]=0$ for all ratios $c_{int}/c_{ext}$. Thus, our structure is topologically characterized by the only invariant $[M]$, which can take integer values. In our model we find
\begin{align}
[M] = \left\{ \begin{array}{c}
0\quad \text{for } |c_{int}/c_{ext}| > 1\\
2\quad \text{for }|c_{int}/c_{ext}| < 1
\end{array}\right..
\end{align}
The transition at $c_{int}/c_{ext}=1$ occurs by closing the bulk $\beta$ gap at the $\bf \Gamma$ point of the BZ. This transition point corresponds to the usual honeycomb lattice, which is well known in the context of graphene to have Dirac cones at $\bf K$ and $\bf K'$. The difference in our formulation resides exclusively in our unit cell definition having six instead of two degrees of freedom (see Fig. \ref{Fig1}d). The $\beta$ bands in our model are shown in Fig. \ref{fig:BZ} for the trivial and non-trivial phases, as well as at the transition point.
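As a rough numerical cross-check of $[M]$, one can diagonalize the Bloch Hamiltonian at ${\bf \Gamma}$ and ${\bf M}$ and count the negative-$\beta$ states with $\hat{r}_2$ eigenvalue $+1$, as in the sketch below. It reuses the illustrative \texttt{h\_bloch} of the previous sketch, and the coordinates chosen for ${\bf M}$ are an assumption, so the resulting counts are indicative only and depend on the assumed connectivity and basis ordering.
\begin{verbatim}
# Illustrative evaluation of [M] = #M_1 - #Gamma^(2)_1 for the sketch above.
s0 = np.eye(2)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
r6 = np.zeros((6, 6))
r6[0:2, 2:4] = s0; r6[2:4, 4:6] = s0; r6[4:6, 0:2] = sigma_x
r2 = np.linalg.matrix_power(r6, 3)           # twofold rotation, r2 = r6^3

def n_plus_one(k, c_ext, c_int):
    beta, U = np.linalg.eigh(h_bloch(k, c_ext, c_int))
    occ = U[:, beta < 0]                     # negative-beta (occupied) states
    rot = np.linalg.eigvals(occ.conj().T @ r2 @ occ)
    return int(np.sum(np.isclose(rot.real, 1.0, atol=1e-6)))

M = np.pi * np.array([1.0, -1.0 / np.sqrt(3)])   # one choice of the M point
G = np.array([0.0, 0.0])
for c_int in (0.5, 2.0):                     # regimes c_int < c_ext and > c_ext
    print(c_int, n_plus_one(M, 1.0, c_int) - n_plus_one(G, 1.0, c_int))
\end{verbatim}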
\subsection{Weak invariants}
\label{sec:weak_invariants}
In addition to the bulk invariants described above, crystalline systems have two additional weak $Z_2$-valued topological invariants, given by
\begin{align}
\nu_i=\frac{1}{2\pi}\oint_{\mathcal{C}_i} \tr(\A)\;\;\mbox{mod 1},
\label{eq:weak_invariant}
\end{align}
where $\A^{mn}({\bf k})=-i\braket{u_{\bf k}^m}{d u_{\bf k}^n}$ is the Berry connection of negative $\beta$ bands $m$ and $n$, and $\mathcal{C}_i=\pi {\bf b_i} + s \epsilon_{ij} {\bf b_j}$ is a closed path on the boundary of the BZ along the direction of the reciprocal lattice vector $\epsilon_{ij} {\bf b_j}$. These invariants form a $Z_2$-valued reciprocal lattice vector
\begin{align}
{\bf G}_\nu = 2\pi (\nu_1 {\bf b_1} + \nu_2 {\bf b_2})
\end{align}
which indicates the existence of weak topological insulators along the direction ${\bf G}_\nu$. However, in $C_3$-symmetric systems such as this one, this invariant is always zero, as can be seen as follows. The reciprocal lattice vectors ${\bf b_1} =(1,-1/\sqrt{3})$ and ${\bf b_2} =(0,2/\sqrt{3})$ change, upon a $2\pi/3$ rotation, as $R_3{\bf b_1} = {\bf b_2}$ and $R_3{\bf b_2} = -{\bf b_1}-{\bf b_2}$. Now, $C_3$ symmetry requires ${\bf G}_\nu$ to remain invariant under a $C_3$ rotation. Performing this rotation
\begin{align}
R_3{\bf G}_\nu &= 2\pi \left[ \nu_1 {\bf b_2} + \nu_2 (-{\bf b_1}-{\bf b_2})\right]\nonumber\\
&= 2\pi \left[ -\nu_2 {\bf b_1} + (\nu_1-\nu_2) {\bf b_2}\right] \nonumber
\end{align}
we conclude that $\nu_1 = -\nu_2$ and $\nu_2 = \nu_1-\nu_2$ mod 1, or $3 \nu_1=0$ mod 1. Since the $Z_2$-valued invariants can only take the values $0$ or $1/2$, it follows that $\nu_1=\nu_2=0$.
\subsection{Zero energy modes: chiral charge and topological protection}
\label{sec:chiral_charge}
A physical consequence of our photonic crystal being in the non-trivial phase is the existence of corner-localized modes pinned at zero $\beta$, which are topologically protected only at $2\pi/3$ corners. A topological argument can be made which explains the existence of these modes in the non-trivial phase, and which is easy to picture. Consider Fig. \ref{fig:chiral_charge}. In the non-trivial phase, $c_{int} < c_{ext}$ (see Fig. \ref{fig:chiral_charge}a for a configuration in the non-trivial phase). Even though a physical system will never have $c_{int} = 0$, as this would represent infinitely large unit cells, any system in the non-trivial phase can be adiabatically connected to the system having $c_{int} = 0$ without closing the energy gap. Thus, the crystal in the limit $c_{int}=0$ is also in the non-trivial phase $[M]=2$. In this limiting case, we can read off the numbers $N_\pm$ of zero-energy modes of each chirality per edge or corner unit cell by direct counting. There is one zero-energy mode at each uncoupled waveguide in Fig. \ref{fig:chiral_charge}b. We see that at edge unit cells we have two zero modes, one of each chirality (i.e., one `orange' and one `blue'). Thus, $N_+=N_-=1$, and the chiral charge $\mathcal{N}:=N_+-N_-$ vanishes. At $2\pi/6$ corners we have four zero modes, two of each chirality (i.e., two `orange' and two `blue'). Thus, $N_+=N_-=2$, which also leads to $\mathcal{N}=0$. Finally, at $2\pi/3$ corners, there are three zero modes, two of one chirality and one of the other (i.e., two `orange' and one `blue' at the upper right corner and two `blue' and one `orange' at the lower left corner). Thus, $N_+=2$ and $N_-=1$ or vice versa, which results in $|\mathcal{N}|=1$.
We now turn $c_{int}$ back on to a non-zero value, $c_{int}>0$. These couplings hybridize some of the zero-energy modes, splitting their energies away from zero. This energy splitting, however, must conform to the restrictions imposed by chiral symmetry. Concretely, zero-energy modes hybridize only in pairs that have canceling total chirality. To see how this is the case, let us consider the basis in which the chiral operator is diagonal,
\begin{align}
\Pi = \left( \begin{array}{cc}
\mathbb{I}_{3\times 3} & 0\\
0 & -\mathbb{I}_{3\times 3}
\end{array}
\right).
\end{align}
Pictorially, we have assigned the sector with chiral eigenvalue or `chiral charge' of $+1$ ($-1$) to orange (blue) waveguides. In this basis, chiral symmetry \eqref{eq:TR_PH_symmetries} implies that the Hamiltonian has the form
\begin{align}
h({\bf k}) = \left( \begin{array}{cc}
0 & q({\bf k})\\
q^\dagger({\bf k}) & 0
\end{array}
\right)
\end{align}
where $q({\bf k})$ is a $3 \times 3$ matrix. From the off-diagonal form of the Hamiltonian it follows that there is no coupling between waveguides belonging to the same chiral sector. All couplings exist only between waveguides of opposite chiral sectors. Thus, if initially there are $N_+$ and $N_-$ zero modes, only $\min(N_+,N_-)$ pairs of them can hybridize once we turn on $c_{int}$, leaving behind $|\mathcal{N}|=|N_+-N_-|$ modes still pinned at $\beta = 0$.
In our system it follows then that only $2\pi/3$ corners have one robust mode pinned at $\beta=0$, while edges and $2\pi/6$ corners have none.
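A toy numerical check of this counting argument, independent of the lattice details: for any Hamiltonian with the off-diagonal (chiral) block structure above and a generic block $q$ of size $N_+\times N_-$, at least $|N_+-N_-|$ eigenvalues are pinned at zero. A minimal sketch with assumed random entries:
\begin{verbatim}
import numpy as np

# Toy check: a chiral Hamiltonian H = [[0, q], [q^T, 0]] with q of shape
# (Np, Nm) has at least |Np - Nm| exact zero modes, as at a 2pi/3 corner.
rng = np.random.default_rng(0)
Np, Nm = 2, 1                     # two 'orange' and one 'blue' zero modes
q = rng.normal(size=(Np, Nm))
H = np.block([[np.zeros((Np, Np)), q],
              [q.T, np.zeros((Nm, Nm))]])
evals = np.linalg.eigvalsh(H)
print(np.sum(np.isclose(evals, 0.0, atol=1e-12)))   # prints 1 = |Np - Nm|
\end{verbatim}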
To complete the argument, we show what happens in the opposite limiting case. In Fig. \ref{fig:chiral_charge}c the photonic crystal is in the trivial phase. It is adiabatically connected to the crystal shown in Fig. \ref{fig:chiral_charge}d, which has $c_{ext}=0$. Notice that in this limiting case there are no uncoupled waveguides. The eigenmode energies are equally gapped at each unit cell, with no special in-gap states at either edges or corners.
\subsection{Animations}
\label{sec:animations}
In this supplementary section, we present animations corresponding to the experimental data presented in the text, together with corresponding beam-propagation simulations.
The first animation (Movie1.gif) is an experimental result of optical propagation through the $C_{6}$-symmetric photonic lattice with $L/s=2.61$ (corresponding to Fig. \ref{Fig1}c) when the bottom corner waveguide mode was initially excited, for a range of wavelengths (Fig. \ref{Fig4} first row). The oscillation of the light intensity at the output facet is measured in steps of 5 nm from 1450 nm to 1650 nm; it occurs due to the beating between the trivial defect modes. The oscillation frequencies of all three waveguide modes are the same.
The second animation (Movie2.gif) is a similar experimental result in which the waveguide one site away from the lower-most corner waveguide was initially excited (Fig. \ref{Fig4} second row). The oscillation frequency of the corner waveguide is double the frequency of the other two, which occurs due to the beating between the topological mid-gap mode and the trivial defect modes.
The third animation (Movie3.gif) is a beam-propagation simulation that corresponds to the case of the first animation (Movie1.gif), where the bottom corner waveguide mode was initially excited. The parameters of the simulation are: $\Delta n = 4.5\times 10^{-3}$, and the radii of the major and minor axes are 4.3$\mu$m and 3.6$\mu$m, respectively. The fourth animation (Movie4.gif) is a similar simulation result that corresponds to the case of the second animation (Movie2.gif), with the same simulation parameters. These beam-propagation simulations show good agreement with the experimental results. In addition, the fifth animation (Movie5.gif) is a beam-propagation simulation showing how the beam evolves along the $z$ axis of the sample when the initial beam is incident on the lower-most corner waveguide, with the same simulation parameters as above. It shows the same oscillation frequencies for all three waveguide modes as in the first animation (Movie1.gif). The last animation (Movie6.gif) is a similar simulation of beam propagation along the $z$ axis that corresponds to the case of the second animation (Movie2.gif), with the same simulation parameters as above.
\newpage
\begin{figure*}
\includegraphics[width=160mm]{lattice_chiral.jpg}
\caption{\textbf{$C_{6}$ symmetric photonic lattices and band structures}. Each column corresponds to lattices having different $L/s$ ratio, thus belonging to different topological phases: (\textbf{left}) trivial, with $L/s=3.53$, (\textbf{center}) critical at $L/s=3.00$, and (\textbf{right}) non-trivial, with $L/s=2.61$. \textbf{a-c}, Cross-sectional microscope images of the input facet of the photonic waveguide lattices. Light propagates through the structure along the axis perpendicular to the page. \textbf{d-f} Scaled diagrams of the lattices. Green hexagon in \textbf{d} delimits a unit cell. Black thin (red thick) lines represent intra-cell (extra-cell) couplings of strength $c_{int}$ ($c_{ext}$) in the tight-binding approximation. Color of waveguides represents their chiral charge. \textbf{g-i} Band dispersion calculated using the tight-binding approximation for a configuration with closed boundaries along one direction and open along the other one for crystals with $25$ waveguides along the open direction. The mid-gap bands in \textbf{i} (shown in thick, red lines) have eigenstates localized at edges.}
\label{Fig1}
\end{figure*}
\newpage
\begin{figure*}
\includegraphics[width=160mm]{corners.jpg}
\caption{\textbf{Numerically calculated density of states (DOS) and eigenmode probability density functions (PDF) of the defect-bound and edge modes using tight-binding approximation for hexagonally shaped lattices (i.e., with full open boundaries). a,} DOS of lattice in the trivial phase ($L/s=3.53$.) \textbf{b,} DOS of lattice in the non-trivial phase ($L/s=2.61$.) Inset: Enlarged DOS around $\beta=0$. We used a larger system size for this numerical calculation (127 unit cells simulated, 19 in experiment) for clearer isolation of the defect-bound modes. Inset labels correspond to PDF indicated in \textbf{c-e}. \textbf{c,} Combined PDF of the six topologically-protected defect-bound modes. \textbf{d,} Combined PDF of the twelve edge modes. \textbf{e,} Combined PDF of the twelve unprotected defect-bound modes. Both the protected and the unprotected defect modes are localized at the \textit{corner unit cells}. However, only unprotected modes occupy the \textit{corner waveguides}. In \textbf{c-e}, the $\pm$ signs indicate the chirality eigenvalues over the subspace spanned by the corresponding edge and corner modes.}
\label{Fig2}
\end{figure*}
\newpage
\begin{figure*}
\includegraphics[width=160mm]{confinement.jpg}
\caption{\textbf{Experimentally measured evolution of diffracted light at the output facet at different wavelengths. a-c,} Image of the diffracted light in the trivial phase ($L/s=3.53$) measured at the output facet. \textbf{d-f,} Image of the diffracted light at the critical point ($L/s=3.00$). \textbf{g-i,} Image of the diffracted light in the non-trivial phase ($L/s=2.61$). Columns correspond to injection of light with wavelengths $\lambda=$1450 (left column), 1550 (center column) and 1650 nm (right column). White arrows indicate the position of light injection. In the trivial phase and at the critical point, light increasingly scatters into the bulk as wavelength increases. On the other hand, in the non-trivial phase, light is kept localized near its injection corner within the wavelength range of measurement. In addition, beating of intensities between the corner waveguides enclosed by the white dashed box in \textbf{i} is observed as a function of wavelength (see Fig. \ref{Fig4}).}
\label{Fig3}
\end{figure*}
\newpage
\begin{figure*}
\includegraphics[width=160mm]{beating.jpg}
\caption{\textbf{Measurements of light intensity at the corner waveguides of the output facet in the non-trivial phase and estimation of its beating frequencies as a function of wavelength. a, } Diagram of the input facet of the waveguide array in non-trivial phase, zoomed-in around the corner where the light is initially injected and localized throughout its propagation (Fig. \ref{Fig3}i). The arrow indicates the waveguide where light was injected for the measurements in (b-d). \textbf{b-d,} Measured light intensities at the output facet at waveguides to the left of the corner (green), at the corner (blue), and to the right of the corner (red), respectively, for light injection as shown in (a). \textbf{e,} Diagram of the input facet of the waveguide array. The arrow indicates the initially excited waveguide for measurements in (f-h). \textbf{f-h,} Measured light intensities at each waveguide on the output facet for light injection as shown in (e). Solid lines are least-squares fit using a sinusoidal function to measure the beating frequencies. When the light is injected at the corner waveguide, the ratio of beating frequencies between the waveguides on and off the corner is approximately 1, which indicates that only the trivial defect modes are excited. On the other hand, when light is injected at one waveguide away from the corner, the corresponding ratio is approximately 2, which indicates that both trivial and topological defect modes are excited, and this topological mode has $\beta = 0$.}
\label{Fig4}
\end{figure*}
\newpage
\begin{figure*}[t]
\includegraphics[width=160mm]{classification.jpg}
\caption{\textbf{a} Brillouin zone of the photonic crystals with $C_6$ symmetry and its rotation invariant points. \textbf{b} Unit circle in the complex plane and the rotation eigenvalues at {\bf M} (left) and {\bf K} (right).}
\label{fig:classification}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=160mm]{BZ.jpg}
\caption{Brillouin zones for the crystal in the trivial (left), critical (center), and non-trivial (right) phases.}
\label{fig:BZ}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{chiral_charge.pdf}
\caption{\textbf{a} A configuration of our photonic crystal in the non-trivial phase. \textbf{b} Configuration as in \textbf{a} but with $c_{int} = 0$. Each uncoupled waveguide hosts a zero-$\beta$ energy mode. Tight-binding Hamiltonians in both {\bf a} and {\bf b} are in the same non-trivial phase $[M]=2$. \textbf{c} A configuration of our photonic crystal in the trivial phase. \textbf{d} Configuration as in \textbf{c} but with $c_{ext}=0$. Tight-binding Hamiltonians in both {\bf c} and {\bf d} are in the trivial phase $[M]=0$. Orange and blue colors represent chiral charge of $\pm1$ respectively.}
\label{fig:chiral_charge}
\end{figure}
\clearpage
\section{Introduction}
For sparse or compressible signals, Donoho $et~al.$ \cite{2005Stable} proposed the compressive sensing theory, which enables efficient data sampling at a much lower rate than conventionally required. In its standard formulation, it can be modeled as follows.
$\bf{Notations}$: In this paper, matrices are represented by capital letters. For a matrix $A$, $A_{*i}$, $A_{i*}$ and $A_{ij}$ denote the $i$-th column, the $i$-th row and the $(i,j)$-th element of $A$, respectively; $\|\cdot\|_i$ represents the $i$-norm of a vector. All vectors are column vectors unless transposed to a row vector by a superscript $T$.
Compressive sensing can be formulated as:
\begin{eqnarray}
b = Ax+\epsilon
\end{eqnarray}
where $x\in R^n$ is an unknown vector, $b\in R^m$ is an observed vector, $A\in R^{m\times n}$ is called the compressive sensing matrix (usually $m\ll n$), and $\epsilon$ is the unknown disturbance term or noise. Obviously, this is an underdetermined system of equations that does not have a unique solution. The least-squares method is usually used to solve the problem.
\begin{equation}
\min \limits_x \frac{1}{2}\|Ax - b\|_2^2
\end{equation}
To suppress overfitting, some scholars \cite{2010On,2010Sparse,1999Sparse,2017A} added the $L_0$-norm regularizer to introduce sparse prior information.
\begin{equation}\label{eq:L0}
\min\limits_x \frac{1}{2}\| Ax - b\|_2^2 + \beta\|x\|_0
\end{equation}
where $\|x\|_0$ denotes the number of nonzero components of $x$ and $\beta>0$ is a hyperparameter to control the tradeoff between accuracy and sparsity. Many methods have been developed to solve this problem, such as the penalty decomposition method \cite{2012Sp}, iterative hard threshold method \cite{2008Iterative}, fixed-point continuation method (FPC) \cite{2008Fixed}, approximate gradient homotopy method (PGH) \cite{2012A} and reweighted $L_0$ minimization method \cite{6743943,2015Zhao}.
However, Eq. (\ref{eq:L0}) is an NP-hard optimization problem \cite{1995Sparse}; it is highly discrete, so it is difficult to solve with an exact algorithm. Thus, we need to seek an effective approximate solution to this problem. The $L_1$-norm regularizer is introduced as a substitute for the $L_0$-norm. Such an approximation can be traced back to a wide range of fields, such as seismic traces \cite{1979Deconvolution}, sparse signal recovery \cite{2001Atomic}, sparse model selection in statistics (LASSO) \cite{1996Regression}, and image processing \cite{1970Total}. Many scholars have attempted to find the optimal solution to the following problem:
\begin{equation}
\min\limits_x \frac{1}{2}\|Ax - b\|_2^2 + \beta \| x \|_1
\end{equation}
It is a continuous convex optimization problem that is nondifferentiable only where some component of $x$ vanishes. It can usually be transformed into a second-order cone programming problem and then solved by methods such as interior-point methods. However, for large-scale problems, the interior-point method is very time-consuming due to its high algorithmic complexity. For this reason, many researchers have solved the problem with simple gradient-based methods. Among them, the iterative shrinkage-thresholding algorithm (ISTA) proposed by Chambolle $et~al.$ \cite{Chambolle1998,2003An} has attracted much attention. ISTA updates $x$ through a shrinkage/soft-thresholding operation in each iteration.
\begin{equation}
x^{k + 1} = soft_{\beta t}[x^k - 2tA^T( {Ax^k - b} )]
\end{equation}
where $k$ represents the $k$-th iteration, $t$ is an appropriate stepsize and $soft$ is the soft threshold operation function.
\begin{equation}
soft_\theta (x_i) = sign(x_i)\max( |x_i| - \theta,\, 0 )
\end{equation}
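For illustration, a minimal NumPy sketch of the ISTA iteration above may look as follows; the step size $t$ and the iteration count are assumed choices and are not taken from the original references.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the ISTA iteration stated above; the step size t
# and the iteration count are assumed choices, not taken from the references.
def soft(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista(A, b, beta, n_iter=200):
    t = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # assumed step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - 2.0 * t * A.T @ (A @ x - b), beta * t)
    return x
\end{verbatim}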
Recently, iteratively weighted shrinkage-thresholding algorithms (IWSTA) have attracted much interest; they outperform their unweighted counterparts in most cases. In these methods, the decision variables and weights are optimized alternately, or the decision variables are optimized under heuristically chosen weights. The problem can be written as:
\begin{equation}
\min \limits_{x,~w\geq0} \frac{1}{2}\| Ax - b\|_2^2 + \beta \sum\limits_{i = 1}^n w_i |x_i|_1
\end{equation}
The method assigns different weights to each component of $x$ in the iterative process and then updates $x$. In this way, each subproblem is convex and easy to solve. Many algorithms have been developed to solve it. For example, the iterative support detection (ISD) method \cite{2009Wang} assigns a weight of 0 to components in the support set and a weight of 1 to the other components during iteration, in which the support set at each iteration consists of all components whose absolute value is greater than a threshold. Zhao $et~al.$ \cite{2015Zhao} proposed a new method to calculate the optimal $w$ via linear programming duality, based on the properties of the weighted range space. It alternately solves the weighted primal problem with fixed weights to obtain a new solution $x$, and then solves the dual problem to obtain a new weight $w$. More variants are available in \cite{6743943,David2006For} and the references therein. The details of some examples are listed in Tab. \ref{tab:existmethods}.
\begin{table}[width=.9\linewidth,cols=4,pos=h]\label{tab:exist}
\caption{Variants of weighted method.}
\label{tab:existmethods}
\begin{tabular*}{\tblwidth}{@{} LCCCCC@{} }
\toprule
Author & Method & Weights & Min. & Max. & Regularizer\\
\midrule
Chambolle $et~al.$ \cite{Chambolle1998} & ISTA & 1 & 1&1 &$\sum\limits_{i = 1}^n | x_i| $ \\
Candes $et~al.$ \cite{EJ2008} & IRL1 &
$\frac{1}{|x_i^{k - 1}| + \delta }$& 0&$\frac{1}{\delta}$& $\sum\limits_{i = 1}^n \log (| x_i| + \delta) $ \\
Foucart $et~al.$ \cite{2009Sparsest} & WLP &$\frac{1}{( {|x_i^{k - 1}| + \delta } )^{1 - p}}$& 0&$\frac{1}{\delta^{1-p}}$& $\frac{1}{p}\sum\limits_{i = 1}^n ( |x_i| + \delta ) ^p$ \\
Wipf $et~al.$ \cite{2010Iterative} & NW4 & $\frac{1+(|x^{k - 1}| + \delta)^{p + 1}}{( | x^{k - 1}| + \delta )}^{p + 1}$ & 0&1&$\sum\limits_{i = 1}^n ( |x_i | - \frac{1}{(x_i+ \delta)^p})$ \\
\bottomrule
\end{tabular*}
\end{table}
There is a drawback to the above methods: the weights do not meet the usual definition of weights, whose sum should be one, and so they can be distributed over a very large range (see Tab. \ref{tab:existmethods}). Such weights are difficult to interpret and can lead to an inaccurate result.
This paper proposes a new IWSTA type, called entropy regularized IWSTA (ERIWSTA), which obtains easily computable and interpretable weights. The weights automatically fall in the range of [0, 1], and the summation is
one so that they can be considered a probability of the contribution of each attribute to the model.
This is achieved by adding an entropy regularizer
to the cost function and then using the Lagrange multiplier
method to solve the problem. Experiments are executed for CT image restoration, and the results show that the proposed algorithm performs better in terms of both convergence speed and restoration accuracy compared with some state-of-the-art methods.
\section{Methodology}
The main idea of the IWSTA type algorithms is to define a weight for each attribute based on the current iteration ${x^k}$ and then use them to obtain a new $x$. In this section, we introduce an entropy regularizer to the cost function and obtain the following optimization model:
\begin{eqnarray}\label{eq:mainmodel}
\min &&\Phi _{\beta ,\gamma }(x,w) = F(x)+ \beta G_{\gamma}(x,w)\nonumber\\
s.~t. && w_i\geq 0, ~\sum_{i=1}^n w_i =1\nonumber\\
where&&F(x) = \frac{1}{2}\|Ax-b\|_2^2\nonumber\\
&&G_{\gamma}(x,w)=\sum_{i=1}^n w_i |x_i|+\gamma\sum_{i=1}^n {w_i}\ln {w_i}
\end{eqnarray}
where
$\gamma\geq0$ is a given hyperparameter.
As can be seen, if we do not use the entropy regularizer, $w$ can easily be solved as $w_i=1$ if $|x_i|=\min\{|x_1|, ..., |x_n|\}$, and 0 otherwise\footnote{The update rule can be easily explained by an example as \begin{equation}
\begin{aligned}
\min~\{4, -1, 5\}= min&~4w_1-1w_2+5w_3\\
s.~t.&~ w_1, w_2, w_3\ge0\\
&~w_1+w_2+w_3=1\nonumber
\label{eq9}
\end{aligned}
\end{equation}
The solution is $w_1=0$, $w_2=1$ and $w_3=0$, in which $w_2$ corresponds to the minimum value of \{4, -1, 5\}. It is very similar to the computation of the weights in the k-means algorithm.}. This shows the simple fact that only one element of $w$ is 1 and the others are 0, which is grossly incompatible with the actual problem. Then, we add the negative entropy of the weights to measure the uncertainty of the weights and encourage more attributes to contribute to the signal reconstruction, because it is well known from information theory that $\sum_{i=1}^n {w_i}\ln {w_i}$ is minimized when
\begin{equation}
w_1=w_2=...=w_n
\end{equation}
In what follows, we alternately solve for $w$ and $x$ in Eq. (\ref{eq:mainmodel}).
\subsection{Update rule for $w$}
To solve for $w$, we introduce the Lagrange multiplier $\lambda$ and obtain the following Lagrange function. Note that $F(x)$ is a constant with respect to $w$, so we only construct the Lagrange function on $G_{\gamma}(x,w)$.
\begin{equation}
L(w,\lambda) = G_{\gamma}(x,w) + \lambda(\sum_{i = 1}^n w_i - 1),
\end{equation}
Setting the partial derivatives of $L(w,\lambda)$ with respect to $w_i$ and $\lambda$ to zero, we obtain the following two equations.
\begin{eqnarray}
\frac{\partial L(w,\lambda)}{\partial w_i}&=& |x_i| + \gamma (1 + \ln {w_i})+\lambda = 0\label{eq:Lag1} \\
\frac{\partial L(w,\lambda)}{\partial \lambda}&=& \sum_{i = 1}^n w_i - 1=0\label{eq:Lag2}
\end{eqnarray}
From Eq. (\ref{eq:Lag1}), we know that
\begin{equation}\label{eq:wi}
w_i = \exp(- \frac{\lambda}{\gamma})\exp(-\frac{|x_i|}{\gamma})
\end{equation}
Substituting Eq. (\ref{eq:wi}) into Eq. (\ref{eq:Lag2}), we have
\begin{equation}
\sum_{i = 1}^n {w_i} = \exp(- \frac{\lambda}{\gamma})\sum_{i = 1}^n \exp(-\frac{|x_i|}{\gamma}) = 1
\end{equation}
It follows that
\begin{equation}
\exp(- \frac{\lambda}{\gamma})=\frac{1}{\sum_{i = 1}^n \exp(-\frac{|x_i|}{\gamma})}
\end{equation}
Substituting this expression to Eq. (\ref{eq:wi}), we obtain that
\begin{equation}
w_i = \frac{\exp(-\frac{|x_i|}{\gamma})}{\sum_{l = 1}^n \exp(-\frac{|x_l|}{\gamma})}
\end{equation}
Such weights certainly meet the constraints that $w_i\geq0$ and $\sum_{i = 1}^n w_i=1$.
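Numerically, this update is a softmax of $-|x_i|/\gamma$ and can be implemented stably as in the following illustrative snippet.
\begin{verbatim}
import numpy as np

# Entropy-regularized weight update: w_i proportional to exp(-|x_i|/gamma).
# The max-shift only improves numerical stability and cancels in the ratio.
def update_weights(x, gamma):
    z = -np.abs(x) / gamma
    z -= z.max()
    w = np.exp(z)
    return w / w.sum()
\end{verbatim}
For large $\gamma$ the weights approach the uniform distribution $w_i=1/n$, while as $\gamma\to0$ they concentrate on the component with the smallest $|x_i|$, recovering the unregularized behavior discussed above.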
\subsection{Update rule for $x$}
Inspired by the work of ISTA \cite{2009A}, a similar approach was adopted for the iterative update of $x$.
The construction of a majorization is an important step in
obtaining the updating rule.
\begin{definition}\label{definition:surrogate}(Majorization)
Denote $\psi(x|x^k)$ as a majorization for ${\Psi}(x)$ at $x^k$ (fixed) if $\psi(x^k|x^k)={\Psi}(x^k)$ and
${\psi}(x|x^k)\geq {\Psi}(x)$.
\end{definition}
Clearly, ${\Psi}(x)$ is nonincreasing under the updating rule $x^{k+1}=\min_x
\psi(x|x^k)$ because
\begin{eqnarray}
{\Psi}(x^{k+1})\leq \psi(x^{k+1}|x^k)\leq \psi(x^k|x^k)={\Psi}(x^k)
\end{eqnarray}
Then, we can construct the majorization for $F(x)$.
\begin{proposition}
Obviously, $F(x)$ is a differentiable convex function with a Lipschitz continuous gradient, and it has a majorization function at the fixed current iterate $x^k$ given by
\begin{eqnarray}
f(x,x^k) = F(x^k)+[\nabla F(x^k)]^T(x-x^k)+\frac{L}{2}\|x-x^k\|_2^2
\end{eqnarray}
where $L$ is larger than or equal to the maximum eigenvalue of $A^TA$.
\end{proposition}
\begin{proof}
It is well-known that
\begin{equation}
F(x) = \frac{1}{2}\|Ax-b\|_2^2=F(x^k)+[\nabla F(x^k)]^T(x-x^k)+\frac{1}{2}(x-x^k)^TA^TA(x-x^k)
\end{equation}
We compare $F(x)$ and $f(x,x^k)$ and find that only the last terms are different.
By the eigendecomposition of the symmetric positive semidefinite matrix $A^TA$, we know that $A^TA=Q^T\Sigma Q$, in which $Q$ is an orthogonal matrix consisting of all eigenvectors and $\Sigma$ is diagonal, consisting of all eigenvalues. Let $z=x-x^k$; then
\begin{equation}
z^T(A^TA)z=z^TQ^T\Sigma Qz\leq L\|Qz\|_2^2=L\|z\|_2^2
\end{equation}
Moreover, $z^TA^TAz = L\|z\|_2^2 = 0$ when $x=x^k$, so the two functions coincide there. Thus, the proof is established.
\end{proof}
Now, we obtain the majorization for the cost function $\Phi(x,w)$ on $x$.
\begin{equation}
\phi(x,x^k)=f(x,x^k)+\beta G_{\gamma}(x,w)
\end{equation}
which can be reorganized as
\begin{eqnarray}
\phi(x,x^k)&=& \frac{L}{2}\| {x - [x^k - \frac{1}{L}\nabla F ( x^k )]} \|_2^2 + \beta G_{\gamma}(x,w)\nonumber\\
&=&\sum_{i=1}^n\{\frac{L}{2}\| x_i - [x^k - \frac{1}{L}\nabla F ( {{x^k}}) ]_i \|_2^2 + \beta w_i|x_i|\}+constant
\end{eqnarray}
We find that the variables of the majorization are separable such that their minimizations can be easily obtained on each $x_i$, respectively, as follows:
\begin{equation}
x_i^{k + 1} = soft_{\beta t w_i}\big\{[x^k - 2tA^T( {A{x^k} - b} )]_i\big\}
\end{equation}
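Putting the two update rules together, one possible implementation of the resulting ERIWSTA loop is sketched below; it reuses the illustrative \texttt{soft} and \texttt{update\_weights} functions above, and the step size $t=1/L$ and iteration count are assumed choices.
\begin{verbatim}
# One possible ERIWSTA loop, alternating the entropy-regularized weight
# update with the componentwise soft-threshold step; t = 1/L is an assumed
# step size and soft / update_weights are the illustrative sketches above.
def eriwsta(A, b, beta, gamma, n_iter=200):
    L = np.linalg.eigvalsh(A.T @ A).max()
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = update_weights(x, gamma)
        x = soft(x - 2.0 * t * A.T @ (A @ x - b), beta * t * w)
    return x
\end{verbatim}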
\begin{figure*}
\centering
\includegraphics[scale=.4]{1.pdf}
\caption{The original and noisy head phantom images. (a) head phantom with $256\times256$ pixels; (b) and (c) blurred images with a $5\times5$ uniform kernel and additive Gaussian noise with $\sigma=10^{-2}$ and $\sigma=10^{-3}$, respectively.}
\label{FIG:1}
\end{figure*}
\begin{figure*}
\subfigure[]{
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.60]{2-1.pdf}
\end{minipage}
\subfigure[]{
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.60]{2-2.pdf}
\end{minipage}}
\caption{3D profiles of the effect of $\beta$ and $\gamma$ on the MAE for different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.}
\label{FIG:hyperparameter}
\end{figure*}
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The optimal MAE value and corresponding hyperparameter (Gaussian noise with $\sigma=10^{-2}$).}
\label{tbl2}
\begin{tabular*}{\tblwidth}{@{} LLLLL@{} }
\toprule
Method & $\beta$ & $\gamma$ & $\delta$ & MAE\\
\midrule
ISTA & ${10^{-3}}$ & $-$ & $-$ & ${5.312077*10^{-7}}$ \\
WLP & ${10^{-5}}$ & ${10^{ - 10}}$& ${10^{ - 3}}$ & ${5.228672*10^{-7}}$ \\
NW4 & ${10^{-5}}$ & ${10^{ - 2}}$ & ${10^{ - 3}}$ & ${5.410231*10^{-7}}$ \\
IRL1 & ${10^{-5}}$ & $-$ & ${10^{ - 3}}$ & ${5.228672*10^{-7}}$ \\
ERIWSTA & ${10^{2}}$& ${10^{-2}}$ & $-$ & ${5.218246*10^{-7}}$ \\
\bottomrule
\end{tabular*}
\end{table}
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{The optimal MAE value and corresponding hyperparameter (Gaussian noise with $\sigma=10^{-3}$).}
\label{tbl3}
\begin{tabular*}{\tblwidth}{@{} LLLLL@{} }
\toprule
Method & $\beta$ & $\gamma$ & $\delta$ & MAE\\
\midrule
ISTA & ${10^{-3}}$ & $-$ & $-$ & ${5.122013*10^{-7}}$ \\
WLP & ${10^{-5}}$ & ${10^{ - 5}}$ & ${10^{ - 3}}$ & ${5.018339*10^{-7}}$ \\
NW4 & ${10^{-5}}$ & ${10^{ - 2}}$ & ${10^{ - 3}}$ & ${5.410231*10^{-7}}$ \\
IRL1 & ${10^{-5}}$ & $-$ & ${10^{ - 3}}$ & ${5.018340*10^{-7}}$ \\
ERIWSTA & ${10^{2}}$& ${10^{-2}}$ & $-$ & ${5.005524*10^{-7}}$ \\
\bottomrule
\end{tabular*}
\end{table}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=.5]{3-1.pdf}
}
\subfigure[]{
\includegraphics[scale=.5]{3-2.pdf}
}
\caption{Cost function versus iteration number for different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.}
\label{FIG:CostCurve}
\end{figure*}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=.5]{4-1.pdf}
}
\subfigure[]{
\includegraphics[scale=.5]{4-2.pdf}
}
\caption{MAE versus iteration number for different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.}
\label{FIG:MAECurve}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=.75]{5-1.pdf}
\caption{After 30 iterations, the denoising results of ISTA, WLP, NW4, IRL1 and ERIWSTA with Gaussian noise with $\sigma=10^{-2}$.}
\label{FIG:ImageHighNoise}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.75]{5-2.pdf}
\caption{After 30 iterations, the denoising results of ISTA, WLP, NW4, IRL1 and ERIWSTA with Gaussian noise with $\sigma=10^{-3}$.}
\label{FIG:ImageLowNoise}
\end{figure}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=.33]{6-1.pdf}
}
\subfigure[]{
\includegraphics[scale=.33]{6-2.pdf}
}
\caption{Horizontal central profiles of the restored images with different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.}
\label{fig:centralline1}
\end{figure*}
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=.33]{7-1.pdf}
}
\subfigure[]{
\includegraphics[scale=.33]{7-2.pdf}
}
\caption{Vertical central profiles of the restored images with different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.}
\label{fig:centralline2}
\end{figure*}
\section{Numerical experiments}
Numerical experiments are provided to evaluate the performance of the proposed ERIWSTA compared with ISTA, WLP, NW4 and IRL1 on the denoising problem of computed tomography (CT) images. All experiments are performed on an HP computer with a 2.5 GHz Intel(R) Core(TM) i7-4710MQ CPU with 12 GB of memory using MATLAB R2019a for coding. A simulated Shepp-Logan phantom with $256\times256$ pixels, which is commonly used in CT image analysis, was used to evaluate the algorithm performance. There are many advantages to using simulated phantoms, including prior knowledge of the pixel values and the ability to control noise. We blurred the image by using a uniform $5\times5$ kernel (applied by the MATLAB function "\emph{fspecial}") and then added Gaussian noise by the following formula. We select $\sigma=10^{ - 2}$ and ${10^{ - 3}}$ as examples of high and low noise levels for the following experiments.
\begin{equation}
x^{noise}=x^{true}+N(0,\sigma)
\end{equation}
Fig. \ref{FIG:1} shows the original and blurred-and-noisy images. Owing to its good time-frequency localization characteristics, the wavelet transform can effectively distinguish high-frequency noise from low-frequency information. Therefore, the wavelet transform is used to reduce noise. The introduction of the wavelet matrix also ensures the sparsity of the whole optimization problem. Without loss of generality, let $A = PW$, where $P$ is the predetermined system matrix describing the blurring and $W$ represents the second-order Haar wavelet matrix.
Mean absolute error (MAE) was used to measure the similarity to the true image. The value of the MAE was calculated by taking the average of the absolute differences between the restored pixel values and the true pixel values.
\begin{equation}
MAE = \frac{1}{N}{||x^{restoration} - x^{true} ||_1}
\end{equation}
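In code, this is simply the mean of the absolute pixel-wise differences (an illustrative one-liner with placeholder array names):
\begin{verbatim}
mae = np.mean(np.abs(x_restoration - x_true))   # placeholder array names
\end{verbatim}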
\subsection{Hyperparameter selection}
To select the penalty hyperparameter $\beta$ and the entropy weighted hyperparameter $\gamma$, we compare the MAE value after 100 iterations with respect to them from $10^{ - 10}$ to $10^{ 10}$. The results are shown
in Fig. \ref{FIG:hyperparameter}, demonstrating that ERIWSTA can achieve a consistently low MAE value over a wide range of $\beta$ and $\gamma$, which displays its robustness.
We also quantitatively display the optimal MAE and corresponding hyperparameters of the algorithms in Tabs. \ref{tbl2} and \ref{tbl3}. An interesting observation is that, regardless of low or high noise levels, the restoration accuracy of our algorithm is always better than the others. These optimal hyperparameters are also used in the following experiments.
\subsection{Algorithmic performance}
Fig. \ref{FIG:CostCurve} displays the cost function curves of the algorithms. As can be seen, the proposed algorithm always has a fast convergence speed, reaching a stable state early.
Fig. \ref{FIG:MAECurve} shows the MAE curves of the algorithms with respect to the number of iterations. The proposed ERIWSTA always has superior performance to the other algorithms, rapidly obtaining the minimum MAE value.
Figs. \ref{FIG:ImageHighNoise} and \ref{FIG:ImageLowNoise} show the denoising results at the given noise levels. As can be seen, all of the algorithms achieve a similar image. However, Figs. \ref{fig:centralline1} and \ref{fig:centralline2} quantitatively compare the horizontal and vertical central profiles of the restored images with those of the true phantom in the central row and column.
We can see that ERIWSTA
follows the outline of the phantom more accurately than the other algorithms.
\section{Conclusions}
In this paper, a new IWSTA type, called ERIWSTA, is proposed to solve the linear inverse problem. An entropy-weighted term is introduced to measure the uncertainty of the weights, and the Lagrange multiplier method is then used to obtain a simple solution. The experimental results on image restoration of a synthetic CT head phantom show that ERIWSTA achieves faster convergence with fewer iterations and better restoration accuracy than the other algorithms.
However, as with many existing algorithms, our algorithm also involves two main hyperparameters ($\beta$ and $\gamma$). In the future, we will focus on designing an automatic method to adjust these hyperparameters.
\section*{Acknowledgments}
This work was supported by the Fundamental Research Funds for the Central Universities (N2019006 and N180719020).
\printcredits
\subsection{Existence and uniqueness}
In \cite[Section 3]{chen2019center}, the authors establish the existence of a local curve of homoclinic solutions to $\eqref{cub}$ bifurcating from $(u,\lambda)=(0,0)$ under the assumptions
\begin{empheq}[]{align}
\label{specialcase}
\begin{split}
b(u,\lambda)=&(\lambda-1)u +b_{1}u^{3}\\
\mathcal{W}'(q)=&1+2c_{1}q
\end{split}
\end{empheq}
with $b_{1}+2c_{1}<0$. When this inequality is reversed, front-type solutions are instead obtained. The authors extend this argument to deal with more generalized $b$, including the form \eqref{b}, in \cite[Appendix~B.1]{chen2019center}. Those arguments can also be used to show the existence of local solutions under the more general assumptions of Model I and Model II. This is the content of the next theorem.
\begin{theorem}
\thlabel{existenceuniqueness}
There exists an $\epsilon_{0}>0$ and a local $C^{0}$ curve
\begin{gather*}
\mathcal{C}^{I,II}_{\textup{loc}}=\{(u^{\epsilon},\epsilon^{2}) \; : \; 0<\epsilon<\epsilon_{0}\} \subset X_{0}\times\mathbb{R}
\end{gather*}
of solutions to \eqref{cub}, corresponding to Model I or Model II, with the asymptotics
\begin{gather}
\label{smallsol}
u^{\epsilon}(x,y) = a_{1}\epsilon\textup{sech}(\epsilon x)\cos(y)+O(\epsilon^{2}) \qquad \text{in} \; \; C^{3}_{\textup{b}}(\overline{\Omega})
\end{gather}
where $a_{1} = \dfrac{2}{\sqrt{3|b_{2}+2c_{1}|}}$.
\end{theorem}
\begin{proof}
We reparametrize with $\lambda =\epsilon^{2}$ for convenience. As mentioned above, an existence result was obtained in \cite[Section~3]{chen2019center} under the conditions \eqref{specialcase}. We will follow closely that proof and focus on the places where deviations are necessary to accommodate the more general form of $\mathcal{W}$ we consider.
Let $L:= \mathcal{F}_{u}(0,0)$ and $L'$ be defined as the restriction of $L$ to $x$-independent functions ($L'$ is called the transversal linearized operator). The center manifold reduction result in \cite{chen2019center} requires that $0$ is a simple eigenvalue of $L'$. The operator $L$ corresponding to \eqref{specialcase}, Model I, or Model II is simply $\Delta+1$ as seen by the structure of \eqref{analytic_assumptions} and \eqref{specialcase}. Clearly $L'$ satisfies the requirements mentioned above. Now, the center manifold reduction given by \cite[~Theorem 1.1]{chen2019center} shows that solutions of \eqref{cub}, that lie in a sufficiently small neighborhood of the origin in $C_{\text{b}}^{2+\alpha}(\overline{\Omega})\times \mathbb{R}$ can be expressed as
\begin{gather}
\label{smallu}
u(x+\tau,y) = v(x)\varphi_{0}(y)+v'(x)\tau\varphi_{0}(y)+\Psi(v(x),v'(x), \epsilon)(\tau,y),
\end{gather}
where $v(x):= u(x,0)$, $\varphi_{0}(y)$ generates the kernel of $L'$, and $\Psi: \mathbb{R}^{3} \to C^{3+\alpha}_{\mu}(\overline{\Omega})$ is a $C^{4}$ coordinate map. Here $\mu>0$ is a positive constant depending on the largest non-zero eigenvalue of $L'$. Moreover, if $(u,\epsilon^{2}) \in C^{3+\alpha}_{\text{b}}(\overline{\Omega})\times \mathbb{R}$ is any sufficiently small solution to \eqref{cub}, then, by \cite[~Theorem 1.1]{chen2018existence}, $v$ solves the second-order ODE
\begin{gather}
\label{ODEv}
v''=f(v,v',\epsilon^{2}), \qquad \text{where} \qquad f(A,B,\epsilon^{2}):= \dfrac{d^{2}}{dx^{2}}\bigg\rvert_{x=0}\Psi(A,B,\epsilon)(x,0).
\end{gather}
Thus, we are left to show that the change in $\Psi$ resulting from the conditions of Model I or Model II does not affect the existence or general form of $u^{\epsilon}$ in \eqref{smallsol}. Let us point out that $\Psi$ inherits the following symmetry properties from the original PDE \eqref{cub}:
\begin{gather}\label{psi_sym}
\Psi(-A,-B,\epsilon) = -\Psi(A,B,\epsilon) \qquad \text{and} \qquad \Psi(A,-B,\epsilon)(-x,y) = \Psi(A,B,\epsilon)(x, y).
\end{gather}
From \eqref{ODEv} it follows that
\begin{gather} \label{f_sym}
f(-A,-B) = - f(A,B) \qquad \text{and} \qquad f(A,-B) = f(A,B).
\end{gather}
To derive an expression for $f$, we exploit \cite[Theorem 1.2]{chen2019center} to conclude that $\Psi$ admits a Taylor expansion of the form
\begin{gather}
\label{psiexpand}
\Psi(A,B,\epsilon) = \sum_{\mathcal{J}}\Psi_{ijk}A^{i}B^{j}\epsilon^{k}+\mathcal{R},
\end{gather}
where the index set $$\mathcal{J} =\{(i,j,k) \in \mathbb{N}^{3} \; : \; i+2j+k \leq 3,\; i+j+k \geq 2,\; i+j \geq 1\},$$ the coefficients $\Psi_{ijk}\in C_{\mu}^{3+\alpha}(\overline{\Omega})$, and the error term $\mathcal{R}$ is of order
$O((|A|+|B|^{1/2}+\epsilon)^4)$ in $C^{3+\alpha}_{\mu}(\overline{\Omega})$.
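For concreteness, these constraints admit exactly seven multi-indices, namely $(2,0,0)$, $(1,1,0)$, $(1,0,1)$, $(0,1,1)$, $(3,0,0)$, $(2,0,1)$, and $(1,0,2)$, so the sum in \eqref{psiexpand} contains seven monomials.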
Combining \eqref{smallu} and \eqref{psiexpand} yields
\begin{gather}\label{u_expand}
u(x,y) = (A+Bx)\varphi_{0}(y)+\sum_{\mathcal{J}}\Psi_{ijk}A^{i}B^{j}\epsilon^{k}+\mathcal{R},
\end{gather}
where $A=v(0)$ and $B=v'(0)$. For a fixed $i,j,k$ the general theory now allows us to solve for $\Psi_{ijk}$ via a hierarchy of equations of the form
\begin{empheq}[left=\empheqlbrace]{align}
\label{psi_system}
\begin{split}
&L(\Psi_{ijk})=F_{ijk} \\
& Q(\Psi_{ijk})=0,
\end{split}
\end{empheq}
where $Q$ is the projection onto $\text{ker}L'$. The $F_{ijk}$ terms are obtained by iteratively feeding truncations of \eqref{u_expand} into $\mathcal{F}^{r} - L$ where $\mathcal{F}^{r}$ is $\mathcal{F}$ precomposed with a certain cutoff function. The key point here is that the $Q$ is unchanged by our modification of $\mathcal{W}$, and the $F_{ijk}$ terms are independent of terms of the order $O(|A|+|B|^{1/2}+\epsilon)^4)$ in $C^{3+\alpha}_{\mu}(\overline{\Omega})$. Our generalized $\mathcal{W}$ introduces, for example, the extra nonlinear term $c_{2}\nabla\cdot(|\nabla u|^{4}\nabla u)$ into \eqref{cub} near $(u,\lambda)=(0,0)$. We see that applying this to \eqref{u_expand} yields only terms of order $O((|A|+|B|^{1/2}+\epsilon)^4)$. Hence, from this point on the argument for existence of solutions to \eqref{cub} carries through without change. In particular, one can solve for $\Psi_{ijk}$ in the exact manner presented in \cite[Section~3.1]{chen2019center} and \cite[Appendix~B.1]{chen2019center}.
Although the rest of the argument now follows verbatim from \cite{chen2019center}, we continue the sketch because it will help to explain some later reasoning. Having calculated $\Psi_{ijk}$, we find that $f$ takes the form
\begin{gather}
\label{f}
f(A,B,\epsilon) = \epsilon^{2}A+\dfrac{3(b_{1}+2c_{1})}{4}A^{3}+r(A,B,\epsilon)
\end{gather}
where $r \in C^3$ is an error term of the order $O(|A|(|A|+|B|^{1/2}+\epsilon)^{3}+|B|(|A|+|B|^{1/2}+\epsilon)^{2})$. Using the re-scaled variables
\begin{gather*}
x = : X/\epsilon , \qquad v(x) = : \epsilon V(X), \qquad v_{x}(x) =: \epsilon^{2}W(X)
\end{gather*}
we may now write \eqref{ODEv} as the planar system
\begin{gather}
\label{system}
\begin{cases}
V_{X}=W\\
W_{X} = V-a_{1}^{-2}V^{3}+R(V,W,\epsilon)
\end{cases}
\end{gather}
where the rescaled error $R(V,W,\epsilon) = O(|\epsilon|(|V|+|W|))$. When $\epsilon=0$ the system has the explicit homoclinic orbit
\begin{gather}
V=a_{1}\text{sech}(X) \qquad W=-a_{1}\text{sech}(X)\text{tanh}(X) .
\end{gather}
This solution crosses the $V$-axis transversely. Since \eqref{system} has the reversal symmetries
\begin{gather*}
(V(X),W(X)) \mapsto (V(-X),-W(-X))\qquad \text{and} \qquad (V(X), W(X)) \mapsto (V(X),-W(X)),
\end{gather*}
which it inherits from \eqref{f_sym} and \eqref{f}, this intersection will persist for small $\epsilon$, so we obtain a family of homoclinic solutions. Undoing the scaling and appealing to \cite[~Theorem 1.1]{chen2019center} shows that the functions in the family \eqref{smallsol} are indeed solutions to \eqref{cub}.
\end{proof}
We now establish some qualitative properties of small solutions to \eqref{cub}.
\begin{theorem}
\thlabel{Positive}
Suppose that $(u, \epsilon^{2}) \in X_{0} \times \mathbb{R}$ is a solution to \eqref{cub} under the assumptions of Model I or Model II. There exists $\delta_{0}>0$ such that if $|u|_{3+\alpha}+\epsilon^{2} < \delta_{0}$, then $(u, \epsilon^{2}) \in \mathcal{C}^{I,II}_{\textup{loc}}$ after a possible translation or reflection in $x$. Moreover, if $(u,\epsilon^{2})\in \mathcal{C}^{I,II}_{\textup{loc}}$, then $u$ is even in $x$ and $y$ and monotone in the sense that $u_{x} < 0$ for $x>0$.
\end{theorem}
\begin{proof}
First, we show there exists $\delta_{0}$ small enough to ensure $u>0$. The Malgrange preparation theorem allows us to write $b(\lambda,z)=zw(\lambda ,z)$ for a smooth $w$ defined in some neighborhood of $(0,0)$, see for example \cite[Theorem~7.1]{chow}. Then, \eqref{cub} becomes
\begin{empheq}[left=\empheqlbrace]{align}
\label{cubmod}
\begin{split}
\mathfrak{a}_{1}u_{xx}+\mathfrak{a}_{2}u_{yy}+4\mathcal{W}''(|\nabla u|^2)u_{x}u_{xy}u_{y}-uw(\lambda, u)&=0 \qquad \text{in} \; \Omega\\
u&=0 \qquad \text{on} \; \partial \Omega.
\end{split}
\end{empheq}
where
\begin{gather*}
\mathfrak{a}_{1}=\mathcal{W}'(|\nabla u|^{2})+2\mathcal{W}''(|\nabla u|^2)(u_{x})^{2} \qquad \text{and} \qquad \mathfrak{a}_{2}=\mathcal{W}'(|\nabla u|^{2})+2\mathcal{W}''(|\nabla u|^2)(u_{y})^{2}.
\end{gather*}
Thus $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$ are uniformly positive for small enough $\delta_{0}$ (note that this is only a concern for Model II, since \eqref{Wcond} ensures that such a lower bound holds for Model I). We write \eqref{cubmod} this way in order to view it as a linear elliptic PDE and apply a comparison principle argument.
Consider the function
\begin{gather}
\Phi^{\delta}=\Phi^{\delta}(y): = \log{(2+\sqrt{\delta}y)}\cos(\sqrt{1-\lambda}y).
\end{gather}
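Writing $\mu:=\sqrt{1-\lambda}$, the relevant derivatives of $\Phi^{\delta}$ are
\begin{gather*}
\Phi^{\delta}_{y}=\dfrac{\sqrt{\delta}\cos(\mu y)}{2+\sqrt{\delta}y}-\mu\log{(2+\sqrt{\delta}y)}\sin(\mu y), \qquad
\Phi^{\delta}_{yy}=\dfrac{-\delta\cos(\mu y)}{(2+\sqrt{\delta}y)^{2}}-\dfrac{2\sqrt{\delta}\,\mu\sin(\mu y)}{2+\sqrt{\delta}y}-\mu^{2}\Phi^{\delta},
\end{gather*}
and the term $-\mu^{2}\Phi^{\delta}$ is cancelled, to leading order, by the zeroth-order part of $-w(\lambda,u)\Phi^{\delta}$ coming from \eqref{analytic_assumptions}.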
An elementary calculation reveals that
\begin{empheq}{align}
\label{supsol1}
\begin{split}
\mathfrak{a}_{2}\Phi^{\delta}_{yy}-w(\lambda, u)\Phi^{\delta} = \Big(&\dfrac{-\delta\cos(\sqrt{1-\lambda}y)}{(2+\sqrt{\delta}y)^2}-\dfrac{2\sqrt{\delta(1-\lambda)}\sin(\sqrt{1-\lambda}y)}{2+\sqrt{\delta}y}\Big)(1+O(\epsilon^{2})) \\
&+O(\epsilon^{2}\cos(\sqrt{1-\lambda}y)) \;\; \text{in} \; \; C^{0}(\overline{\Omega}),
\end{split}
\end{empheq}
where we have used the asymptotics of $b$ and $\mathcal{W}$ in \eqref{analytic_assumptions}, which hold for small enough $\delta_{0}$. The right hand side of \eqref{supsol1} is strictly negative whenever $0\leq y \leq \frac{\pi}{2}$ and $\delta$ is small enough. Moreover, $\Phi^{\delta}>0$ in $\overline{\Omega}$. So, if we establish non-negative boundary values for $u$ on the boundary of the region $(-\infty,\infty)\times(0,\frac{\pi}{2})$, then we may invoke the maximum principle for uniformly elliptic operators with a positive super-solution (see \hyperref[pos_sup_sol]{\thref{max_pricp}.\ref{pos_sup_sol}}) to conclude that $u>0$ on $\mathbb{R}\times (0,\frac{\pi}{2})$.
We already know $u=0$ on $\mathbb{R}\times\{\frac{\pi}{2}\}$, and a phase plane analysis will show $u>0$ on $\mathbb{R} \times \{0\}$. Indeed, $v:=u(x,0)$ solves the ODE \eqref{ODEv} by \cite[~Theorem 1.1]{chen2019center}. If we write this as a planar system, which has the same structure as \eqref{system}, then the symmetries $(V,W) \mapsto (-V,-W)$ and $(V,W) \mapsto (V(-X),-W(-X))$ imply that a homoclinic orbit that intersects the positive $V$ axis meets the $W$ axis only at $(0,0)$. Hence, after a possible reflection $u(x,0)>0$, so $u>0$ for $0<y <\frac{\pi}{2}$ by the remarks at the end of the previous paragraph. Redoing the above analysis with $\Phi^{\delta}(-y)$ shows $u>0$ in $\Omega$.
Now that the positivity of $u$ is established, we find from a moving planes argument in \cite[Theorem~3.2]{MR1113099} that $u$ is even in $x$ about some line $x=x_{1}$ with $u_{x}<0$ for $x>x_{1}$. The translation $x \mapsto x-x_{1}$ sends $u$ to a positive solution of \eqref{cub} with the desired monotonicity and evenness properties in $x$. The phase plane analysis for \eqref{system} in \thref{existenceuniqueness} shows that $u^{\epsilon}_{x}(0,y)=0$, where $u^{\epsilon} \in \mathcal{C}^{I,II}_{\text{loc}}$, whence it follows that $u^{\epsilon}$ is even about $x=0$.
Observe that the previous paragraphs established the uniqueness of small solutions to \eqref{cub} up to translations and reflections in $x$. In particular, the elements of $\mathcal{C}^{I,II}_{\text{loc}}$ are the unique positive and even solutions to \eqref{cub} in a sufficiently small neighborhood of $(0,0)$ in $X_{0}\times\mathbb{R}$. Finally, the elements of $\mathcal{C}^{I,II}_{\text{loc}}$ must be even in $y$ since the reflection $y \mapsto -y$ will take an element of $\mathcal{C}^{I,II}_{\text{loc}}$ to another positive solution that is even and monotone in $x$.
\end{proof}
\subsection{Linearized problem}
In this section, we show the linearized operator $\mathcal{F}_{u}(0,\lambda): X \to Y$ is invertible for $0<\lambda\leq1$. This fact plays an important role in the analysis to follow. In particular, it implies the Fredholmness of $\mathcal{F}: X \to Y$, which will extend to the global curve. A simple calculation yields
\begin{gather}
\label{LO}
\mathcal{F}_{u}(0,0) = \Delta+1
\end{gather}
for Model I or Model II. The notion of a limiting operator is needed for the next two lemmas. If
\begin{gather*}
L = a_{ij}(x,y)\partial_{x_{i}}\partial_{x_{j}}+b_{i}(x,y)\partial_{x_{i}}+c(x,y),
\end{gather*}
and as $x \to \pm \infty$ we have
\begin{gather*}
a_{ij}(x,y) \to \tilde{a}_{ij}(y), \;\; b_{i}(x,y) \to \tilde{b}_{i}(y), \;\; c(x,y) \to \tilde{c}(y),
\end{gather*}
where each of $\tilde{a}_{ij}, \tilde{b}_{i}$, and $\tilde{c}$ belongs to $C^{\alpha}_{\text{b}}[-\frac{\pi}{2},\frac{\pi}{2}]$, then the limiting operator $\tilde{L}$ is defined as
\begin{gather*}
\tilde{L}: = \tilde{a}_{ij}\partial_{x_{i}}\partial_{x_{j}}+\tilde{b}_{i}\partial_{x_{i}}+\tilde{c}.
\end{gather*}
\begin{lemma}[Invertibility of linearized operator at 0]
\thlabel{linearized at 0}
For all $0<\lambda \leq 1$, $\mathcal{F}_{u}(0,\lambda): X \to Y$ is invertible.
\end{lemma}
\begin{proof}
Fix $0<\lambda\leq1$ and let $\varphi(y)=\cos(\sqrt{1- \lambda}y)$. Since $\varphi>0$ on $[-\frac{\pi}{2},\frac{\pi}{2}]$, we may write $u=:v\varphi$, so that $\mathcal{F}_{u}(0,\lambda)u=0$ implies
\begin{empheq}[left=\empheqlbrace]{align} \label{divisiontrick}
\begin{split}
\Delta v + \dfrac{2\varphi_{y}}{\varphi}v_{y} &=0 \qquad \text{in} \; \Omega\\
v&=0 \qquad \text{on} \; \partial \Omega.
\end{split}
\end{empheq}
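For clarity, we note how \eqref{divisiontrick} arises: by the normalizations in \eqref{analytic_assumptions} and \eqref{b}, $\mathcal{F}_{u}(0,\lambda)=\Delta+1-\lambda$ for either model, and since $\varphi$ depends only on $y$ and satisfies $\varphi_{yy}=-(1-\lambda)\varphi$,
\begin{gather*}
\mathcal{F}_{u}(0,\lambda)(v\varphi)=\varphi\Delta v+2\varphi_{y}v_{y}+v\big(\varphi_{yy}+(1-\lambda)\varphi\big)=\varphi\Big(\Delta v+\dfrac{2\varphi_{y}}{\varphi}v_{y}\Big).
\end{gather*}
Dividing by $\varphi>0$ gives the interior equation in \eqref{divisiontrick}.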
Let $L$ be the linear operator associated with~\eqref{divisiontrick}, acting on $v$. Note that $L:X_{\text{b}} \to Y_{\text{b}}$ has trivial kernel by the strong maximum principle (\hyperref[strong_max]{\thref{max_pricp}.\ref{strong_max}}). For $\gamma \in \mathbb{R}$, let $L_{\gamma}=L-\gamma$ and denote by $\mathcal{B}$ the corresponding bilinear form:
\begin{gather*}
\mathcal{B}[w,w] = \int_{\Omega} \left(|\nabla w|^{2} - \dfrac{2\varphi_{y}}{\varphi}ww_{y}+\gamma w^{2}\right)\,dx\,dy=
\int_{\Omega}\left( |\nabla w|^{2} +\left(\dfrac{\partial}{\partial y}\dfrac{\varphi_{y}}{\varphi}\right)w^{2}+\gamma w^{2}\right)\,dx\,dy,
\end{gather*}
for $w \in H_{0}^{1}$. When $\gamma$ is large enough, $\mathcal{B}$ is coercive and hence Lax--Milgram implies $L_{\gamma}:H_{0}^{1} \to L^{2}$ is invertible.
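To see the coercivity, note that
\begin{gather*}
\dfrac{\varphi_{y}}{\varphi}(y)=-\sqrt{1-\lambda}\,\tan(\sqrt{1-\lambda}\,y), \qquad \dfrac{\partial}{\partial y}\dfrac{\varphi_{y}}{\varphi}(y)=-(1-\lambda)\sec^{2}(\sqrt{1-\lambda}\,y),
\end{gather*}
which is bounded on $[-\frac{\pi}{2},\frac{\pi}{2}]$ because $\sqrt{1-\lambda}\,\frac{\pi}{2}<\frac{\pi}{2}$ for $\lambda>0$. Hence, once $\gamma$ exceeds $\sup_{|y|\leq \pi/2}(1-\lambda)\sec^{2}(\sqrt{1-\lambda}\,y)$, we have $\mathcal{B}[w,w]\geq \|\nabla w\|_{L^{2}}^{2}$, and coercivity on $H_{0}^{1}$ follows from the Poincar\'e inequality in $y$.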
We will next show that $L_{\gamma}: X_{\text{b}} \to Y_{\text{b}}$ is invertible. The argument is similar to the one found in \cite[Appendix A.2]{wheeler2013large}. Let $\rho_{\epsilon}(x):=\text{sech}(\epsilon x)$. Conjugating by $\rho_{\epsilon}$, the problem $L_{\gamma}u=f$ may be transformed into the equivalent one
\begin{gather*}
L_{\gamma}^{\epsilon}u_{\epsilon}=L_{\gamma}u_{\epsilon}-\dfrac{2\partial_{x}\rho_{\epsilon}}{\rho_{\epsilon}}\partial_{x}u_{\epsilon}+\left(\dfrac{2(\partial_{x}\rho_{\epsilon})^{2}}{\rho_{\epsilon}^{2}}-\dfrac{\partial_{x}^{2}\rho_{\epsilon}}{\rho_{\epsilon}}\right)u_{\epsilon}=f_{\epsilon}
\end{gather*}
where $u_{\epsilon}:=u\rho_{\epsilon}$ and $f_{\epsilon}:=f\rho_{\epsilon}$. If $f \in Y_{\text{b}}$ then $f_{\epsilon} \in L^{2}$, and the equation $L_{\gamma}(u_{\epsilon})=f_{\epsilon}$ is solvable by the work above. Note that
\begin{gather*}
\|L_{\gamma}^{\epsilon}-L_{\gamma}\|_{X_{\text{b}} \to Y_{\text{b}}} = \left\|-\dfrac{2\partial_{x}\rho_{\epsilon}}{\rho_{\epsilon}}\partial_{x}+\left(\dfrac{2(\partial_{x}\rho_{\epsilon})^{2}}{\rho_{\epsilon}^{2}}-\dfrac{\partial_{x}^{2}\rho_{\epsilon}}{\rho_{\epsilon}}\right)\right\|_{X_{\text{b}}\to Y_{\text{b}}} \longrightarrow 0, \;\;\; \text{as} \;\;\;\epsilon\to 0,
\end{gather*}
so for small enough $\epsilon_{0}$, the perturbation $L_{\gamma}^{\epsilon}$ of $L_{\gamma}$ remains invertible whenever $0<\epsilon<\epsilon_{0}$.
From \cite[Theorem 8.8]{gilbarg2015elliptic} and \cite[Theorem 9.19]{gilbarg2015elliptic}, we know $u_{\epsilon} \in C^{3+\alpha}(\overline{\Omega})\cap C_{\text{b}}^{\alpha}(\overline{\Omega})$. Moreover, by Schauder estimates and injectivity, we have the bound
\begin{gather*}
|u_{\epsilon}|_{2+\alpha}\leq C|f_{\epsilon}|_{\alpha},
\end{gather*}
where $C>0$ is independent of $\epsilon$. Therefore, we are able to extract a subsequence $\epsilon_{n}\to 0$ for which $u_{\epsilon_{n}}\to u$ in $C_{\text{loc}}^{2}(\overline{\Omega})$ with $u \in C_{\text{b}}^{2+\alpha}(\overline{\Omega})$. Letting $n \to \infty$ in the above equation, we find that $L_{\gamma}u=f$.
Now that the invertibility of $L_{\gamma}:X_{\text{b}}\to Y_{\text{b}}$ has been established, we will make use of the continuity of the Fredholm index to conclude that $L: X_{\text{b}} \to Y_{\text{b}}$ is invertible. Let $L_{t\gamma}:= L -t\gamma$. It is clear that $L_{t\gamma}$ is its own limiting operator for $t \in [0,1]$, since its coefficients are $x$-independent. The limiting problem has no non-trivial solutions because $L_{t\gamma}$ satisfies the strong maximum principle (\hyperref[strong_max]{\thref{max_pricp}.\ref{strong_max}}). Lemma A.8 of \cite{wheeler2015solitary} now shows that each $L_{t\gamma}$ must be semi-Fredholm with index $<\infty$. Thus, the Fredholm index is preserved along the family $\{L_{t\gamma}\}_{t \in [0,1]}$. We can now conclude that $L:X_{\text{b}} \to Y_{\text{b}}$ has Fredholm index $0$, just as the operator $L_{\gamma}$ does. Hence, $L: X_{\text{b}} \to Y_{\text{b}}$ is in fact invertible since it also has a trivial kernel. From \cite[Lemma A.12]{wheeler2015solitary}, $L:X_{0}\to Y_{0}$ must also have Fredholm index $0$, and again the kernel is trivial, so that $L:X_{0} \to Y_{0}$ is invertible. Finally, it is not hard to see from the structure of $L$ that data $f \in Y \subset Y_{0}$ must have a corresponding solution $u \in X$. For example, if $f$ is even in $y$, and $v$ is the unique solution to $Lv=f$, then a quick check shows that $Lv(x,-y) =f$ as well. Hence, $L: X \to Y$ is invertible.
\end{proof}
Now consider the linearized operator $\mathcal{F}_{u}(u,\lambda)$ with $(u,\lambda) \in \mathcal{C}^{I,II}_{\text{loc}}$. We know $\mathcal{F}_{u}(u,\lambda)\partial_{x} u=0$ by translation invariance and elliptic regularity. Thus, $\mathcal{F}_{u}(u,\lambda)$ has nontrivial kernel acting on $X_{\text{b}}$. However, if we instead restrict to $X$, which by definition imposes even symmetry, then we will have injectivity.
\begin{lemma}[Trivial kernel]
\thlabel{Trivial Kernel}
For all $(u, \lambda) \in \mathcal{C}_{\textup{loc}}^{I,II}$, the linearized operator $\mathcal{F}_{u}(u,\lambda): X \to Y$ is injective.
\end{lemma}
\begin{proof}
From \cite[Theorem~1.6]{chen2019center} and \cite[~Appendix B.1.]{chen2019center}, if $\dot{u}\in C_{\text{b}}^{3+\alpha}(\overline{\Omega})$ is a solution of $\mathcal{F}_{u}(u,\lambda)\dot{u}=0$, then $\dot{v}:=\dot{u}(\cdot,0)$ solves the linearized reduced ODE
\begin{gather}
\label{reduction}
\dot{v}''=r_{B}\dot{v}'+\Big(\lambda+\frac{9(b_{1}+2c_{1})}{4}v^{2}+r_{A}\Big)\dot{v}
\end{gather}
where $v:= u(\cdot,0)$. As noted above, $\partial_{x}u$ is in the kernel of $\mathcal{F}_u(u,\lambda)$, so $v_{x}$ is an odd and bounded solution to \eqref{reduction}. Suppose that we had another bounded solution $w \in C^{2}_{\text{b}}(\mathbb{R})$ to $\eqref{reduction}$ that is linearly independent of $v_{x}$. From Abel's identity
\begin{gather*}
W(x)=W(0)\exp{\left(\int_{0}^{x} \text{tr}(P(s))\, ds\right)}
\end{gather*}
where $W(x)$ is the Wronskian of $v_{x}$ and $w$ evaluated at $x$, and $P$ is the matrix defined by
\begin{gather*}
P\;:=
\begin{pmatrix}
0 & 1 \\
\lambda+\frac{9(b_{1}+2c_{1})}{4}v^{2}+r_{A}(v,v',\epsilon) & r_{B}
\end{pmatrix}.
\end{gather*}
Since $u_{x}, u_{xx}, w,$ and $w_{x}$ are all bounded, and $u_{x},u_{xx}$ each decay at infinity, we see that
\begin{gather*}
|W(x)|\leq (w^{2}(x)+w_{x}^{2}(x))^{1/2}\cdot (u_{x}^{2}(x,0)+u_{xx}^{2}(x,0))^{1/2} \to 0 \;\;\; \text{as} \;\;\;|x| \to \infty.
\end{gather*}
But then, since $\text{tr}(P)=r_{B}$ and $W(0)\neq0$ by linear independence, Abel's identity forces
\begin{gather}\label{abel_contr}
\int_{0}^{x}r_{B}(u_{x}(t,0),u_{xx}(t,0))\,dt \to -\infty \;\;\; \text{and} \;\;\; \int_{-x}^{0}r_{B}(u_{x}(t,0),u_{xx}(t,0))\,dt \to \infty \;\;\; \text{as} \;\;\; x \to \infty.
\end{gather}
Recalling the symmetry properties of $f$ in \eqref{f_sym} and the explicit form given in \eqref{f}, it follows that
\begin{gather*}
r_{B}(A,B)=-r_{B}(A,-B)=r_{B}(-A,-B).
\end{gather*}
This would imply
\begin{align}\label{abel_contr2}
\begin{split}
\lim_{x \to \infty}\int_{-x}^{0}r_{B}(u_{x}(t,0),&u_{xx}(t,0))\,dt = \lim_{x\to \infty}-\int_{0}^{x}r_{B}(u_{x}(-t,0),u_{xx}(-t,0))\,dt \\
&=\lim_{x\to \infty}\int_{0}^{x}r_{B}(u_{x}(t,0),u_{xx}(t,0))\,dt,
\end{split}
\end{align}
where we used the properties of $r_{B}$, oddness of $u_{x}$ and evenness of $u_{xx}$. Equations \eqref{abel_contr} and \eqref{abel_contr2} together force a contradiction.
\begin{comment}
\begin{gather*}
\lim_{x \to \infty}\int_{-x}^{0}r_{B}(u_{x}(t,0),u_{xx}(t,0))dt \to \infty
\end{gather*}
whereas
\begin{gather*}
\lim_{x\to \infty}\int_{0}^{x}r_{B}(u_{x}(t,0),u_{xx}(t,0))\,dt \to - \infty.
\end{gather*}
\end{comment}
Hence, there cannot be two linearly independent bounded solutions to $\eqref{reduction}$.
At this point we may conclude that $v_{x}$ generates the solution set of \eqref{reduction}. Thus, $\mathcal{F}_{u}(u,\lambda):X \to Y$ has trivial kernel, since any non-zero element would necessarily be odd in $x$. To see this, suppose, by a slight abuse of notation, that some $w(x,y) \in C^{3}_{\text{b}}(\overline{\Omega})$ satisfies $\mathcal{F}_{u}(u,\lambda)w =0$. Recall that \cite[Theorem~1.1]{chen2019center} gives the expansion
\begin{gather*}
w(x,y) = \varphi_{0}(y)w(x,0)+\Psi(w(x,0),w_{x}(x,0),\lambda)(0,y),
\end{gather*}
where $w(x,0)$ is odd in $x$ by the work above. The symmetries in \eqref{psi_sym} imply the additional symmetry
\begin{gather*}
\Psi(-A,B,\lambda)(0,y) = -\Psi(A,B,\lambda)(0,y),
\end{gather*}
from which we may conclude that $w$ is odd in $x$.
\end{proof}
Finally, we show that $\mathcal{F}_{u}$ is invertible along the local curve.
\begin{lemma}[Invertibility]
\thlabel{Invertibility}
For any $(u,\lambda) \in \mathcal{C}_{\text{loc}}^{I,II}$, the linearized operator $\mathcal{F}_{u}(u,\lambda): X \to Y$ is invertible.
\end{lemma}
\begin{proof}
We found that $\mathcal{F}_{u}(u,\lambda): X \to Y$ has trivial kernel whenever $(u,\lambda) \in \mathcal{C}^{I,II}_{\text{loc}}$ in \thref{Trivial Kernel}. It therefore suffices to show that this operator is Fredholm index $0$. The limiting operator of $\mathcal{F}_{u}(u, \lambda)$ is simply $\mathcal{F}_{u}(0,\lambda)$ because $u$ decays as $x \to \pm \infty$. Recall that $\mathcal{F}_{u}(0,\lambda)$ was shown to be invertible in \thref{linearized at 0}. By \cite[Lemma A.13]{wheeler2015solitary} it follows that the Fredholm indices of $\mathcal{F}_{u}(u,\lambda)$ and $\mathcal{F}_{u}(0,\lambda)$ match. Hence, $\mathcal{F}_{u}(u,\lambda)$ is in fact Fredholm index $0$, and the result follows.
\end{proof}
\end{section}
\begin{section}{Global bifurcation}
\label{globalsection}
\subsection{Background theory}
We begin this section by recalling some of the global bifurcation theory developed in \cite[Section 6]{chen2018existence}. The results stated here are tailored to the problem at hand. Let $\mathcal{I}=(0,1)$ and
\begin{align}
\label{ODef}
\begin{split}
&\mathcal{O} = \bigcup\limits_{\delta>0}\mathcal{O}_{\delta} \qquad \text{where} \\ \mathcal{O}_{\delta} = X \cap \big\{u \in C^{3}(\overline{\Omega})\,:\, &\inf_{(x,y) \in \overline{\Omega}}\big(\mathcal{W}'(q)+2q\mathcal{W}''(q)\big)\big\vert_{q=|\nabla u(x,y)|^{2}} >\delta \big\}.
\end{split}
\end{align}
\begin{theorem}
\thlabel{global}
There is a curve of solutions $\mathcal{C}^{I,II}\subset \mathcal{F}^{-1}(0)$, where $\mathcal{F}$ corresponds to either Model I or Model II, parameterized as $\mathcal{C}^{I,II} \coloneqq \{ (u(s),\lambda(s)) : 0<s<\infty\} \subset \mathcal{O} \times \mathcal{I}$ with the following properties.
\begin{enumerate}[label=\rm(\alph*)]
\item One of the following alternatives holds.
\begin{enumerate}[label=\rm(\roman*)]
\item \label{blowup_alt}\textup{(Blowup)} As $s \to \infty$
\begin{gather}
\label{blowup}
N(s)\coloneqq |u(s)|_{3+\alpha}+\dfrac{1}{\text{dist}(u(s),\partial \mathcal{O})}+\lambda(s)+\dfrac{1}{\text{dist}(\lambda(s),\partial \mathcal{I})}\to \infty
\end{gather}
\item \label{lossofC}\textup{(Loss of compactness)} There exists a sequence $s_{n}\to \infty$ such that $\sup_{n}N(s_{n}) < \infty$ but $\{u(s_{n})\}$ has no subsequences converging in $X$.
\end{enumerate}
\label{alternatives}
\item Near each point $(u(s_{0}),\lambda(s_{0}))\in \mathcal{C}^{I,II}$, we can reparametrize $\mathcal{C}^{I,II}$ so that $s\mapsto (u(s),\lambda(s))$ is real analytic.
\item $(u(s),\lambda(s)) \notin \mathcal{C}^{I,II}_{\textup{loc}}$ for $s$ sufficiently large. \label{notinloc}
\end{enumerate}
\end{theorem}
\begin{proof}
We have shown that the linearized operator is invertible along the local curve and the result follows directly from \cite[Theorem~6.1]{chen2018existence}.
\end{proof}
Alternative (i) encapsulates several interesting possibilities. We note that a blow-up in \eqref{blowup} can be achieved by a loss of ellipticity, $\lambda$ returning to $0$, or the more obvious unboundedness of $\lambda$ or $|u(s)|_{3+\alpha}$. Throughout the rest of the paper we investigate alternatives (i) and (ii) for Models I and II. This will ultimately lead us to discover that broadening occurs invariably in Model I and that a loss of ellipticity is ensured for Model II. At times we focus on segments of the curve $\mathcal{C}^{I,II}$ of the form
\begin{gather}
\mathcal{C}^{I,II}_{\delta} := \mathcal{C}^{I,II} \cap \mathcal{O}_{\delta}.
\end{gather}
Note that $\mathcal{C}^{I}=\mathcal{C}^{I}_{\xi_{1}}$ by \eqref{Wcond}.
At this point, it is convenient to recall another result from \cite{chen2018existence} which helps characterize alternative (ii) of~\thref{global}.
\begin{theorem}[Chen, Walsh, Wheeler \cite{chen2018existence}]
\thlabel{CorF}
If $\{(u_{n},\lambda_{n})\}$ is a sequence of solutions to \eqref{cub} that is uniformly bounded in $C_{\textup{b}}^{3+\alpha}(\overline{\Omega})\times \mathbb{R}$, with the additional monotonicity property
\begin{gather}\label{u_even}
u_{n}(x,y) \; \; \text{is even in $x$ and} \; \; \partial_{x}u_{n} \leq 0 \;\; \text{for} \; \; x \geq 0
\end{gather}
for each $n$ as well as the asymptotic condition
\begin{gather}\label{U_to_zero}
\lim_{|x| \to \infty}u_{n}(x,y) = U(y) \; \; \text{uniformly in $y$}
\end{gather}
for some fixed function $U \in C_{\textup{b}}^{3+\alpha}([-\frac{\pi}{2}, \frac{\pi}{2}])$, then either
\begin{enumerate}[label=(\roman*), font=\upshape]
\item we can extract a subsequence $\{u_{n}\}$ so that $u_{n} \to u$ in $C_{\textup{b}}^{3+\alpha}(\overline{\Omega})$; or \label{Compactness}
\item we can extract a subsequence and find $x_{n} \to \infty$ so that the translated sequence $\{\tilde{u}_{n} \}$ defined by $\tilde{u}_{n} = u_{n}(\cdot + x_{n}, \cdot)$ converges in $C_{\textup{loc}}^{3}(\overline{\Omega})$ to some $\tilde{u} \in C_{\textup{b}}^{3+\alpha}(\overline{\Omega})$ that solves \eqref{cub} and has $\tilde{u} \not\equiv U$ with $\tilde{u}_{x} \leq 0$. \label{Front}
\end{enumerate}
\end{theorem}
Note that this theorem requires some symmetry and monotonicity properties in $u_{n}$. The following subsection demonstrates these properties, and more, for elements of $\mathcal{C}^{I,II}$.
\subsection{Monotonicity and nodal properties}
\label{mono}
We show that elements of $\mathcal{C}^{I,II}_{\text{loc}}$ exhibit certain qualitative features by using the asymptotics \eqref{smallsol} and maximum principle arguments. In fact, we have already established that \eqref{u_even} and \eqref{U_to_zero} (with $U(y) =0$) hold along $\mathcal{C}^{I,II}_{\text{loc}}$ in \thref{Positive}. Our goal is to prove that these persist along $\mathcal{C}^{I,II}$. The following sets will be useful for our analysis:
\begin{align}
\begin{split}
\Omega^{+}&: = \{ (x,y) \in \Omega \; : \; x>0 \} \\
\Omega_{+}&: = \{(x,y)\; : \; x \in \mathbb{R},\; 0<y \leq \frac{\pi}{2} \} \\
L &:= \{ (0,y) \; : \; -\pi/2 < y < \pi/2 \} \\
T &:= \{ (x, \pi/2) \; : \; 0< x < \infty \} \\
B &:= \{ (x, -\pi/2) \; : \; 0< x< \infty \} \\
M &:= \{ (x, 0) \; : \; 0\leq x< \infty \}.
\end{split}
\end{align}
The nodal properties we are concerned with are as follows:
\begin{align}
\label{nodalprop}
\begin{split}
u_{x}< 0& \; \; \; \text{on} \; \;\Omega^{+} \\
u_{y}< 0& \; \; \; \text{on} \; \;\Omega_{+}\\
u_{xx}< 0& \; \; \; \text{on} \; \; L \\
u_{xy}> 0& \; \; \; \text{on} \; \; T \\
u_{xxy}> 0 \; \; \; \text{at} \; \; (0,\frac{\pi}{2}) \; \; &\text{and} \;\; u_{xxy}< 0 \; \; \; \text{at} \; \; (0,-\frac{\pi}{2}) \\
u_{yy}< 0& \; \; \; \text{on} \; \; M
\end{split}
\end{align}
The reason for such a long list is owed to the style of argument. Roughly speaking, we will split the right half (or upper half) of $\overline{\Omega}$ into a finite rectangle and infinite tail region (or into a finite rectangle and \textit{two} tail regions). The conditions in \eqref{nodalprop} will help gain control on the sign of either $u_{x}$ or $u_{y}$ near the boundary.
\begin{comment}
For ease of exposition we introduce the following condition which well be used repeatedly in the sequel:
\begin{align}
\begin{split} \label{dual}
(u,\lambda) \in \mathcal{O}_{\delta} \cap \mathcal{F}^{-1}(0),\;\;\text{for some} \; \;\delta>0.
\end{split}
\end{align}
\end{comment}
The following result gives a condition which ensures a sign on the $x$ derivative of small solutions to \eqref{cub}.
\begin{lemma}[Asymptotic monotonicity]\thlabel{asymptotic}
There exists $\epsilon_{0}>0$ such that, if $u \in C^{3}_{\textup{b}}(\overline{\Omega})$, $(u,\lambda) \in \mathcal{O}_{\delta} \cap \mathcal{F}^{-1}(0)$ for some $\delta>0$, $\lambda>0$, $u_{x} \leq 0$ on $L_{x_{0}}: = \{(x,y) \in \Omega\; : \; x = x_{0}\}$, and
\begin{gather*}
|u|_{2}< \epsilon_{0},
\end{gather*}
then $u_{x}\leq0$ in $\Omega\cap \{ (x,y)\; : \; x\geq x_{0}\}$.
\end{lemma}
\begin{proof}
Fix $\lambda$ with $0<\lambda \leq 1$. Differentiating $(\ref{cub})$ with respect to $x$ gives
\begin{empheq}[left=\empheqlbrace]{align}
\label{quasilinearize}
\begin{split}
\nabla \cdot(\mathcal{W}'(|\nabla u|^{2})\nabla v+ 2\mathcal{W}''(|\nabla u|^{2})(\nabla u \otimes \nabla u)\nabla v)-b_{u}(u,\lambda)v&=0\qquad \text{in} \; \Omega\\
v&=0 \qquad \text{on} \; \partial \Omega
\end{split}
\end{empheq}
where $v=u_{x}$. We see that $\eqref{quasilinearize}$ is uniformly elliptic by \eqref{Wcond} in the case of Model I, and the fact that $u \in \mathcal{O}_{\delta}$ in the case of Model II. Let $v:=\varphi z$, where
\begin{gather} \label{mod_u_x}
\varphi(y) = \cos(\sqrt{k}y)
\end{gather}
and $1-\lambda < k < 1$. After plugging \eqref{mod_u_x} into \eqref{quasilinearize}, we find that $z$ satisfies a uniformly elliptic equation with zeroth order term
\begin{gather}
\label{zeroth}
\frac{1}{\varphi}\Big(\partial_{y}\big(\mathcal{W}'(|\nabla u|^{2})\varphi_{y}+2\mathcal{W}''(|\nabla u|^{2})u_{y}^{2}\varphi_{y}\big)+\partial_{x}\big(2\mathcal{W}''(|\nabla u|^{2})u_{x}u_{y}\varphi_{y}\big)\Big)-b_{u}(u,\lambda).
\end{gather}
If $\epsilon_{0}$ is chosen small enough, then from \eqref{w} and \eqref{b} it follows that \eqref{zeroth} admits the $C^{0}(\overline{\Omega})$ expansion
\begin{gather}\label{neg_zeroth}
\frac{1}{\varphi}((1-\lambda-k)\varphi+O(\epsilon_{0}^{2}))<0.
\end{gather}
Thus, \eqref{neg_zeroth} implies that $z$ satisfies the strong maximum principle (\hyperref[strong_max]{\thref{max_pricp}.\ref{strong_max}}). Note $z = 0$ on $\partial \Omega$, and $z \leq 0$ on $L_{x_{0}}$, so it follows from the maximum principle that $z \leq 0$ in $\overline{\Omega}\cap\{ (x,y)\; : \; x\geq x_{0}\}$. Since $\varphi(y)>0$, we must have $u_{x}\leq 0$ in $\overline{\Omega}\cap\{ (x,y)\; : \; x\geq x_{0}\}$ as well.
If $\lambda>1$, then $v=u_{x}$ still solves \eqref{quasilinearize}. For $\epsilon_{0}$ sufficiently small, $-b_{u}(u,\lambda)\leq 0$ by \eqref{b}. As before, the strong maximum principle and boundary conditions now yield the desired conclusion.
\end{proof}
\begin{remark}\thlabel{asyp_ext}
The above lemma is stated for the half strip $(x_{0},\infty) \times (-\frac{\pi}{2},\frac{\pi}{2})$, for some $x_{0}>0$, but a similar result holds for sets of the form $(x_{0}, \infty)\times (0,\frac{\pi}{2})$ or $(-\infty,-x_{0})\times (0,\frac{\pi}{2})$.
\end{remark}
Next, we consider the nodal properties of a monotone solution.
\begin{lemma}[Nodal properties]
\thlabel{nodalproplemma}
Let $(u,\lambda) \in \mathcal{O}_{\delta} \cap \mathcal{F}^{-1}(0)$ for some $\delta>0$. Suppose that $u_{x}<0$ in $\Omega^{+}$, $u_{y}<0$ in $\Omega_{+}$, and $u \in X$. Then $u$ satisfies \eqref{nodalprop}. \end{lemma}
\begin{proof}
Note $u_{x}=0$ on $\partial \Omega^{+}$ from the boundary conditions and evenness in the $x$ variable. In particular, $u_{x}=u_{xx}=0$ on $T$. The Hopf lemma (\hyperref[hopf]{\thref{max_pricp}.\ref{hopf}}) shows that $u_{xx}<0$ on $L$, $u_{xy}<0$ on $B$, and that $u_{xy}>0$ on $T$. Moreover, $u_{xy}=u_{xyy}=0$ on $L$, since $u_{x}=0$ on $L$. If $(s_{1},s_{2})$ is a unit outward pointing vector at $(0,\frac{\pi}{2})$ with $s_{1}<0$ and $s_{2}>0$, then Serrin's lemma (\hyperref[serrin]{\thref{max_pricp}.\ref{serrin}}) requires $\partial^{2}_{s}u_{x}<0$ at $(0,\frac{\pi}{2})$ since $u_{xx}=u_{xy}=0$ at $(0,\frac{\pi}{2})$. A simple calculation shows
$\partial_{s}^{2}u_{x}=s_{1}^{2}u_{xxx}+2s_{1}s_{2}u_{xxy}+s_{2}^{2}u_{xyy}<0$ at $(0,\frac{\pi}{2})$. From this we see $u_{xxy}>0$ at $(0,\frac{\pi}{2})$. A similar argument shows $u_{xxy}<0$ at $(0,-\frac{\pi}{2})$. We are left only to show that $u_{yy}<0$ on $M$. The evenness of $u$ in the $y$ variable implies that $u_{y}=0$ along $M$, and the result follows from the Hopf lemma.
\end{proof}
We now show that the collection of $(u,\lambda)$ satisfying \eqref{nodalprop} is both open and closed in an appropriate relative topology.
\begin{lemma}[Open property]
\thlabel{open}
Let $(u,\lambda), (\Tilde{u},\Tilde{\lambda}) \in \mathcal{O}_{\delta} \cap \mathcal{F}^{-1}(0)$ for some $\delta>0$. Suppose that $\lambda, \Tilde{\lambda}>0$ and $u, \Tilde{u} \in C_{\textup{b,e}}^{3}(\overline{\Omega})\cap C_{0}^{2}(\overline{\Omega})$. If $u$ satisfies \eqref{nodalprop}, then there is some $\epsilon_{0}>0$ for which $|u-\Tilde{u}|_{3}+|\lambda -\Tilde{\lambda}|<\epsilon_{0}$ implies that $\Tilde{u}$ also satisfies \eqref{nodalprop}.
\end{lemma}
\begin{proof}
We will establish the sign of either $\Tilde{u}_{x}$ or $\Tilde{u}_{y}$ in several finite regions, and then invoke \thref{asymptotic} to determine the signs in a leftover tail region. See Figure~\ref{domains} for a sketch of the domains used. Now, because $u \in C_{0}^{2}(\overline{\Omega})$, there is an $R>0$ large enough so that $|u|_{2}< \epsilon/2$ for $x>R$, where $\epsilon$ is chosen to satisfy \thref{asymptotic}. If $|u-\Tilde{u}|_{2}<\epsilon_{1}=\epsilon/2$, then $|\Tilde{u}(x,y)|_{2}<\epsilon$ for $x>R$. Let $\Omega^{+,2R}$ be the rectangle $(0,2R)\times(-\frac{\pi}{2},\frac{\pi}{2})$ and $\Omega^{+}_{k}$ the inscribed rectangle with distance $1/k$ from $\Omega^{+,2R}$. Let us define several regions useful for our analysis:
\begin{gather*}
T_{k}:=\{(x,\frac{\pi}{2})\; : \; 1/k<x<2R-1/k\} \\ B_{k}:=\{(x,- \frac{\pi}{2})\; : \; 1/k<x<2R-1/k\} \\ L_{k} := \{(0,y) \;: \: -\frac{\pi}{2}+1/k<y< \frac{\pi}{2} - 1/k \}.
\end{gather*}
For a given $k>0$, there is an $\epsilon_{k}$ such that $|u-\Tilde{u}|_{3}<\epsilon_{k}$ implies $\Tilde{u}_{x}<0$ in $\overline{\Omega}^{+}_{k}$, $\Tilde{u}_{xx}<0$ on $L_{k}$, $\Tilde{u}_{xy}>0$ on $T_{k}$, and $\Tilde{u}_{xy}<0$ on $B_{k}$.
Suppose $\epsilon_{0}<1$, and consider the Taylor expansion of $\Tilde{u}_{x}$ at a point $(x_{0},\frac{\pi}{2})$ on $T_{k}$:
\begin{gather} \label{tay}
\Tilde{u}_{x}(x_{0},y)=\Tilde{u}_{xy}(x_{0},\frac{\pi}{2})(y-\frac{\pi}{2})+O((y-\frac{\pi}{2})^{2}) \; \;\; \text{in} \;\; \;C^{0}(\overline{\Omega}),
\end{gather}
where $\frac{\pi}{2}-1/k < y <\frac{\pi}{2}$. When $k$ is large enough, the remainder term in \eqref{tay} is dominated by the first term and $\Tilde{u}_{x}(x_{0},y)<0$. Analogous arguments show that for large enough $k$, $u_{x}<0$ in the rectangle $(0,1/k)\times (-\frac{\pi}{2}+1/k,\frac{\pi}{2} - 1/k)$, and that $u_{x}<0$ in $(1/k, 2R-1/k)\times(0,1/k)$.
We still need to deal with the corners. For a given $k$, consider the quarter circle of radius $\frac{\sqrt{2}}{k}$ in $\Omega^{+,2R}$ centered at $(0,\frac{\pi}{2})$. Because $u_{x}, u_{xx}, u_{xy}, u_{xxx} =0$ at $(0,\frac{\pi}{2})$,
\begin{gather*}
\Tilde{u}_{x}(x,y)=\Tilde{u}_{xxy}(0,\frac{\pi}{2})(x)(y-\frac{\pi}{2})+O((y-\frac{\pi}{2})^{2})\;\;\; \text{in} \;\;\;C^{0}(\overline{\Omega}).
\end{gather*}
For a given $k$ there exists an $\epsilon_{k}'$ so small that $|u-\Tilde{u}|_{3}<\epsilon_{k}'$ implies that $\Tilde{u}_{xxy}(0,\frac{\pi}{2})>0$. Arguing like before, we see that $\Tilde{u}_{x}(x,y)<0$ in the quarter circle, whenever $k$ is sufficiently large. A similar argument shows that $\Tilde{u}_{x}<0$ in the quarter circle of radius $\frac{\sqrt{2}}{k}$ centered at $(0,-\frac{\pi}{2})$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.62]
\draw (0,.43) rectangle (10,3.57) ;
\draw[thick] (8,0.43)--(8, 3.57);
\draw[draw] (0, .43) --(1, 0.43) -- (1,.43) arc [start angle=0, end angle=90, radius=1]--(0, .43);
\draw[draw] (0,3.57) --(0, 2.57) -- (0,2.57) arc [start angle=-90, end angle=0, radius=1]--(0, 3.57);
\draw[dashed] (0.75, 0.43)--(0.75, 3.57);
\draw[dashed] (7.25,0.43)--(7.25,3.57);
\draw [dashed] (0,1.02)--(8, 1.02);
\draw [dashed] (0, 2.97)--(8, 2.97);
\node[align=left] at (-0.2,4.2) {$(0,\frac{\pi}{2})$};
\node[ align=left] at (-0.2,-.3) {$(0,-\frac{\pi}{2})$};
\filldraw [black] (0,0.43) circle (2pt);
\filldraw[black] (0,3.57) circle (2pt);
\node[align=left] at (4,-0.2) {$(R,0)$};
\node[align = left] at (8,-0.2){$(2R,0)$};
\filldraw [black] (4,0.43) circle (2pt);
\filldraw[black] (8,0.43) circle (2pt);
\node[align=left] at (4,4) {$T_{k}$};
\node[ align=left] at (4,2) {\large $\Omega^{+}_{k}$};
\node[align=left] at (-0.45, 2) {$L_{k}$};
\end{tikzpicture}
\hspace{0.5cm}
\begin{tikzpicture}[scale=0.62]
\draw [black] (0, .43) rectangle (12,3.57);
\draw[black, dashed] (2,1)--(10,1);
\node[ align=left] at (6,2) {\large $\Omega_{+,k}$};
\node[ align=left] at (6,4) {$(0,\frac{\pi}{2})$};
\node[ align=left] at (6,-0.2) {$(0,0)$};
\filldraw [black] (6,0.43) circle (2.3pt);
\filldraw[black] (6, 3.57) circle (2.3pt);
\draw (10, 0.43)--(10,3.57);
\draw(2, 0.43)--(2,3.57);
\node[ align=left] at (8,-0.2) { $M^{2R}$};
\node[align =left] at (10, -0.2) {$(2R,0)$};
\node[align=left] at (2,-0.2) {$(-2R,0)$};
\filldraw [black] (10,0.43) circle (2pt);
\filldraw[black] (2,0.43) circle (2pt);
\end{tikzpicture}
\caption{Left: Regions used to control the sign of $\tilde{u}_{x}$. Right: Regions used to control the sign of $\tilde{u}_{y}$.} \label{domains}
\end{figure}
From the work above, we find that if $k$ is taken sufficiently large, and $\epsilon$ taken sufficiently small, then $\tilde{u}_{x}<0$ in $\Omega^{+,2R}$. In particular, $\tilde{u}_{x}<0$ on the line segment with $x=R$ and $-\frac{\pi}{2} < y < \frac{\pi}{2}$. \thref{asymptotic} now implies that $\tilde{u}_{x} <0$ in $\Omega^{+}$. If we can show that $\tilde{u}_{y}<0$ in $\Omega_{+}$, then we will be able to invoke \thref{nodalproplemma} to get the desired result.
The argument to establish a sign on $\tilde{u}_{y}$ is similar to the one just given for $\tilde{u}_{x}$, so we provide only a sketch. Let $\Omega_{+,2R} = \Omega_{+}\cap \{ (x,y) \; : \; |x |\leq 2R\}$ and $\Omega_{+,k} = \Omega_{+}\cap \{ (x,y) \; : \; |x| \leq 2R,\; y>\frac{1}{k}\}$ and $M^{2R} = M \cap \{(x,y) \; : \; |x| \leq 2R\}$. We see that \eqref{quasilinearize} holds for $v=u_{y}$ in $\Omega_{+}$, except the homogeneous Dirichlet condition is lost. Thus, $u_{y}$ satisfies a uniformly elliptic PDE with a non-positive zeroth-order coefficient in the tail region $\Omega_{+}\cap \{ (x,y) \; : \; |x| \geq R\}$, where $R$ is the same constant from the above argument. For a given $k$, taking $\epsilon_{0}$ small enough we find that $\tilde{u}_{y}<0$ on $\Omega_{+,k}$ and $\tilde{u}_{yy} < 0$ on $M^{2R}$. If $k$ is sufficiently large, then from a Taylor expansion along $M^{2R}$ we find that $\tilde{u}_{y}<0$ in $\Omega_{+,2R}$. Thus, a sign condition for $\tilde{u}_{y}$ is established in the finite region $\Omega_{+,2R}$. To deal with the corresponding infinite tails, we just need to establish good boundary values, since we know $\tilde{u}_{y}$ satisfies the maximum principle whenever $|x|>R$ (see \thref{asyp_ext}). From the above argument, we have seen that if $\epsilon_{0}$ is small enough, then $\tilde{u}_{xy}>0$ on $T$. This, along with the decay of $\tilde{u}$ at $x = \infty$, is enough to establish that $\tilde{u}_{y}<0$ on all of $T$. A symmetric argument will show that $\tilde{u}_{y}<0$ on the line segment $\{(x,\frac{\pi}{2})\; : \; -\infty< x <0 \}$. Also, $\tilde{u}_{y}=0$ on $M$ by evenness in the $y$ variable. Finally, since $\tilde{u}_{y}\leq0$ on the line segment with $x=R$ and $0\leq y \leq \frac{\pi}{2}$ (and on the segment with $x=-R$ and $0 \leq y \leq \frac{\pi}{2}$, by evenness in the $x$ variable), we conclude that $\Tilde{u}_{y}<0$ on $\Omega_{+}$.
\end{proof}
\begin{lemma}[Closed property] \thlabel{closedprop}
Let $\{(u_{n},\lambda_{n})\} \subset \mathcal{O}_{\delta}\cap \,\mathcal{F}^{-1}(0)$, for some $\delta>0$. Suppose that $(u_{n},\lambda_{n})\to (u,\lambda)$ in $C^{3}_{\textup{b}}(\overline{\Omega})\times \mathbb{R}$. If each $u_{n}$ satisfies \eqref{nodalprop}, then so does $u$, unless $u\equiv0$.
\end{lemma}
\begin{proof}
By continuity we have that $u_{x}\leq 0$ in $\Omega^{+}$, $u_{x}=0$ on $\partial \Omega$, and $u_{y} \leq 0$ in $\Omega_{+}$. So $u_{x}$ and $u_{y}$ each satisfy the strong maximum principle (\hyperref[strong_max]{\thref{max_pricp}.\ref{strong_max}}) in the relevant domain because $\mathcal{F}_{u}(u,\lambda)u_{x}=0$ and $\mathcal{F}_{u}(u,\lambda)u_{y}=0$. Hence, if $u_{x}$ is not identically zero, then $u_{x}<0$ in $\Omega^{+}$ and $u_{y}<0$ in $\Omega_{+}$. \thref{nodalproplemma} now implies that $u$ satisfies \eqref{nodalprop}.
\end{proof}
Next, we show that \eqref{nodalprop} holds along $\mathcal{C}^{I,II}_{\text{loc}}$, which in turn shows that they hold on all of $\mathcal{C}^{I,II}$.
\begin{lemma}[Nodal properties of the local curve] \thlabel{npropsmall}
If $(u^{\epsilon}, \epsilon^{2}) \in \mathcal{C}^{I,II}_{\textup{loc}}$ and $0< \epsilon \ll 1$, then $u^{\epsilon}$ exhibits the nodal properties \eqref{nodalprop}.
\end{lemma}
\begin{proof}
In \thref{Positive}, we established that $u^{\epsilon}_{x}<0$ in $\Omega^{+}$. Since $u^{\epsilon}_{x} =0$ on $T$, the Hopf lemma (\hyperref[hopf]{\thref{max_pricp}.\ref{hopf}}) implies that $u^{\epsilon}_{xy}>0$ on $T$. From \eqref{smallsol}, we know that $u^{\epsilon}_{y}(0,\frac{\pi}{2})<0$ for small enough $\epsilon$. Combining this with the decay of $u_{y}^{\epsilon}$ at infinity allows us to conclude that $u^{\epsilon}_{y}<0$ along all of $T$.
We now proceed as in the proof of \thref{Positive}. We have seen that $u^{\epsilon}_{y}$ satisfies the equation in \eqref{quasilinearize}, though not the homogeneous boundary condition. If we consider the corresponding uniformly elliptic operator acting on $x$-independent functions of the form $v=f(y)\cos(\sqrt{1-\lambda}y)$, then we obtain an expression with the following asymptotics in $C^{0}(\overline{\Omega})$
\begin{empheq}{align}
\label{zz}
\begin{split}
&(1+O(\epsilon^{2}))(f''\cos(\sqrt{1-\lambda}y)-2\sqrt{1-\lambda}f'\sin(\sqrt{1-\lambda}y)) \\+ O(\epsilon^{2})(f'&\cos(\sqrt{1-\lambda}y)-\sqrt{1-\lambda}f\sin(\sqrt{1-\lambda}y))+O(\epsilon^{2}f\cos(\sqrt{1-\lambda}y)).
\end{split}
\end{empheq}
Inspecting \eqref{zz} shows that if we choose $f=\Phi^{\delta}$, as in the proof of \thref{Positive}, then for sufficiently small $\epsilon$ and $\delta$ we can ensure \eqref{zz} is negative for $0\leq y \leq \frac{\pi}{2}$. The boundary condition on $T$, evenness in $y$ (which implies $u^{\epsilon}_{y}=0$ on $M$), the maximum principle for uniformly elliptic operators with a positive super-solution (\hyperref[pos_sup_sol]{\thref{max_pricp}.\ref{pos_sup_sol}}), and decay in $x$ are now enough to conclude that $u^{\epsilon}_{y}<0$ in $\Omega_{+}$. Now, \thref{nodalproplemma} implies the result.
\end{proof}
\begin{theorem}[Global nodal properties]
\thlabel{globalnodal}
Every $(u,\lambda) \in \mathcal{C}^{I,II}$ exhibits the nodal properties \eqref{nodalprop}.
\end{theorem}
\begin{proof}
Let $(u,\lambda) \in \mathcal{C}^{I,II}_{\text{loc}}$. From \thref{npropsmall} $u$ satisfies \eqref{nodalprop}. Since the nodal properties are both open and closed in the relative topology of $\mathcal{C}^{I,II}$ by \thref{open} and \thref{closedprop}, we conclude that they hold everywhere on $\mathcal{C}^{I,II}$.
\end{proof}
\end{section}
\begin{section}{Uniform regularity and bounds on loading parameter}
\label{UR}
The main result of this section, which is stated in \thref{apriori}, is that $|u(s)|_{3+\alpha}$ is uniformly bounded along $\mathcal{C}^{I,II}_{\delta}$. This is achieved by first using Schauder theory to estimate $|u(s)|_{3+\alpha}$ in terms of $|\nabla u(s)|_{0}$ and then estimating $|\nabla u(s)|_{0}$ in terms of $|u(s)|_{0}$ and $|\lambda(s)|$ by a maximum principle argument. Upper bounds on $|u(s)|_{0}$ and $|\lambda(s)|$ are then established for $\mathcal{C}_{\delta}^{I,II}$. Finally, for $s\gg1$ it is shown that there is a positive uniform lower bound on $\lambda(s)$ along $\mathcal{C}^{I,II}_{\delta}$.
\subsection{A conserved quantity and $L^{p}$ estimates}
We derive a conserved quantity of the system that will play a key role in establishing uniform bounds on $|u(s)|_{0}$ and $|\lambda(s)|$ along $\mathcal{C}^{I,II}_{\delta}$. These results, in tandem with \thref{sperbtype}, give our desired a priori estimates. The following calculation is valid for any $C^2$ solution of \eqref{cub}. Let
\begin{gather}
\mathcal{L}(z,\xi,\eta,\lambda):=\frac{1}{2}\mathcal{W}(\xi^{2}+\eta^{2})+B(z,\lambda)
\end{gather}
where $$B(z,\lambda) := \int_{0}^{z}b(t, \lambda)\, dt.$$ The anti-plane elastostatic problem \eqref{cub} is, formally, the Euler--Lagrange equation of the variational principle
\begin{gather*}
\delta \, \int_{\Omega} \mathcal{L}(u,u_{x},u_{y},\lambda) \, dx \, dy = 0.
\end{gather*}
Naturally, the translation invariance in $x$ of our system leads us to expect a corresponding conserved quantity. Consider the functional
\begin{empheq}{align}
\label{hamiltonian}
\begin{split}
\mathcal{H}(u,\lambda; x):&= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} (\mathcal{L}(u,u_{x},u_{y},\lambda)-\mathcal{L}_{\xi}(u,u_{x},u_{y}, \lambda) u_{x}) \, dy \\
&= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} (\frac{1}{2}\mathcal{W}(|\nabla u|^{2})-\mathcal{W}'(|\nabla u|^{2}) u_{x}^{2} + B(u,\lambda) )\, dy.
\end{split}
\end{empheq}
If $(u,\lambda)$ solves \eqref{cub}, then $\mathcal{H}(u,\lambda; \cdot)$ is constant in $x$:
\begin{align*}
\partial_{x}\mathcal{H}&= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\left( \mathcal{L}_{z}u_{x}+\mathcal{L}_{\xi}u_{xx}+\mathcal{L}_{\eta}u_{xy}-(\partial_{x}\mathcal{L}_{\xi})u_{x}-\mathcal{L}_{\xi}u_{xx}\right) \,dy \\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} u_{x}(\mathcal{L}_{z}-\partial_{y}\mathcal{L}_{\eta}-\partial_{x}\mathcal{L}_{\xi} )\,dy =0
\end{align*}
where we used integration by parts and that
\begin{gather*}
(\mathcal{L}_{z}-\partial_{y}\mathcal{L}_{\eta}-\partial_{x}\mathcal{L}_{\xi})(u,u_{x},u_{y},\lambda) = -\mathcal{F}(u,\lambda)=0.
\end{gather*}
It is clear that $\mathcal{H}(u,\lambda; x)\to0$ as $x \to \infty$, so $\mathcal{H}$ is identically $0$. We record this as a lemma. Note that the arguments of $\mathcal{H}$ will often be suppressed in the sequel.
\begin{lemma}[Conserved quantity]
\thlabel{Hconst}
Let $u \in C^{2}(\overline{\Omega})$ be a solution to \eqref{cub} for a fixed $\lambda$. Then $\mathcal{H}$ is constant in $x$. In particular, if $u \in X$, then $\mathcal{H}=0$.
\end{lemma}
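In particular, since every $u \in X$ is even in $x$, we have $u_{x}(0,\cdot)=0$, and the identity $\mathcal{H}=0$ evaluated at $x=0$ reads
\begin{gather*}
\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\Big(\frac{1}{2}\mathcal{W}\big(u_{y}^{2}(0,y)\big)+B\big(u(0,y),\lambda\big)\Big)\,dy=0,
\end{gather*}
which is the form used in the proof of \thref{top} below.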
The conserved quantity and the growth conditions on both $\mathcal{W}$ and $b$ are enough to obtain a uniform bound on $|u(s)|_{0}$ (and on $|\lambda(s)|$, as shown in Subsection~\ref{boundsonlambda_sub}).
\begin{lemma}[$L^{p}$ bounds]
\thlabel{top}
There exists a constant $C(c_{1}, c_{2}, b_{1})$ such that if $u \in X$ is a solution to \eqref{cub}, corresponding to Model I, with $\lambda>0$, then
\begin{gather}
\|u(0,\cdot)\|_{2},\;\; \|u_{y}(0,\cdot)\|_{6}, \; \; |u(0,\cdot)|_{1/2} \leq C,
\end{gather}
where $\| \cdot \|_{p} $ denotes the $L^{p}$ norm on $[-\frac{\pi}{2},\frac{\pi}{2}]$. Moreover, for any $x_{0} \in \mathbb{R}$ we have
\begin{gather*}
\|u(x_{0},\cdot)\|_{2}, \;\;|u(x_{0},\cdot)|_{0} \leq C.
\end{gather*}
\end{lemma}
\begin{proof}
From \eqref{hamiltonian}, \thref{Hconst}, \eqref{wgrowth2} and \eqref{bcond} we see that when $x=0$
\begin{align*}
0=2\mathcal{H} = 2\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathcal{L}\, dy \geq& \|u_{y}(0,\cdot)\|_{2}^{2}+c_{1}\|u_{y}(0,\cdot)\|_{4}^{4}+c_{2}\|u_{y}(0,\cdot)\|_{6}^{6}\\+&(\lambda- 1)\|u(0,\cdot)\|_{2}^{2}+\frac{b_{1}}{2}\|u(0,\cdot)\|_{4}^{4},
\end{align*}
For the remainder of the proof, we will suppress arguments of $u(0,\cdot)$ and $u_{y}(0,\cdot)$ appearing in $L^{p}[-\frac{\pi}{2},\frac{\pi}{2}]$ norms. Wirtinger's inequality implies that $\|u_{y}\|_{2}^{2}-\|u\|_{2}^{2} \geq 0$, and $\|u\|_{4} \leq \pi\|u_{y}\|_{4}$ by Friedrichs's inequality, so that
\begin{gather}
\label{in1}
c_{1}\|u_{y}\|_{4}^{4}+c_{2}\|u_{y}\|_{6}^{6}+\frac{b_{1}\pi}{2}\|u_{y}\|_{4}^{4}\leq -\lambda\|u\|_{2}^{2} +(\|u\|_{2}^{2}-\|u_{y}\|_{2}^{2})\leq 0.
\end{gather}
H\"older's inequality yields
\begin{gather*}
\label{in2}
\|u_{y}\|^{4}_{4} \leq \pi^{\frac{1}{3}}\|u_{y}\|_{6}^{4}.
\end{gather*}
Altogether these give
\begin{gather} \label{upperlambda}
(c_{1}+\frac{b_{1}\pi}{2})\pi^{\frac{1}{3}}\|u_{y}\|_{6}^{4}+c_{2}\|u_{y}\|_{6}^{6} \leq 0
\end{gather}
so that
\begin{gather*}
\|u_{y}\|^{2}_{6} \leq \dfrac{|c_{1}+\frac{b_{1}}{2}\pi|\pi^{\frac{1}{3}}}{c_{2}}.
\end{gather*}
Thus, $\|u_{y}\|_{6}$ is uniformly bounded. An application of H\"older's inequality shows that $\|u_{y}\|_{2}$ is uniformly bounded too. As mentioned above, $\|u\|_{2} \leq \|u_{y}\|_{2}$. Hence, we obtain a uniform bound on $|u(0,\cdot)|_{1/2}$ by Sobolev embedding. Because of the monotonicity of $u$ established in \eqref{nodalprop}, we see that $\|u(x_{0},\cdot)\|_{2}$ and $|u(x_{0},\cdot)|_{0}$ are maximized at $x_{0}=0$. Thus, the $L^{2}$ and $L^{\infty}$ norms of $u$ are uniformly bounded on any transversal line in $\Omega$.
\end{proof}
\begin{remark}\thlabel{bounds_u_II}
Note that this says nothing about solutions of Model II. If $(u,\lambda) \in \mathcal{C}^{II}_{\delta}$, with $\delta>0$, then $|u|_{0}<C$, where $C$ depends on $q_{1}$. This is a direct consequence of \eqref{wnondegen}, \eqref{wdegen}, and the homogeneous Dirichlet conditions ($|\nabla u|^{2}<q_{2}$ along $\mathcal{C}_{\delta}^{II}$).
\end{remark}
\begin{remark}
More generally, if $\mathcal{H}=M$, then the above argument shows that $\|u_{y}(0,\cdot)\|_{6}$ is bounded uniformly by a constant $C$ that depends on $c_{1},c_{2},b_{1}$ and $M$, so long as one assumes sufficient growth of $\mathcal{W}$ relative to $b$.
\end{remark}
At this stage, we are left to establish control on $|\lambda(s)|$ in order to complete our desired estimates of $|u(s)|_{3+\alpha}$.
\subsection{Uniform regularity}
We begin by using the so-called ``$P$-function'' technique (see \cite{sperb}) along with standard elliptic estimates to gain some control on $|u(s)|_{3+\alpha}$.
\begin{lemma}
\thlabel{sperbtype}
Let $(u,\lambda) \in \mathcal{O}_{\delta}\cap \,\mathcal{F}^{-1}(0)$, for some $\delta>0$. If $\lambda$ and $K$ are positive, and $|u|_{0}+\lambda<K$, then there is a constant $C(K,\delta)>0$ for which $|u|_{3+\alpha} \leq C(K,\delta)$. If $(u,\lambda)$ is a solution which corresponds to Model I, then the above estimate holds for some $C = C(K)$.
\end{lemma}
\begin{proof}
We prove this result by using a maximum principle of Payne and Philippin. First, we obtain bounds on $|\nabla u(s) |^{2}$. Recall, as mentioned in \thref{bounds_u_II} that this is trivial for Model II, so let us assume for now the conditions of Model I. By Theorem 1 of \cite{Payne} the function
\begin{gather*}
P(x,y)=\int_0^{|\nabla u(x,y)|^{2}}(\mathcal{W}'(\xi)+2\xi\mathcal{W}''(\xi))d\xi - 2\int_{0}^{u(x,y)}b(\eta,\lambda)\eta \,d\eta,
\end{gather*}
obtains its maximum either on $\partial \Omega$ or at a critical point of $u$. We should note that in \cite{Payne} the results are stated for bounded $C^{2+\alpha}$ domains. So, our application includes the additional possibility that the maximum of $P$ occurs in the limit as $x \to \pm \infty$. However, the decay of $u$ precludes this scenario for nontrivial solutions. The homogeneous Dirichlet boundary conditions of \eqref{cub} and monotonicity properties of \eqref{nodalprop} now imply that $P$ is maximized at $(0,0)$, which is the only critical point of $u$. Thus,
\begin{gather*}
(2q\mathcal{W}'(q)-\mathcal{W}(q))\big\vert_{q=|\nabla u(x,y)|^{2}}-2\int_{0}^{u(x,y)}b(\eta,\lambda)\eta \,d\eta \leq -2\int_{0}^{u(0,0)}b(\eta,\lambda)\eta \, d\eta.
\end{gather*}
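Here we used that $\nabla u(0,0)=0$, together with the elementary identity
\begin{gather*}
\dfrac{d}{d\xi}\big(2\xi\mathcal{W}'(\xi)-\mathcal{W}(\xi)\big)=\mathcal{W}'(\xi)+2\xi\mathcal{W}''(\xi),
\end{gather*}
which, since $\mathcal{W}(0)=0$, evaluates the first integral in the definition of $P$.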
So,
\begin{gather} \label{gradest}
2q\mathcal{W}'(q)-\mathcal{W}(q) \leq -2 \int_{u(x,y)}^{u(0,0)}b(\eta,\lambda)\eta \; d\eta \leq 2(u(0,0))^{2} \max\limits_{(x,y) \in \overline{\Omega}}{|b(u(x,y),\lambda)|}.
\end{gather}
\begin{comment}
\begin{empheq}{align} \label{firstgradest}
\begin{split}
2q\mathcal{W}'(q)-\mathcal{W}(q) &\leq -2 \int_{u(x,y)}^{u(0,0)}b(\eta,\lambda)\eta \; d\eta \leq 2\int_{0}^{\frac{\pi}{2}}((1-\lambda)u(0,y)^{2}+\frac{|b_{1}|}{2}u(0,y)^{4})u_{y}(0,y) \\
&\leq 2 \left(\int_{0}^{\frac{\pi}{2}}((1-\lambda)u(0,y)^{2}+\frac{|b_{1}|}{2}u(0,y)^{4})^{2} \,dy\right)^{\frac{1}{2}}\left(\int_{0}^{\frac{\pi}{2}}u_{y}(0,y)^{2}\,dy\right)^{\frac{1}{2}}
\end{split}
\end{empheq}
Note that \eqref{in1} implies, when combined with the additional estimates of \thref{top}, that $\lambda \|u\|_{2}^{2}$ is uniformly bounded. Thus, the last line of \eqref{firstgradest} is bounded above by
\begin{gather} \label{gradest}
C \left(\int_{0}^{\frac{\pi}{2}}\lambda^{2}u(0,y)^{4}\,dy\right)^{\frac{1}{2}}\left(\int_{0}^{\frac{\pi}{2}}u_{y}(0,y)^{2}\,dy\right)^{\frac{1}{2}} \leq C\lambda
\end{gather}
\end{comment}
Since $b$ is analytic in both $z$ and $\lambda$, it follows that the right hand side of \eqref{gradest} is bounded by $C(K)$. Moreover, the left hand side of \eqref{gradest} satisfies
\begin{gather*}
2q\mathcal{W}'(q)-\mathcal{W}(q) = \int_{0}^{q}\left(\mathcal{W}'(\xi)+2\xi\mathcal{W}''(\xi)\right)d\xi \geq q\,\xi_{1},
\end{gather*}
whenever $q\geq 0$, by \eqref{Wcond}. Hence,
\begin{gather}\label{grad_sup}
|\nabla u|_{0}^{2} \leq \dfrac{2(u(0,0))^{2}}{\xi_{1}} \max\limits_{(x,y) \in \overline{\Omega}}{|b(u(x,y),\lambda)|}.
\end{gather}
Standard elliptic theory can now be invoked to upgrade a uniform bound in $|\nabla u|$ into a uniform bound in $C^{3+\alpha}(\overline{\Omega})$. As we have seen in \eqref{quasilinearize}, $\partial_{x}u$ solves a divergence form elliptic equation. In particular, from \eqref{gradest} it follows that we may view $\partial_{x}u$ as the solution to a linear PDE with uniformly bounded coefficients. An application of \cite[Theorem 8.29]{gilbarg2015elliptic} yields that for some $\alpha' \in (0,\alpha] $
\begin{gather*}
|u_{x}|_{C^{\alpha'}(\Omega_{M})} \leq C
\end{gather*}
where $\Omega_{M} = \Omega\cap \{(x,y) \; : \; M\leq x \leq M+1\}$, and both $\alpha'$ and $C$ depend on $K$ and $\delta$ (or only $K$ in the case of Model I). An analogous bound for $|u_{y}|_{C^{\alpha'}(\Omega_{M})}$ is obtained by differentiating in $y$ instead. Now, by viewing \eqref{cub} as a linear equation with coefficients that depend on $u_{x}$ and $u_{y}$, which we have just shown are uniformly bounded in $C^{\alpha'}(\Omega_{M})$, we may apply (linear) Schauder theory to obtain a uniform bound on $|u|_{C^{2+\alpha'}(\Omega_{M})}$. This gives control over $|u|_{C^{1+\alpha}(\Omega_{M})}$, so that by repeating the previous argument we gain control of $|u|_{C^{2+\alpha}(\Omega_{M})}$. Now, Schauder estimates applied to the linearized equations for either $u_{x}$ or $u_{y}$ provide a uniform bound on $|u|_{C^{3+\alpha}(\Omega_{M})}$. Since these bounds are independent of $M$, the desired estimate on $|u|_{3+\alpha}$ follows.
\end{proof}
\subsection{Bounds on loading parameter}
\label{boundsonlambda_sub}
Now, we show that as we follow either $\mathcal{C}^{I}$ or $\mathcal{C}_{\delta}^{II}$, for some $\delta>0$, that $\lambda(s)$ cannot return to $0$ without the corresponding solutions returning to the reference configuration or the equation undergoing a loss of ellipticity. Moreover, an upper bound on $\lambda(s)$ is derived for either case. These estimates will be used to establish bounds on $|u(s)|_{3+\alpha}$ and \eqref{blowup}.
\begin{lemma}
\thlabel{boundsL}
If $\{(u_{n}, \lambda_{n})\} \subset \mathcal{C}^{I}$, or $\{(u_{n}, \lambda_{n})\} \subset\mathcal{C}^{II}_{\delta}$ for some $\delta>0$, is a sequence of solutions to \eqref{cub} that is uniformly bounded in $C_{\textup{b}}^{3+\alpha}(\overline{\Omega})$ and for which $\lambda_{n} \to 0$, then $u_{n}\to 0$ in $X$.
\end{lemma}
\begin{proof}
Assume that $\{u_{n}\}$ does not converge to $0$ in $C_{\text{b}}^{3+\alpha}(\overline{\Omega})$. By the hypothesis and \eqref{nodalprop}, we may then invoke \thref{CorF}. From this, one may conclude that either there is a subsequence converging in $C^{3+\alpha}_{\text{b}}(\overline{\Omega})$ to a solution $(u,0) \in X\times \mathbb{R}$ of \eqref{cub}, or there is a subsequence of translates
\begin{gather*}
u_{n}(\cdot +x_{n}, \cdot) \xrightarrow{C^{2}_{\text{b}}(\overline{\Omega})}\tilde{u}(\cdot, \cdot)
\end{gather*}
where $x_{n} \to \infty$, and $(\tilde{u},0) \in C^{3+\alpha}_{\text{b}}(\overline{\Omega})\times \mathbb{R}$ solves \eqref{cub}. Moreover, $\tilde{u}_{x},\tilde{u}_{y} \to 0$ uniformly in $x,y$ as $x\to \infty$, and $\tilde{u}_{x} \leq 0 $. From \eqref{nodalprop} we also know that $\tilde{u}_{y} \geq0$ for $y \in [-\frac{\pi}{2},0)$ and $\tilde{u}_{y} \leq 0$ for $y \in (0,\frac{\pi}{2}]$. An application of the strong maximum principle for signed solutions (see \hyperref[strong_max]{\thref{max_pricp}.\ref{strong_max}}) shows that the inequalities for $\tilde{u}_{x}$ and $\tilde{u}_{y}$ must in fact be strict in the interior of $\Omega$. Throughout the rest of the proof, the properties of $u$ and $\tilde{u}$ that concern us are the same, namely that they are monotone bounded solutions to \eqref{cub} that decay as $x \to \infty$. For simplicity, let us only write $u$ in the sequel, with the understanding that the argument applies equally well to $\tilde{u}$.
Let $0<R_{1}<R_{2}$ and $\varphi(y)=\cos(y)$. Multiplying \eqref{cub} by $\varphi$ and then integrating we find
\begin{align*}
0&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int_{R_{1}}^{R_{2}}(\nabla \cdot (\mathcal{W}'(|\nabla u|^{2})\nabla u)-b(u,\lambda))\varphi\,dxdy \\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathcal{W}'(|\nabla u|^{2})u_{x}\varphi\big\vert_{R_{1}}^{R_{2}}\;dy+\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int_{R_{1}}^{R_{2}}\left(-\mathcal{W}'(|\nabla u|^2)u_{y}\varphi_{y}-b(u,\lambda)\varphi\right)\,dxdy
\end{align*}
Note that $u_y\varphi_y > 0$ by the comments above. For $R_{1}$ sufficiently large, $$\frac{1}{2} <\mathcal{W}'(|\nabla u|^{2}) < 1 \;\;\;\text{in}\;\;\; (R_{1},\infty)\times(-\frac{\pi}{2},\frac{\pi}{2})$$ because of the decay in $u_{x}$ and $u_{y}$. Recall that $b$ has the form \eqref{b} when $|\nabla u|^{2}$ is sufficiently small. So for large enough $R_{1}$ we also have $-b(u,\lambda)-(1-\lambda)u\geq0$ whenever $x>R_{1}$ and $-\frac{\pi}{2} < y < \frac{\pi}{2}$. Letting $R_{2} \to \infty$ we see that
\begin{gather*}
\label{contra}
0=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int_{R_{1}}^{\infty}(-\mathcal{W}'(|\nabla u|^{2})+1)u_{y}\varphi_{y}+(-b(u,\lambda)-(1-\lambda)u)\varphi \,dxdy \\-\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathcal{W}'(|\nabla u|^{2})u_{x}\varphi(R_{1},y)\,dy>0.
\end{gather*}
This is of course a contradiction.
\end{proof}
\begin{lemma}[Bounds on $\lambda$]
\thlabel{boundsonlambda}
Let $(u,\lambda) \in \mathcal{C}^{I} \setminus\mathcal{C}^{I}_{\textup{loc}}$. Then there exist positive constants $\lambda_{1}^{\pm}=\lambda_{1}^{\pm}(c_{1},c_{2},b_{1})$ for which $0< \lambda_{1}^{-} < \lambda<\lambda_{1}^{+} < \infty$. If instead we take $(u,\lambda) \in \mathcal{C}^{II}_{\delta}\setminus\mathcal{C}^{II}_{\textup{loc}}$, for some $\delta>0$, then the result still holds with positive constants $\lambda_{2}^{\pm}$, except that $\lambda_{2}^{-}$ now depends on $\delta$ as well.
\end{lemma}
\begin{proof}
First, let us suppose that there is some sequence $\{(u_{n},\lambda_{n})\} \subset \mathcal{C}_{\delta}^{I,II} \setminus\mathcal{C}^{I,II}_{\text{loc}}$ for which $\lambda_{n} \to 0$. By \thref{sperbtype} and \thref{top} we see that $\{u_{n}\}$ is uniformly bounded in $C^{3+\alpha}_{\text{b}}(\overline{\Omega})$ (in the case of Model II there is dependence on $\delta$). Then, from \thref{boundsL}, it follows that $u_{n} \to 0$ in $C^{3+\alpha}_{\text{b}}(\overline{\Omega})$. However, \thref{Positive} then implies that $(u_{n},\lambda_{n}) \in \mathcal{C}^{I,II}_{\text{loc}}$ for large enough $n$, and this contradicts part \ref{notinloc} of \thref{global}.
To establish an upper bound on $\lambda$, observe that \eqref{bcond} and \thref{Hconst} imply
\begin{gather*}
0 = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\mathcal{W}(|\nabla u |^{2})+2B)\big\vert_{x=0}\,dy \geq \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \mathcal{W}(|\nabla u|^{2})\big\vert_{x=0}\,dy+(\lambda -1) \|u(0,\cdot)\|_{2}^{2}+\frac{b_{1}}{2}\|u(0,\cdot)\|_{4}^{4},
\end{gather*}
where we are using $\|\cdot\|_{p}$ to denote the $L^{p}$ norm on $[-\frac{\pi}{2},\frac{\pi}{2}]$. Since $\mathcal{W}(q) > 0$ for $q>0$, it follows that $(\lambda -1)\|u(0,\cdot)\|_{2}^{2}+\frac{b_{1}}{2}\|u(0,\cdot)\|_{4}^{4} \leq 0$. Dividing by $\|u(0,\cdot)\|_{2}^{2}$, which is nonzero for nontrivial solutions, and appealing to \thref{top}, we find that \begin{gather*}
\lambda \leq 1+\frac{|b_{1}|}{2}\sup_{s>0}|u(s)|_{0}^{2} \leq C(c_{1}, c_{2}, b_{1}) \qedhere
\end{gather*}
\end{proof}
We are now ready to state the main a priori estimate.
\begin{proposition} \thlabel{apriori}
If $(u, \lambda) \in \mathcal{C}^{I}$, then there exists some $C(c_{1},c_{2},b_{1})>0$ for which $|u|_{3+\alpha} \leq C$. If instead $(u,\lambda) \in \mathcal{C}^{II}_{\delta}$, then there exists $C(c_{1},c_{2},b_{1},\delta) >0$ for which $|u|_{3+\alpha} \leq C$.
\end{proposition}
\begin{proof}
For $(u,\lambda) \in \mathcal{C}^{I}$, the result follows from \thref{top} and \thref{boundsonlambda} combined with \thref{sperbtype}. For $(u,\lambda) \in \mathcal{C}^{II}_{\delta}$, the estimate can be obtained from \thref{bounds_u_II} and \thref{boundsonlambda} in conjunction with \thref{sperbtype}.
\end{proof}
\end{section}
\begin{section}{Proof of the main results} \label{main_proofs}
We are now prepared to prove the main results. The existence of a global solution branch, for either model, is shown in Section~\ref{globalsection}. The key difference between the global behavior of $\mathcal{C}^{I}$ and $\mathcal{C}^{II}$ is related to alternative \ref{blowup_alt} of \thref{global}; for $\mathcal{C}^{I}$ it is shown to be impossible, thus forcing alternative \ref{lossofC}, whereas for $\mathcal{C}^{II}$ it is shown to hold (note that \ref{lossofC} is \textit{not} necessarily excluded in this case).
\begin{proof}[Proof of \thref{thm1}]
By \thref{global}, there exists a curve of solutions $\mathcal{C}^{I}$ extending $\mathcal{C}^{I}_{\text{loc}}$, which is locally real analytic with $C^{0}$ parameterization
\begin{gather*}
\mathcal{C}^{I} = \{(u(s),\lambda(s)) \; : \; 0<s < \infty \} \subset X\times \mathbb{R}.
\end{gather*}
The symmetry and monotonicity properties of \hyperref[sm]{\thref{thm1}.\ref{sm}} are proved in \thref{globalnodal}. The bounds on $\lambda(s)$ and $\sup_{s \geq 0}|u(s)|_{3+\alpha}$ from \hyperref[th1boundslambda]{\thref{thm1}.\ref{th1boundslambda}} and \hyperref[bounds_displacement]{\thref{thm1}.\ref{bounds_displacement}} are established in \thref{boundsonlambda} and \thref{apriori}, respectively. This establishes all parts of \thref{thm1} except for the broadening in \ref{broadening}. From the alternatives in \thref{global}, we see that \ref{broadening} must hold if
\begin{gather*}
N(s)= |u(s)|_{3+\alpha}+\dfrac{1}{\text{dist}(u(s),\partial \mathcal{O})}+\lambda(s)+\dfrac{1}{\text{dist}(\lambda(s),\partial \mathcal{I})}
\end{gather*}
is bounded uniformly in $s$. The bound on the first term follows directly from \thref{sperbtype} and those on the third and fourth terms follow from the estimates on $\lambda$ established in \thref{boundsonlambda}. Recall that $\mathcal{O}$ is defined by \eqref{ODef}, so \eqref{Wcond} implies that the second term remains bounded as well. Hence, we have the desired control over $N(s)$, and the result must hold. \end{proof}
We need to prove one more lemma in preparation for \thref{thm2}.
\begin{lemma}[Nonexistence of monotone fronts]
\thlabel{dontlose}
Let $\mathcal{F}$ correspond to Model II. Then, $\mathcal{F}^{-1}(0) \cap \mathcal{O}_{\delta}$, for $\delta>0$, is locally pre-compact in $X$. In particular, alternative \ref{Front} of \thref{CorF} cannot hold for a sequence $\{(u_{n},\lambda_{n})\} \subset \mathcal{C}^{II}_{\delta}$, $\delta>0$.
\end{lemma}
\begin{proof}
From \thref{CorF} we find that if $\mathcal{F}^{-1}(0) \cap \mathcal{O}_{\delta}$ fails to be locally pre-compact, then \hyperref[Front]{\thref{CorF}.\ref{Front}} must hold. Suppose that $\{u_{n}\}$ is a sequence satisfying \hyperref[Front]{\thref{CorF}.\ref{Front}}. Then $\lim_{n \to \infty} u_{n}(\cdot + x_{n},y)=: \tilde{u}(x,y) \in C^{3+\alpha}_{\text{b}}(\overline{\Omega})$ must solve \eqref{cub}. Moreover, $U(y):= \lim_{x\to -\infty} \tilde{u}(x,y)$ satisfies
\begin{gather}
\label{limitingode}
(\mathcal{W}'(U_{y}^{2})U_{y})_{y}-b(U,\lambda)=0,
\end{gather}
which can be seen by \cite[Lemma~2.3]{chen2020global}. Multiplying \eqref{limitingode} by $U(y)$ and integrating by parts yields
\begin{gather*}
\label{eq1}
0 = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\mathcal{W}'(U_{y}^{2})U_{y}^{2}+b(U,\lambda)U)\; dy.
\end{gather*}
From \thref{Hconst} we know that $\mathcal{H}(u_{n},\lambda; x)=0$, and hence that $\mathcal{H}(U,\lambda; x)=0$. Written explicitly this becomes
\begin{gather*}
0=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\left(\frac{1}{2}\mathcal{W}(U^{2}_{y})+B(U,\lambda)\right)\; dy.
\end{gather*}
After combining these equations, we find
\begin{gather}\label{modelII_cont}
0 = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\mathcal{W}'(U_{y}^{2})U_{y}^{2}-\mathcal{W}(U^{2}_{y})+b(U,\lambda)U-2B(U,\lambda))\; dy.
\end{gather}
A simple calculation will show
\begin{gather*}
b(z, \lambda)z -2 B(z, \lambda) < 0 \; \; \; \text{for} \; \; \; z>0,
\end{gather*}
where the concavity of $b(\cdot,\lambda)$ and the fact that $b(0, \lambda)=0$ are used. Recall that $q\mathcal{W}'(q)-\mathcal{W}(q)<0$ by \eqref{wdegendamp}. But then the right-hand side of \eqref{modelII_cont} is negative, which is a contradiction; hence the result holds.
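For completeness, here is one way to carry out that calculation, under the (assumed) normalization that $B(\cdot,\lambda)$ is the antiderivative of $b(\cdot,\lambda)$ with $B(0,\lambda)=0$, which is consistent with the expansion \eqref{b}: for $z>0$ and $0\le t\le z$, concavity together with $b(0,\lambda)=0$ gives $b(t,\lambda)\ge \frac{t}{z}b(z,\lambda)$, so
\begin{gather*}
2B(z,\lambda)=2\int_{0}^{z}b(t,\lambda)\,dt \ge 2\int_{0}^{z}\frac{t}{z}\,b(z,\lambda)\,dt = z\,b(z,\lambda),
\end{gather*}
that is, $b(z,\lambda)z-2B(z,\lambda)\le 0$, with strict inequality whenever $b(\cdot,\lambda)$ is strictly concave on $[0,z]$.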
\end{proof}
\begin{proof}[Proof of \thref{thm2}]
From \thref{global}, we see there is a curve of solutions $\mathcal{C}^{II}$, extending $\mathcal{C}^{II}_{\text{loc}}$, which is locally real analytic with $C^{0}$ parameterization
\begin{gather*}
\mathcal{C}^{II} = \{(u(s),\lambda(s)) \; : \; 0<s < \infty \} \subset X \times \mathbb{R}.
\end{gather*}
As in the proof of \thref{thm1}, we wish to understand the alternatives in \hyperref[alternatives]{\thref{global}.\ref{alternatives}}. The quantity
\begin{gather*}
\inf_{\overline{\Omega}}\big(\mathcal{W}'(q)+2q\mathcal{W}''(q)\big)\big\vert_{q=|\nabla u(s)|^{2}}
\end{gather*}
is not bounded below a priori along $\mathcal{C}^{II}$ as it was for $\mathcal{C}^{I}$. This leads us to consider $N(s)$ (see \eqref{blowup} or the proof of \thref{thm1}) on a segment $\mathcal{C}^{II}_{\delta}$ of $\mathcal{C}^{II}$, with $\delta>0$. An estimate of the form $|u(s)|_{3+\alpha} < C(\delta)$ is obtained whenever $(u(s), \lambda(s)) \in \mathcal{C}^{II}_{\delta}$, by \thref{apriori}.
Now, if we assume $\mathcal{C}^{II} =\mathcal{C}^{II}_{\delta^{*}}$, for some $\delta^{*}>0$, then the first term of $N(s)$ is uniformly bounded along $\mathcal{C}^{II}$ by the paragraph above. Of course, this assumption also implies that the second term is uniformly bounded along $\mathcal{C}^{II}$ by definition. Furthermore, \thref{boundsonlambda} implies that the third and fourth terms are also controlled.
Thus, there is some $C'(\delta^{*})$ for which $s \gg 1$ implies $|N(s)| \leq C'(\delta^{*})$. Hence, alternative \ref{lossofC} of \thref{global} must hold. However, this contradicts the impossibility of fronts established in \thref{dontlose}. So we must have
\begin{gather*}
\lim_{s\to \infty}\inf_{\overline{\Omega}}\big(\mathcal{W}'(q)+2q\mathcal{W}''(q)\big)\big\vert_{q=|\nabla u(s)|^{2}} = 0.
\end{gather*}
In particular, we must have $|\nabla u(s)|^{2} \to q_{1}$. Note that our estimates on $|u(s)|_{3+\alpha}$ and $\lambda(s)$ break down as $\delta \to 0$. This leaves open the possibility that $\lambda(s)$ approaches either $0$ or $\infty$, or that a blow-up in $C^{3+\alpha}(\overline{\Omega})$ (note that $|u(s)|_{0}$ and $|u_{y}(s)|_{0}$ are indeed bounded, but the elliptic estimates depend on $\delta$) occurs concurrently with the loss of ellipticity. This establishes \hyperref[loss_ellipt]{\thref{thm2}.\ref{loss_ellipt}}.
The monotonicity and symmetry properties of \hyperref[sm2]{\thref{thm2}.\ref{sm2}} are proved in \thref{globalnodal}. Finally, the bound on $\lambda(s)$ of \hyperref[th2boundslambda]{\thref{thm2}.\ref{th2boundslambda}} is proved in \thref{boundsonlambda}.
\end{proof}
\end{section}
\section*{Acknowledgements}
The author was supported in part by the NSF through DMS-1812436. The author also wishes to thank Samuel Walsh for his advice and guidance throughout the writing of this paper.
\section{Introduction}
\label{sec:typesetting-summary}
\emph{Integer Programming} is widely used as a modelling tool for a variety of combinatorial optimization problems. A standard form of an integer program (IP) is defined as follows:
\begin{eqnarray}\label{ILP}
\max\{\vew\cdot \vex: H \vex=\veb, \vel\le \vex\le \veu, \vex\in \ensuremath{\mathbb{Z}}^{N} \}
\end{eqnarray}
where the coordinates of $H,\vew,\veb,\vel,\veu$ are integers. Here $H$ is the \emph{constraint matrix} with dimension $M\times N$. We let $\Delta$ be the largest absolute value among all the entries of $H$.
In general, IP is NP-hard, which was shown by Karp~\cite{karp1972reducibility}, thus motivating the search for tractable special cases. There are two important lines of research in the literature which target at different parameters and motivate our research in this paper.
The first line of research dates back to the work of Papadimitriou in 1981~\cite{papadimitriou1981complexity}, where he considered IPs with few constraints and
provided an algorithm whose running time is $(M\cdot \Delta)^{O(M^2)}$.
This result was later improved by Eisenbrand and Weismantel~\cite{eisenbrand2019proximity}, and then by Jansen et al.~\cite{jansen2018integer}. So far the best known result is $(\sqrt{M}\Delta)^{O(M)}\cdot\log(\|\veb\|_{\infty})$, where $\|\veb\|_{\infty}$ represents the maximal absolute value of coordinates in vector $\veb$.
The second line of research dates back to the work of Lenstra~\cite{lenstra1983integer} in 1983, where he considered IPs with few variables. This result was later improved by Kannan~\cite{kannan1987minkowski}, who presented an algorithm of running time $N^{O(N)}\cdot poly(M,\log\Delta)$. In recent years, there has been further improvement on the coefficient of the exponent in the term $N^{O(N)}$ (see, e.g.,~\cite{dadush2011enumerative}).
The above algorithms require $H$ to have either few rows or few columns, but in many applications it may be inevitable to have a constraint matrix with a huge number of rows and columns. In recent years, there has been increasing interest in the study of IP where the constraint matrix $H$ may have many rows and columns, but has a more restricted block structure. Such block-structured IP finds application in a variety of optimization problems including string matching, computational social choice, resource allocation, etc.\ (see, e.g.,~\cite{knop2019combinatorial,faliszewski2018opinion,knop2020voting,chen2018covering,jansen2018empowering,knop2018scheduling}). We give a brief introduction below.
\iffalse
\subparagraph{$n$-fold IP.} Given $nt$-dimensional integer vectors $\vew,\veb,\vel,\veu$, $n$-fold integer programming ($n$-fold IP) is in the following form with variable dimension $nt$:
\begin{eqnarray}\max\{\vew \cdot\vex: H\vex=\veb, \vel\le \vex\le \veu, \vex\in \ensuremath{\mathbb{Z}}^{nt}\},
\end{eqnarray}
where
$$ {H} = \left(
\begin{array}{cccc}
D & D & \ldots &D\\
A & 0 & \ldots & 0\\
0& A&\ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & A\\
\end{array} \right) $$
is an $(r+ns)\times nt$ matrix with $D$ an $r\times t$ matrix and $A$ an $s\times t$ matrix. The vector $\vex$ is naturally partitioned into $n$ bricks of size $t$, and we index it as $\vex=(\vex^1,\vex^2,\ldots,\vex^n)$, where $\vex^i=(x^i_1,x^i_2,\ldots,x^i_t)$.
\fi
\smallskip
\noindent\textbf{Block-structured IP.}
We consider IP~\eqref{ILP} where $H$ is built from small submatrices $A$, $B$, $C$ and $D$ in the following form:
\begin{eqnarray}\label{eq:4block}
H=
\begin{pmatrix}
C & D & D & \cdots & D \\
B & A & 0 & & 0 \\
B & 0 & A & & 0 \\
\vdots & & & \ddots & \\
B & 0 & 0 & & A
\end{pmatrix}.
\end{eqnarray}
Here, $A,B,C,D$ are $s_i\times t_i$ matrices, where $i=A,B,C,D$, respectively. $H$ consists of $n$ copies of $A,B,D$ and one copy of $C$. Consequently, $N=t_B+nt_A$ and $M=s_C+ns_B$. Notice that by plugging $A,B,C,D$ into the above block structure we require that $s_C=s_D$, $s_A=s_B$, $t_B=t_C$ and $t_A=t_D$.
The above IP is called 4-block $n$-fold IP. As a special case, when $C=B=0$, it is called $n$-fold IP; when $C=D=0$, it is called two-stage stochastic IP. It is worth mentioning that recently researchers have also considered more generalized IPs where the submatrices $A,B,D$ are not necessarily identical (i.e., the $n$ identical $A$'s, $B$'s, $D$'s are replaced with $A_i,B_i,D_i$, respectively). We call it generalized 4-block $n$-fold IP, and call its two special cases generalized $n$-fold IP and generalized two-stage stochastic IP.
\smallskip
\noindent\textbf{Related work on Block-structured IP.}
Let $\varphi$ be the encoding length of a block-structured IP. For $n$-fold IP, Hemmecke et al.~\cite{hemmecke2013n} gave an algorithm of running time $n^3t_A^3\varphi\cdot (s_Ds_A\Delta)^{\mathcal{O}(t_A^2s_D)}$. Later on, improved algorithms were developed by a series of researchers including Eisenbrand et al.~\cite{eisenbrand2018faster,eisenbrand2019algorithmic}, Altmanov{\'a} et al.~\cite{altmanova2019evaluating}, Jansen et al.~\cite{jansen2019near}, and Cslovjecsek et al.~\cite{cslovjecsek2020n}. So far, generalized $n$-fold IP can be solved in time $(s_Ds_A\Delta)^{O(s_A^2+s_As_D^2)}\cdot nt_A$. Specifically, if $A=(1,\ldots,1)$ in an $n$-fold IP, then this is called combinatorial $n$-fold IP. Even such a restricted class of IP finds applications in a variety of problems including computational social choice, stringology, etc.~\cite{knop2019combinatorial}.
For two-stage stochastic IP, Hemmecke and Schultz~\cite{hemmecke2003decomposition} were the first to present an algorithm of running time $poly(n)\cdot f(s_A,s_B,t_A,t_B,\Delta)$ for some computable function
$f$, although the function $f$ is not given explicitly. Very recently, Klein~\cite{klein2020complexity} developed an algorithm with such a running time for generalized two-stage stochastic IP, where $f$ is a doubly exponential function.
For 4-block $n$-fold IP, Hemmecke et al.~\cite{hemmecke2010polynomial} gave an algorithm which runs in time $n^{g(s_D+s_A,t_B+t_A,\Delta)}\varphi$ for some computable function $g$ which is doubly exponential. Very recently, Chen et al.~\cite{chen2020new} presented an improved algorithm whose running time is singly exponential.
It is noticeable that early algorithms for $n$-fold IP have a running time exponential in both the number of rows and the number of columns of the small submatrices~\cite{hemmecke2013n}, and recent progress has reduced the running time so that it is only exponential in the number of rows of the submatrices, coinciding with the running time of ``Papadimitriou's line'' of algorithms for general IP. It is thus natural to ask: can we hope for a ``Lenstra's line'' of algorithms for block-structured IP that is polynomial in $\log\Delta$? More precisely, can we expect an algorithm for block-structured IP of running time $f(s_A,s_B,s_C,s_D,t_A,t_B,t_C,t_D)\cdot poly(n,\log\Delta)$, or $(n\log\Delta)^{f(s_A,s_B,s_C,s_D,t_A,t_B,t_C,t_D)}$ if the former is not possible? This paper aims at a systematic study in this direction.
\smallskip
\noindent\textbf{Our contributions.}
The major contribution of this paper is to give a full characterization of when an FPT or XP algorithm exists for block-structured IP without $\Delta$, the largest coefficient, being part of the parameters.
We show that, in general, $n$-fold IP is NP-hard if $\Delta$ does not belong to the parameters. In particular, NP-hardness follows even if the submatrix $A=(1,1,\Delta)$.
On the positive side, we achieve the following algorithmic results:
\begin{itemize}
\item If $A=(1,\ldots,1)\in \ensuremath{\mathbb{Z}}^{1\times t_A} $, then 4-block $n$-fold IP can be solved in $(t_A+t_B)^{O(t_A+t_B)}\cdot poly(n,\log\Delta)$ time;
\item If $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A} $, $t_A=s_A+1$ and $\text{rank}(A)=s_A$, then 4-block $n$-fold IP can be solved in $(t_A+t_B)^{O(t_A+t_B)}\cdot n^{O(t_A)}\cdot poly(\log\Delta)$ time; specifically, $n$-fold IP can be solved in linear time $n\cdot poly(t_A,\log \Delta)$.
\end{itemize}
It is remarkable that our NP-hardness results already rule out an algorithm of running time $n^{f(t_A)}poly(\log\Delta)$ even for $n$-fold IP when $t_A\ge s_A+2$, hence an algorithm for $t_A=s_A+1$ is the best we can hope for.
One implication of our results concerns the impact of the box constraint $\vel\le\vex\le\veu$ on the complexity of block-structured IP. Our NP-hardness result can be translated to the NP-hardness of the following scheduling problem: given $m$ identical machines and three types of jobs, each type of job has the same processing time on every machine. Each machine $i$ has cardinality constraints such that it can accept at most $c_i^j$ jobs of type $j$, where $j=1,2,3$. The goal is to find an assignment of jobs to machines such that the makespan (largest job completion time) is minimized. Note, however, that this scheduling problem is polynomial-time solvable if there are no cardinality constraints~\cite{goemans2014polynomiality}. When formulating the scheduling problem using $n$-fold IP, the cardinality constraints hide in the box constraints $\vel\le\vex\le \veu$. Therefore, if we look at the $n$-fold IP formulation of the scheduling problem, the simpler box constraint $\vex\ge 0$ allows a polynomial-time algorithm for three or even a constant number of different types of jobs, while a general box constraint $\vel\le\vex\le \veu$ only leads to polynomiality for two types of jobs. The reader will also see that the most technical part of our algorithm lies in dealing with the box constraints. In contrast, essentially all existing algorithms for block-structured IP rely on an iterative augmentation framework which does not really distinguish between different kinds of box constraints. From that perspective, our algorithmic results can be viewed as a complement to existing algorithms. It remains an important open problem what kinds of box constraints can lead to polynomial-time algorithms when $t_A\ge s_A+2$.
\section{Preliminaries}
\noindent\textbf{Notation.}
We write vectors in boldface, e.g. $\vex, \vey$, and their entries in normal font, e.g. $x_i, y_i$.
Recall that a solution $\vex$ for $4$-block $n$-fold IP is a $(t_B+nt_A)$-dimensional vector; we write it into $n+1$ \emph{bricks}, such that $\vex=(\vex^0,\vex^1,\cdots,\vex^n)$, where $\vex^0 \in \ensuremath{\mathbb{Z}}^{t_B}$ and each $\vex^i \in \ensuremath{\mathbb{Z}}^{t_A}$, $1\le i\le n$. We call $\vex^i$ the \emph{$i$-th brick} for $0\le i\le n$. For a vector or a matrix, we write $\|\cdot\|_{\infty}$ to denote the maximal absolute value of its elements. For two vectors $\vex,\vey$ of the same dimension, $\vex\cdot\vey$ denotes their inner product.
We use $\text{gcd}(\cdot,\cdot)$ to represent the greatest common divisor of two integers. For example, $\text{gcd}(\lambda,\mu)$ represents the greatest common divisor of integers $\lambda$ and $\mu$.
We usually use lowercase letters for variables and uppercase letters for matrices. For an arbitrary matrix $H$, we use $\text{rank}(H)$ to denote its rank. We use $poly(x)$ to denote a polynomial in $x$.
\smallskip
\noindent\textbf{Input size.} In an IP~\eqref{ILP}, it is allowed that the entries of $\veb, \vel,\veu$ are $\infty$. However, utilizing the techniques of Tardos~\cite{tardos1986strongly}, Koutecký et al.~\cite{koutecky2018parameterized} showed that without loss of generality we can restrict that $\|\veb\|_\infty, \|\vel\|_\infty, \|\veu\|_\infty\le 2^{O(n\log n)}\Delta^{O(n)}$. We assume this bound throughout this paper.
\smallskip
\noindent\textbf{B\'{e}zout's identity.}
Let $\lambda$ and $\mu$ be integers with greatest common divisor $\text{gcd}(\lambda,\mu)$. Then, there exist integers $x$ and $y$ such that $\lambda x + \mu y = \text{gcd}(\lambda,\mu)$.
\smallskip
\noindent\emph{Structure of solutions.}
When an arbitrary solution $(\hat{x}, \hat{y})$ has been computed (e.g., using the extended Euclidean algorithm), all pairs of solutions can be represented in the form
$\Big(\hat{x}+\ell{\frac {\mu}{\text{gcd}(\lambda,\mu)}}, \hat{y}-\ell{\frac {\lambda}{\text{gcd}(\lambda,\mu)}}\Big),$
where $\ell$ is an arbitrary integer.
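As an illustration (a minimal sketch in Python; the function name \texttt{extended\_gcd} and the toy numbers are ours and not taken from any library), the extended Euclidean algorithm produces one particular solution $(\hat{x},\hat{y})$, and the formula above then enumerates all others:
\begin{verbatim}
# Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a % b)*y and a % b = a - (a // b)*b
    return (g, y, x - (a // b) * y)

lam, mu, rhs = 6, 10, 8                 # solve lam*x + mu*y = rhs over the integers
g, x0, y0 = extended_gcd(lam, mu)
assert rhs % g == 0                     # otherwise no integer solution exists
x_hat, y_hat = x0 * (rhs // g), y0 * (rhs // g)
# all solutions: (x_hat + ell*mu/g, y_hat - ell*lam/g), ell an arbitrary integer
for ell in range(-2, 3):
    x, y = x_hat + ell * (mu // g), y_hat - ell * (lam // g)
    assert lam * x + mu * y == rhs
\end{verbatim}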
\noindent\textbf{Smith normal form.}
Let $A$ be a nonzero $s\times t$ matrix over a principal ideal domain. $\bar{A}$ is called the Smith normal form of $A$: there exist invertible $s\times s$ and $(t\times t)$-matrices $U$, $V$ such that the product $UAV$ is $\bar{A}$, and its diagonal elements $\alpha_{i}$ satisfy $\alpha_i|\alpha_{i+1}$ for all $1\le i\le h-1$, where $h=\text{rank}(A)$. The rest elements in $\bar{A}$ are zero.
\noindent\emph{Remark.} The process of transforming an integer matrix into its Smith normal form is in polynomial time, i.e., $poly(s,t,\log \Delta)$~\cite{kannan1979polynomial}.
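As an illustration of the diagonal of $\bar{A}$ (only a sketch: it uses the classical characterization $\alpha_{k}=d_{k}/d_{k-1}$, where $d_{k}$ is the greatest common divisor of all $k\times k$ minors and $d_{0}=1$; it does not recover $U,V$, and it is not the polynomial-time algorithm of~\cite{kannan1979polynomial}; the matrix is a toy example):
\begin{verbatim}
from itertools import combinations
from math import gcd
from functools import reduce

def minor_det(M, rows, cols):
    # determinant by Laplace expansion; fine for tiny matrices
    if len(rows) == 1:
        return M[rows[0]][cols[0]]
    return sum((-1) ** j * M[rows[0]][cols[j]] *
               minor_det(M, rows[1:], cols[:j] + cols[j + 1:])
               for j in range(len(cols)))

def smith_diagonal(M):
    s, t = len(M), len(M[0])
    alphas, d_prev = [], 1
    for k in range(1, min(s, t) + 1):
        d_k = reduce(gcd, (abs(minor_det(M, r, c))
                           for r in combinations(range(s), k)
                           for c in combinations(range(t), k)))
        if d_k == 0:          # rank reached; remaining invariant factors are 0
            break
        alphas.append(d_k // d_prev)
        d_prev = d_k
    return alphas

A = [[2, 4, 4], [-6, 6, 12], [10, 4, 16]]
print(smith_diagonal(A))      # prints [2, 2, 156]; note 2 | 2 | 156
\end{verbatim}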
\iffalse
That is,
$$ \bar{A}=UAV= \left(
\begin{array}{cccccccc}
\alpha_1 & 0 &0& \ldots &0&0& \ldots &0\\
0 & \alpha_2 &0& \ldots & 0&0& \ldots &0\\
0& 0&\alpha_3&\ldots & 0&0& \ldots &0\\
\vdots & \vdots & \vdots & \ddots & \vdots&\vdots& \ddots&\vdots\\
0 & 0 &0& \ldots &\alpha_h&0& \ldots &0\\
0 & 0 &0& \ldots &0&0& \ldots &0\\
\vdots & \vdots& \vdots & \ddots & \vdots& \vdots & \ddots& \vdots\\
0 & 0 &0& \ldots &0&0& \ldots&0\\
\end{array} \right).$$
\fi
\section{Hardness results}
Recall $n$-fold IP is a special case of $4$-block $n$-fold IP when $B=C=0$ in Eq~\eqref{eq:4block}. The goal of this section is to prove the following theorem.
\begin{theorem}\label{thm:np-nfold}
It is NP-hard to determine whether an $n$-fold IP admits a feasible solution even if $A=(1,1,\Delta)$ and $D=(1,0,0)$, where $\Delta\in \ensuremath{\mathbb{Z}}$ is part of the input.
\end{theorem}
\begin{proof}
We reduce from subset-sum. In a subset-sum problem, given are $n$ positive integers $\beta_1,\beta_2,\cdots,\beta_n$, and the goal is to find a subset of these integers which add up to exactly $\Delta\in\ensuremath{\mathbb{N}}$.
Given a subset-sum instance, we construct an $n$-fold integer program instance such that $A=(1,1,\Delta)$ and $D=(1,0,0)$. Note that each brick $\vex^i=(x^i_1,x^i_2,x^i_3)$. Let the interval constraints for variables be $0\le x^i_1\le \beta_i$, $0\le x^i_2\le \Delta-\beta_i$ and $0\le x^i_3\le 1$. Let $\veb^0=\veb^i=\Delta$. This finishes the construction.
Now we write down explicitly the $n$-fold integer program as follows:
\begin{subequations}
\begin{eqnarray}
&& \sum_{i=1}^n x^i_1=\Delta \label{np-1}\\
&& x^i_1+x^i_2+\Delta x^i_3=\Delta, \hspace{37mm} \forall 1\le i\le n \label{np-2}\\
&& 0\le x^i_1\le \beta_i, 0\le x^i_2\le \Delta-\beta_i, 0\le x^i_3\le 1, \hspace{5mm}\forall 1\le i\le n \nonumber\\
&& x^i_1, x^i_2, x^i_3\in \ensuremath{\mathbb{Z}}, \hspace{47mm} \forall 1\le i\le n \nonumber
\end{eqnarray}
\end{subequations}
Since $x^i_3\in\{0,1\}$, there are two possibilities. If $x^i_3=1$, then $x^i_1=x^i_2=0$; otherwise, $x^i_1+x^i_2=\Delta$. As $x^i_1\le \beta_i$ and $x^i_2\le \Delta-\beta_i$, we have $x^i_1=\beta_i$ and $x^i_2=\Delta-\beta_i$ if $x^i_3=0$. Hence, $x^i_1$ is either $0$ or $\beta_i$. By Constraint~\eqref{np-1}, the constructed $n$-fold integer program instance admits a feasible solution if and only if there exists a subset of $\{\beta_1,\beta_2,\cdots,\beta_n\}$ whose sum is $\Delta$. Hence, $n$-fold IP is NP-hard even if $s_D=s_A=1$, and $t_A= 3$. \qed
\end{proof}
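To make the reduction concrete (purely illustrative and not part of the proof; the instance below is a toy example and the check is brute force), one can verify on small data that the constructed program is feasible exactly when the subset-sum instance is a yes-instance:
\begin{verbatim}
from itertools import product, combinations

beta, Delta = [3, 5, 6, 7], 12    # toy subset-sum instance: does a subset sum to 12?
n = len(beta)

def bricks(i):
    # all integer bricks (x1, x2, x3) obeying the i-th local row and box constraints
    return [(x1, x2, x3)
            for x1 in range(beta[i] + 1)
            for x2 in range(Delta - beta[i] + 1)
            for x3 in (0, 1)
            if x1 + x2 + Delta * x3 == Delta]

ip_feasible = any(sum(b[0] for b in choice) == Delta   # linking row: sum_i x1^i = Delta
                  for choice in product(*[bricks(i) for i in range(n)]))
subset_sum_yes = any(sum(c) == Delta
                     for r in range(n + 1) for c in combinations(beta, r))
assert ip_feasible == subset_sum_yes
print(ip_feasible)                # True here, since 5 + 7 = 12
\end{verbatim}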
\noindent\textbf{Remark.} Theorem~\ref{thm:np-nfold} also implies the NP-hardness of the following scheduling problem. There are $n$ machines and three types of jobs. The 1st and 2nd type of jobs have a processing time of 1, and the 3rd type of jobs have a processing time of $\Delta$. Each machine $i$ can accept at most $\beta_i$ jobs of type $1$, $\Delta-\beta_i$ jobs of type $2$, and $1$ job of type 3. Given $\Delta$ jobs of type 1, $(n-k-1)\Delta$ jobs of type $2$ and $k$ jobs of type 3, is it possible to schedule all the jobs within makespan $\Delta$? Let $x_j^i$ be the number of jobs of type $j\in\{1,2,3\}$ on machine $i$, we can establish a similar IP as that in the proof of Theorem~\ref{thm:np-nfold} and the NP-hardness follows directly.
By adding dummy constraints and variables, we obtain the following corollary.
\begin{corollary}
It is NP-hard to determine whether an $n$-fold IP admits a feasible solution if $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A} $ and $t_A\ge s_A+2$.\label{n-15}
\end{corollary}
We remark that if we further consider generalized $n$-fold IP, where the first block row is $(D_1,D_2,\cdots,D_n)$ and the diagonal blocks are $A_1,A_2,\cdots,A_n$, then essentially all non-trivial cases become NP-hard, as implied by the following theorem. Therefore, we restrict our attention to the standard 4-block $n$-fold IP in this paper.
\begin{theorem}\label{th2}
It is NP-hard to determine whether a generalized $n$-fold IP admits a feasible solution even if one of the following holds:
\begin{compactitem}
\item $A_i=A=(\Delta,1)$, $D_i=(\beta_i,0)$; or
\item $A_i=(1,\beta_i)$, $D_i=D=(1,0)$.
\end{compactitem}
\end{theorem}
Using a slight variation of the reduction we used in Theorem~\ref{thm:np-nfold}, we can show Theorem~\ref{th2}.
\paragraph{Proof of Theorem~\ref{th2}.}
\begin{compactitem}
\item
We reduce from subset-sum. In a subset-sum problem, given are $n$ positive integers $\beta_1,\beta_2,\cdots,\beta_n$, and the goal is to find a subset of these integers which add up to exactly $\Delta\in\ensuremath{\mathbb{N}}$.
Given a subset-sum instance, we construct an $n$-fold integer program instance such that $A=(\Delta,1)$ and $D_i=(\beta_i,0)$. Note that each brick $\vex^i=(x^i_1,x^i_2)$. Let the interval constraints for variables be $0\le x^i_1\le 1$, and $0\le x^i_2\le \Delta$. Let $\veb^0=\veb^i=\Delta$. This finishes the construction.
Now we write down explicitly the generalized $n$-fold integer program as follows:
\begin{subequations}
\begin{eqnarray}
&& \sum_{i=1}^n \beta_ix^i_1=\Delta \label{npp-1}\\
&& \Delta x^i_1+x^i_2=\Delta, \hspace{38mm} \forall 1\le i\le n \\
&& 0\le x^i_1\le 1, 0\le x^i_2\le \Delta, \hspace{23mm}\forall 1\le i\le n \nonumber\\
&& x^i_1, x^i_2\in \ensuremath{\mathbb{Z}}, \hspace{43mm} \forall 1\le i\le n \nonumber
\end{eqnarray}
\end{subequations}
Since $x^i_1\in\{0,1\}$, by Constraint~\eqref{npp-1}, we know that the constructed $n$-fold integer program instance admits a feasible solution if and only if there exists a subset of $\{\beta_1,\beta_2,\cdots,\beta_n\}$ whose sum is $\Delta$. Hence, the generalized $n$-fold IP is NP-hard even if $A\in \ensuremath{\mathbb{Z}}^{1\times 2}$.\qed
\item
We still reduce from subset-sum. In a subset-sum problem, given are $n$ positive integers $\beta_1,\beta_2,\cdots,\beta_n$, and the goal is to find a subset of these integers which add up to exactly $\Delta\in\ensuremath{\mathbb{N}}$.
Given a subset-sum instance, we construct an $n$-fold integer program instance such that $A_i=(1,\beta_i)$ and $D=(1,0)$. Each brick $\vex^i=(x^i_1,x^i_2)$. Let the interval constraints for variables be $0\le x^i_1\le \beta_i$, and $0\le x^i_2\le 1$. Let $\veb^0=\Delta$ and $\veb^i=\beta_i$. This finishes the construction.
Now we write down the generalized $n$-fold integer program as follows:
\begin{subequations}
\begin{eqnarray}
&& \sum_{i=1}^n x^i_1=\Delta \label{npp-2}\\
&& x^i_1+\beta_ix^i_2=\beta_i, \hspace{36mm} \forall 1\le i\le n \\
&& 0\le x^i_1\le \beta_i, 0\le x^i_2\le 1, \hspace{23mm}\forall 1\le i\le n \nonumber\\
&& x^i_1, x^i_2\in \ensuremath{\mathbb{Z}}, \hspace{43mm} \forall 1\le i\le n \nonumber
\end{eqnarray}
\end{subequations}
We know $x^i_2\in\{0,1\}$: when $x^i_2=0$, we have $x^i_1=\beta_i$; when $x^i_2=1$, we have $x^i_1=0$. Combining with Constraint~\eqref{npp-2}, we know that the constructed $n$-fold integer program instance admits a feasible solution if and only if there exists a subset of $\{\beta_1,\beta_2,\cdots,\beta_n\}$ whose sum is $\Delta$. Hence, the generalized $n$-fold IP is NP-hard even if $A\in \ensuremath{\mathbb{Z}}^{1\times 2}$.\qed
\end{compactitem}
\section{Algorithms for $4$-block $n$-fold IP}
We complement our hardness results in Theorem~\ref{thm:np-nfold} by establishing algorithms for the following two cases: (i) $A=(1,1,\cdots,1)\in \ensuremath{\mathbb{Z}}^{1\times t_A}$, i.e., $A$ is a $t_A$-dimensional vector consisting only of $1$'s; (ii) $A\in\ensuremath{\mathbb{Z}}^{1\times 2}$, i.e., $A$ is a vector of dimension 2. We will further generalize the second case to $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A} $ where $t_A=s_A+1$ and $\text{rank}(A)=s_A$.
\subsection{The case of $A=(1,1,\cdots,1)$}
The goal of this subsection is to prove the following theorem.
\begin{theorem}\label{11-n}
If $A=(1,\ldots,1)\in \ensuremath{\mathbb{Z}}^{1\times t_A} $, then $4$-block $n$-fold IP can be solved in time $(t_A+t_B)^{O(t_A+t_B)}\cdot poly(n,\log\Delta)$.
\end{theorem}
\begin{proof}
We write the $4$-block $n$-fold IP explicitly as follows:
\begin{eqnarray}
(\text{IP}_1): &\max& \vew\vex\nonumber\\
&& C\vex^0+D\sum_{i=1}^{n}\vex^i=\veb^0 \nonumber\\
&&B\vex^0+(1,\ldots,1)\vex^i=\veb^i, \hspace{24mm}\forall 1\le i \le n\nonumber \\
&&\vel^i \le \vex^i\le \veu^i, \hspace{42mm}\forall 0\le i \le n\nonumber\\
&& \vex^0\in \ensuremath{\mathbb{Z}}^{t_B}, \vex^i\in \ensuremath{\mathbb{Z}}^{t_A}\hspace{33mm}\ \forall 1\le i \le n\nonumber
\end{eqnarray}
In what follows, we show that the above $(\text{IP}_1)$ is equivalent to the following mixed integer linear programming (MIP$_2$) which can be solved in FPT time.
\begin{subequations}
\begin{eqnarray*}
(\text{MIP}_2): &\max& \vew\vex\nonumber\\
&& \sum_{i=1}^{n}\vex^i=\vey \nonumber\\
&& C\vex^0+D\vey=\veb^0 \nonumber \\
&&B\vex^0+ (1,\ldots,1)\vex^i=\veb^i, \hspace{22mm}\ \forall 1\le i \le n\nonumber\\
&& \vel^i \le \vex^i\le \veu^i, \hspace{40mm}\ \forall 0\le i \le n\nonumber\\
&& \vey\in \ensuremath{\mathbb{Z}}^{t_A}, \vex^0\in \ensuremath{\mathbb{Z}}^{t_B}\nonumber\\
&& \vex^i\in \ensuremath{\mathbb{R}}^{t_A}\hspace{46.5mm}\ \forall 1\le i \le n
\end{eqnarray*}
\end{subequations}
Notice that in $(\text{MIP}_2)$ we have $\vex^i\in \ensuremath{\mathbb{R}}^{t_A}$, whereas there are only $t_A+t_B$ integral variables in total. Applying Kannan's algorithm~\cite{kannan1987minkowski}, the optimal solution $(\vex_*,\vey_*)$ to $(\text{MIP}_2)$ can be computed in $(t_A+t_B)^{O(t_A+t_B)}\cdot poly(n,\log\Delta)$ time.
Next we show that the optimal solution to $(\text{IP}_1)$ can be derived in polynomial time based on $(\vex_*,\vey_*)$. Notice that in $(\vex_*,\vey_*)$, each brick $\vex_*^i$ may take fractional values, however, we can round them to integral values through the following LP:
\begin{subequations}
\begin{eqnarray}
(\text{LP}_3): &\max& \vew^0\vex_*^0+\sum_{i=1}^{n} \vew^i\vex^i\nonumber\\
&& \sum_{i=1}^{n}\vex^i=\vey_* \label{n-7}\\
&&B\vex_*^0+ (1,\ldots,1)\vex^i=\veb^i, \hspace{23mm}\ \forall 1\le i \le n\label{n-8}\\
&& \vel^i \le \vex^i\le \veu^i, \hspace{41mm}\ \forall 1\le i \le n\label{n-9}\\
&& \vex^i\in \ensuremath{\mathbb{R}}^{t_A}\hspace{48mm}\ \forall 1\le i \le n\nonumber
\end{eqnarray}
\end{subequations}
Note that $(\text{LP}_3)$ is the linear program obtained by plugging $\vex^0=\vex_*^0$ and $\vey=\vey_*$ into $(\text{MIP}_2)$, hence $\vex^i=\vex^i_*$ is an optimal solution to $(\text{LP}_3)$. Meanwhile, it is not difficult to see that $(\text{LP}_3)$ is essentially the LP of a transportation (assignment-type) problem, whose constraint matrix is totally unimodular \cite{hoffman2010integral}. Hence an integral optimal solution $\vex^i=\bar{\vex}^i$ to $(\text{LP}_3)$ can be computed in $O(n^2t_A+nt_A^2)$ time (see, e.g., Theorem 11.2 in~\cite{korte2018combinatorial}), and it achieves the same objective value as the fractional optimal solution $\vex^i=\vex^i_*$. Therefore, $(\vex_*^0,\bar{\vex}^i,\vey_*)$ is also an optimal solution to $(\text{MIP}_2)$. Overall, we solve $(\text{MIP}_2)$, and hence $(\text{IP}_1)$, in $(t_A+t_B)^{O(t_A+t_B)}\cdot poly(n,\log\Delta)$ time, and Theorem~\ref{11-n} is proved.\qed
\end{proof}
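In practice, the rounding step for $(\text{LP}_3)$ can be delegated to any transportation or min-cost-flow solver. The sketch below is only illustrative: all data are hypothetical, the lower bounds are taken to be $0$ for brevity, bricks act as supply nodes with supplies $\veb^i-B\vex_*^0$, coordinates act as demand nodes with demands $\vey_*$, the box constraints become edge capacities, and maximization is encoded by negated edge weights; with integral data the network-simplex solution returned by \texttt{networkx} is integral.
\begin{verbatim}
import networkx as nx

n, tA = 3, 2                        # toy sizes
supply = [3, 4, 5]                  # b^i - B x*^0 per brick
demand = [7, 5]                     # y*_h per coordinate (totals match: 12 = 12)
upper = [[5, 5], [5, 5], [5, 5]]    # u^i_h (lower bounds assumed 0 here)
w = [[2, 1], [4, 3], [1, 5]]        # objective coefficients w^i_h

G = nx.DiGraph()
for i in range(n):
    G.add_node(("brick", i), demand=-supply[i])   # negative demand = supply
for h in range(tA):
    G.add_node(("coord", h), demand=demand[h])
for i in range(n):
    for h in range(tA):
        # maximizing sum w*x equals minimizing sum (-w)*x
        G.add_edge(("brick", i), ("coord", h), capacity=upper[i][h], weight=-w[i][h])

flow = nx.min_cost_flow(G)          # integral because all data are integral
x = [[flow[("brick", i)][("coord", h)] for h in range(tA)] for i in range(n)]
print(x)                            # an integral optimum of the rounding LP
\end{verbatim}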
As a corollary, we obtain similar result for $n$-fold IP:
\begin{corollary}
For $n$-fold IP with $A=(1,\ldots,1)\in \ensuremath{\mathbb{Z}}^{1\times t_A} $, there exists an FPT algorithm of running time $t_A^{O(t_A)}\cdot poly(n,\log\Delta)$.\label{12-n}
\end{corollary}
\subsection{The case of $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A}$, $t_A=s_A+1$ and $\text{rank}(A)=s_A$}
The goal of this subsection is to prove the following theorem.
\begin{theorem}
If $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A}$, $t_A=s_A+1$ and $\text{rank}(A)=s_A$, then $4$-block $n$-fold IP can be solved in time $(t_A+t_B)^{O(t_A+t_B)}\cdot n^{O(t_A)}\cdot poly(\log\Delta)$. \label{thmm}
\end{theorem}
Towards this, we start with the simpler case $A\in\ensuremath{\mathbb{Z}}^{1\times 2}$ to illustrate the main techniques.
\begin{theorem} If $A\in\ensuremath{\mathbb{Z}}^{1\times 2}$, then $4$-block $n$-fold IP can be solved in time $t_B^{O(t_B)}\cdot poly(n,\log\Delta)$. \label{n-10}
\end{theorem}
\begin{proof}
Let $A=(\lambda,\mu)$, we write the constraints of 4-block $n$-fold IP explicitly as follows:
\begin{subequations}
\begin{eqnarray}
&& C\vex^0+D\sum_{i=1}^{n}\vex^i=\veb^0\label{n-16} \\
&&B\vex^0+\lambda x_1^i+\mu x_2^i=\veb^i,\hspace{26mm}\forall 1\le i \le n\label{n-17} \\
&&\vel^i \le \vex^i\le \veu^i, \hspace{42.5mm}\forall 0\le i \le n\nonumber
\end{eqnarray}
\end{subequations}
\noindent\textbf{Step 1. Use B\'{e}zout's identity to simplify~\eqref{n-16} and~\eqref{n-17}.}
Subtracting the equation $B\vex^0+\lambda x^1_1+\mu x^1_2=\veb^1$ (the case $i=1$ of Eq~\eqref{n-17}) from Eq~\eqref{n-17}, we get the following: $\lambda(x^i_1-x^1_1)+\mu(x^i_2-x^1_2)=\veb^i-\veb^1.$ Then we let $\theta_1=\frac{\mu}{\text{gcd}(\lambda,\mu)}, \theta_2=-\frac{\lambda}{\text{gcd}(\lambda,\mu)},$ where recall $\text{gcd}(\lambda,\mu)$ represents the greatest common divisor of $\lambda$ and $\mu$.
According to the B\'{e}zout's identity, we can get the following general solution:
\begin{eqnarray}
&& x_h^i=\hat{x}^i_{h}+\theta_h y_i +x_h^1,\quad h=1,2, i=2,3,\cdots, n\label{nn3}
\end{eqnarray}
where $(\hat{x}^i_{1},\hat{x}^i_{2})$ is an arbitrary integral solution to $\lambda\hat{x}^i_{1}+\mu\hat{x}^i_{2}=\veb^i-\veb^1$ (such a solution exists whenever $\text{gcd}(\lambda,\mu)$ divides $\veb^i-\veb^1$; otherwise the instance is infeasible). To be consistent, we introduce dummy variables $\hat{x}_h^1=0$ for $h=1,2$ and $y_1=0$, so that Eq~\eqref{nn3} also holds for $i=1$.
Notice that from now on $\theta_h$, $\hat{x}_h^i$ are all fixed values.
By Eq~\eqref{nn3}, we have
\begin{eqnarray}
&& \sum_{i=1}^{n}x_h^i=\sum_{i=1}^{n}\hat{x}^i_{h}+\theta_h\sum_{i=1}^{n} y_i+nx_h^1,\quad h=1,2 \nonumber
\end{eqnarray}
Plug the above into Eq~\eqref{n-16}, we have
\begin{eqnarray}
&& C\vex^0+ D \left(
\begin{array}{c}
\sum_{i=1}^{n}\hat{x}^i_{1}+\theta_1\sum_{i=1}^{n}y_i+nx_1^1\\
\sum_{i=1}^{n}\hat{x}^i_{2}+\theta_2\sum_{i=1}^{n}y_i +nx_2^1\\
\end{array} \right) =\veb^0.\label{nn1}
\end{eqnarray}
Till now, we have transformed 4-block $n$-fold IP into an equivalent IP with variables $y_i$ and $x^1_h$ for $1\le i\le n$ and $h=1,2$.
Next, we divide $x^1_h$ by $\theta_h$ and denote by $\xi_h$ and $z_h$ its remainder and quotient, respectively, that is,
\begin{eqnarray}
&& x_h^1=\xi_h+\theta_h z_h, \quad h=1,2,\label{nn2}
\end{eqnarray}
where $\xi_h\in [0,|\theta_h|-1]$.
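In an implementation, the remainder $\xi_h\in[0,|\theta_h|-1]$ and the quotient $z_h$ are obtained by ordinary integer division with respect to $|\theta_h|$; a small sketch (the function name is ours) covering either sign of $\theta_h$:
\begin{verbatim}
def rem_quot(x, theta):
    # remainder xi in [0, |theta| - 1] and quotient z with x = xi + theta * z
    xi = x % abs(theta)
    return xi, (x - xi) // theta

assert rem_quot(17, 5) == (2, 3)       #  17 = 2 + 5 * 3
assert rem_quot(-17, 5) == (3, -4)     # -17 = 3 + 5 * (-4)
assert rem_quot(17, -5) == (2, -3)     #  17 = 2 + (-5) * (-3)
assert rem_quot(-17, -5) == (3, 4)     # -17 = 3 + (-5) * 4
\end{verbatim}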
Now we can rewrite the 4-block $n$-fold IP using new variables $\xi_h,z_h$ (where $h=1,2$) and $y_i$ (where $1\le i\le n$).
\begin{subequations}
\begin{eqnarray}
(\text{IP}_4): &\max& \vew\vex=\vew^0\vex^0+c_0+\sum_{h=1}^2\sum_{i=1}^{n}w^i_h\xi_h+\sum_{h=1}^2\sum_{i=1}^{n}[w^i_h\theta_h(y_i+z_h)]\nonumber\\
&&
C\vex^0+ D \left(
\begin{array}{c}
\sum_{i=1}^{n}\hat{x}^i_{1}+\theta_1\sum_{i=1}^{n}y_i+n(\xi_1+\theta_1z_1)\\
\sum_{i=1}^{n}\hat{x}^i_{2}+\theta_2\sum_{i=1}^{n}y_i +n(\xi_2+\theta_2z_2)\\
\end{array} \right) =\veb^0 \\
&& B\vex^0+\lambda\xi_1+\mu\xi_2+\lambda z_1\theta_1+\mu z_2 \theta_2=\veb^1 \\
&& y_1=0 \\
&&\vel^i \le \vex^i\le \veu^i, \hspace{46mm}\forall 0\le i \le n \label{IP4:box}
\end{eqnarray}
\end{subequations}
where $c_0:=\sum_{i=1}^{n}(w^i_1\hat{x}^i_{1}+w^i_2\hat{x}^i_{2})$ is a fixed value.
It remains to replace the box constraints $\vel^i\le \vex^i\le \veu^i$ with respect to the new variables.
\smallskip
\noindent\textbf{Step 2. Deal with the box constraints $\vel^i\le \vex^i\le \veu^i$.}
Plugging Eq~\eqref{nn3} and Eq~\eqref{nn2} into the box constraints, we have that
\begin{eqnarray}
&& (\ell_h^i-\hat{x}^i_{h}-\xi_h)\le \theta_h(y_i+z_h)\le (u_h^i-\hat{x}^i_{h}-\xi_h), \quad \forall 1\le i\le n, h=1,2\label{eq:a}
\end{eqnarray}
To divide by the fixed value $\theta_h$ on both sides we need to distinguish between whether it is positive or negative. For simplicity, we define
\begin{subequations}
\begin{eqnarray}
&&\text{If $\theta_h>0$, then }d^i(\xi_h)=\lceil\frac{\ell_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rceil, \quad \bar{d}^i(\xi_h)=\lfloor\frac{u_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rfloor, \label{eq:theta>0}\\
&&\text{If $\theta_h<0$, then } d^i(\xi_h)=\lceil\frac{u_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rceil, \quad \bar{d}^i(\xi_h)=\lfloor\frac{\ell_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rfloor. \label{eq:theta<0}
\end{eqnarray}
\end{subequations}
Then Eq~\eqref{eq:a} can be simplified as
\begin{eqnarray}
&& d^i(\xi_h)\le y_i+z_h\le \bar{d}^i(\xi_h), \quad \forall 1\le i\le n, h=1,2. \label{eq:newbox}
\end{eqnarray}
Here we use the ceiling function to round up the left side and use the floor function to round down the right side since $y_i+z_h$ is an integer.
We emphasize that here $d^i(\xi_h)$ and $\bar{d}^i(\xi_h)$ are dependent on the variable $\xi_h$; however, since $\xi_h\in [0,|\theta_h|-1]$, each $d^i(\xi_h)$ or $\bar{d}^i(\xi_h)$ may take at most two different values. Hence, a straightforward counting yields $2^{2n}$ possibilities regarding the values of all the $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s. However, since the $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s are not independent but change simultaneously as $\xi_h$ changes, we will show that we can divide the range $\xi_h\in [0,|\theta_h|-1]$ into a polynomial number of sub-intervals such that if $\xi_h$ lies in one sub-interval, then all $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s take some fixed value. We call it an efficient sub-interval.
In the following step 3 we will show that $(\text{IP}_4)$ can be solved in FPT time once each $\xi_h$ lies in one of the efficient sub-intervals (and hence all $d^i(\xi_h)$'s and $\bar{d}_i(\xi_h)$'s are fixed), and then in step 4 we prove there are only a polynomial number of different efficient sub-intervals.
\smallskip
\noindent\textbf{Step 3. Solve $(\text{IP}_4)$ in FPT time when each $\xi_h$ lies in one efficient sub-interval.}
For any $h$, let $[\tau_h,\bar{\tau}_h]$ be an arbitrary efficient sub-interval of $\xi_h$ such that all $d^i(\xi_h)$'s and $\bar{d}_i(\xi_h)$'s take fixed value for all $\xi_h\in [\tau_h,\bar{\tau}_h]$. We will handle in Step 4 the construction of each $[\tau_h,\bar{\tau}_h]$.
From now on we write $d^i(\xi_h)$ and $\bar{d}_i(\xi_h)$ as $d^i_h$ and $\bar{d}^i_h$ as they become fixed values. By Eq~\eqref{eq:newbox} we have
\begin{eqnarray}
&& \max\{d^i_1-z_1,d^i_2-z_2\}\le y_i\le \min\{\bar{d}^i_1-z_1,\bar{d}^i_2-z_2\}, \quad \forall 1\le i\le n \label{eq:box1}
\end{eqnarray}
Note that among $d^i_1-z_1$ and $d^i_2-z_2$, which one is larger solely depends on $d^i_1-d^i_2$ and $z_1-z_2$. Hence, to get rid of the $\max$ and $\min$ on both sides of Eq~\eqref{eq:box1} for $1\le i\le n$, we need to compare the value of $z_1-z_2$ with at most $2n$ distinct values, which are $d^i_1-d^i_2$ and $\bar{d}^i_1-\bar{d}^i_2$. Now we divide $(-\infty,\infty)$ into at most $2n+1$ intervals based on the values of $d^i_1-d^i_2$ and $\bar{d}^i_1-\bar{d}^i_2$. Let these intervals be $I_1,I_2,\cdots,I_{2n+1}$. When $z_1-z_2$ lies in one of the intervals, say, $I_k$, Eq~\eqref{eq:box1} can be simplified as
\begin{eqnarray}
&& \ell^i(I_k,z_1,z_2)\le y_i\le u^i(I_k,z_1,z_2), \quad \forall 1\le i\le n \label{eq:box2}
\end{eqnarray}
where $\ell^i(I_k,z_1,z_2)$ and $u^i(I_k,z_1,z_2)$ are linear functions in $z_1$ and $z_2$. Recall that $y_1=0$, whereas $\ell^1(I_k,z_1,z_2)=u^1(I_k,z_1,z_2)=0$. For simplicity, we define a new variable $p_i:=y_i-\ell^i(I_k,z_1,z_2)$, then it is easy to see that\footnote{This is possible since $\|\vel\|_\infty, \|\veu\|_\infty \le 2^{O(n\log n)}\Delta^{O(n)}$ throughout this paper (see Preliminaries), and thus both the left and right sides are not $\infty$.}
\begin{eqnarray}
&& 0\le p_i\le u^i(I_k,z_1,z_2)-\ell^i(I_k,z_1,z_2), \quad \forall 1\le i\le n \label{eq:box3}
\end{eqnarray}
Now we rewrite $(\text{IP}_4)$ using new variables $p_i$ and $z_1,z_2$ as follows:
\begin{subequations}
\begin{eqnarray*}
(\text{IP}_5[k]): &\max& \vew\vex= \vew^0\vex^0+\sum_{h=1}^2\sum_{i=1}^{n}w^i_h\xi_h+\sum_{h=1}^2\sum_{i=1}^{n}w^i_h\theta_h p_i+L(z_1,z_2) \nonumber\\
&& C\vex^0+ D \left(
\begin{array}{c}
\sum_{i=1}^{n}\hat{x}^i_{1}+\theta_1\sum_{i=1}^{n}p_i+n\xi_1+L_1(z_1,z_2)\\
\sum_{i=1}^{n}\hat{x}^i_{2}+\theta_2\sum_{i=1}^{n}p_i +n\xi_2+L_2(z_1,z_2)\\
\end{array} \right) =\veb^0 \\
&& B\vex^0+\lambda\xi_1+\mu\xi_2+\lambda z_1\theta_1+\mu z_2 \theta_2=\veb^1 \\
&&0\le p_i\le u^i(I_k,z_1,z_2)-\ell^i(I_k,z_1,z_2) ,\quad \forall 1\le i \le n \\
&&\xi_h\in[\tau_h,\bar{\tau}_h],\quad h=1,2\\
&& z_1-z_2\in I_k\\
&&\vex^0\in\ensuremath{\mathbb{Z}}^{t_B},\xi_1,\xi_2,z_1,z_2,p_i\in \ensuremath{\mathbb{Z}}, \quad \forall 1\le i\le n
\end{eqnarray*}
\end{subequations}
Here $L(z_1,z_2)$, $L_1(z_1,z_2)$, $L_2(z_1,z_2)$ are all linear functions of $z_1,z_2$ (which may contain non-zero constant term). Note again that $p_1$ is a dummy variable as $u^1(I_k,z_1,z_2)=\ell^1(I_k,z_1,z_2)=0$ enforces that $p_1=0$. $(\text{IP}_4)$ can be solved by solving $(\text{IP}_5[k])$ for every $k$ then picking the best solution.
Now we show how to solve $(\text{IP}_5[k])$. Ignoring the dummy variable $p_1$, a crucial observation is that, while $(\text{IP}_5[k])$ contains variables $p_2,p_3,\cdots,p_n$, they have exactly the same coefficients in constraints, and therefore we can ``merge" them into a single variable $p:=\sum_{i=2}^np_i$. More precisely, we consider the coefficients of $p_i$'s in the objective function, which are $v_i:=\sum_{h=1}^2w_h^i\theta_h$ for $2\le i\le n$. By re-indexing variables, we may assume without loss of generality that $v_2\ge v_3\ge\cdots\ge v_n$. Using a simple exchange argument, we can show that if $p=\sum_{i=2}^np_i\le u^2(I_k,z_1,z_2)-\ell^2(I_k,z_1,z_2)$, then the optimal solution is achieved at $p_2=p$, $p_3=p_4=\cdots=p_n=0$. More generally, if
$$\sum_{\gamma=2}^j \left(u^\gamma(I_k,z_1,z_2)-\ell^\gamma(I_k,z_1,z_2)\right)< \sum_{i=2}^np_i\le \sum_{\gamma=2}^{j+1} \left(u^\gamma(I_k,z_1,z_2)-\ell^\gamma(I_k,z_1,z_2)\right),$$
then the optimal solution is achieved at $p_i=u^i(I_k,z_1,z_2)-\ell^i(I_k,z_1,z_2)$ for $2\le i\le j$ and $p_{i}=0$ for $i>j+1$.
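As a sanity check of this exchange argument (purely illustrative; the coefficients and capacities below are toy values), the greedy filling by decreasing objective coefficient can be compared against brute force:
\begin{verbatim}
from itertools import product

def greedy_fill(v, cap, P):
    # distribute a total of P over p_i in [0, cap_i], maximizing sum v_i * p_i
    p = [0] * len(v)
    for i in sorted(range(len(v)), key=lambda i: -v[i]):
        p[i] = min(cap[i], P)
        P -= p[i]
    return p                            # assumes P <= sum(cap)

v, cap = [4, 7, 2, 5], [3, 2, 4, 1]
for P in range(sum(cap) + 1):
    best = max(sum(vi * pi for vi, pi in zip(v, p))
               for p in product(*[range(c + 1) for c in cap]) if sum(p) == P)
    p = greedy_fill(v, cap, P)
    assert sum(vi * pi for vi, pi in zip(v, p)) == best
\end{verbatim}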
Define $\Lambda(j):=\sum_{\gamma=2}^j \left(u^\gamma(I_k,z_1,z_2)-\ell^\gamma(I_k,z_1,z_2)\right)$ for $j\ge 2$, $\Lambda(1):=0$, and
$W(j):=\sum_{h=1}^2\sum_{i=1}^jw_h^i\theta_h\left(u^i(I_k,z_1,z_2)-\ell^i(I_k,z_1,z_2) \right)$.
Let $(\text{IP}_5[k,j])$ be as follows:
\begin{subequations}
\begin{eqnarray*}
(\text{IP}_5[k,j]): &\max& \vew\vex= \vew^0\vex^0+W(j-1)+L(z_1,z_2) \nonumber\\
&&\hspace{1cm}+\sum_{h=1}^2\sum_{i=1}^{n}w^i_h\xi_h+\sum_{h=1}^2w^j_h\theta_h \left(p-\Lambda(j-1)\right)\\
&& C\vex^0+ D \left(
\begin{array}{c}
\sum_{i=1}^{n}\hat{x}^i_{1}+\theta_1p+n\xi_1+L_1(z_1,z_2)\\
\sum_{i=1}^{n}\hat{x}^i_{2}+\theta_2p+n\xi_2+L_2(z_1,z_2)\\
\end{array} \right) =\veb^0 \\
&& B\vex^0+\lambda\xi_1+\mu\xi_2+\lambda z_1\theta_1+\mu z_2 \theta_2=\veb^1 \\
&& \Lambda(j-1)< p\le \Lambda(j) \\
&&\xi_h\in[\tau_h,\bar{\tau}_h],\quad h=1,2\\
&& z_1-z_2\in I_k\\
&&\vex^0\in\ensuremath{\mathbb{Z}}^{t_B},\xi_1,\xi_2,z_1,z_2,p\in \ensuremath{\mathbb{Z}}, \quad \forall 1\le i\le n
\end{eqnarray*}
\end{subequations}
Our argument above shows that $(\text{IP}_5[k])$ can be solved by solving $(\text{IP}_5[k,j])$ for all $1\le j\le n$ and picking the best solution.
It remains to solve each $(\text{IP}_5[k,j])$. Notice that this is an IP with $O(t_B)$ variables, and thus can be solved in $t_B^{O(t_B)}poly(\log\Delta)$ time by applying Kannan's algorithm. Thus, when each $\xi_h$ lies in one efficient sub-interval, $(\text{IP}_4)$ can be solved in $t_B^{O(t_B)}poly(n,\log\Delta)$ time.
\smallskip
\noindent\textbf{Step 4. Bounding the number of efficient sub-intervals of $(\xi_1,\xi_2 ) $.}
Recall Eq~\eqref{eq:theta>0} and Eq~\eqref{eq:theta<0}. For simplicity, we assume $\theta_h>0$, the case of $\theta_h<0$ can be handled in a similar way.
Divide $\ell^i_h-\hat{x}_h^i$ by $\theta_h>0$ and denote by $r^i_h\in [0,\theta_h-1]$ and $q^i_h$ the remainder and quotient, respectively. It is easy to see that if $0\le \xi_h < r^i_h$, then $d^i(\xi_h)=\lceil\frac{\ell_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rceil=q^i_h+1$; otherwise, i.e., if $r^i_h\le \xi_h<\theta_h$, then $d^i(\xi_h)=\lceil\frac{\ell_h^i-\hat{x}^i_{h}-\xi_h}{\theta_h}\rceil=q^i_h$. We define $r^i_h$ as a critical point, which distinguishes between $d^i(\xi_h)=q^i_h+1$ and $d^i(\xi_h)=q^i_h$.
Similarly, divide $u^i_h-\hat{x}_h^i$ by $\theta_h>0$ and denote by $\bar{r}^i_h\in [0,\theta_h-1]$ and $\bar{q}^i_h$ the remainder and quotient, respectively. Using the same argument as above we define $\bar{r}^i_h$ as a critical point, which distinguishes between $\bar{d}^i(\xi_h)=\bar{q}^i_h$ and $\bar{d}^i(\xi_h)=\bar{q}^i_h-1$. Critical points can be defined in the same way if $\theta_h<0$.
Overall, we can obtain at most $2n$ distinct critical points for $\xi_h$, which divide the range $[0,|\theta_h|-1]$ into at most $2n+1$ sub-intervals. It is easy to see that once $\xi_h$ lies in one of the sub-intervals, all $d^i(\xi_h)$ and $\bar{d}^i(\xi_h)$ take fixed values.
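The critical points and efficient sub-intervals are straightforward to construct; the sketch below (toy data, the case $\theta_h>0$ only; all names are ours) collects the remainders described above and verifies that every $d^i(\xi_h)$ and $\bar{d}^i(\xi_h)$ is constant on each resulting sub-interval:
\begin{verbatim}
theta = 7
data = [(-5, 9, 2), (0, 11, -3), (4, 20, 6)]   # toy (l^i, u^i, xhat^i), one triple per i

def d(i, xi):                                  # d^i(xi), theta > 0
    l, _, xh = data[i]
    return -((-(l - xh - xi)) // theta)        # ceiling division
def dbar(i, xi):                               # \bar d^i(xi), theta > 0
    _, u, xh = data[i]
    return (u - xh - xi) // theta              # floor division

starts = {0}
for (l, u, xh) in data:
    starts.add((l - xh) % theta)               # d^i drops at this remainder
    if (u - xh) % theta + 1 < theta:
        starts.add((u - xh) % theta + 1)       # \bar d^i drops just after its remainder
starts = sorted(starts)
subintervals = list(zip(starts, [s - 1 for s in starts[1:]] + [theta - 1]))

for a, b in subintervals:                      # at most 2n + 1 sub-intervals in total
    for i in range(len(data)):
        assert len({d(i, xi) for xi in range(a, b + 1)}) == 1
        assert len({dbar(i, xi) for xi in range(a, b + 1)}) == 1
print(subintervals)
\end{verbatim}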
Since there are at most $(2n+1)^2$ different possibilities regarding the efficient sub-intervals of $\xi_1$ and $\xi_2$, and we have concluded in step 3 that for each possibility $(\text{IP}_4)$ can be solved in $t_B^{O(t_B)} poly(n,\log\Delta)$ time, we know that overall 4-block $n$-fold can be solved in $t_B^{O(t_B)} poly(n,\log\Delta)$ time if $A\in\ensuremath{\mathbb{Z}}^{1\times 2}$. \qed
\end{proof}
The techniques of Theorem~\ref{n-10} can be further extended to handle the case when $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A}$ where $t_A=s_A+1$, $\text{rank}(A)=s_A$. The crucial observation is that, while $\vex^i$ contains more variables, the fact that $\text{rank}(A)=s_A$ and $t_A=s_A+1$ enforces that there can be only one ``free'' variable, which is similar to the case when $A\in\ensuremath{\mathbb{Z}}^{1\times 2}$. Towards this, instead of applying B\'{e}zout's identity in Step 1, we will decompose $A$ into Smith normal form. The following Steps 2, 3, and 4 are similar, except that now there will be $\xi_1,\xi_2,\cdots,\xi_{t_A}$, each of which has at most $2n+1$ efficient sub-intervals. This gives rise to $n^{O(t_A)}$ different possibilities, yielding the overall running time $(t_A+t_B)^{O(t_A+t_B)}n^{O(t_A)}poly(\log\Delta)$.
\paragraph{Proof of Theorem~\ref{thmm}.}
Write the constraints of the $4$-block $n$-fold IP as follows:
\begin{subequations}
\begin{eqnarray}
&& C\vex^0+D\sum_{i=1}^{n}\vex^i=\veb^0\label{n-24} \\
&&B\vex^0+A\vex^i=\veb^i,\hspace{31mm}\forall 1\le i \le n\label{n-25} \\
&&\vel^i \le \vex^i\le \veu^i, \hspace{38mm}\forall 0\le i \le n\nonumber
\end{eqnarray}
\end{subequations}
\noindent\textbf{Step 1. Decompose $A$ into Smith normal form to deal with two constraints~\eqref{n-24} and~\eqref{n-25}.}
Recall from the Preliminaries that $\bar{A}$ is the Smith normal form of $A$ and $\bar{A}=UAV$, where $U$, $V$ are invertible $s_A\times s_A$ and $t_A\times t_A$ matrices. Then $A=U^{-1}\bar{A}V^{-1}$. One can always compute the Smith normal form of an integer matrix in $poly(t_A,\log \Delta)$ time~\cite{kannan1979polynomial}.
We subtract $B\vex^0+A\vex^1=\veb^1$ from both sides of Eq~\eqref{n-25}, and get the following:
$$A(\vex^i-\vex^1)=\veb^i-\veb^1. $$
Let $\vey^i:=V^{-1}(\vex^i-\vex^1)$ and $\widetilde{\veb}^i=U(\veb^i-\veb^1)$, and then we get
$\bar{A}\vey^i=\widetilde{\veb}^i.$
Assume the diagonal elements of $\bar{A}$ are $\alpha_1,\alpha_2,\ldots, \alpha_{s_A}$; recall that $t_A=s_A+1$. Thus,
$\alpha_1y^i_1=\widetilde{b}^i_1$, $\alpha_2y^i_2=\widetilde{b}^i_2$, $\cdots$, $\alpha_{s_A}y^i_{s_A}=\widetilde{b}^i_{s_A}$. Since $\text{rank}(A)=s_A$, all $\alpha_h$ are nonzero, and hence $\{y^i_h\,|\,1\le h\le {s_A},2\le i\le n\}$ are uniquely determined (if some $\alpha_h$ does not divide $\widetilde{b}^i_h$, the instance is infeasible). To be consistent, we introduce dummy variables $y^1_h=0$ for $h=1,2,\ldots,{s_A},t_A$.
Since $V$ is an invertible $t_A\times t_A$ matrix, $V\vey^i=\vex^i-\vex^1$ and $\vex^i=\vex^1+V\vey^i$.
Thus,
\begin{eqnarray}
&&\sum_{i=1}^{n}\vex^i=\sum_{i=1}^{n}\vex^1+V\sum_{i=1}^{n}\vey^i.\label{n-30}
\end{eqnarray}
Since $\{y^i_h|1\le h\le {s_A},1\le i\le n\}$ are determined uniquely, we can compute $V\sum_{i=1}^{n}\vey^i=({\theta'}_1+\theta_1\sum_{i=1}^{n}y^i_{t_A} ,\ldots, {\theta'}_{t_A}+\theta_{t_A}\sum_{i=1}^{n}y^i_{t_A})$, where ${\theta'}_h$ and $\theta_h$ for all $ h=1,2,\ldots,t_A$ are known integer constants. Plug the above into Eq~\eqref{n-24}, we have
\begin{eqnarray}
&& C\vex^0+ D \left(
\begin{array}{c}
{\theta'}_1+\theta_1\sum_{i=1}^{n}y^i_{t_A}+nx_1^1\\
{\theta'}_2+\theta_2\sum_{i=1}^{n}y^i_{t_A}+nx_2^1\\
\vdots\\
{\theta'}_{t_A}+\theta_{t_A}\sum_{i=1}^{n}y^i_{t_A} +nx_{t_A}^1\\
\end{array} \right) =\veb^0.\label{nn12}
\end{eqnarray}
Till now, we have transformed 4-block $n$-fold IP into an equivalent IP with variables $y^i_{t_A}$ and $x^1_h$ for $1\le i\le n$ and $h=1,2,\ldots,t_A$.
Next, we divide $x^1_h$ by $\theta_h$ and denote by $\xi_h$ and $z_h$ its remainder and quotient, respectively, that is,
\begin{eqnarray}
&&x_h^1=\xi_h+\theta_h z_h, \quad h=1,2,\ldots,t_A\label{nn44}
\end{eqnarray}
where $\xi_h\in [0,|\theta_h|-1]$.
Now we can rewrite the 4-block $n$-fold IP using new variables $\xi_h,z_h$ (where $h=1,2,\ldots,t_A$) and $y^i_{t_A}$ (where $1\le i\le n$).
\begin{subequations}
\begin{eqnarray}
(\text{IP}_6): &\max& \vew\vex=\vew^0\vex^0+c_0+\sum_{i=1}^{n}\sum_{h=1}^{t_A}w^i_h\theta_h(y^i_{t_A}+z_h)+\sum_{i=1}^{n}\sum_{h=1}^{t_A}w^i_h\xi_h\nonumber\\
&& C\vex^0+ D \left(
\begin{array}{c}
{\theta'}_1+\theta_1\sum_{i=1}^{n}y^i_{t_A}+n(\xi_1+z_1\theta_1)\\
{\theta'}_2+\theta_2\sum_{i=1}^{n}y^i_{t_A}+n(\xi_2+z_2\theta_2)\\
\vdots\\
{\theta'}_{t_A}+\theta_{t_A}\sum_{i=1}^{n}y^i_{t_A}+n(\xi_{t_A}+z_{t_A}\theta_{t_A})\\
\end{array} \right) =\veb^0\label{nn14}\\
&&B\vex^0+A\left(
\begin{array}{c}
\xi_1+z_1\theta_1 \\
\xi_2+z_2 \theta_2\\
\vdots\\
\xi_{t_A}+z_{t_A}\theta_{t_A}\\
\end{array} \right) =\veb^1\label{n-28}\\
&& y^1_h=0, \hspace{55mm}\forall 1\le h \le t_A \\
&&\vel^i \le \vex^i\le \veu^i, \hspace{47mm}\forall 0\le i \le n \label{IP6:box}
\end{eqnarray}
\end{subequations}
where $c_0:=\sum_{i=1}^{n}\sum_{h=1}^{t_A-1}\widetilde{w}^i_hy_h^i$ is a fixed value; here $\widetilde{\vew}^i:=\vew^iV$ and the $y^i_h$ ($1\le h\le s_A$) are the constants determined above.
It remains to replace the box constraints $\vel^i\le \vex^i\le \veu^i$ with respect to the new variables.
\smallskip
\noindent\textbf{Step 2. Deal with the box constraints $\vel^i\le \vex^i\le \veu^i$.}
Plugging Eq~\eqref{nn44} and the equality $\vex^i=\vex^1+V\vey^i$, $\forall 1\le i\le n$, into the box constraints,
we have that
\begin{eqnarray}
&& \ell^i_h-\widetilde{\theta}^i_h-\xi_h\le{\theta_h}( y^i_{t_A}+z_h)\le u^i_h-\widetilde{\theta}^i_h-\xi_h, \forall 1\le i\le n, h=1,2,\ldots,t_A \label{eq:bb}
\end{eqnarray}
where all $\widetilde{\theta}^i_h$, $1\le h\le t_A$ and $1\le i\le n$, are constants during the computation of $V\vey^i$.
To divide by the fixed value $\theta_h$ on both sides we need to distinguish between whether it is positive or negative; we therefore proceed in the same way as in~\eqref{eq:theta>0} and~\eqref{eq:theta<0} in Theorem~\ref{n-10}.
For simplicity, we define
\begin{subequations}
\begin{eqnarray}
&&\text{If $\theta_h>0$, then }d^i(\xi_h)=\lceil\frac{\ell_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rceil, \quad \bar{d}^i(\xi_h)=\lfloor\frac{u_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rfloor, \label{eq:theta>}\\
&&\text{If $\theta_h<0$, then } d^i(\xi_h)=\lceil\frac{u_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rceil, \quad \bar{d}^i(\xi_h)=\lfloor\frac{\ell_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rfloor. \label{eq:theta<}
\end{eqnarray}
\end{subequations}
Then Eq~\eqref{eq:bb} can be simplified as
\begin{eqnarray}
d^i(\xi_h)\le y^i_{t_A}+z_h\le \bar{d}^i(\xi_h), \quad \forall 1\le i\le n, h=1,2,\ldots,t_A. \label{eq:newboxes}
\end{eqnarray}
Here we use the ceiling function to round up the left side and use the floor function to round down the right side since $ y^i_{t_A}+z_h$ is an integer.
We emphasize that here $d^i(\xi_h)$ and $\bar{d}^i(\xi_h)$ are dependent on the variable $\xi_h$; however, since $\xi_h\in [0,|\theta_h|-1]$, each $d^i(\xi_h)$ or $\bar{d}^i(\xi_h)$ may take at most two different values. Hence, a straightforward counting yields $2^{2nt_A}$ possibilities regarding the values of all the $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s. However, since the $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s are not independent but change simultaneously as $\xi_h$ changes, we will show that we can divide the range $\xi_h\in [0,|\theta_h|-1]$ into a polynomial number of sub-intervals such that if $\xi_h$ lies in one sub-interval, then all $d^i(\xi_h)$'s and $\bar{d}^i(\xi_h)$'s take some fixed value. We call it an efficient sub-interval.
In the following step 3 we will show that $(\text{IP}_6)$ can be solved in $(t_B+t_A)^{O(t_B+t_A)}poly(\log\Delta)$ time once each $\xi_h$ lies in one of the efficient sub-intervals (and hence all $d^i(\xi_h)$'s and $\bar{d}_i(\xi_h)$'s are fixed), and then in step 4 we prove there are $n^{O(t_A)}$ different efficient sub-intervals.
\smallskip
\noindent\textbf{Step 3. Solve $(\text{IP}_6)$ in FPT time when each $\xi_h$ lies in one efficient sub-interval.}
Let $[\tau_h,\bar{\tau}_h]$ be an arbitrary efficient sub-interval of $\xi_h$ such that all $d^i(\xi_h)$'s and $\bar{d}_i(\xi_h)$'s take fixed value for any $\xi_h\in [\tau_h,\bar{\tau}_h]$. From now on we write them as $d^i_h$ and $\bar{d}^i_h$. By Eq~\eqref{eq:newboxes}, $\forall 1\le i\le n$, we have
\begin{eqnarray}
&&\max\{d^i_1-z_1,d^i_2-z_2,\ldots,d^i_{t_A}-z_{t_A}\}\nonumber\\
&\le& y^i_{t_A}\nonumber\\
&\le &\min\{\bar{d}^i_1-z_1,\bar{d}^i_2-z_2,\ldots,\bar{d}^i_{t_A}-z_{t_A}\}.\label{eq:box11}
\end{eqnarray}
When we compare $d^i_{h_1}-z_{h_1}$ and $d^i_{h_2}-z_{h_2}$ for all $1\le i\le n$ and $\forall h_1,h_2\in \{1,2,\ldots, t_A\}$, we just need to compare the value of $z_{h_1}-z_{h_2}$ with at most $2n$ distinct values, which are $d^i_{h_1}-d^i_{h_2}$ and $\bar{d}^i_{h_1}-\bar{d}^i_{h_2}$. Hence, to get rid of the $\max$ and $\min$ on both sides of Eq~\eqref{eq:box11}, we only need to repeat the above process $\frac{t_A(t_A-1)}{2}$ times, creating at most $nt_A(t_A-1)$ critical values, and dividing $(-\infty,\infty)$ into at most $nt_A(t_A-1)+1$ intervals based on the values of $d^i_{h_1}-d^i_{h_2}$ and $\bar{d}^i_{h_1}-\bar{d}^i_{h_2}$ for all $h_1,h_2\in \{1,2,\ldots, t_A\}$. Let these intervals be $I_1,I_2,\cdots,I_{nt_A(t_A-1)+1}$. When $\{z_{h_1}-z_{h_2}|\forall h_1,h_2\in\{1,2,\ldots,{t_A}\}\}$ belong to one of the intervals, say, $I_k$, Eq~\eqref{eq:box11} can be simplified as
\begin{eqnarray}
\ell^i(I_k,z_1,z_2,\ldots,z_{t_A})\le y^i_{t_A}\le u^i(I_k,z_1,z_2,\ldots,z_{t_A}), \quad \forall 1\le i\le n \label{eq:box12}
\end{eqnarray}
where $\ell^i(I_k,z_1,z_2,\ldots,z_{t_A})$ and $u^i(I_k,z_1,z_2,\ldots,z_{t_A})$ are linear functions in $z_1,z_2,\ldots,z_{t_A}$. Recall that $y^1_{t_A}=0$, and accordingly $\ell^1(I_k,z_1,z_2,\ldots,z_{t_A})=u^1(I_k,z_1,z_2,\ldots,z_{t_A})=0$. For simplicity, we define new variables $p_i:=y^i_{t_A}-\ell^i(I_k,z_1,z_2,\ldots,z_{t_A})$; then it is easy to see that
\begin{eqnarray}
0\le p_i\le u^i(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^i(I_k,z_1,z_2,\ldots,z_{t_A}), \quad \forall 1\le i\le n \label{eq:box13}
\end{eqnarray}
Now we rewrite $(\text{IP}_6)$ using new variables $p_i$ and $z_1,z_2,\ldots,z_{t_A}$ as follows:
\begin{subequations}
\begin{eqnarray*}
(\text{IP}_7[k]): &\max& \vew\vex= \vew^0\vex^0+\sum_{h=1}^{t_A}\sum_{i=1}^{n}w^i_h\xi_h+\sum_{h=1}^{t_A}\sum_{i=1}^{n}w^i_h\theta_h p_i+L(z_1,z_2,\ldots,z_{t_A}) \nonumber\\
&& C\vex^0+ D \left(
\begin{array}{c}
{\theta'}_1+\theta_1\sum_{i=1}^{n}p_i +n\xi_1+L_1(z_1,z_2,\ldots,z_{t_A})\\
{\theta'}_2+\theta_2\sum_{i=1}^{n}p_i +n\xi_2+L_2(z_1,z_2,\ldots,z_{t_A})\\
\vdots\\
{\theta'}_{t_A}+\theta_{t_A}\sum_{i=1}^{n}p_i+n\xi_{t_A}+L_{t_A}(z_1,z_2,\ldots,z_{t_A})\\
\end{array} \right) =\veb^0\\
&&B\vex^0+A\left(
\begin{array}{c}
\xi_1+z_1\theta_1 \\
\xi_2+z_2 \theta_2\\
\vdots\\
\xi_{t_A}+z_{t_A}\theta_{t_A}\\
\end{array} \right) =\veb^1\\
&&0\le p_i\le u^i(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^i(I_k,z_1,z_2,\ldots,z_{t_A}) ,\quad \forall 1\le i \le n \\
&&\xi_h\in[\tau_h,\bar{\tau}_h],\quad h=1,2,\ldots,{t_A}\\
&& z_{h_1}-z_{h_2}\in I_k, \forall h_1,h_2\in\{1,2,\ldots,{t_A}\}\\
&&\vex^0\in\ensuremath{\mathbb{Z}}^{t_B},\ \xi_h,z_h,p_i\in \ensuremath{\mathbb{Z}}, \quad \forall 1\le i\le n,1\le h\le t_A
\end{eqnarray*}
\end{subequations}
Here $L(z_1,z_2,\ldots,z_{t_A})$ and $L_h(z_1,z_2,\ldots,z_{t_A})$, $\forall 1\le h\le t_A$, are all linear functions of $z_1,z_2,\ldots,z_{t_A}$, which may contain a constant term.
Note again that $p_1$ is a dummy variable: $u^1(I_k,z_1,z_2,\ldots,z_{t_A})=\ell^1(I_k,z_1,z_2,\ldots,z_{t_A})=0$ enforces $p_1=0$. $(\text{IP}_6)$ can be solved by solving $(\text{IP}_7[k])$ for every $k$ and then picking the best solution.
Now we show how to solve $(\text{IP}_7[k])$. Ignoring the dummy variable $p_1$, a crucial observation is that, while $(\text{IP}_7[k])$ contains variables $p_2,p_3,\cdots,p_n$, they have exactly the same coefficients in the constraints, and therefore we can ``merge'' them into a single variable $p:=\sum_{i=2}^np_i$. More precisely, we consider the coefficients of the $p_i$'s in the objective function, which are $v_i:=\sum_{h=1}^{t_A}w_h^i\theta_h$ for $2\le i\le n$. By re-indexing variables, we may assume without loss of generality that $v_2\ge v_3\ge\cdots\ge v_n$. Using a simple exchange argument, we can show that if $p=\sum_{i=2}^np_i\le u^2(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^2(I_k,z_1,z_2,\ldots,z_{t_A})$, then the optimal solution is achieved at $p_2=p$, $p_3=p_4=\cdots=p_n=0$. More generally, if
\begin{subequations}
\begin{eqnarray*}
&&\sum_{\gamma=2}^j \left(u^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})\right)\\
&<& \sum_{i=2}^np_i\\
&\le& \sum_{\gamma=2}^{j+1} \left(u^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})\right),
\end{eqnarray*}
\end{subequations}
then the optimal solution is achieved at $p_i=u^i(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^i(I_k,z_1,z_2,\ldots,z_{t_A})$ for $2\le i\le j$, $p_{j+1}=\sum_{i=2}^np_i-\sum_{\gamma=2}^{j}\left(u^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})\right)$, and $p_{i}=0$ for $i>j+1$.
Define $\Lambda(j):=\sum_{\gamma=2}^j \left(u^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^\gamma(I_k,z_1,z_2,\ldots,z_{t_A})\right)$, $\Lambda(1):=0$, $W(j):=\sum_{h=1}^{t_A}\sum_{i=1}^jw_h^i\theta_h\left(u^i(I_k,z_1,z_2,\ldots,z_{t_A})-\ell^i(I_k,z_1,z_2,\ldots,z_{t_A}) \right)$. Let $(\text{IP}_7[k,j])$ be as follows:
\begin{subequations}
\begin{eqnarray*}
(\text{IP}_7[k,j]): &\max& \vew\vex= \vew^0\vex^0+W(j-1)+L(z_1,z_2,\ldots,z_{t_A}) \nonumber\\
&&\hspace{1cm}+\sum_{h=1}^{t_A}\sum_{i=1}^{n}w^i_h\xi_h+\sum_{h=1}^{t_A}w^j_h\theta_h \left(p-\Lambda(j-1)\right)\\
&& C\vex^0+ D \left(
\begin{array}{c}
{\theta'}_1+\theta_1p +n\xi_1+L_1(z_1,z_2,\ldots,z_{t_A})\\
{\theta'}_2+\theta_2p +n\xi_2+L_2(z_1,z_2,\ldots,z_{t_A})\\
\vdots\\
{\theta'}_{t_A}+\theta_{t_A}p+n\xi_{t_A}+L_{t_A}(z_1,z_2,\ldots,z_{t_A})\\
\end{array} \right) =\veb^0\\
&&B\vex^0+A\left(
\begin{array}{c}
\xi_1+z_1\theta_1 \\
\xi_2+z_2 \theta_2\\
\vdots\\
\xi_{t_A}+z_{t_A}\theta_{t_A}\\
\end{array} \right) =\veb^1\\
&& \Lambda(j-1)< p\le \Lambda(j) \\
&&\xi_h\in[\tau_h,\bar{\tau}_h],\quad h=1,2,\ldots,{t_A}\\
&& z_{h_1}-z_{h_2}\in I_k,\quad \forall h_1,h_2\in\{1,2,\ldots,{t_A}\}\\
&&\vex^0\in\ensuremath{\mathbb{Z}}^{t_B},\ \xi_h,z_h,p\in \ensuremath{\mathbb{Z}}, \quad \forall 1\le h\le t_A
\end{eqnarray*}
\end{subequations}
Our argument above shows that $(\text{IP}_7[k])$ can be solved by solving $(\text{IP}_7[k,j])$ for all $1\le j\le n$ and picking the best solution.
It remains to solve each $(\text{IP}_7[k,j])$. Notice that this is an IP with $O(t_A+t_B)$ variables, and thus can be solved in $(t_A+t_B)^{O(t_A+t_B)}poly(n,\log\Delta)$ time by applying Kannan's algorithm. Thus, when each $\xi_h$ lies in one efficient sub-interval, $(\text{IP}_6)$ can be solved in $(t_A+t_B)^{O(t_A+t_B)}poly(n,\log\Delta)$ time.
\smallskip
\noindent\textbf{Step 4. Bounding the number of efficient sub-intervals of $(\xi_1,\xi_2,\cdots,\xi_{t_A})$.}
Recall Eq~\eqref{eq:theta>} and Eq~\eqref{eq:theta<}. For simplicity, we assume $\theta_h>0$; the case of $\theta_h<0$ can be handled in a similar way.
For each $i$, divide $\ell^i_h-\widetilde{\theta}^i_h$ by $\theta_h>0$ and denote by $r^i_h\in [0,\theta_h-1]$ and $q^i_h$ the remainder and quotient, respectively. It is easy to see that if $0\le \xi_h < r^i_h$, then $d^i(\xi_h)=\lceil\frac{\ell_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rceil=q^i_h+1$. Otherwise, $r^i_h\le \xi_h<\theta_h$ and then $d^i(\xi_h)=\lceil\frac{\ell_h^i-\widetilde{\theta}^i_h-\xi_h}{\theta_h}\rceil=q^i_h$. We define $r^i_h$ as a critical point, which distinguishes between $d^i(\xi_h)=q^i_h+1$ and $d^i(\xi_h)=q^i_h$.
Similarly, divide $u^i_h-\widetilde{\theta}^i_h$ by $\theta_h>0$ and denote by $\bar{r}^i_h\in [0,\theta_h-1]$ and $\bar{q}^i_h$ the remainder and quotient, respectively. Using the same argument as above, we define $\bar{r}^i_h$ as a critical point, which distinguishes between $\bar{d}^i(\xi_h)=\bar{q}^i_h$ and $\bar{d}^i(\xi_h)=\bar{q}^i_h-1$. Critical points can be defined in the same way if $\theta_h<0$.
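As a purely illustrative numerical example (the numbers are chosen here only for exposition), suppose $\theta_h=3$ and $\ell^i_h-\widetilde{\theta}^i_h=5=1\cdot 3+2$, so that $q^i_h=1$ and $r^i_h=2$. Then
\[
d^i(0)=\left\lceil \tfrac{5}{3}\right\rceil=2,\qquad d^i(1)=\left\lceil \tfrac{4}{3}\right\rceil=2,\qquad d^i(2)=\left\lceil \tfrac{3}{3}\right\rceil=1,
\]
that is, $d^i(\xi_h)=q^i_h+1$ for $\xi_h<r^i_h$ and $d^i(\xi_h)=q^i_h$ for $\xi_h\ge r^i_h$, so $r^i_h=2$ is the unique critical point of $d^i$ on $[0,\theta_h-1]$.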
Overall, we obtain at most $2n$ distinct critical points for each $\xi_h$, and $2nt_A$ distinct critical points for all $\xi_h$, $\forall 1\le h\le t_A$, which divide the whole interval $(-\infty,\infty)$ into at most $2nt_A+1$ sub-intervals. It is easy to see that once $\xi_h$ lies in one of these sub-intervals, all $d^i(\xi_h)$ and $\bar{d}^i(\xi_h)$ take fixed values. Thus, the number of efficient sub-intervals of $(\xi_1,\xi_2,\cdots,\xi_{t_A})$ is $(nt_A)^{O(t_A)}$.
\qed
\
We remark that the exponential term $n^{O(t_A)}$ comes from the enumeration of all efficient sub-intervals for $\xi_h$'s, where $\xi_h$ is a ``global'' variable that appears in constraint~\eqref{eq:bb} for every $1\le i\le n$. If we consider $n$-fold IP and there is no $\vex^0$, then we can get rid of $\xi_h$ and $z_h$ in constraint~\eqref{eq:bb} and derive upper and lower bounds for each $y_i$ directly, yielding the following theorem.
\begin{theorem}
If $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A} $, $t_A=s_A+1$ and $\text{rank}(A)=s_A$, then $n$-fold IP can be solved in $n\cdot poly(t_A,\log \Delta)$ time, which is linear in $n$. \label{n-6}
\end{theorem}
\begin{proof}
We write the constraints of $n$-fold IP explicitly as follows:
\begin{subequations}
\begin{eqnarray}
&&D\sum_{i=1}^{n}\vex^i=\veb^0\label{con1}\\
&& A\vex^i=\veb^i, \hspace{34mm}\ \forall 1\le i \le n\label{con2}\\
&& \vel^i \le \vex^i\le \veu^i, \hspace{30mm}\ \forall 1\le i \le n\nonumber
\end{eqnarray}
\end{subequations}
Let $\bar{A}$ be the Smith normal form of $A$. Then there exist integral matrices $U$, $V$, whose inverses are also integral matrices, such that $A=U^{-1}\bar{A}V^{-1}$.
Furthermore, $U,V$ can be calculated in time $poly(t_A,\log \Delta)$~\cite{kannan1979polynomial}.
Combining this with Constraint~\eqref{con2}, we have $\bar{A}V^{-1}\vex^i=\widetilde{\veb}^i$, where $\widetilde{\veb}^i=U\veb^i$. Let $\vey^i:=V^{-1}\vex^i$; in the following we will substitute $\vex$ with the new variables $\vey$. Thus we get $\bar{A}\vey^i=\widetilde{\veb}^i$, which implies that $\alpha_jy^i_j=\widetilde{b}^i_j$ for $1\le j\le s_A=t_A-1$, where $\alpha_j$ denotes the $j$-th diagonal entry of $\bar{A}$. This settles the value of all $y^i_j$'s except $y^i_{t_A}$.
Next we consider Constraint~\eqref{con1}. It can be written as $D\sum_{i=1}^{n}V\vey^i=\veb^0$. For simplicity let $\widetilde{D}=DV$; then we have $\widetilde{D}\sum_{i=1}^{n}\vey^i=\veb^0$. Since only the $y^i_{t_A}$'s remain variables, $\widetilde{D}\sum_{i=1}^{n}\vey^i=\veb^0$ reduces to equalities with only one unknown, namely $\sum_{i=1}^ny^i_{t_A}$, which can be solved directly, and we get
$$\sum_{i=1}^ny^i_{t_A}=d_0,$$
for some $d_0$.
Finally we consider the box constraints. From $\vel^i \le \vex^i\le \veu^i$, we get $\vel^i \le V \vey^i\le \veu^i$. Recall that the value of all $y^i_j$'s, except $y^i_{t_A}$, has been determined. Hence, $\vel^i \le V \vey^i\le \veu^i$ reduces to a set of inequalities in $y^i_{t_A}$. Note that each inequality has the form of $\alpha y^i_{t_A}\le \beta$ for some $\alpha$ and $\beta$. Since $y^i_{t_A}$ is an integer, it can be further simplified as $ y^i_{t_A}\le \lfloor \beta/\alpha\rfloor$ if $\alpha>0$, or $ y^i_{t_A}\ge \lceil \beta/\alpha\rceil$ if $\alpha<0$. Hence, $\vel^i \le V \vey^i\le \veu^i$ can be simplified into the following form:
\begin{eqnarray}
&& \tilde{\ell}^i\le y^i_{t_A} \le \tilde{u}^i.\label{n-14}
\end{eqnarray}
For ease of discussion, we further substitute the $y^i_{t_A}$'s with new variables $p_i:=y^i_{t_A}-\tilde{\ell}^i$.
Simple calculations show that $\vew\vex=\sum_{i=1}^{n}\vew^i\vex^i=\sum_{i=1}^{n}\vew^iV\vey^i=c_0+\sum_{i=1}^{n}\widetilde{w}^i_{t_A}p_i$ for some $\widetilde{w}^i_{t_A}$ and fixed value $c_0$. Therefore, we can rewrite the $n$-fold IP as:
\begin{eqnarray}
(\text{IP}_8): & \max& c_0+\sum_{i=1}^{n} \widetilde{w}^i_{t_A}p_i\nonumber\\
&& \sum_{i=1}^{n} p_i=d_0-\sum_{i=1}^n\tilde{\ell}^i \nonumber\\
&& 0 \le p_i \le \tilde{u}^i-\tilde{\ell}^i,\hspace{28mm}\ \forall 1\le i \le n\nonumber
\end{eqnarray}
$(\text{IP}_8)$ can be solved via a simple greedy algorithm. By re-indexing variables, we may assume without loss of generality that $\widetilde{w}^1_{t_A}\ge \widetilde{w}^2_{t_A}\ge \cdots\ge \widetilde{w}^n_{t_A}$. Suppose $\sum_{i=1}^\gamma (\tilde{u}^i-\tilde{\ell}^i)<d_0-\sum_{i=1}^n\tilde{\ell}^i\le \sum_{i=1}^{\gamma+1}(\tilde{u}^i-\tilde{\ell}^i)$; then a simple exchange argument shows that the optimal objective is achieved at $p_j=\tilde{u}^j-\tilde{\ell}^j$ for $1\le j\le \gamma$, $p_{\gamma+1}=d_0-\sum_{i=1}^n\tilde{\ell}^i-\sum_{i=1}^\gamma (\tilde{u}^i-\tilde{\ell}^i)$, and $p_j=0$ for $j\ge \gamma+2$.
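This greedy step is simple enough to state as code. The following Python sketch is ours and purely illustrative (the data layout, the feasibility check and the variable names are our own choices); it assumes that $c_0$, $d_0$, the bounds $\tilde{\ell}^i,\tilde{u}^i$ and the coefficients $\widetilde{w}^i_{t_A}$ have already been computed as described above.
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
def solve_ip8(c0, d0, lo, up, w):
    """Greedy for (IP_8): maximize c0 + sum_i w[i]*p_i subject to
    sum_i p_i = d0 - sum_i lo[i] and 0 <= p_i <= up[i] - lo[i]."""
    n = len(w)
    budget = d0 - sum(lo)
    cap = [up[i] - lo[i] for i in range(n)]
    if budget < 0 or budget > sum(cap):
        return None  # infeasible
    # fill the variables with the largest objective coefficients first
    order = sorted(range(n), key=lambda i: w[i], reverse=True)
    p = [0] * n
    for i in order:
        p[i] = min(cap[i], budget)
        budget -= p[i]
        if budget == 0:
            break
    value = c0 + sum(w[i] * p[i] for i in range(n))
    return value, p  # recover y^i_{t_A} as p[i] + lo[i]
\end{lstlisting}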
Overall, the running time is $n\cdot poly(t_A,\log\Delta)$, where $poly(t_A,\log\Delta)$ is the time needed to compute the Smith normal form of $A$.
\end{proof}
\section{Conclusion}
In this paper, we explore the possibility of developing algorithms that run polynomially in $\log\Delta$ for block-structured IP. We obtain positive as well as negative results. Our results seem to suggest that the box constraints $\vel\le\vex\le\veu$ significantly impact the tractability. It remains an important open problem to give a complete characterization of which kinds of box constraints may lead to algorithms polynomial in $\log\Delta$. Another interesting open problem concerns 4-block $n$-fold IP when $A\in\ensuremath{\mathbb{Z}}^{s_A\times t_A}$, $t_A=s_A+1$ and $\text{rank}(A)=s_A$. Currently our algorithm runs in $(t_A+t_B)^{O(t_A+t_B)}\cdot n^{O(t_A)}\cdot poly(\log\Delta)$ time, which is an XP algorithm when taking $t_A,t_B$ as parameters. It remains open whether there exists an FPT algorithm parameterized by $t_A,t_B$.
\bibliographystyle{splncs04}
\BgThispage
During the last years, the research community has been very active looking for new ways to detect and block web tracking (e.g., \cite{acar_web_2014, englehardt_cookies_2015, li_trackadvisor_2015, metwalley_unsupervised_2015, nikiforakis_cookieless_2013, lerner_internet_2016, iqbal_fingerprinting_2021, castell-uroz_tracksign_2021}). Experts explored numerous online services, finding all kinds of new and exotic ways of exploiting protocols (\cite{acar_web_2014, englehardt_cookies_2015, li_trackadvisor_2015, metwalley_unsupervised_2015}) or abusing programming language APIs (\cite{nikiforakis_cookieless_2013, lerner_internet_2016, iqbal_fingerprinting_2021}) for user profiling purposes. Unfortunately, most of the proposed solutions are very complex and hard to deploy in a real browsing session, limiting their application to offline studies. Currently, the most popular solution to protect against web tracking is the use of content-filtering extensions in web browsers (e.g., uBlock Origin~\cite{hill_ublock_2020}, Adblock Plus~\cite{adblock_plus_adblock_2020}). These tools are based on \textit{filter lists}, manually curated pattern lists containing known tracking domains that are matched against the URLs visited in a browsing session.
However, content-filtering extensions as well as most of the previously proposed approaches suffer from several limitations: $(i)$ They require significant manual work to keep the filter lists up to date (e.g., new methods constantly emerge under different URLs); $(ii)$ URL-based detection and protection methods are easy to evade just by changing the hosting domain or dynamically modifying their URL parameters; $(iii)$ obfuscation, minimization, and webpackaging techniques automatically modify the internal website code, breaking many detection systems (e.g., \cite{wu_machine_2016, ikram_towards_2017}); and $(iv)$ current URL and resource-based blocking methods result in significant website functionality loss (\cite{krishnamurthy_measuring_2007, mazel_comparison_2019}). Some previous works (e.g., \cite{castell-uroz_tracksign_2021, iqbal_fingerprinting_2021, iqbal_adgraph_nodate, smith_sugarcoat_2021}) proposed partial solutions to some of these limitations, usually through the use of machine learning methods. However, these solutions present a trade-off, advancing in some aspects but giving up in others (see Section~\ref{background}).
In this paper, we present ASTrack, a new method that addresses \textit{all} the limitations described above. Unlike previous proposals, ASTrack focuses on the code structure instead of the code itself. For this purpose, ASTrack uses an abstraction of the JavaScript code based on Abstract Syntax Trees (AST). An AST is simply a tree representation of the abstract syntactic structure of the source code, regardless of its particular contents (e.g., variable or function names). An AST can represent the entire code as well as the different functional portions of it (e.g., functions). Our proposal is based on the observation that most websites use common analytics or fingerprinting libraries to collect private information. Thus, when the same code structure (i.e., AST) is used across multiple domains, the AST becomes suspicious for performing tracking, especially if some of the domains were previously known to be tracking domains. By using a syntactic abstraction instead of the code itself, our system is more robust to common evasion techniques, such as minimization or obfuscation. Moreover, this abstraction allows us to selectively prune individual tracking ASTs (e.g., functions) while maintaining the rest of the (legitimate) code unmodified. This increase in blocking granularity compared to previous methods, which usually block URLs or complete resources, can better preserve website functionality and detect tracking code under different URLs or within different files.
Our evaluation of the top 10k most popular websites shows that ASTrack maintains a detection precision of more than 98\%. During our experiments, ASTrack found more than 3,400 new tracking URLs and automatically classified about 50k tracking code pieces, including obfuscated fragments that could not be easily detected with other techniques. Finally, we estimate that website functionality is preserved in approximately 97.7\% of websites.
In summary, the key contributions of this paper are:
\begin{enumerate}
\item A \textbf{syntactic approach} to the detection of web tracking code that is highly adaptive and robust against minimization and obfuscation.
\item A \textbf{high grade of detection granularity}, permitting selective code removal while maintaining the functionality of the website in most cases.
\item A new \textbf{methodology to detect website functionality breakage} by means of computer vision techniques.
\item An \textbf{evaluation of the tracking blocking performance as well as the website functionality loss} for the top 10k most popular websites in the Tranco List.
\end{enumerate}
The rest of the paper is organized as follows: Section \ref{background} presents an overview of web tracking detection systems and the limitations of existing methodologies. Section \ref{proposal} describes ASTrack, our new web tracking detection and removal proposal. Section \ref{evaluation} presents the evaluation of the web tracking detection and removal process. Finally, Section \ref{conclusions} concludes the paper and presents future work.
\BgThispage
\section{Background and Related work}
\label{background}
\subsection{Web tracking}
Traditional web tracking systems are usually \textit{stateful} technologies. They use different techniques to save an identifier inside the storage of the device browsing the web. The identifier will be read again every time the device accesses the same website service. The most common mechanism is the infamous cookie, but there are more exotic approaches such as embedding identifiers in cached documents~\cite{implementing_web_tracking}, in the HTTP redirect cache~\cite{bursztein_tracking_nodate}, in the HTTP authentication cache~\cite{grossman_tracking_nodate}, or inside the HTML5 storage~\cite{ayenson_flash_2011}. However, new privacy regulations (e.g. \cite{gdpr, ccpa, standford_pipl_2021}) impose multiple restrictions on most of these systems. Moreover, many mainstream browsers have implemented countermeasures to this kind of web tracking. For instance, Safari now blocks all third-party cookies~\cite{safari_builtin}, and Firefox blocks third-party cookies from known trackers by default~\cite{firefox_builtin}. Chrome will also ban third-party cookies in the near future~\cite{google_cookies}.
\textit{Stateless} technologies, on the other hand, do not save information inside the device. They directly identify the user based on other measurable properties, such as the IP address or the device configuration exposed by the browser. Stateless technologies are also known as \textit{fingerprinting} methods. Most simple techniques look at numerous properties, such as the screen resolution~\cite{boda_user_2012}, the version of the browser~\cite{unger_shpf_2013} or the fonts installed in the system~\cite{fifield_fingerprinting_2015}, to combine them and create a unique identifier. However, the latest functionalities added to web technologies in the form of new JavaScript APIs~\cite{snyder_browser_2016} permit stateless web tracking algorithms to collect much more precise information. Rendering differences due to specific hardware and software combinations can be abused by means of those APIs to precisely identify the browser being used to explore the web \cite{mowery2012pixel, englehardt_online_2016, cao2017cross, starov_xhound_2017, zhu_eluding_2021}. This kind of web tracking is far more intrusive than the traditional cookies, as it is completely transparent to the user, and there is no easy way to control when, where, or by whom they are being used.
\subsection{Detection}
Most of the solutions proposed in the literature, such as the works from Wu et al.~\cite{wu_machine_2016} or Ikram et al.~\cite{ikram_towards_2017}, apply machine learning (ML) algorithms over the website code to find specific features identifying tracking methods. However, the underlying static analysis used is prone to fail under techniques such as obfuscation, which can modify those features.
Iqbal et al.~\cite{iqbal_fingerprinting_2021} present a \textit{browser fingerprinting} detection system using a combination of two independent machine learning algorithms, one trained with features extracted from a static analysis of the JavaScript code and the other from a dynamic analysis. The second ML complements the first one in case obfuscation or minimization techniques have been used in the code. They also present a way to reduce the breakage of website functionality by means of the replacement of the tracking code with mock versions of it. The system presents high accuracy in detecting browser fingerprinting and a functionality loss reduction of about 20\% whenever the mock functions can be used. However, the system only detects one kind of tracking, and the functionality breakage improvement system does not work on webpackaged files~\cite{noauthor_webpack_nodate}, a common practice on today's Internet. The breakage system is only evaluated on a population of 50 websites.
Our previous work~\cite{castell-uroz_tracksign_2021} proposes a web tracking detection system based on a deterministic code partition algorithm and a tripartite network representation that allows us to propagate the probability of containing tracking for each piece of code. The system presents high accuracy in finding unknown web tracking. However, since the system identifies code statically by using the code text itself, it is vulnerable to obfuscation or randomized content renaming techniques. Moreover, the partition algorithm makes it impossible to block tracking code without breaking website functionality.
\subsection{Mitigation}
Many of the detection systems presented above include some kind of solution to block the detected web tracking algorithms. However, those solutions are usually complex and difficult to deploy by the common user. In practice, the most commonly used mitigation systems are filter lists. These lists include a collection of patterns identifying URLs suspicious of performing tracking. There is no easy way to create the lists, and usually they are manually curated by the members of an online community. EasyList~\cite{open_source_easylist_2020} and EasyPrivacy~\cite{open_source_easyprivacy_2020} are both examples of such lists. Many popular content blockers, such as AdBlock Plus~\cite{adblock_plus_adblock_2020} and uBlock Origin~\cite{hill_ublock_2020} use those lists to block URLs during loading.
Smith et al. present in~\cite{smith_sugarcoat_2021} an alternative approach: tracking-free resource replacements. Some content blockers allow for resource replacements in real-time. Smith et al. automatically generate clean versions of some of the most popular tracking resources to be used as replacements. Similar to~\cite{iqbal_fingerprinting_2021}, they clean the code using a mock replacement for some of the API functions used to track the user. However, this approach is not scalable, as the inspected scripts are selected manually, and many tracking systems work with dynamic custom files that are different for each website. Moreover, new tracking resources are created daily.
Some browsers also implement partial solutions to stateless tracking. In particular, the TOR browser and Firefox, by means of an experimental feature~\cite{firefox_fingerprinting}, automatically block some of the API functions commonly used for tracking~\cite{tor_fingerprinting}. Unfortunately, this approach breaks every website where the API is used for legitimate purposes.
\BgThispage
\section{ASTrack}
\label{proposal}
In this section, we introduce ASTrack, a new adaptive method to detect and selectively remove web tracking systems while minimizing the functionality loss associated with them. Our proposal is based on the observation that most websites share code and functionality, usually in the form of popular frameworks and useful libraries. For instance, JQuery libraries or social network interaction buttons are common to many different websites. Web tracking is no different. Most websites use common analytics or fingerprinting libraries (e.g., Google Analytics) to collect user information. It is rare to find websites with completely customized tracking libraries not present on other websites. Thus, web tracking functionality is also shared by many websites. Our proposal is to search for this shared functionality between multiple websites and automatically identify web tracking systems among them.
\subsection{Functionality identification}
As its name implies, ASTrack tracks shared Abstract Syntax Trees (AST) between different websites. An AST represents the code as a tree, whose nodes are its elements, and edges the relation between them. However, ASTrack does not look for the actual elements included in the AST but for the structure of the AST as a whole, looking only at the node types and their relations. The structure of the tree is independent of the names and values of the code, representing the functionality itself. Minimized or obfuscated ASTs share the same structure. Independent trees with exactly the same structure necessarily share the same functionality, despite the different results they may obtain depending on the input values. In summary, looking at AST's structure, we will find functionality shared by different web services.
In order to define a representation of the AST structure, we decided to follow two principles: consistency and simplicity. In our case, consistency means that different pieces of code with the same structure must always be represented identically. On the other hand, simplicity is a key factor in favoring a system that can be deployed in reality, where complex implementations can result in performance constraints. The selected representation corresponds to a simple \textbf{label chain}. The identifier label chain is created by traversing the AST, descending recursively into each branch, and concatenating integer identifiers for each node type found. Fig.~\ref{fig:ast_id} shows a simple AST and the process to obtain its identifier for illustration purposes. Each node receives one aperture label (left integer) and one closure label (right integer), except operator nodes, which only have the aperture label. The dotted arrows indicate their concatenation order. When the algorithm enters a node, it concatenates its aperture label into the AST label chain. If the node has branches, it explores them recursively. Finally, if present, it also concatenates the closure label and finishes. The resultant label chain is the simplest representation that allows us to unambiguously identify the AST structure. However, not only does the main AST identify functionality: each nested \textit{function} or \textit{method} also contains partial, isolated functionality. Thus, we will generate identifiers for their branches as well. Fig.~\ref{fig:ast_id} shows three identifiers in different colors, one corresponding to the entire AST and two for internal nested function declarations. In order to find the identifier of the nested branches, we only need to look for the pair of labels that enclose them (988 and 989 in our example) within the main AST identifier. Finally, to fix the maximum length of the identifiers, a hash function is applied to each of their label chains. For simplicity, in the rest of the paper we will refer to AST identifiers simply as \textit{AST}s.
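To make the construction concrete, the following minimal Python sketch computes such structural identifiers. It is purely illustrative and not part of ASTrack: it parses Python source with the built-in \texttt{ast} module (whereas ASTrack parses JavaScript, e.g., with Esprima), it assigns an aperture/closure label pair to every node type instead of treating operator nodes specially, and the labeling scheme itself is an assumption of ours.
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
import ast, hashlib

APERTURE, CLOSURE = {}, {}

def labels(node_type):
    # assign a fresh (aperture, closure) integer pair the first time a type is seen
    if node_type not in APERTURE:
        i = len(APERTURE)
        APERTURE[node_type], CLOSURE[node_type] = 2 * i, 2 * i + 1
    return APERTURE[node_type], CLOSURE[node_type]

def label_chain(node):
    """Aperture label, then the children recursively, then the closure label."""
    open_l, close_l = labels(type(node).__name__)
    chain = [open_l]
    for child in ast.iter_child_nodes(node):
        chain.extend(label_chain(child))
    chain.append(close_l)
    return chain

def structural_id(node):
    """Fixed-length identifier of the structural label chain."""
    raw = ",".join(map(str, label_chain(node)))
    return hashlib.sha1(raw.encode()).hexdigest()

# two snippets with different names but identical structure get the same id
a = ast.parse("track_user = lambda uid, key: uid + key")
b = ast.parse("harmless = lambda left, right: left + right")
assert structural_id(a) == structural_id(b)
\end{lstlisting}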
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{figures/AST_id.png}
\caption{\textbf{AST identification example:} Integer values represent the aperture (left) and closure (right) labels of each node. The dotted arrow indicates the concatenation order to generate the ID. Looking for pairs of aperture and closure labels of function nodes, we find their corresponding identifiers.}
\label{fig:ast_id}
\vspace{-0.3cm}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/ASTrack.png}
\caption{\textbf{ASTrack tracking detection and removal conceptual process:} By computing safety values for each AST loaded by URLs, ASTrack decides which code to remove or not. The safety is calculated by accounting for the number of already-known tracking URLs that include the AST. When the evidence is enough to consider the AST a tracking one, the URLs containing it are reclassified as tracking, and their safety values propagate.}
\label{fig:astrack}
\vspace{-0.3cm}
\end{figure*}
\subsection{Web tracking detection}
ASTrack detects web tracking code by independently classifying the detected ASTs for each inspected URL. To this end, we initially label the URLs as tracking or not based on the most up-to-date filter lists. The classification is done by looking at the overall number of URLs that contain the identified functionality and are already known to perform tracking. The underlying idea is that, if one AST structure is mostly present in tracking URLs, its functionality is most probably used for tracking purposes. Thus, an unknown URL that shares the same functionality (i.e., the same AST) with other tracking URLs will also be a tracking URL, and we can automatically classify it as such.
Fig.~\ref{fig:astrack} conceptually shows the process of the ASTrack automatic identification system during a usual browsing session. In the first step, the user opens a domain that loads a safe URL (i.e., not labeled as tracking). ASTrack automatically computes its inner AST identifiers. The circle represents the URL, and its inner hexagons are the ASTs internally loaded. As ASTrack does not have information about this URL, it will consider its inner ASTs as safe (green color and positive value). In the second step, a new safe URL is detected, and shares one AST with the previous URL. Their shared AST safety level becomes the sum of their individual safety levels, increasing it to 2. The third step introduces an already-known tracking URL. Its ASTs are considered unsafe (negative safety value and light red color) and consequently blocked. In the fourth step, a new tracking URL is detected, but this time it shares one AST with an URL that until now was considered safe. This shared AST safety value becomes zero. ASTrack does not have enough evidence to decide if it should be blocked or not. In this case, ASTrack allows its execution to maintain website functionality as much as possible. In the fifth step, a new tracking URL is detected and shares the same AST as the last URL. The safety value for the AST becomes negative. From now on, whenever the first URL is opened, ASTrack will block this specific AST but not the rest of them. In the sixth and seventh steps, new tracking URLs are detected sharing the same AST, thus decreasing its safety value again. The resultant safety value is considered enough evidence to autoclassify the AST as a tracking AST (red color). Consequently, each URL containing this piece of code is also a tracking URL. In the last step, ASTrack classifies the first safe URL as tracking. From now on, whenever it is detected, it will decrease the safety of all its internal ASTs propagating the information.
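The bookkeeping behind Fig.~\ref{fig:astrack} can be summarized in a few lines of Python. The sketch below is our own simplification for illustration (the thresholds are placeholders; the values actually used are reported in Section~\ref{evaluation}), not the deployed implementation:
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
from collections import defaultdict

class SafetyGraph:
    """Keeps track of which URLs contain which structural ASTs."""
    def __init__(self, evidence_ratio=0.9, min_urls=10):  # placeholder thresholds
        self.urls_with_ast = defaultdict(set)  # AST id -> URLs containing it
        self.tracking_urls = set()             # URLs currently labelled as tracking
        self.evidence_ratio = evidence_ratio
        self.min_urls = min_urls

    def safety(self, ast_id):
        # +1 for every safe URL containing the AST, -1 for every tracking URL
        urls = self.urls_with_ast[ast_id]
        return sum(1 if u not in self.tracking_urls else -1 for u in urls)

    def observe(self, url, ast_ids, known_tracking):
        """Register one loaded URL and return the AST ids that should be blocked."""
        if known_tracking:
            self.tracking_urls.add(url)
        for a in ast_ids:
            self.urls_with_ast[a].add(url)
        # auto-classification: with enough evidence, every URL containing the
        # AST is reclassified as tracking, propagating the information
        for a in ast_ids:
            urls = self.urls_with_ast[a]
            bad = sum(u in self.tracking_urls for u in urls)
            if len(urls) >= self.min_urls and bad / len(urls) >= self.evidence_ratio:
                self.tracking_urls |= urls
        return [a for a in ast_ids if self.safety(a) < 0]
\end{lstlisting}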
Performance-wise, ASTrack should be implementable inside the browser's internal JavaScript engine. The AST identifiers are simple enough to be computed during the initial DOM construction time. Once computed, comparing them to the subset of tracking ASTs is a \textit{set membership} type of problem, whose implementation can be done, for instance, by means of a \textit{bloom filter}. However, for practical reasons, in this research we will compute them and perform the comparison offline.
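As an illustration of the set-membership test mentioned above, a minimal Bloom filter could look as follows (our sketch; the size, the number of hash functions and the hash construction are arbitrary choices, not those of any browser engine):
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
import hashlib

class BloomFilter:
    """Minimal Bloom filter for probabilistic tracking-AST membership tests."""
    def __init__(self, m_bits=1 << 20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

trackers = BloomFilter()
trackers.add("some-tracking-ast-id")          # hypothetical identifier
print("some-tracking-ast-id" in trackers)     # True (small false-positive rate)
\end{lstlisting}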
\BgThispage
\subsection{Web tracking removal}
Maintaining the functionality of the website is a key element for the adoption of any privacy protection method, as many users like the idea of improving their privacy but are discouraged by the associated website breakage. The high granularity obtained by ASTrack, which looks at the inner ASTs instead of the file itself, allows the method to minimize the functionality loss related to privacy protection. In the worst-case scenario, for URLs with no shared ASTs, ASTrack performs the same as the filter lists: it blocks the complete code pertaining to that URL. However, if shared ASTs are present within the code, which is common, as we will see in the next section, ASTrack automatically adapts to distinguish between them, selectively classifying tracking AST branches and URLs. For instance, in step 5 of Fig.~\ref{fig:astrack}, ASTrack has one safe URL, one of whose ASTs is identified as unsafe. Thus, ASTrack can selectively prune the branch of this AST without compromising the rest of the code. This progressive detection achieves finer detail than the traditional alternatives and minimizes the functionality loss usually suffered as a trade-off for privacy protection.
\BgThispage
\section{Evaluation}
\label{evaluation}
To evaluate ASTrack, we collected a labeled snapshot of all the URLs and resources pertaining to the top 10k most popular websites according to the Tranco List~\cite{tranco_list}. We used Selenium~\cite{jason_huggins_seleniumhq_2020} in combination with Mozilla Firefox and a customized version of uBlock Origin~\cite{hill_ublock_2020} to collect it.
Our customized uBlock intercepts all the HTTP requests, labels them according to its included lists, but allows them to pass through. To compute the ASTs, we used the JavaScript code parser Esprima~\cite{esprima}. For HTML files containing JavaScript, we automatically extract the code and compute its ASTs. Table~\ref{tab:tracking_dataset} contains information about the obtained data set. From the initial population of 10k websites, 8,179 domains were successfully inspected using a timeout of 60 seconds. The rest were not accessible at the time of the collection or made the timeout expire. The labeling process classified 41,274 JavaScript URLs as tracking. From the collected data, we precomputed the inner ASTs of each URL. More than 38\% of them are shared between different websites, validating our observation that many domains share common services.
\begin{table}
\caption{Data set \& Evaluation}
\label{tab:tracking_dataset}
\resizebox{0.485\textwidth}{!}{
\small
\begin{tabular}{lcc}
\hline
& \multicolumn{1}{l}{\textbf{Static evaluation}} & \multicolumn{1}{l}{\textbf{Dynamic evaluation}} \\ \hline
\textbf{Domains} & \multicolumn{2}{c}{8,179} \\
\textbf{URLs} & \multicolumn{2}{c}{615,780} \\
\textbf{JavaScript URLs} & \multicolumn{2}{c}{161,593} \\
\textbf{Tracking URLs} & \multicolumn{2}{c}{41,274} \\
\textbf{Unique ASTs} & \multicolumn{2}{c}{7,015,542} \\
\textbf{Shared ASTs} & \multicolumn{2}{c}{2,683,586} \\ \hline
\textbf{New tracking URLs} & 3,409 & 2,183 \\
\textbf{New tracking JavaScript files} & 3,109 & 2,093 \\
\textbf{Tracking ASTs} & 49,453 & 41,114 \\ \hline
\textbf{Precision} & 98.52\% & 98.47\% \\ \hline
\end{tabular}}
\end{table}
With the collected data set, we feed ASTrack in order to classify the labeled ASTs and detect unknown tracking URLs. The evidence threshold, used to automatically classify an AST as tracking, was manually validated to maximize precision and minimize false positives that would break functionality. Its value is dynamically computed as 90\% of the total number of URLs containing the AST. Similarly, to avoid classifying ASTs without enough evidence, we empirically fixed a minimum of 10 different URLs containing the AST to propagate the tracking information. In order to evaluate ASTrack, we compare two different scenarios: a static evaluation with the complete interconnected graph before propagating the safety values, and a dynamic evaluation with no previous information about the ASTs available, forcing ASTrack to fill the graph.
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{figures/deobfuscation.png}
\caption{\textbf{Obfuscated code detection:} ASTrack was able to match obfuscated functions (top) with their alternative clear code (bottom) in different files by looking at their structure.}
\label{fig:deobfuscation}
\end{figure}
\subsection{Static evaluation}
\label{best_case}
This experiment is composed of two phases. In the first phase, we create the complete graph, including all the relations and initial safety values for the shared ASTs. In the second phase, we run the ASTrack algorithm in order to identify tracking ASTs and propagate them to find new tracking URLs. This experiment is used to validate ASTrack's ability to find false negatives inside the filter lists. URLs not present in the filter lists but sharing a tracking AST (more than 90\% of the URLs containing it are known to be tracking) are most probably false negatives. Table~\ref{tab:tracking_dataset} includes the obtained results. ASTrack classified as tracking 49,453 ASTs and found 3,409 new tracking URLs, a 7.62\% increase with respect to the initially labeled data set. Inspecting those URLs, most of them pertained to the usual suspects (e.g., Google, Facebook, Twitter), while some others were files hosted in CDNs, which are hard to block without breaking page functionality on many websites. To validate them, we looked at the subset of files loaded by those URLs, composed of 3,109 JavaScript files. None of the files were already known to perform tracking (i.e., loaded by a different tracking URL). To study them, we first automatically checked their content for the inclusion of some frequent keywords used for tracking. The list of keywords is formed by the keywords previously found by Lerner et al. in~\cite{lerner_internet_2016} and by Iqbal et al. in~\cite{iqbal_fingerprinting_2021} for common stateless web tracking methods. We added some additional keywords for stateful tracking mechanisms (e.g. \textit{getCookie}, \textit{setCookie}, \textit{localStorage} or \textit{sessionStorage}). From the initial 3,109 files, 2,916 contained some of those keywords, with a median of 4 keywords per file. From the remaining 193 files, 50 were randomly selected and manually inspected. Approximately 76\% of them (38 out of 50) were recognized as tracking, while for the rest there was not enough evidence. Overall, the method presented more than 98\% of precision and automatically found about 3,400 false negatives.
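The keyword screening described above amounts to a simple scan over the downloaded scripts. A sketch follows (the folder name and the keyword subset are ours, for illustration only; the complete lists come from~\cite{lerner_internet_2016} and~\cite{iqbal_fingerprinting_2021}):
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
import json, pathlib, re

# illustrative subset only; the evaluation combines published keyword lists
KEYWORDS = ["toDataURL", "getImageData", "AudioContext", "enumerateDevices",
            "getCookie", "setCookie", "localStorage", "sessionStorage"]
pattern = re.compile("|".join(map(re.escape, KEYWORDS)))

hits = {}
for path in pathlib.Path("detected_tracking_js").glob("*.js"):  # hypothetical folder
    found = sorted(set(pattern.findall(path.read_text(errors="ignore"))))
    if found:
        hits[path.name] = found

print(len(hits), "files contain at least one keyword")
print(json.dumps(hits, indent=2))
\end{lstlisting}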
Three of the manually inspected files presented obfuscation techniques. ASTrack was able to automatically find structure coincidences with already known tracking files and correctly classify them. Fig.~\ref{fig:deobfuscation} shows some examples of obfuscated code and their alternative clear code found by ASTrack in different files. On the other hand, 1,564 (50.32\%) of the newly detected tracking files are webpackaged files. Webpack~\cite{noauthor_webpack_nodate} is a framework to easily pack different scripts inside only one file. It automatically looks for the needed dependencies and inserts them inside the file to allow for self-contained dynamic content. However, if the resource includes tracking libraries, privacy-protection tools cannot block it without suffering the functionality loss associated with blocking the rest of the included code. In contrast, ASTrack is able to detect inside them code portions used exclusively for tracking purposes.
\begin{lstlisting}[float=b, label=webpack_code, language=Caml, caption={Webpack files structure}, basicstyle=\scriptsize, numbers=none]
(window.webpackJsonp=window.webpackJsonp||[]).push([[261],{XygZ: ...
(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[4367],{4367: ...
(self.__LOADABLE_LOADED_CHUNKS__=self.__LOADABLE_LOADED_CHUNKS__||[]).push([[76429],{122954: ...
(window.webpackJsonpwebpackLogReporter=window.webpackJsonpwebpackLogReporter||[]).push([[5],{93: ...
\end{lstlisting}
\begin{figure}
\centering
\includegraphics[width=0.419\textwidth]{figures/Webpack.png}
\caption{\textbf{Webpack and tracking presence:} 6\% of the JavaScript URLs contain tracking code inside webpackaged files. Traditional privacy-protecting tools suffer from functionality loss when blocked. ASTrack granularity allows us to selectively block only the tracking portions.}
\label{fig:webpack_wild}
\vspace{-0.3cm}
\end{figure}
Listing~\ref{webpack_code} shows four examples of the first lines of webpackaged files. Although their code seems different, the structure of the code is always the same. Thus, thanks to ASTrack's ability to find shared AST structures, we can easily discover webpackaged files in the wild. To this end, we compared the initial portions of the AST identifiers of a subset of webpackaged files, and automatically extracted some patterns to identify them. Searching for those patterns in our collected AST data set, we accounted for the total number of webpackaged files and how many of them are classified as tracking. Fig.~\ref{fig:webpack_wild} shows the obtained results. About 39\% of files were safe, non-packaged files. On the other hand, approximately 48\% of the JavaScript files are webpackaged files, and 32\% include tracking. About a 6\% of the files are both webpackaged and include tracking. This represents 20\% of all the tracking URLs. Unfortunately, filter lists and other methods blocking complete resources will cause websites using those URLs to lose functionality. In contrast, ASTrack permits us to selectively block only the tracking ASTs while maintaining functionality in most cases. To discover if ASTrack is blocking complete files or only portions of them, we accounted for the number of tracking ASTs present in each tracking file. Fig.~\ref{fig:ast_distribution} shows the distribution of the number of tracking ASTs included inside tracking URLs. According to our results, more than 60\% of the detected tracking files include only one or two tracking ASTs (the median is 6). In comparison, the median of ASTs per file, including non-tracking ones, is 38 ASTs per file (35 for non-tracking URLs). Thus, ASTrack is selectively classifying only the branches identifying web tracking.
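The prefix comparison used to recognize webpackaged files is straightforward once the label chains are available. A minimal sketch (ours; the data layout is illustrative and the prefixes shown are not the actual patterns extracted in the evaluation):
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
def common_prefix(chains):
    """Longest common prefix of a set of label chains (lists of integers)."""
    prefix = list(chains[0])
    for chain in chains[1:]:
        i = 0
        while i < len(prefix) and i < len(chain) and prefix[i] == chain[i]:
            i += 1
        prefix = prefix[:i]
    return prefix

def count_webpacked(files, webpack_prefixes):
    """files: {url: (root_label_chain, is_tracking)} -> (packed, packed & tracking)."""
    packed = packed_and_tracking = 0
    for chain, is_tracking in files.values():
        if any(chain[:len(p)] == list(p) for p in webpack_prefixes):
            packed += 1
            packed_and_tracking += int(is_tracking)
    return packed, packed_and_tracking
\end{lstlisting}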
\begin{figure}
\centering
\includegraphics[width=0.447\textwidth]{figures/ast_distribution.png}
\caption{\textbf{Tracking AST distribution:} Distribution of the number of tracking ASTs included in tracking URLs. Most URLs contain only a few of them. Among all the ASTs included in a file (38 in the median), ASTrack was able to discover the few ASTs whose code is used for tracking purposes.}
\label{fig:ast_distribution}
\vspace{-0.3cm}
\end{figure}
\BgThispage
\subsection{Dynamic evaluation}
\label{worst_case}
Our second experiment allows us to evaluate the adaptation properties of ASTrack's algorithm. We run the system with the AST graph empty. Only the URL tracking information in the filter lists is available. In this case, we can study if, by applying the method to a common browsing session, the system is able to gradually discover new web tracking and correctly identify tracking ASTs. To this end, we consecutively feed ASTrack with data from one URL at a time. This forces the algorithm to progressively compute the connections between the ASTs loaded and their safety values. The URL insertion order is set to match the website rank of the Tranco List. As it is based on domain popularity, it is a good representation of a common browsing session, where pages that are very popular are more likely to be accessed before websites with a lower rank. For each domain, its accessed URLs are introduced arbitrarily, as online resources are mostly loaded asynchronously.
Table~\ref{tab:tracking_dataset} includes the obtained results. After the 8,179 domain insertions, ASTrack labeled as tracking 41,114 ASTs. This represents a reduction of about 16\% in comparison to the static evaluation process presented in the last section. However, ASTrack found 43,457 tracking URLs, a decrease of less than 3\% with respect to the complete model approach. Between them, 2,183 (5\%) were not previously included in the initial filter lists. Once more, we studied the 2,093 JavaScript files loaded by those URLs. Interestingly, comparing them with the subset of files found during the static evaluation, about 35\% of them were new resources, not previously classified as tracking. This highlights the main weakness of the filter lists: good precision but low recall. Many URLs incorrectly classified as safe can increase the overall safety value of their inner ASTs and not be detected as tracking using the complete graph. In contrast, progressively feeding ASTrack can help find them. Following the same methodology, we automatically checked for the inclusion of frequent tracking keywords inside the files. 1,868 files included at least one of them, with a median of 5 keywords per file. From the subset of files that do not include keywords and were not detected during the complete graph experiment, we randomly selected 50 files and manually inspected them. The inspection discovered only seven of them mistakenly classified as tracking (mainly \textit{reddit.com} static scripts). Finally, in line with our previous finding, 46.67\% of them were webpackaged files. Overall, as in the static evaluation, more than 98\% of the detected URLs were correctly classified, validating the adaptation properties of ASTrack to automatically discover web tracking.
\BgThispage
\subsection{Tracking removal and website breakage}
\label{tracking_removal}
To evaluate the tracking removal efficiency of ASTrack, we measure the functionality loss associated with it. In this work, for practical reasons, we use file replacements to test and validate our new web tracking removal methodology, similar to the proposal in~\cite{smith_sugarcoat_2021}. To generate the file replacements, we automatically remove all the code pertaining to tracking ASTs from the tracking files detected during the static and dynamic evaluations (25,840 files). Unfortunately, although web technologies have been around for about 30 years, there is not yet an established method to evaluate website breakage. Proposals such as \cite{choudhary_detecting_2011} and \cite{mesbah_automated_2011} were focused on breakage due to JavaScript engine rendering differences, looking at DOM discrepancies. However, removing web tracking systems modifies the DOM structure of the website but may not deteriorate its functionality, as tracking systems are additional components that are usually not related to the website content. Until now, subjective manual analysis has been used to detect functionality loss (e.g. \cite{iqbal_adgraph_nodate, iqbal_fingerprinting_2021, smith_sugarcoat_2021}). However, this approach does not scale, limiting the number of evaluated websites to the manual labor one can afford (a few dozen in previous works). In this work, we introduce an alternative methodology that uses computer vision techniques to discover websites suspicious of breakage prior to manual inspection.
The idea is to compute the similarity between screenshots taken with and without the modifications introduced in our process. Note that, as many websites include animations and other dynamically modified content, we need to account for the expected variability of a website. If the obtained screenshots are not similar enough, we consider the website suspicious of functionality loss. The proposed process is:
\begin{enumerate}
\item Collect multiple \textbf{independent screenshot data sets from two vanilla browsers in pairs} for the desired population. Each pair has to be collected in parallel to minimize the impact of external events between them (e.g., network congestion, periodic maintenance).
\item Collect \textbf{one more data set by replacing one vanilla browser with our modified approach}. In our case, we will use the file replacements created by removing the branches found in the last section.
\item Compare the obtained pairs of screenshot data sets using \textbf{Normalized Cross Correlation} (NCC)~\cite{zhao_image_2006} or another equivalent technique to measure the similarity between them (a code sketch of this step follows the list).
\item Compute the \textbf{expected similarity deviation per website} by means of the standard deviation and its confidence intervals between the multiple similarity measures obtained from the vanilla browsers.
\item Check if our \textbf{modified output similarity is within the expected deviation} for each website.
\item For websites that are not similar enough, \textbf{generate a diff file between both screenshots}, highlighting the pixels that are different between them. In a posterior phase, an expert can visually inspect the diff file along with the screenshots to better classify the suspicious websites.
\end{enumerate}
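A minimal version of the similarity computation in step 3 can be written with NumPy and Pillow, as sketched below. The sketch is ours and assumes equally sized grayscale screenshots; the evaluation in this paper relied on ImageMagick's \texttt{compare} tool, whose exact NCC normalization may differ slightly. The file names are placeholders.
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
import numpy as np
from PIL import Image

def ncc(path_a, path_b):
    """Zero-mean normalized cross-correlation of two equally sized screenshots."""
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.float64).ravel()
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 if denom == 0 else float(a @ b / denom)

def diff_mask(path_a, path_b):
    """Boolean mask of differing pixels, e.g., to render a red overlay."""
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.int16)
    return a != b

print(ncc("vanilla/example.com.png", "astrack/example.com.png"))
\end{lstlisting}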
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{figures/similarity.png}
\caption{\textbf{Similarity comparison (CDF):} Distribution of websites in function of their visual similarity percentage (Normalized Cross Correlation). Five data sets comparing two vanilla browsers and one more comparing a vanilla browser with our file replacements were taken.}
\label{fig:similarity}
\vspace{-0.3cm}
\end{figure}
Applying this methodology, we collected a new data set of the top 10k most popular websites with one vanilla browser alongside another browser using our file replacements. The resulting crawl contains information about 8,050 Internet domains, and our plugin replaced almost 23k elements with their clean versions. Surprisingly, although we are not actively blocking URLs, the vanilla browser loaded about 22k more resources than our system. Thus, this excess traffic can be directly attributed to tracking purposes. Approximately 72\% of domains (5,751 out of 8,050) benefited from a privacy improvement thanks to our removal system. On average, 4.05 JavaScript files were replaced in websites where tracking was detected. This represents a median of 62\% tracking reduction in comparison with the vanilla browser.
Next, we collected five independent screenshot data sets of two vanilla browsers in parallel for the same 10k websites. We used ImageMagick~\cite{imagemagick_compare} to compute the NCC similarity values and the diff files between all the pairwise obtained data sets. Finally, we computed the expected similarity deviation as well as the 95\% confidence intervals for each website from the measures collected in the vanilla comparisons. Fig.~\ref{fig:similarity} shows the cumulative distribution function (CDF) values for each of the pairwise data set comparisons. The difference obtained by comparing two vanilla browsers is very similar for all five collected data sets. Only about 62\% of websites are completely equal between both vanilla browsers, with another 30\% of them having similarity values higher than 80\%. The remaining 8\% present a difference bigger than 20\% due to dynamic content and animations. In contrast, in our privacy-friendly browser, there are about 6\% to 7\% of websites that present similarity values lower than their counterparts in the vanilla browser. Overall, looking at the expected deviation, 1,753 websites using file replacements (21.7\%) were classified as suspicious of website breakage.
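The decision rule of steps 4 and 5 then reduces to checking each website's ASTrack similarity against the variability observed across the vanilla runs. A sketch of ours (the exact confidence-interval construction used for the reported numbers may differ):
\begin{lstlisting}[language=Python, basicstyle=\scriptsize, numbers=none]
import numpy as np

def is_suspicious(vanilla_similarities, astrack_similarity, z=1.96):
    """Flag a website when the ASTrack similarity falls below the band expected
    from vanilla-vs-vanilla variability (mean minus z standard deviations)."""
    mu = np.mean(vanilla_similarities)
    sd = np.std(vanilla_similarities, ddof=1)
    return astrack_similarity < mu - z * sd

# five vanilla-vs-vanilla measurements for one site, plus the ASTrack measurement
print(is_suspicious([1.0, 0.98, 0.99, 1.0, 0.97], 0.83))  # True
\end{lstlisting}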
\begin{figure}
\centering
\includegraphics[width=0.449\textwidth]{figures/visualizer.png}
\caption{\textbf{Visualizer:} Self-developed visualization tool to easily inspect and compare screenshots. A diff mask can be applied to highlight the differences (red pixels).}
\label{fig:visualizer}
\vspace{-0.3cm}
\end{figure}
In order to explore them, we developed a small visualization tool that allows us to easily switch between the screenshots. It also allows us to imprint the pixel differences between them as a mask over any of the pictures. Fig.~\ref{fig:visualizer} shows a screenshot of the tool. We included a set of checkboxes to easily classify the reason for the similarity gap and also filters to group them by type. Using the tool, we were able to inspect all the suspicious websites in less than one day. The classification is divided into 9 groups, highlighting the main reasons for the screenshot difference:
\BgThispage
\begin{itemize}
\item \textbf{Animation}: Elements with animations or changing at defined intervals, such as timed sliding banners.
\item \textbf{Banner}: Advertisements or other banners.
\item \textbf{Broken}: Functionality loss or website breakage.
\item \textbf{Cookie}: Missing cookie banners.
\item \textbf{Fonts}: The original font is not available, and a default one is used instead (minimal impact).
\item \textbf{Media}: Dynamically modified media content (e.g., videos, pictures, logos, icons).
\item \textbf{Minor}: Minor dynamic content that usually varies with time, such as clocks, numbers of visits, views, etc.
\item \textbf{Text}: Dynamically modified text content.
\item \textbf{Tracking}: Visually visible tracking elements such as missing social network icons, captchas, anti-adblockers, or country detection pop-ups.
\end{itemize}
The only category considered as website breakage is the ``broken'' one. Note that it does not only contain functionality loss but also usability problems and aesthetically unpleasant modifications that would be obvious to the common user (e.g. missing icons, pictures, or broken animations). Websites have been classified into more than one group when needed. In particular, some anti-adblocking systems detected our system and blocked the website. In this case, the website has been considered both ``tracking'' and ``broken''.
\begin{figure}
\centering
\includegraphics[width=0.449\textwidth]{figures/category.png}
\caption{\textbf{Visual difference reason:} Main reason for the similarity gap between the vanilla browser and ASTrack (yellow). The figure also includes the same results when blocking URLs instead of only ASTs (blue). ASTrack reduces the number of broken websites by 36\% (192 vs. 301) by removing only the tracking ASTs and keeping the rest of the file intact.}
\label{fig:classification}
\vspace{-0.3cm}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.94\textwidth]{figures/heatmaps.png}
\caption{\textbf{Difference heatmaps:} Heatmaps containing the pixel distribution of the difference between the vanilla browser and our system. Some of them, such as the \textit{cookie} or \textit{banner} heatmaps, present clear patterns. The \textit{animation} heatmap highlights the usual position of the main sliding banner. Others, like the \textit{media} and \textit{broken} heatmaps, share many characteristics.}
\label{fig:heatmaps}
\vspace{-0.3cm}
\end{figure*}
Fig.~\ref{fig:classification} shows the classification of the 1,753 suspicious websites. In order to compare the results with the traditional tools, the figure also includes the classification for the same subset of websites, but blocking URLs instead of using the files cleaned by ASTrack. The results show that using ASTs to selectively remove tracking code decreases the number of broken websites by 36\% (192 pages vs 301 pages). Most of the websites broken by blocking URLs but functional by blocking ASTs include broken animations (43.7\%) or missing media files (39.4\%). Overall, using ASTrack only 10.9\% of the suspicious websites presented functionality loss or visual problems. The rest were mostly due to dynamic content modifications (60.58\%), missing cookie banners (16.77\%) or different fonts (3.59\%). Fig.~\ref{fig:heatmaps} contains heatmaps highlighting the visual difference distribution for each of the categories. The figure presents clear patterns for the ``cookie'' and ``banner'' positioning as well as for the animation category, which is mainly composed of sliding animations in the center of the page. As expected, dynamic ``text'' and ``media'' are mostly distributed in the content portion of the website. ``Minor'' category only has a few instances, making them very recognizable. In total, ASTrack functionality loss was only present in 2.3\% of websites (192 out of 8,050).
\section{Conclusions and Future Work}
\label{conclusions}
In this work, we presented ASTrack, a new methodology to detect and selectively remove web tracking systems. To this end, it uses an abstraction of the website's JavaScript AST structure that represents its functionality. The method works by identifying tracking functionality shared by multiple websites. The abstraction from the actual code makes the system robust against obfuscation and other similar renaming techniques, a common problem in many other solutions. Moreover, the high granularity achieved by the method allows the system to automatically prune only the code pieces exclusively dedicated to tracking purposes, minimizing the functionality loss. We also presented a new methodology to discover website breakage that compares the visual differences of the websites in order to highlight those suspicious of being broken.
Our results show that ASTrack achieves a detection precision higher than 98\%. Moreover, thanks to its adaptability, in the evaluation of the top 10k most popular websites, ASTrack found more than 3,400 new tracking URLs (a 7.62\% increase) and identified almost 50k tracking ASTs. Using the selective tracking removal to clean tracking files, ASTrack achieved a 62\% tracking removal in more than 72\% of the websites. Moreover, it also obtained a 36\% decrease in functionality loss in comparison with the filter lists. We estimate that almost 98\% of the websites maintained full functionality thanks to the high granularity of working with the AST structure, the main contribution of this work. Our future work includes improving the ASTrack tracking removal system to intelligently substitute branches instead of completely removing them, further decreasing the functionality loss. We also plan to improve web tracking detection by studying simple AST transformations useful to find mostly equivalent ASTs that present small differences in their structure. Finally, we also expect to improve the website breakage detection method by applying ML to automatically classify the visual differences of suspicious websites using the obtained heatmaps and data sets.
The data sets and source code of ASTrack are publicly available at~\cite{ismael_castell-uroz_online_2020}.
\BgThispage
\section{Acknowledgments}
This publication is part of the Spanish I+D+i project TRAINER-A (ref.~PID2020-118011GB-C21), funded by MCIN/ AEI/10.13039/501100011033. This work is also partially supported by the NII internship program.
\bibliographystyle{ieeetr}
\balance
With recent developments in data science and computational tools, machine learning algorithms have been increasingly applied in different engineering and science areas to model physical phenomena. The data from physical experiments and numerical simulations are a source of knowledge about the physical world, on which data-driven methods could be performed to extract new physical laws \citep{huang2020data,raissi2017physics,carleo2019machine,kutz2017deep,wang2017physics,li2019accelerating}.
For example, in turbulence RANS modeling in fluid mechanics, traditional modeling methods have failed in many flow scenarios. A unified RANS model that can successfully describe complex flows, including boundary layers, strong rotation, and separation, still does not exist to the authors' knowledge \citep{durbin2018some,ling2015evaluation}. On the other hand, advanced measurements and direct numerical simulations provide plenty of data that could be utilized to establish and validate new models. With the above argument, data-driven methods are particularly suitable for turbulence modeling and some other areas in physics and engineering. There have been many attempts to discover new turbulence models using machine learning methods. Milano and Koumoutsakos \citep{milano2002neural} reconstructed near-wall flow using neural networks and compared their results with linear methods (POD). Zhang and Duraisamy \citep{zhang2015machine} used Gaussian process regression combined with an artificial neural network to predict turbulent channel flow and bypass transition. Beck, Flad, and Munz \citep{beck2019deep} applied a residual neural network to Large Eddy Simulation. Chen et al. proposed an ODE network to learn differential equations in general \citep{chen2018neural}.
Physical laws often appear in the form of tensorial equalities which inherently obey certain types of symmetry. For example, the constitutive laws in fluid and solid mechanics should obey translation and rotation invariance \citep{mase2009continuum}. The turbulence RANS model is a local tensorial equality between the mean velocity gradient and the Reynolds stress, and it should also be rotation invariant \citep{pope2001turbulent,pope1975more}. However, machine learning methods for RANS modeling do not automatically guarantee rotation invariance if we use Cartesian components of tensors as input and output of the training data. This problem has been addressed by \citep{ling2016machine,wang2017physics}. In \citep{ling2016machine,ling2016reynolds}, the Reynolds stress is expressed as a general expansion of a nonlinear integrity basis multiplied by scalar functions of invariants of the strain rate and rotation rate tensors. Machine learning is performed to find these scalar functions of the tensor invariants. Mathematically this expansion comes from an application of the Cayley-Hamilton theorem. The special case used in \citep{ling2016machine,ling2016reynolds} is derived by S.~B.~Pope in \citep{pope1975more}. Although such a construction is general and possible for higher-order tensors and tensor tuples containing multiple tensors, the number of basis elements and the derivation complexity grow exponentially and become prohibitive for real applications \citep{johnson2016handbook,smith1965isotropic}.
Why is this problem of rotation-equivariance hard to solve? At first glance, knowing that a system is rotation-equivariant provides additional information about it. However, a generic learner cannot exploit this prior directly: the rule of rotation symmetry must itself be extracted from the existing data, which increases what the learning algorithm has to learn \citep{mohri2018foundations}.
In this case, the property of rotation-equivariance can be considered as a continuous group action.
There is limited research in the field of deep learning that considers the preservation of symmetries under continuous group actions for physical systems.
A second difficulty is that continuous information is hard to absorb. If we consider a machine learning algorithm as an information compression process from input to output \citep{saxe2019information}, a continuous transformation such as rotation is difficult for learning algorithms to absorb.
Given the universal approximation theorem of \citep{hornik1989multilayer}, it would seem that the application of neural networks, especially deep neural networks, could solve any problem. As formulated by \citep{muller1999application,weatheritt2017comparative,qin2019data,han2019solving}, advanced machine learning methods, especially deep neural networks \citep{lecun2015deep}, seem to provide a new opportunity for approximating physical equations. However, in the case of rotation symmetry, if we use a multilayer perceptron $M$ to learn the relation $f$, then most likely $M$ does not preserve rotation-equivariance. In general, neural network function classes do not satisfy rotation equivariance.
There have been previous works considering group-equivariance with convolutional neural networks in image recognition. A general method has been proposed using group convolution \citep{cohen2016group,esteves2020theoretical,esteves2019equivariant}. Based on the idea of using convolution, several methods composed steerable filters for rotation-equivariance in convolutional neural networks \citep{weiler2018learning,cheng2018rotdcf,finzi2020generalizing,gao2019rotation}. However, these works cannot be directly applied to physical systems either. One of the most important reasons is that the rotation operation on an image is different from the rotation operation on physical systems. A rotation of an image is a transformation in polar coordinates centered at a certain point \citep{foley1996computer}, which is different from the rotation operation on tensors. Additionally, these methods have the strong restriction that the model must be built on convolutional neural networks. Yet, for physical systems, convolutional neural networks might not be the best choice since they are designed for image processing.
The problem of rotation-equivariance also cannot simply be solved by data augmentation and preprocessing. As mentioned in previous works \citep{ling2016machine}, a typical solution is to apply data augmentation. However, data augmentation fails to give a theoretical guarantee of rotation-equivariance with a finite sample set: it only reaches rotation-equivariance asymptotically, in the infinite-sample limit.
Such a dataset is not only difficult to obtain but also requires much higher computational power while training the model. In the case of naive preprocessing methods, the problem is that there are limited theoretical tools to deal with high-order tensors, and only a few methods available for low-order tensors. It is hard to apply specific techniques, such as diagonalization, directly to high-order tensors. Since naive data preprocessing methods are not applicable, a more elaborate method with a theoretical guarantee is needed to solve this problem.
In this paper, we establish Rotation-Equivariant Network (RotEqNet), a new data-driven framework, which guarantees rotation-equivariance at a theoretical level.
Different from previous methods, we first find a way to preserve the rotation operation via tensor contraction. The proposed position standardization algorithm links a high-order tensor to a low-order tensor carrying the same rotation operation.
By applying mathematical tools for low-order matrices (diagonalization and QR factorization), the desired standard position is derived from the rotation matrix obtained in the previous step. The standard position algorithm is proven to be rotation-invariant in Theorem \ref{thm:1}, \emph{i.e.} two tensors that differ by a rotation have the same standard position. Therefore, the learning rules based on standard positions form a quotient space of the original rules over randomly rotated positions \citep{weiler2018learning,zhou2019continuity}. In this way, RotEqNet lowers the training difficulty of a randomly positioned dataset. Further, RotEqNet is also proven to be rotation-equivariant, as we show in Theorem \ref{thm:2}. These advantages of RotEqNet result in an observable error reduction compared to previously introduced data-driven methods.
We applied RotEqNet to four case studies covering second-order, third-order, and fourth-order tensors. These case studies are designed based on Newtonian fluids, large-eddy simulation, and electrostriction. Improved performance is observed when using RotEqNet: the error is reduced by 99.6\%, 15.62\%, and 54.63\% for the second-, third-, and fourth-order case studies, respectively.
Our contribution in this paper is three-fold:
\begin{enumerate}
\item We show an important property of the contraction operation on tensors: contraction commutes with rotation for tensors of arbitrary order. This is stated in Lemma \ref{lemma2.3}.
\item We propose RotEqNet together with a position standardization algorithm that guarantees rotation-equivariance. We prove the rotation invariance of the position standardization algorithm in Theorem \ref{thm:1} and the rotation-equivariance of RotEqNet in Theorem \ref{thm:2}.
\item We implement the proposed algorithm and the RotEqNet architecture, and conduct case studies to demonstrate the soundness of the design and its superiority over baseline methods.
\end{enumerate}
To outline the structure of this paper: in Section \ref{sec:preliminaries} we introduce basic definitions of rotation for tensors (tuples) of arbitrary order and related concepts. In Section \ref{pre:rotEq} we formulate rotation invariance (equivariance) for supervised learning methods. RotEqNet and the main algorithm are presented in Section \ref{sec:roteqnet}, and numerical results are shown in Section \ref{sec:expr}.
\section{Preliminaries and Problem Description}
\label{sec:preliminaries}
\subsection{Tensor and its operations}
\label{sec:tensorDef}
In this section, we first introduce an abstract way of defining tensors. One reason to introduce this more abstract way of thinking about tensors is that it provides a convenient formalism for the operations we will perform on the tensorial data discussed in the previous section. The operations are
\begin{enumerate}
\item Linear transformation
\item Contraction
\end{enumerate}
The formalism helps us to prove that these two operations commute, which lays the theoretical ground for the computation of a representative of rotationally-related tensors. We will call this representative the \emph{standard position}.
\subsubsection{Abstract definition of tensors}
Following \cite{curtis2012abstract}, fix a vector space $V$ of dimension $n$ over $\mathbb{R}$. A \emph{tensor product}
$V\otimes V$ is a vector space with the property that $\mathbb{R}$-bilinear maps
$V \times V \rightarrow \mathbb{R}$ are in natural one-to-one correspondence
with $\mathbb{R}$-linear maps $V \otimes V \rightarrow \mathbb{R}$.
The tensor product $V\otimes V$ can be constructed as the
quotient vector space $V\times V / C$, where $C$ is generated by
vectors of the following types
\begin{gather}
(i)\;(x+y, z) - (x, z) - (y, z) \nonumber\\
(ii)\;(x, y+z) - (x, y) - (x, z) \nonumber\\
(iii)\;(ax, y) - a(x, y) \nonumber\\
(iv)\;(x, ay) - a(x, y)
\end{gather}
where $x$ and $y$ are vectors in $V$ and $a$ is a scalar in $\mathbb{R}$. This means
any element in $C$ can be written as a linear combination of vectors of the
above form. $C$ is not necessarily a vector space of finite dimension.
But the quotient space $V\otimes V$ is. Let $g: V \times V \rightarrow V\otimes V$
be the natural projection map, then we use $x\otimes y$ to denote the image of $(x, y)$
under $g$.
Let $\langle e_1,\cdots, e_n\rangle$ be a basis of $V$, then $e_i\otimes e_j$ for
$i=1,...,n$ and $j=1,...,n$ form a basis of $V\otimes V$.
This means any vector $p \in V\otimes V$ can be written as
\begin{equation}
\sum_{i,j}a_{ij}e_i\otimes e_j
\end{equation}
for some $a_{ij} \in \mathbb{R}$.
Here are some relations of tensors which come directly as a consequence
of the relations generating $C$:
\begin{equation}
a(e_i\otimes e_j) = a e_i \otimes e_j = e_i \otimes a e_j
\end{equation}
\begin{equation}
(a_i e_i + a_j e_j)\otimes (a_k e_k) = a_ia_k(e_i\otimes e_j) +
a_ja_k (e_j\otimes e_k)
\end{equation}
The representation of a tensor in $V\otimes V$ is similar to the representation
of a linear map $V \rightarrow V$, i.e. a matrix. In fact, there is a natural way
to think of a tensor as a linear map:
For each element $e_i\otimes e_j$ in the basis of $V\otimes V$, we can think of it as
a linear map $V \rightarrow V$ by defining $e_i\otimes e_j (v) = e_i\langle e_j,v\rangle$, where
$\langle\cdot,\cdot\rangle$ is the natural inner product on $V$.
Extending the definition linearly to every element in $V\otimes V$, we obtain a way to
identify $V\otimes V$ with the space of linear maps $V \rightarrow V$. In fact, the
tensor $\sum_{i,j}a_{ij} e_i\otimes e_j$ corresponds to the linear map represented
by the matrix $[a_{ij}]$.
We have defined the tensor product $V\otimes V$ over $V$. The definition/construction of order $k$ tensor $\overbrace{V\otimes\cdots\otimes V}^{k}$ follows the same course. We will denote order $k$
tensor by $\otimes^k V$.
The basis of $\otimes^k V$ is given by $e_{i_1}\otimes\cdots\otimes e_{i_k}$, where
each index $i_j$ ranges over $1,\ldots,n$ for $j=1,\ldots,k$. With respect to this basis, any order $k$ tensor can
be written as $\sum_{i_1,\cdots,i_k}a_{i_1,\ldots,i_k}e_{i_1}\otimes\cdots\otimes e_{i_k}$.
Analogous to the order 2 case, we can think of an order $k$ tensor as a $k$-dimensional
matrix, the typical way a tensor in physical experiments are represented.
We will use $T^k$ to denote a tensor of order $k$, i.e. a vector in $\otimes^kV$.
$k$ is called the rank of the tensor.
\subsubsection{Rotation on tensors: a linear transformation}
A linear transformation on higher-order tensor is a generalization of a linear transformation on the first-order tensor, i.e. a vector.
\iffalse
Rotation is a special kind of linear transformation where $R\in SO(n)$. An important connection in our paper is under rotation operation on tensors in a matrix form.
\fi
Let $g: V \rightarrow V$ be a linear transformation. Using the basis
$\langle e_1, \cdots, e_n\rangle$ of $V$, we can represent $g$ by the equation
\begin{equation}
g(e_i) = \sum_{j=1}^n a_{ij}e_j
\end{equation}
Let $M(g)$ denote the matrix representation of $g$ with respect to
the basis $\langle e_1, \cdots, e_n\rangle$. Then
\begin{equation}
M(g) = [a_{ij}]^t
\end{equation}
i.e., the transpose of the matrix $[a_{ij}]$.
The map $g$ naturally induces a map $\otimes^k g$ on $\otimes^k V$.
On the basis element $e_{i1}\otimes\cdots\otimes e_{ik}$, the action of
$\otimes^k g$ is defined as
\begin{equation}
e_{i1}\otimes\cdots\otimes e_{ik} \mapsto
g(e_{i1}) \otimes\cdots\otimes g(e_{ik})
\end{equation}
For any tensor $T \in \otimes^kV$, we will use $g(T)$ to denote
the image of $T$ under this extension of $g$ to $\otimes^k V$.
There is a convenient way to represent a linear transformation
of a 2-tensor as matrix multiplication.
For a 2-tensor $T = \sum_{i, j}b_{ij} e_i\otimes e_j$, let
$M(T)$ be the matrix whose $(i, j)$ entry is $b_{ij}$.
\begin{lem}
Rotation operation by matrix $R$ on second-order tensor (matrix) is a change of basis operation.
\begin{equation}
M(R(T)) = M(R)\times M(T) \times M(R)^t,
\end{equation}
\label{lemma2.1}
where $\times$ here means the usual matrix multiplication.
\end{lem}
\begin{rem}
Rotation operation by matrix $R$ on first-order tensor (vectors) $T$ could be viewed as
\begin{equation}
M(R(T)) = M(R)\times M(T).
\end{equation}
\label{lemma2.2}
\end{rem}
The proofs of Lemma \ref{lemma2.1} and Remark \ref{lemma2.2} are given in \ref{appendix:tensor}.
Lemma \ref{lemma2.1} and Remark \ref{lemma2.2} will be used in the proof of Theorem \ref{thm:1}. As shown in this subsection, one can use the matrix form of the rotation operation, with the corresponding rules of matrix multiplication, to perform a rotation on a tensor. In the following proofs of this paper, we apply this idea to perform rotation operations on tensors via matrix multiplication.
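To make the matrix form of the rotation action concrete, the following short NumPy sketch (an illustration added for readability, not part of the derivation; it assumes \texttt{scipy} is available for sampling random rotations) checks Lemma \ref{lemma2.1} and Remark \ref{lemma2.2} numerically.
\begin{verbatim}
import numpy as np
from scipy.stats import special_ortho_group

R = special_ortho_group.rvs(3)   # random rotation matrix in SO(3)
v = np.random.randn(3)           # first-order tensor (a vector)
T = np.random.randn(3, 3)        # second-order tensor (a matrix)

rotated_v = R @ v                # rotation of a vector: M(R) M(v)
rotated_T = R @ T @ R.T          # rotation of a matrix: M(R) M(T) M(R)^t

# A rotation is a change of basis, so invariants such as the trace survive.
assert np.isclose(np.trace(T), np.trace(rotated_T))
\end{verbatim}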
\subsubsection{Contraction on tensors: reduction of order}
Let $\langle, \rangle$ be the standard inner product on $V$. Using this inner product, we can define the contraction of a tensor. It ``merges'' vectors on the specified axes using the inner product and reduces the rank of the tensor by 2. Formally, let $C(a, b)$ denote the
contraction along axis-$a$ and axis-$b$. Here, the axis means the ordinal of $V$ in $\otimes^kV$. For example, axis-$1$ refers to the first copy of $V$ in
$\otimes^kV$.
On the element $v_{i1}\otimes\cdots\otimes v_{ik}$, $C(a, b)$
acts by
pairing $v_{ia}$ and $v_{ib}$ via the inner product $\langle, \rangle$, i.e.
\begin{equation}
C(a, b)(v_{i1}\otimes\cdots\otimes v_{ik}) = \langle v_{ia}, v_{ib}\rangle
v_{i1}\otimes\cdots \check{v_{ia}}\cdots\check{v_{ib}}\cdots\otimes v_{ik}
\end{equation}
where $\check{v}$ means $v$ is not present.
We can then define $C(a, b)$ on $\otimes^kV$ by extending linearly.
When $k=2$, contraction is nothing other than taking the trace of the
corresponding matrix.
\begin{lem}
Let $R: V \rightarrow V$ be a rotation. Let $T \in \otimes^k V$, then
\begin{equation}
C(a,b)(R(T)) = R(C(a,b)(T))
\end{equation}
\label{lemma2.3}
\end{lem}
Lemma \ref{lemma2.3} shows an interesting connection between rotation and contraction: the contraction of a tensor is compatible with a linear transformation whenever that transformation is a rotation. This lemma is the foundation of the analysis in this paper; we will use it to extract the rotation acting on a tensor of arbitrary order from its lower-order contractions. The proof is given in \ref{appendix:tensor}.
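As a numerical illustration of Lemma \ref{lemma2.3} (a sketch under the assumption that tensors are stored as NumPy arrays and that the rotation of an order-$k$ tensor acts on every axis), the following code contracts a fourth-order tensor along its first two axes before and after a rotation and compares the results.
\begin{verbatim}
import numpy as np
from scipy.stats import special_ortho_group

def rotate(T, R):
    """Apply the rotation R to every axis of the order-k tensor T."""
    for axis in range(T.ndim):
        T = np.tensordot(R, T, axes=([1], [axis]))
        T = np.moveaxis(T, 0, axis)
    return T

def contract_first_two(T):
    """Contraction C(1, 2): pair the first two axes via the inner product."""
    return np.trace(T, axis1=0, axis2=1)

R = special_ortho_group.rvs(3)
T4 = np.random.randn(3, 3, 3, 3)           # an order-4 tensor

lhs = contract_first_two(rotate(T4, R))    # C(1,2)(R(T))
rhs = rotate(contract_first_two(T4), R)    # R(C(1,2)(T))
assert np.allclose(lhs, rhs)               # contraction commutes with rotation
\end{verbatim}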
\label{sec:prob}
\subsection{Supervised learning setup}
In our problem, we are given a data set $\mathcal{D}=\{X_i;y_i\}_{i=1, ..., N}$ containing $N$ input-output pairs $(X_i;y_i)$. The input here is a tensor tuple:
\begin{equation}
X_i=[X_1, X_2, ..., X_{N_x}]
\end{equation}
where $N_x$ is the length of $X_i$. Normally, we only have one output.
Generally speaking, following the definitions of \cite{bishop2006pattern,tao2005supervised}, parametric supervised learning can be viewed as a model composed of two parts. The first part is a
predictor: given parameters $\theta$, we have
\begin{equation}
\hat{y}=\mathcal{M}^{\theta}(X_i)
\end{equation}
where $\hat{y}$ is the prediction output of the learning model $M$ and $\theta$ is the parameter of $M$; the predictor outputs a value based on the input $X_i$.
The second part is an optimizer, which updates the parameter $\theta$ based on a loss function. For a regression model, a typical loss function would be defined as:
\begin{equation}
L(M, \theta)=\frac{1}{N} \sum_{i=1}^{N} \|y_i-M^{\theta}(X_i)\|^2,
\end{equation}
where $\|\cdot\|$ represents 2-norm.
We aim to minimize this loss function, formulated as:
\begin{equation}
\hat{\theta} = \arg\min_{\theta} L(M, \theta)
\end{equation}
where $\mathcal{M}$ is a learning model and $\mathcal{M}^{\hat{\theta}}$ is the optimal solution. Specifically, in this work, we applied Neural Networks \citep{specht1991general} and Random Forests \citep{liaw2002classification} in the case studies.
\subsection{Obtaining rotation-equivariance properties in systems using supervised learning}
\label{pre:rotEq}
Group equivariance is an important property of most physical systems. Typical examples are rotation, scaling, and translation group equivariance. Mathematically, equivariance is the property of a mapping $f:X\rightarrow Y$ to commute with a group action on $X$ and $Y$. Specifically, let $R\in SO(n)$ be a rotation action. $f:X\rightarrow Y$ is rotation-equivariant if
\begin{equation}
f(R(x))=R(f(x)), \;\;\;\forall R \in SO(n),\: x \in X.
\end{equation}
As a special case of rotation-equivariant, a function $f:X\rightarrow Y$ is rotation-invariant if:
\begin{equation}
f(R(x))=f(x), \;\;\;\forall R \in SO(n),\: x \in X.
\end{equation}
Since supervised learning models can be considered as functions, denote a machine learning model by $M^{\theta}$. For a rotation operation $R$, we hope to obtain the property:
\begin{equation}
M^{\theta}(R (x))=R(M^{\theta}(x)), \;\;\;\forall R \in SO(n),\: x \in X
\label{def:rotEq}
\end{equation}
In the analysis of Sec. \ref{sec:analysis}, we prove the rotation-equivariance property following the definition in Equ. \ref{def:rotEq}: a system that satisfies Equ. \ref{def:rotEq} is rotation-equivariant.
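In code, the definition in Equ. \ref{def:rotEq} can be checked empirically for any candidate model. The sketch below is illustrative only; it assumes the model maps 3-vectors to 3-vectors and reports the largest violation over a set of random rotations.
\begin{verbatim}
import numpy as np
from scipy.stats import special_ortho_group

def equivariance_gap(f, x, n_rotations=100):
    """Largest deviation ||f(R x) - R f(x)|| over random rotations R.

    A (numerically) zero gap means f behaves rotation-equivariantly at x.
    """
    gaps = []
    for _ in range(n_rotations):
        R = special_ortho_group.rvs(3)
        gaps.append(np.linalg.norm(f(R @ x) - R @ f(x)))
    return max(gaps)

# Example: the identity map is trivially rotation-equivariant.
print(equivariance_gap(lambda v: v, np.random.randn(3)))
\end{verbatim}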
\subsection{Modeling symmetric fluid systems via supervised learning}
The machine learning approach to fluid dynamics modeling involves training a supervised learning model $\mathcal{M}$ using
$X_i$ as features and $Y$ as the label.
In our case, the underlying space $S$ of the fluid dynamic system is closed under rotation. This means for every rotation
$R: \mathbb{R}^n \rightarrow \mathbb{R}^n $, $R(p) \in S$ for all $p \in S$.
The objects we want to model via machine learning are rotation-equivariant tensorial
fields on $S$.
Let $X$ be a tensorial field on $S$, for any point $p \in S$, we use $X(p)$ to denote
the tensor at $p$ (for example, pressure at a particular point in a fluid dynamics system). $X$ is said to be \emph{rotation-equivariant} if for all point $p \in S$ and
all rotation $R$
\[
X(R(p)) = R(X(p))
\]
Suppose one has tensorial fields $X_1,\cdots, X_n, Y$ on $S$ such that
$X_i$ and $Y$ are related by some unknown physical law $f$ such that
\[
f(X_1, \cdots, X_n) = Y
\]
Supervised machine learning methods can be used here to learn a function $\mathcal{M}$
that approximates $f$ such that $\mathcal{M}$ generalizes well on new data.
If those tensorial fields are rotation-equivariant, then the underlying law
$f$, as well as its proxy $\mathcal{M}$, should naturally be rotation-equivariant.
\section{Rotation Equivariant Network}
\label{sec:roteqnet}
In this section, we propose the Rotation Equivariant Network (RotEqNet) to solve rotation problems for high-order tensors in fluid systems. RotEqNet is based on the position standardization algorithm, which we discuss further in Section \ref{RotInvExtrAlgo}. We first provide a general description of the whole architecture in Section \ref{motivation}.
\subsection{Architecture} \label{motivation}
As shown in Figure~\ref{fig:RotEqNet}, RotEqNet generally goes through three important steps: position standardization, prediction by the kernel predictor, and position resetting. Specifically, position standardization is an algorithm that transfers the incoming tensor to its standard position; in Figure~\ref{fig:RotEqNet}, the ``even order standardization'' and ``odd order standardization'' blocks denote this algorithm. Then, $X_s$ is the standard position of the input tensor $X$, and $R$ is the extracted rotation operation that transfers between the standard position and the original position. The kernel predictor only deals with standard positions, so its output $y_s$ is in the standard position as well. Finally, rotating the output $y_s$ back to the original position gives our final prediction. A general mathematical description of this process is:
\begin{figure}
\centering
\includegraphics[scale=0.5]{RotEqNetArc.pdf}
\caption{Rotation-equivariant Network Architecture}
\label{fig:RotEqNet}
\end{figure}
\begin{equation}
y=R(M^{\theta}(R^{-1}(T)))
\end{equation}
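To illustrate the three steps (standardize, predict on the standard position, rotate back), a minimal prediction wrapper might look as follows. The helpers \texttt{standardize}, \texttt{kernel\_predictor} and \texttt{rotate} are placeholders for the components described above and are assumptions of this sketch, not part of the released implementation.
\begin{verbatim}
def roteqnet_predict(X, standardize, kernel_predictor, rotate):
    """RotEqNet prediction: y = R( M( R^{-1}(X) ) ).

    standardize(X)      -> (X_s, R): standard position X_s = R^{-1}(X) and
                           the rotation R mapping it back to the original pose
    kernel_predictor(.) -> prediction y_s for a tensor in standard position
    rotate(T, R)        -> action of the rotation R on the tensor T
    """
    X_s, R = standardize(X)        # position standardization
    y_s = kernel_predictor(X_s)    # predictor only sees standard positions
    return rotate(y_s, R)          # reset the position of the output
\end{verbatim}
During training, the same standardization is applied to both inputs and labels, so the kernel predictor is fitted entirely in the quotient space of standard positions.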
How does this process help to solve rotation problems for high-order tensors? \label{reason}
One important reason is the reduced function space for learning. When a learning model is trained only on standard positions, it no longer has to deal with the entire group action, and only needs to focus on the pattern imposed by the underlying physical equation.
Name the rotation group $G$ and consider the full function space $\mathbf{C}(X,Y)$. As mentioned in \citep{weiler2018learning}, instead of performing regression on $\mathbf{C}(X,Y)$, RotEqNet essentially explores the much smaller space $\mathbf{C}(X/G,Y/G)$. The reduction of input-output dimensionality makes training easier: with the same number of samples, the pattern to be learned lives in a far smaller space.
The second reason is that RotEqNet provides a theoretical guarantee of rotation-equivariance. Utilizing rotation symmetry as a strong prior for most physical systems, RotEqNet generalizes better when learning from a limited amount of data.
The following subsections introduce the position standardization algorithm in full and prove its rotation invariance in Theorem \ref{thm:1}. We then prove that RotEqNet is rotation-equivariant in Theorem \ref{thm:2}.
\subsection{Position standardization algorithm for High Order Tensors}
\label{RotInvExtrAlgo}
Let $\mathcal{D}$ denote our data set.
The first stage of RotEqNet is to find a good representative of all tensors that are related to each other by rotation. We will call this representative the sample in ``standard position'', and we will denote it by $(X_s, Y_s)$. We will use $S$ to denote the position standardization algorithm and $S(X, Y) = (X_s, Y_s)$ to mean reducing $(X, Y)$ to its standard position.
$S$ has the following property that $\forall (X, Y) \in \mathcal{D}$ and all rotation operation $R\in SO(n)$,
\begin{equation}
S(R(X), R(Y)) = (X_s, Y_s).
\label{def:standardpos}
\end{equation}
This means,
$S$ produces exactly the same output no matter how $(X, Y)$ is
rotated, \emph{i.e.} it is rotation-invariant.
Intuitively, for a tensor $T$, we are selecting a representative on the orbit $O(T)$ (where $O(T)=\{R\cdot T|R\in SO(n)\}$) as the rotation invariant of $T$ \citep{pinter2010book}. In our algorithm, we first apply tensor contraction to the higher-order tensor, reducing its order to obtain a lower-order tensor. Then, using diagonalization in the even case and QR factorization in the odd case, the algorithm obtains a rotation operation acting on $T$. Finally, it obtains a tensor in standard position by rotating the original tensor $T$ with the inverse of the obtained rotation matrix.
This operation is compatible with the theoretical result shown in Lemma \ref{lemma2.3}.
\subsubsection{Tensor of even order}
\begin{figure}
\centering
\includegraphics[scale=0.6]{EvenOrderAlgo.pdf}
\centering
\caption{Rotation-invariant extraction for even order.}
\label{fig:evenorder}
\end{figure}
Given a symmetric tensor of even order $T^n \in \otimes^n V$ ($n$ even), let $\mathcal{C}$ denote a sequence of contractions
along the first two axes until we reach a second-order tensor. Applying $\mathcal{C}$ to $T^n$ we get:
\begin{equation}
T^2 = \mathcal{C}(T^n)
\end{equation}
\begin{align}
T^n \underrightarrow{\;\;\mathcal{C}\;\;} T^{n-2}\underrightarrow{\;\;\mathcal{C}\;\;} (...) \underrightarrow{\;\;\mathcal{C}\;\;}T^2
\end{align}
Then we find the orthonormal eigenvectors of $T^2$ and use them to form the
orthonormal matrix $R$ that diagonalizes $T^2$:
\begin{equation}
T^2 = R^{-1} \times D \times R
\end{equation}
Since $R$ is an orthonormal matrix, we have
\begin{equation}
R^{-1} = R^T
\end{equation}
We will call $D$ the standard position of $T^2$, and
we write $R(T^2) = D$ to shorten the notation.
Since contraction and rotation are compatible by Lemma \ref{lemma2.3}, we can apply $R$ to $T^n$ before applying the contraction, and we will have
\begin{equation}
\mathcal{C}(R(T^n)) = D
\end{equation}
For the even tensor $T^n$, we define
\begin{equation}
S(T^n) = R^{-1}(T^n)
\end{equation}
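A numerical sketch of the even-order branch (contraction to order two, diagonalization, rotation) is given below. It reuses the \texttt{rotate} helper sketched after Lemma \ref{lemma2.3}; the eigenvector sign and ordering conventions, as well as whether one stores the rotation or its inverse, are bookkeeping choices left as assumptions of this illustration.
\begin{verbatim}
import numpy as np

def contract_to_order2(T):
    """Repeatedly contract the first two axes until an order-2 tensor remains."""
    while T.ndim > 2:
        T = np.trace(T, axis1=0, axis2=1)
    return T

def standardize_even(T, rotate):
    """Standard position of a symmetric even-order tensor T.

    Returns (T_s, Q) with T = rotate(T_s, Q): T_s is the standard position and
    Q is the rotation mapping it back to the original position.
    """
    T2 = contract_to_order2(T)          # symmetric by assumption
    w, Q = np.linalg.eigh(T2)           # T2 = Q diag(w) Q^T
    T_s = rotate(T, Q.T)                # contracting T_s now gives diag(w)
    return T_s, Q
\end{verbatim}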
\iffalse
Finally, we could get $T^{n'}_n$ in standard position by action of rotation matrix $R$ on $T^n_n$.
\begin{equation}
T'^{n}_n =R \cdot T^n_n
\end{equation}
where $\cdot$ is understood as the action of rotation matrix on high dimensional tensors.
For the training set, $D={(X^1, y^1), (X^2, y^2), ..., (X^n, y^n)}$, ignoring the notation of orders, is going to be proceeded into $D_s={(X_s^1, y_s^1), (X_s^2, y_s^2), ..., (X_s^n, y_s^n)}$. We use this set to train neural network. We use function $y = NN(X)$ to denote the neural network, where $y$ is the prediction.
When we are predicting using the test set, for an incoming data $X^n_n$, we will first perform contraction to get a $X^2_n$.
\begin{equation}
X^n_n\underrightarrow{\;\;\mathcal{C}\;\;}X^{n-2}_{n-2}\underrightarrow{\;\;\mathcal{C}\;\;} (...) \underrightarrow{\;\;\mathcal{C}\;\;}X^2_2
\end{equation}
Then, performing a diagonalization to obtain rotation matrix $R$.
\begin{equation}
X^2_2=RD^2_2R^{-1}
\end{equation}
Finally, we will predict the output by sending the rotated tensor to neural network.
\begin{equation}
S(T^n) = R(T^n)
\end{equation}
\fi
\subsubsection{Tensor of odd order}
Why is the odd-order case different? An odd-order tensor cannot be reduced directly to order 2 by contraction: since each contraction reduces the order by 2, the reduced order remains odd and can never equal 2. Consequently, the rotation matrix cannot be extracted in the same way, and the tensor cannot be rotated into a standard position directly. The method described below solves this problem.
Given a symmetric tensor of odd order $T^n \in \otimes^n V$ ($n$ odd), let $\mathcal{C}$ denote a sequence of contractions
along the first two axes until we reach a third-order tensor. Applying $\mathcal{C}$ to $T^n$ we get:
\begin{equation}
T^3= \mathcal{C}(T^n)
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.65]{OddOrderAlgo.pdf}
\caption{Rotation-invariant extraction for odd order.}
\label{fig:my_label}
\end{figure}
After obtaining $T^3$, we contract it along each pair of axes to obtain three first-order tensors, i.e. vectors, denoted $V_1, V_2, V_3$. Concatenating them gives a second-order tensor:
\begin{equation}
T^2=(V_1, V_2, V_3)
\end{equation}
Then, we perform a process similar to the even case: a QR factorization to obtain the rotation matrix $R$,
\begin{equation}
T^2=R\times U^2,
\label{QR factorization}
\end{equation}
For an odd-order tensor, we define:
\begin{equation}
S(T^n) = R^{-1}(T^n)
\end{equation}
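The odd-order branch can be sketched analogously: contract down to order three, build a second-order tensor from the three vectors obtained by contracting the remaining pairs of axes, and extract a rotation via QR factorization. Which axis pairs are contracted and how signs in the QR factors are fixed (the orthogonal factor may have determinant $-1$) are assumptions of this illustration.
\begin{verbatim}
import numpy as np

def contract_to_order3(T):
    """Repeatedly contract the first two axes until an order-3 tensor remains."""
    while T.ndim > 3:
        T = np.trace(T, axis1=0, axis2=1)
    return T

def standardize_odd(T, rotate):
    """Standard position of an odd-order tensor T (order >= 3)."""
    T3 = contract_to_order3(T)
    v1 = np.trace(T3, axis1=0, axis2=1)    # contract axes (1, 2)
    v2 = np.trace(T3, axis1=0, axis2=2)    # contract axes (1, 3)
    v3 = np.trace(T3, axis1=1, axis2=2)    # contract axes (2, 3)
    T2 = np.stack([v1, v2, v3], axis=1)    # concatenate into an order-2 tensor
    R, U = np.linalg.qr(T2)                # T2 = R U, R orthogonal, U upper triangular
    T_s = rotate(T, R.T)                   # move T to its standard position
    return T_s, R
\end{verbatim}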
The pseudocode of the proposed algorithm is given in Algorithm 1. We evaluate the position standardization algorithm in Section \ref{sec:expr}.
\begin{figure}
\centering
\includegraphics[scale=0.95]{Algorithm1.pdf}
\label{rotexAlgo}
\end{figure}
\subsection{Theoretical Analysis of Rotation-equivariant property}
\label{sec:analysis}
In our analysis, we aim to show the rotation-equivariance property of RotEqNet. As an important first step, we analyze the rotation invariance of the standard position derived by the position standardization algorithm; that is, we show that Equ. \ref{def:standardpos} holds. Once Equ. \ref{def:standardpos} is proved, RotEqNet is automatically rotation-equivariant by its architecture.
To outline the results below: Lemma \ref{lemma2.3} serves as the key fact that rotation information is preserved under contraction, and our algorithm is analyzed in Theorem \ref{thm:1}.
\subsubsection{Main theorems and proofs}
\begin{thm}
\label{thm:1}
$S$ is rotation invariant, \emph{i.e.} for all rotation $R \in SO(n)$ and
symmetric tensor $T \in \otimes^k V$
\begin{equation}
S(R(T)) = S(T)
\end{equation}
\end{thm}
We provide a proof in Appendix \ref{proof:thm2}.
We call $S(T)$ the standard position of $T$.
Using Theorem \ref{thm:1}, we automatically obtain, for tensors of arbitrary order, that the standard position derived by the position standardization algorithm is rotation invariant.
\begin{thm}
RotEqNet, $M_R$, is rotation-equivariant, \emph{i.e.} for all rotation $R \in SO(n)$ and
tensor $T \in \otimes^k V$
\begin{equation}
M_R(R(T)) = R(M_R(T))
\end{equation}
\label{thm:2}
\end{thm}
\begin{proof}
Denote RotEqNet by $M_{R}$ and the kernel predictor by $M_k$. Consider an input pair $(X, y) \in \mathcal{D}$. Suppose the position standardization algorithm $S$ gives $S(X)=P_1(X)$, where $P_1$ denotes a rotation operation.
First, consider $M_R(X)$; the RotEqNet prediction is defined as:
\begin{equation}
M_R(X)=P_1^{-1}( M_k(S(X)))=P_1^{-1}(M_k(P_1(X)))
\end{equation}
Consider another rotation operation $P_2$ in the matrix form acting on $X$, using Theorem. \ref{thm:1} we know that:
\begin{equation}
S(P_2(X))=S(X)=P_1(X)=(P_1 I) (X)=(P_1 \times P_2^{-1})(P_2(X)),
\label{thm4:1}
\end{equation}
where $I$ is an identity matrix.
Then, consider $M_R(P_2(X))$; the RotEqNet prediction is
\begin{equation}
M_R(P_2(X))=(P_1 P_2^{-1})^{-1}(M_k(S(P_2(X))))=(P_2P_1^{-1}) (M_k(S(P_2 (X))))
\end{equation}
We know that $S(P_2(X))=S(X)$ from Equ. \ref{thm4:1}. Therefore, we have $M_k(S(X))=M_k(S(P_2(X)))$. Substituting back into the previous equation,
\begin{equation}
M_R(P_2(X))=P_2P_1^{-1}(M_k(S(X)))=P_2(M_R(X))
\end{equation}
In summary, for a rotation operation $P_2$ on the input $X$, we have
\begin{equation}
M_R(P_2(X))=P_2(M_R(X))
\end{equation}
This shows that $M_R$ is rotation-equivariant by definition. Therefore, RotEqNet is rotation-equivariant.
\end{proof}
We have shown that \textit{Algorithm 1} preserves rotation information while reducing to low order, and extracts the rotation using diagonalization (or QR factorization) of matrices. This provides the theoretical guarantee for our position standardization algorithm.
\section{Case studies}
\label{sec:expr}
In this section, a series of case studies is provided to show the performance of RotEqNet. The following subsections include cases of second, third, and fourth order. We investigate the second-order cases in particular detail, with linear and nonlinear test equations, since second-order physical systems are widely used in current applications. In every case study we report two properties of RotEqNet.
The first is error reduction: we apply RotEqNet to each test physical equation and compare it to the baseline models (Neural Networks and Random Forests). The second is rotation-equivariance: we examine this property by letting RotEqNet and the baseline models predict on rotations of randomly selected data.
We report detailed information for these case studies in every subsection below. The interpretation of experimental results is also included in each subsection.
\subsection{Case study from Newtonian fluid: a second-order linear case} \label{order2linear}
\subsubsection{Problem statement}
A Newtonian fluid is a fluid whose viscous stress is linearly related to the local strain rate. In this experiment, we aim to use simulated data to learn this constitutive rule of Newtonian fluids. This serves as a case study with a second-order linear equation.
Let $\sigma \in \mathbb{R}^{3\times 3}$ be stress tensor,
$p \in \mathbb{R}$ pressure and $S\in \mathbb{R}^{3\times3}$ strain rate.
The rule of Newtonian fluid is a second-order physical equation which satisfies the following condition \citep{batchelor2000introduction}:
\begin{equation}
\sigma = -pI+\mu S \label{newtonianFluidEqu1}
\end{equation}
Another form of the Newtonian fluid equation uses the velocity gradient $\nabla v$, which can be expressed as a $3\times 3$ matrix. With this definition, the equation of Newtonian fluid can also be written as:
\begin{equation}
\sigma = -pI+\mu (\nabla v + \nabla v^T) \label{newtonianFluidEqu2}
\end{equation}
This can also be taken as the definition of the strain rate. Based on this definition, we observe that $S=\nabla v + \nabla v^T$, which is symmetric since $S=S^T$. Since $S$ is symmetric and $I$ is the identity matrix, $\sigma$ is also symmetric. Therefore, for an arbitrary rotation matrix $R$, this system is rotation-equivariant: $R(\sigma) = R(-pI+\mu S)$.
To quantify the stress in Newtonian fluid simulation, it is useful to be able to predict the Newtonian fluid stress given the simulated pressure and velocity field. Based on this scenario, in this subsection we provide a case study in which the machine learning model takes the pressure and strain rate of the Newtonian fluid as input and predicts the stress.
\subsubsection{Data generation and model description} \label{Expr1DGMD}
Based on Eqn.\ref{newtonianFluidEqu2}, we first generate random data to obtain $\nabla v$ and $p$. The entries of $\nabla v$ are drawn from a normal distribution over the range $(0,1)$. From the generated $\nabla v$ and $p$, we obtain $\sigma$ via Eqn.\ref{newtonianFluidEqu2}. Denote the dataset as $D=\{x_i,y_i\}_{i=1}^{N}$. To form a proper dataset $D$ with $N$ elements for a machine learning model of the Newtonian fluid, the input $x$ is set up to be a vector with $x \in \mathbb{R}^{10}$. Specifically, $x$ is composed of $p$ and the flattened $S$ in Eqn.\ref{newtonianFluidEqu1}. The output $y\in \mathbb{R}^{9}$ is the flattened result of the matrix $\sigma$ derived from $p$ and $S$. The dataset $D$ is set up as described above. To compare our method against the baseline method, we trained two models with the same hyper-parameters using different amounts of training data, ranging over $10,000, 20,000, ..., 100,000$. $85\%$ of the generated data is used for training and $15\%$ for testing. A rotation set with 10,000 random rotation matrices, denoted $\{R_i\}_{i=1}^{10000}$, is also generated for evaluating the rotation-equivariance property.
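A data-generation sketch consistent with this description is given below. The viscosity $\mu$ and the exact sampling distribution are not fixed by the text, so $\mu=1$ and standard-normal entries are assumptions of this illustration.
\begin{verbatim}
import numpy as np

def make_newtonian_dataset(n_samples, mu=1.0, rng=np.random.default_rng(0)):
    """Generate (x, y) pairs for the Newtonian-fluid case study.

    x = [p, flatten(S)] in R^10,  y = flatten(sigma) in R^9,
    with sigma = -p I + mu S and S = grad_v + grad_v^T.
    """
    X, Y = [], []
    for _ in range(n_samples):
        grad_v = rng.standard_normal((3, 3))
        p = rng.standard_normal()
        S = grad_v + grad_v.T
        sigma = -p * np.eye(3) + mu * S
        X.append(np.concatenate(([p], S.ravel())))
        Y.append(sigma.ravel())
    return np.array(X), np.array(Y)
\end{verbatim}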
The machine learning models we apply here are neural networks and random forests, because of their ability to approximate arbitrary functions. For the neural networks, in our implementation, the logistic activation function is used for every neuron. The numbers of neurons in the two layers are 512 and 4, respectively. The Adam optimizer \citep{kingma2014adam} is applied for optimization, and the learning rate is set to $1\times 10^{-3}$. We also set the batch size to 64. For the random forests, 100 estimators are used with the mean squared error as the criterion. The maximum depth of the random forests is 3 to reduce the chance of overfitting. We used Sklearn for the implementation \citep{pedregosa2011scikit}. The experiments are run on CPU.
\subsubsection{Results}
\label{expr1:result}
There are two properties to evaluate: error reduction and rotation-equivariance of RotEqNet. The error reduction effect is evaluated first. A kernel predictor is trained on standard positions derived from the training data. Then, the prediction algorithm is applied to both the training and testing sets to obtain the training and testing performances. The validation error $E$ is defined as the mean squared loss:
\begin{equation}
E=\frac{\sum_{i=1}^{N} (y_i-M^{\theta}(X_i))^2}{N} \label{Error}
\end{equation}
In Eqn. \ref{Error}, $N$ is the number of data in dataset $D$, $M$ is the trained machine learning model, $\theta$ is the derived parameter from model $M$, and $(X_i, y_i)\in D$ describes input-label pair of the dataset. This evaluation $E$ represents the expected error of model $M$ with dataset $D$.
\begin{figure}
\centering
\includegraphics[scale=1.4]{Order2Linear.pdf}
\caption{Error of training with baseline model with random position, RotEqNet, and kernel predictor with the standard position for (a) Neural Networks and (b) Random Forests in the case study of Newtonian Fluid. Different colors represent different experimental groups. The RotEqNet model is trained with random positions and tested with random positions (red curves). Baseline models that trained and tested on raw data are shown as blue curves. The performances of kernel predictors that trained and tested with only standard positions are also shown as black curves. Training errors are shown with lines marked with triangles, testing errors are shown with lines marked with circles. }
\label{fig:expr1}
\end{figure}
\begin{table}
\centering
\begin{tabular}{ |p{2.5cm}|p{4.0cm}|p{4.0cm}| }
\hline
Kernel predictor& Training Error Reduction &Testing Error Reduction\\
\hline
Neural Networks & 99.56\% &99.60\%\\
\hline
Random Forests & 99.56\% & 99.72\% \\
\hline
\end{tabular}
\label{tab1:expr1}
\caption{Evaluation of error reduction for RotEqNet with different kernel predictor.}
\end{table}
Fig. \ref{fig:expr1}(a) shows the error reduction property of RotEqNet. This plot consists of three experimental groups.
The first experimental group focuses on the accuracy of the baseline model, a single feed-forward Neural Network, on raw data with randomly rotated positions. It is shown in Fig. \ref{fig:expr1}(a) with blue curves; the triangle curve represents the training error and the circle curve the testing error. The second experimental group is RotEqNet with a Neural Network as the kernel predictor, shown in Fig. \ref{fig:expr1}(a) as red curves, with the triangle curve for training error and the circle curve for testing error. For 100,000 training samples, the testing error of RotEqNet is 0.0037, and the testing error of the baseline method is 1.333. We observe a large error reduction, 99.56\% in training and 99.60\% in testing, for RotEqNet compared to the baseline model.
The last experimental group, marked with black curves in the figure, reports the performance of the kernel predictor trained and tested with standard positions only. This experiment explains why RotEqNet improves performance: training with standard positions is an easier task than training with raw data.
Further, Fig. \ref{fig:expr1}(b) shows the error reduction effect of RotEqNet using a Random Forest as the kernel predictor. The blue curves in Fig. \ref{fig:expr1}(b) represent the performance of the baseline method (Random Forests). The second experimental group is RotEqNet with Random Forests as the kernel predictor, shown as red curves, with the triangle curve for training error and the circle curve for testing error. We observe a large error reduction, 99.56\% in training and 99.72\% in testing, for RotEqNet compared to using only the Random Forest predictor.
The last experimental group, marked with black curves in the figure, trains the kernel predictor with standard positions only. As stated before, this also explains the error reduction effect of RotEqNet with random forests.
According to the reported results, RotEqNet generalizes well without overfitting. When the baseline models are trained on raw data, the testing error is typically higher than the training error. For example, the difference between training and testing errors is 0.44 for Neural Networks and $1.01$ for Random Forests when $N=100,000$. This indicates that both Neural Networks and Random Forests easily overfit this task on the Newtonian fluid. By contrast, RotEqNet reduces this gap between training and testing error: when $N=100,000$, the gap is only $0.0002$ for Neural Networks and $0.0078$ for Random Forests. In the case of linear second-order equations, the application of RotEqNet results in better-generalized learning.
Another important property to evaluate for RotEqNet is rotation-equivariance. The experiment is designed around the definition of rotation-equivariance in Eqn. \ref{def:rotEq}. First, we pick a randomly generated data point $(X_0, y_0)$. Then we apply the rotation set $\{R_i\}_{i=1}^{10000}$ of 10,000 random rotation matrices to $(X_0, y_0)$. To fully investigate the rotation-equivariance property, we use an error evaluation metric $E_D$ that measures the error against the true data, defined as:
\begin{equation}
E_D=\frac{\sum_{i=1}^{N} [(M^{\theta}(R_i(X_0))-R_i(y_0)]^2}{N}
\label{ED}
\end{equation}
\begin{table}
\centering
\begin{tabular}{ |p{1.3cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}| }
\hline
Model& Baseline (NN)& RotEqNet (NN)&Baseline (RF)&RotEqNet (RF)\\
\hline
$E_D$ & 0.6362 & \textbf{0.0013} &3.1334 & 1.5513\\
\hline
\end{tabular}
\caption{Evaluation of Rotation-equivariant property between baseline model and RotEqNet.}
\label{tableExpr1}
\end{table}
This error evaluation method ($E_D$) focuses on the model's error against the true data over all rotations. As shown in Tab. \ref{tableExpr1}, for both baseline methods, using neural networks and random forests, $E_D$ is large. The baseline methods have no theoretical guarantee of rotation-equivariance. However, there is an error reduction for both machine learning models when the RotEqNet architecture is applied. In particular, for RotEqNet with Neural Networks as the kernel predictor we observe $E_D=0.0013$, a $99.8\%$ error reduction, which reflects the rotation-equivariance property of RotEqNet.
\subsection {Case study from large eddy simulation: a second-order nonlinear case} \label{order2_nonlinear}
\subsubsection{Problem statement}
In this case, we consider the subgrid model of large eddy simulation (LES) of turbulent flow by Kosovic \cite{kosovic1997subgrid}. As formulated previously in \citep{pitsch2006large,matai2018flow}, we hope to learn a model from LES simulation data. This serves as a case study with a second-order nonlinear equation. The subgrid model is defined as:
\begin{equation}
\tau_{ij} = -(C_s \Delta)^2\left\{ 2\sqrt{2S_{mn}S_{mn}}S_{ij} + C_1\left( S_{ik}S_{kj}-\frac{1}{3}S_{mn}S_{mn}\delta_{ij}\right)+C_2\left( S_{ik}\Omega_{kj} - \Omega_{ik}S_{kj} \right)\right\} \label{LESEqu}
\end{equation}
Here $\tau_{ij}$ is the subgrid stress, a symmetric traceless second-order tensor. $S_{ij}$ and $\Omega_{ij}$ are the symmetric and anti-symmetric parts of the velocity gradient tensor $G_{ij}$, where $\mathrm{Tr}(G) = 0$. Further, $C_s$, $\Delta$, $C_1$, $C_2$ are constants; their values are reported in the next subsection.
In order to quantify the subgrid stress for LES, this study aims to predict the subgrid stress given the simulated velocity gradient tensor, which serves as the input of the machine learning model.
\subsubsection{Data generation and model description}
Based on Eqn. \ref{LESEqu}, we first generate random data to obtain a simulated velocity gradient tensor $G_{ij}$. The random numbers follow a normal distribution over the range $(0,1)$, and $G_{ij}$ is obtained from a random matrix $G_{raw}$ by subtracting $\frac{1}{3}\mathrm{Tr}(G_{raw})$ from the diagonal, which keeps $\mathrm{Tr}(G)=0$. $S_{ij}$ and $\Omega_{ij}$ are obtained from $G_{ij}$ as its symmetric and anti-symmetric parts. The constants are set to $C_s=0.4,\Delta=0.4,C_1=C_2=1.0$. $\tau_{ij}$ is computed from the above setting with Eqn. \ref{LESEqu}. Denote the dataset as $D=\{x_i,y_i\}_{i=1}^{N}$. To form a proper dataset $D$ with $N$ elements for a machine learning model of LES, the input $x$ is set up to be a vector with $x \in \mathbb{R}^{9}$; specifically, $x$ is the flattened $G_{ij}$. The output $y\in \mathbb{R}^{9}$ is the flattened result of the matrix $\tau$ derived from $G$ and the constants. To compare our method against the baseline method, we trained two models with the same hyper-parameters using different amounts of training data, ranging over $10,000, 20,000, ..., 100,000$. $85\%$ of the generated data is used for training, and $15\%$ for testing. A rotation set with 10,000 random rotation matrices, denoted $\{R_i\}_{i=1}^{10000}$, is also generated for evaluating the rotation-equivariance property. The model setup is the same as in Sec. \ref{Expr1DGMD}.
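A sketch of the data generation for this case is shown below. The usual $1/2$ convention for the symmetric and anti-symmetric parts of $G$ is an assumption, since the text does not fix the factor.
\begin{verbatim}
import numpy as np

def random_traceless_gradient(rng=np.random.default_rng(0)):
    """A random velocity gradient tensor G with Tr(G) = 0."""
    G = rng.standard_normal((3, 3))
    return G - (np.trace(G) / 3.0) * np.eye(3)

def subgrid_stress(G, Cs=0.4, Delta=0.4, C1=1.0, C2=1.0):
    """Subgrid stress tau computed from the velocity gradient tensor G."""
    S = 0.5 * (G + G.T)                      # symmetric part (1/2 convention assumed)
    Omega = 0.5 * (G - G.T)                  # anti-symmetric part
    S_norm2 = np.sum(S * S)                  # S_mn S_mn
    term1 = 2.0 * np.sqrt(2.0 * S_norm2) * S
    term2 = C1 * (S @ S - (S_norm2 / 3.0) * np.eye(3))
    term3 = C2 * (S @ Omega - Omega @ S)
    return -(Cs * Delta) ** 2 * (term1 + term2 + term3)
\end{verbatim}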
\subsubsection{Results}
The error reduction effect is evaluated first. The validation error $E$ is defined as the mean squared loss in Eqn. \ref{Error}, and represents the expected error of model $M$ on dataset $D$.
\begin{figure}
\centering
\includegraphics[scale=1.4]{Order2Nonlinear.pdf}
\caption{Error of training with baseline model with random position, RotEqNet, and kernel predictor with the standard position for (a) Neural Networks and (b) Random Forests in the case study of large eddy simulation. Different colors represent different experimental groups. The RotEqNet model is trained with random positions and tested with random positions (red curves). Baseline models that trained and tested on raw data are shown as blue curves. The performances of kernel predictors that trained and tested with only standard positions are also shown as black curves. Training errors are shown with lines marked with triangles, testing errors are shown with lines marked with circles. }
\label{fig:expr2}
\end{figure}
Fig. \ref{fig:expr2}(a) shows the error reduction effect of RotEqNet with a Neural Network as the kernel predictor for the second-order nonlinear case, with three experimental groups.
The first experimental group focuses on the accuracy of the baseline method on raw data with randomly rotated positions, shown in Fig. \ref{fig:expr2}(a) with blue curves; the triangle curve represents the training error and the circle curve the testing error. The second experimental group is RotEqNet with a Neural Network as the kernel predictor, shown in Fig. \ref{fig:expr2}(a) as red curves. For 100,000 training samples, the testing error of RotEqNet is 0.1391 and the testing error of the baseline method is 0.2946, a 52.77\% error reduction.
The last experimental group, marked with black curves in the figure, trains the kernel predictor with standard positions only.
Based on the experimental results, RotEqNet reaches a better learning performance than simply applying Neural Networks (the baseline method). Training with standard positions lowers the training difficulty, and therefore RotEqNet obtains better performance.
Further, Fig. \ref{fig:expr2}(b) shows the error reduction effect of RotEqNet using a Random Forest as the kernel predictor. The general performance with Random Forests is somewhat worse than with Neural Networks. In Fig. \ref{fig:expr2}(b), blue curves represent the performance of training on raw data with Random Forests (baseline method), red curves the performance of RotEqNet, and black curves the performance of the kernel predictor trained on standard positions. We observe an error reduction of 36.63\% in training and 57.58\% in testing for RotEqNet with Random Forests.
Moreover, RotEqNet generalizes well without overfitting. When raw data is used directly for the baseline models, the testing error is much higher than the training error. For example, the difference between training and testing errors is $0.0068$ for Neural Networks and $0.1068$ for Random Forests when $N=100,000$. It is also observable in Fig. \ref{fig:expr2}(a) that the training error of the baseline model on raw data is the lowest, while its testing error is the highest. In this case study, Neural Networks suffer more from overfitting than Random Forests. By contrast, introducing the RotEqNet architecture reduces this gap between training and testing error: when $N=100,000$, the gap is only $0.0046$ for Neural Networks and $0.0022$ for Random Forests. In this case study of LES, the application of RotEqNet results in better-generalized learning.
\begin{table}
\centering
\begin{tabular}{ |p{2.5cm}|p{4.0cm}|p{4.0cm}| }
\hline
Kernel predictor& Training Error Reduction &Testing Error Reduction\\
\hline
Neural Networks & -98.44\% &52.77\%\\
\hline
Random Forests & 36.63\% & 57.58\% \\
\hline
\end{tabular}
\caption{Evaluation of error reduction for RotEqNet with different kernel predictor.}
\label{tab1:expr2}
\end{table}
To evaluate the rotation-equivariance property of RotEqNet for the second-order nonlinear case, we follow the experimental process of Sec. \ref{expr1:result}. First, we pick a randomly generated data point $(X_0, y_0)$. Then we apply the rotation set $\{R_i\}_{i=1}^{10000}$ of 10,000 random rotation matrices to $(X_0, y_0)$.
This error evaluation method ($E_D$), as defined in Eqn. \ref{ED}, focuses on the model's error against the true data over all rotations. As shown in Tab. \ref{tableExpr2}, for both baseline methods, using neural networks and random forests, $E_D$ is large. The baseline methods have no theoretical guarantee of rotation-equivariance. However, there is an error reduction for both machine learning models when the RotEqNet architecture is applied. In particular, for RotEqNet with Neural Networks as the kernel predictor, we observe $E_D=0.0025$ with a $73.55\%$ error reduction, which reflects the rotation-equivariance property of RotEqNet for nonlinear second-order cases.
\begin{table}
\centering
\begin{tabular}{ |p{1.3cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}| }
\hline
Model& Baseline (NN)&RotEqNet (NN)&Baseline (RF)& RotEqNet (RF)\\
\hline
$E_D$ & 0.0945 & \textbf{0.0025} &0.1912 & 0.0084\\
\hline
\end{tabular}
\caption{Evaluation of Rotation-equivariant property between baseline model and RotEqNet.}
\label{tableExpr2}
\end{table}
\subsection{Case study from testing Newtonian Fluid equation: a third-order case} \label{order3}
\subsubsection{Problem statement}
In this section, we study the performance of RotEqNet for tensors of odd order; specifically, we set up a third-order test equation. We use a test equation revised from the Newtonian fluid equation in Eqn. \ref{newtonianFluidEqu2}, which we call the `testing Newtonian fluid equation' for simplicity. It can be described as:
\begin{equation}
\sigma = -pI+\mu (\nabla v + \nabla v^T) \label{newtonianFluidEqu3},
\end{equation}
where $\sigma \in \mathbb{R}^{3\times 3\times 3}$ is the testing stress, $p \in \mathbb{R}$ is the testing pressure, and $v\in \mathbb{R}^{3\times 3\times 3}$ is the testing velocity field. $I\in \mathbb{R}^{3\times 3\times 3}$ is the third-order identity tensor.
Based on this testing equation, we observe that $(\nabla v + \nabla v^T)^T=\nabla v + \nabla v^T$. Since $\nabla v + \nabla v^T$ and $I$ are both symmetric, $\sigma$ is also symmetric. Therefore, for an arbitrary rotation matrix $R$, this system is rotation-equivariant: $R(\sigma) = R(-pI+\mu (\nabla v + \nabla v^T))$.
In this case study, the machine learning model takes the pressure and the velocity gradient tensor as inputs and aims to predict the stress.
\subsubsection{Data generation and model description}
Based on Eqn. \ref{newtonianFluidEqu3}, we first generate random data to obtain $\nabla v$ and $p$. The entries of $\nabla v$ are drawn from the standard normal distribution $\mathcal{N}(0,1)$. $\sigma$ is then obtained from the generated $\nabla v$ and $p$ using Eqn.~\ref{newtonianFluidEqu3}. Denote the dataset as $D=\{x_i,y_i\}_{i=1}^{N}$. To form a proper dataset $D$ with $N$ elements for a machine learning model for the testing Newtonian fluid equation, the input $x \in \mathbb{R}^{28}$ is a vector composed of $p$ and the flattened $(\nabla v + \nabla v^T)$ in Eqn.~\ref{newtonianFluidEqu3}. The output $y\in \mathbb{R}^{27}$ is the flattened result of the tensor $\sigma$. To compare our method with the baseline method, we trained the two models with the same hyper-parameters using different amounts of training data, ranging from $10,000$ to $100,000$ in steps of $10,000$. $85\%$ of the generated data is used for training and $15\%$ for testing. A rotation set with 10,000 random rotation matrices, denoted $\{R_i\}_{i=1}^{10000}$, is also generated for evaluating the rotation-equivariant property. The model setup is the same as in Sec. \ref{Expr1DGMD}.
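As an illustration, the data generation just described could be implemented along the following lines. This is a minimal NumPy sketch with some details assumed rather than prescribed: the third-order ``transpose'' swaps the first two axes, the third-order ``identity'' has ones on its $i=j=k$ entries, and $\mu$ is set to $1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
MU = 1.0                         # value of mu not specified; assumed 1
I3 = np.zeros((3, 3, 3))
for i in range(3):
    I3[i, i, i] = 1.0            # one reading of the third-order identity

def make_sample():
    grad_v = rng.normal(size=(3, 3, 3))       # simulated velocity gradient
    p = rng.normal()                          # simulated pressure
    sym = grad_v + grad_v.transpose(1, 0, 2)  # symmetrize first two axes
    sigma = -p * I3 + MU * sym                # testing Newtonian fluid eq.
    x = np.concatenate(([p], sym.ravel()))    # input  in R^28 = 1 + 27
    y = sigma.ravel()                         # output in R^27
    return x, y

N = 100_000                                   # largest dataset size used
samples = [make_sample() for _ in range(N)]
X = np.stack([s[0] for s in samples])
Y = np.stack([s[1] for s in samples])
n_train = int(0.85 * N)                       # 85% / 15% train-test split
X_train, Y_train = X[:n_train], Y[:n_train]
X_test, Y_test = X[n_train:], Y[n_train:]
\end{verbatim}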
\subsubsection{Results}
Fig. \ref{fig:expr3}(a) shows the error reduction effect of RotEqNet with a Neural Network as the kernel predictor for the third-order case, using three experimental groups.
The first experiment group focuses on the accuracy of the baseline model (Neural Network) trained on raw data at randomly rotated positions, shown in Fig. \ref{fig:expr3}(a) as blue curves. The second experiment group is RotEqNet with a Neural Network as the kernel predictor, shown in Fig. \ref{fig:expr3}(a) as red curves. For 100,000 training samples, the testing error of RotEqNet is 1.8759 and the testing error of the baseline method is 2.2232, a 15.62\% error reduction.
The last experiment group, marked as black curves in the figure, is the kernel predictor trained only on standard positions.
Based on the experimental results, for the third-order testing equation, RotEqNet reaches a better learning performance than the baseline method. Training with RotEqNet lowers the training difficulty, and RotEqNet therefore obtains better performance. Moreover, RotEqNet generalizes well without overfitting. As shown by the blue curves in Fig. \ref{fig:expr3}, applying raw data directly to the baseline models leads to a testing error that is much higher than the training error. Introducing the RotEqNet architecture reduces this gap: the training and testing errors of RotEqNet are much closer to each other. In this case study of the testing Newtonian fluid equation, the application of RotEqNet results in better-generalized learning.
Further, Fig. \ref{fig:expr3}(b) shows the error reduction effect of RotEqNet using Random Forests as the kernel predictor. The general performance with Random Forests as the kernel predictor is worse than with Neural Networks. In Fig. \ref{fig:expr3}(b), blue curves represent the performance of training on raw data with Random Forests (baseline method); red curves represent the performance of RotEqNet; black curves represent the performance of Random Forests trained on standard positions. First, we observe an error reduction of 0.90\% in training and 6.84\% in testing for RotEqNet with Random Forests. Second, RotEqNet also obtains a model with better generalization capability: the testing error of the baseline method is observably higher than its training error, while the training and testing performance of RotEqNet is approximately the same. As suggested by Fig. \ref{fig:expr3}, in this third-order case RotEqNet reaches a generalized learning result with a lower error compared to the baseline methods.
\begin{figure}
\centering
\includegraphics[scale=1.4]{Order3Linear.pdf}
\caption{Errors of the baseline model trained at random positions, of RotEqNet, and of the kernel predictor trained at the standard position, for (a) Neural Networks and (b) Random Forests in the case study of the testing Newtonian fluid equation. Different colors represent different experimental groups. The RotEqNet model is trained and tested at random positions (red curves). Baseline models trained and tested on raw data are shown as blue curves. Kernel predictors trained and tested only at standard positions are shown as black curves. Training errors are shown with lines marked with triangles; testing errors are shown with lines marked with circles.}
\label{fig:expr3}
\end{figure}
\begin{table}
\centering
\begin{tabular}{ |p{2.5cm}|p{4.0cm}|p{4.0cm}| }
\hline
Kernel predictor& Training Error Reduction &Testing Error Reduction\\
\hline
Neural Networks & 9.42\% &15.62\%\\
\hline
Random Forests & 0.90\% & 6.84\% \\
\hline
\end{tabular}
\caption{Evaluation of error reduction for RotEqNet with different kernel predictors.}
\label{tab1:expr3}
\end{table}
\begin{table}
\centering
\begin{tabular}{ |p{1.3cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}| }
\hline
Model& Baseline (NN)&RotEqNet (NN)&Baseline (RF)& RotEqNet (RF)\\
\hline
$E_D$ & 2.8454 &\textbf{2.6992} & 3.0788 & 3.1068\\
\hline
\end{tabular}
\caption{Evaluation of the rotation-equivariance property for the baseline models and RotEqNet.}
\label{tableExpr3}
\end{table}
To evaluate the rotation-equivariant property of RotEqNet for this third-order case, we designed an experimental process close to the one stated in Sec. \ref{expr1:result}. As shown in Tab. \ref{tableExpr3}, for the baseline method using Neural Networks, $E_D$ is relatively large compared to RotEqNet; in our experiment, we reached an absolute error reduction of $0.1462$. We discuss this result further in Section \ref{discussion}.
\subsection{Case study from Electrostriction: a fourth-order case}
\subsubsection{Problem statement}
This case study focuses on a linear relationship involving a fourth-order tensor. Nye \cite{nye1985physical} introduced fourth-order tensors for modeling elastic compliances and stiffnesses, which have been investigated using machine learning methods \citep{yang2019predicting,liu2019deep}. Generally, in the study of the properties of a crystalline, anisotropic elastic medium, a fourth-order coefficient tensor is typically applied to model the relationship between two symmetric second-order tensors \cite{walpole1984fourth}. In this case, we study electrostriction, a property causing all electrical non-conductors to change their shape under the application of an electric field. The relationship is described as:
\begin{equation}
T_{ij}=V_{ijkl}S_{kl} \label{4th-order}
\end{equation}
Here $T_{ij} \in \mathbb{R}^{3\times 3}$ is a symmetric traceless second-order strain tensor. $S_{kl} \in \mathbb{R}^{3\times 3}$, with $S_{kl}=S_k S_l$, where $S_k$ and $S_l$ are components of the first-order electric polarization density; note that $S_{kl}$ is symmetric. $V_{ijkl}\in \mathbb{R}^{3\times 3\times 3 \times 3}$ is the electrostriction coefficient.
Based on the formulation above, this system is symmetric. Since $S_{kl}$ is symmetric, $T_{ij}^T=(V_{ijkl} S_{kl})^T=S_{kl} V_{klij}=T_{ij}$, which guarantees that $T_{ij}$ is also symmetric. Due to the fact that the system is symmetric, applying an arbitrary rotation matrix $R$ gives $R(T)=R(VS)$.
In this case study of electrostriction, we aim to predict the strain, given the simulated electrostriction coefficient and electric polarization density.
\subsubsection{Data generation and model description}
Based on Eqn. \ref{4th-order}, we first generate random data to obtain the simulated electrostriction coefficient tensor $V_{ijkl}$ and the electric polarization density tensor $S_{kl}$. The random numbers follow a normal distribution, and $T_{ij}$ is computed from $V_{ijkl}$ and $S_{kl}$ as above. Denote the dataset as $D=\{x_i,y_i\}_{i=1}^{N}$. To form a proper dataset $D$ with $N$ elements for a machine learning model for the study of electrostriction, the input $x \in \mathbb{R}^{90}$ is a vector composed of the flattened $V_{ijkl}$ and $S_{kl}$. The output $y\in \mathbb{R}^{9}$ is a vector, the flattened result of the second-order tensor $T$. To compare our method with the baseline method, we trained the two models with the same hyper-parameters using different amounts of training data, ranging from $10,000$ to $100,000$ in steps of $10,000$. $85\%$ of the generated data is used for training, and $15\%$ for testing. A rotation set with 10,000 random rotation matrices, denoted $\{R_i\}_{i=1}^{10000}$, is also generated for evaluating the rotation-equivariant property. The model setup is the same as in Sec. \ref{Expr1DGMD}. We use NumPy to generate this simulated dataset: we generate a random symmetric fourth-order tensor $V$ and a second-order tensor $S$, and compute $T$ from them by Eqn. \ref{4th-order}.
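A corresponding sketch for the electrostriction samples, using \texttt{einsum}, is given below; the particular symmetrization of $V$ is one plausible construction and is assumed here for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def make_sample():
    # Fourth-order coefficient, symmetrized in (i,j) and (k,l) so that
    # the resulting strain T is symmetric (one plausible construction).
    V = rng.normal(size=(3, 3, 3, 3))
    V = 0.5 * (V + V.transpose(1, 0, 2, 3))
    V = 0.5 * (V + V.transpose(0, 1, 3, 2))
    s = rng.normal(size=3)                      # polarization density S_k
    S = np.outer(s, s)                          # S_kl = S_k S_l, symmetric
    T = np.einsum('ijkl,kl->ij', V, S)          # strain: T_ij = V_ijkl S_kl
    x = np.concatenate((V.ravel(), S.ravel()))  # input  in R^90 = 81 + 9
    y = T.ravel()                               # output in R^9
    return x, y
\end{verbatim}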
\begin{figure}
\centering
\includegraphics[scale=1.4]{Order4Linear.pdf}
\caption{Errors of the baseline model trained at random positions, of RotEqNet, and of the kernel predictor trained at the standard position, for (a) Neural Networks and (b) Random Forests in the case study of electrostriction. Different colors represent different experimental groups. The RotEqNet model is trained and tested at random positions (red curves). Baseline models trained and tested on raw data are shown as blue curves. Kernel predictors trained and tested only at standard positions are shown as black curves. Training errors are shown with lines marked with triangles; testing errors are shown with lines marked with circles.}
\label{fig:expr4}
\end{figure}
\subsubsection{Results}
\label{sec4:order4:result}
We first evaluate the effect of error reduction. The validation error $E$ is defined as the mean squared loss using the formulation in Eqn. \ref{Error}; it represents the expected error of model $M$ on dataset $D$. Fig. \ref{fig:expr4} shows the performance of Neural Networks and Random Forests as kernel predictors separately. It is observable that in this high-order case, Neural Networks have a clear advantage over Random Forests. Details are given in the following paragraphs.
Starting with Neural Networks, Fig. \ref{fig:expr4}(a) shows the error reduction effect of RotEqNet with a Neural Network as the kernel predictor.
As shown by the blue curves, the first experiment group focuses on the accuracy of the baseline model trained on raw data at randomly rotated positions. The second experiment group is RotEqNet, marked with red curves. The black curves show the performance of the kernel predictor trained on standard positions. For 10,000 training samples, the testing error of RotEqNet is 4.0106 and the testing error of the baseline model is 8.6458, a 53.61\% error reduction. The testing performance of the kernel predictor is evaluated only on a testing set containing standard positions; it helps explain the improved performance of RotEqNet.
To interpret the experimental results: first, RotEqNet reaches a better learning performance than simply applying Neural Networks (the baseline method). A dataset containing only standard positions has a lower training difficulty than one with random positions. This claim is supported by the black curves in Fig. \ref{fig:expr4}(a): the performance of the kernel predictor is much better than that of the baseline model. RotEqNet obtains better performance by utilizing rotation symmetry as a prior and training the kernel predictor only on standard positions. Moreover, RotEqNet generalizes well without clear overfitting. The training and testing errors of RotEqNet are considerably close to each other, and sometimes the testing error of RotEqNet is even slightly lower than its training error. By contrast, applying raw data directly to $M_{\text{baseline}}$ results in an overfitted model: the testing error is much higher than the training error. For example, when $N=100,000$, the difference between training and testing errors for RotEqNet is only $0.0024$, while the difference for the baseline method is $2.1118$. In short, with Neural Networks as the kernel predictor, the application of RotEqNet is preferable to the baseline method.
\begin{table}
\centering
\begin{tabular}{ |p{2.5cm}|p{4.0cm}|p{4.0cm}| }
\hline
Kernel predictor& Training Error Reduction &Testing Error Reduction\\
\hline
Neural Networks & 18.93\% & 54.63\%\\
\hline
Random Forests & 0.58\% & 2.96\% \\
\hline
\end{tabular}
\caption{Evaluation of error reduction for RotEqNet with different kernel predictors.}
\label{tab1:expr4}
\end{table}
Further, Fig. \ref{fig:expr4}(b) shows the error reduction effect of RotEqNet using Random Forests as the kernel predictor.
At first glance, the curves for Random Forests are noisier and lack the clear patterns of Fig. \ref{fig:expr4}(a). The general performance with Random Forests as the kernel predictor is worse in terms of both accuracy and generalization. In Tab. \ref{tab1:expr4}, we observe a training error reduction of 0.58\% and a testing error reduction of 2.96\%. Although the overall error of RotEqNet is still slightly lower than that of the baseline method, this result is not comparable to the error reduction obtained with Neural Networks as the kernel predictor.
In addition, with Random Forests as the kernel predictor, the model fails to fully exploit the standard position: the black curves in Fig. \ref{fig:expr4}(b) do not show an improvement as large as with Neural Networks.
Finally, RotEqNet also does not yield a model with better generalization capability in this setting: there is no significant reduction of the overfitting error compared to the baseline method.
\begin{table}
\centering
\begin{tabular}{ |p{1.3cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}|p{2.4cm}| }
\hline
Model& Baseline (NN)&RotEqNet (NN)&Baseline (RF)& RotEqNet (RF)\\
\hline
$E_D$ & 3.9290 & \textbf{2.7960} &4.8976 & 4.8740\\
\hline
\end{tabular}
\caption{Evaluation of the rotation-equivariance property for the baseline models and RotEqNet.}
\label{tableExpr4}
\end{table}
To evaluate the rotation-equivariant property of RotEqNet for this fourth-order case, we designed an experimental process as stated in Sec. \ref{expr1:result}.
The error measure ($E_D$), as defined in Eqn. \ref{ED}, focuses on the model's error on real data over all rotations. As shown in Tab. \ref{tableExpr4}, when using Neural Networks, the baseline method has a large $E_D$; RotEqNet helps preserve the rotation-equivariant property, with an observed error reduction in $E_D$ of $28.86\%$. For the case of Random Forests as the kernel predictor, as discussed in the previous paragraph, Random Forests are relatively poor at learning fourth-order data; the prediction quality is therefore still limited, which results in a large $E_D$ for RotEqNet with Random Forests.
\section{Discussion}
\label{discussion}
The large error reductions observed in the case studies raise new opportunities for solving problems involving physical systems with rotation symmetry. Most physical systems have the property of rotation symmetry, and currently few works provide a theoretical guarantee of this property for machine learning methods. A key point is to design a properly defined algorithm to obtain rotation invariants for high-order tensors. This paper has presented RotEqNet, with theoretical and experimental results, aiming to solve the problem of rotation symmetry.
We first define a standard position as a rotation invariant that is compatible with high-order tensors. It allows us to extract the rotation invariant of a high-order tensor using contraction, diagonalization, and QR factorization. The theoretical guarantee is given in Thm. \ref{thm:2}, and the algorithm is shown in Alg. \ref{rotexAlgo}. RotEqNet is built on Alg. \ref{rotexAlgo} with a kernel predictor that only deals with standard positions (rotation invariants). Setting the kernel predictor to Neural Networks or Random Forests, these two methods are compared with baseline methods in four case studies focusing on second-order linear, second-order nonlinear, third-order linear, and fourth-order linear cases. Three important points emerge from the case studies.
First, the definition of the standard position is successful. The definition of a standard position is not unique; we aim to define a version that simplifies the learning task by removing the effect of rotation symmetry. In our case, the standard position satisfies the definition of a rotation invariant, selecting a representative point from the orbit of an element via diagonalization (or QR factorization). The experimental results are consistent with this definition: in most cases, training the kernel predictor only on rotation invariants reaches the lowest error. The reduced error means that the rotation invariant in our definition lowers the difficulty of the learning task, for the reason previously discussed in Sec. \ref{reason}.
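To make this concrete, the following NumPy sketch shows one possible realization of the position standardization, following Alg. \ref{rotexAlgo} and the proof in the appendix. It is only a sketch: sign conventions for the QR factorization and the handling of improper rotations (determinant $-1$) are omitted.
\begin{verbatim}
import numpy as np

def rotate_tensor(T, R):
    # Apply the rotation matrix R to every index of the order-n tensor T.
    out = T
    for axis in range(T.ndim):
        out = np.moveaxis(np.tensordot(R, out, axes=([1], [axis])), 0, axis)
    return out

def contract_first_two(T):
    # Contract (trace over) the first two axes until order 2 (even case)
    # or order 3 (odd case) remains.
    while T.ndim > 3:
        T = np.trace(T, axis1=0, axis2=1)
    return T

def standard_position(T):
    C = contract_first_two(T)
    if T.ndim % 2 == 0:
        _, Q = np.linalg.eigh(C)     # even order: C = Q diag(w) Q^T
    else:
        V = np.stack([np.trace(C, axis1=1, axis2=2),
                      np.trace(C, axis1=0, axis2=2),
                      np.trace(C, axis1=0, axis2=1)], axis=1)
        Q, _ = np.linalg.qr(V)       # odd order: [V1 V2 V3] = Q U
    return rotate_tensor(T, Q.T)     # S(T) = Q^{-1}(T), with Q^{-1} = Q^T
\end{verbatim}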
Second, RotEqNet is equipped with the rotation-equivariance property. As observed in the case studies, the rotation error $E_M$ is typically low compared to the baseline methods.
The preservation of the rotation-equivariance property reflects the successful design of RotEqNet and the correctness of Thm. \ref{thm:2}: when operating with Alg. \ref{rotexAlgo}, the rotation-equivariance property of RotEqNet holds if and only if Thm. \ref{thm:2} is correct.
Further, this fact contributes to the error reduction of RotEqNet. As stated in the previous paragraph, training with rotation invariants results in a lower error; combined with the rotation-equivariance property, this allows RotEqNet to process the system under any rotation.
The two reasons above are the main causes of the error reduction achieved by RotEqNet. One further point to mention is the selection of the kernel predictor. This choice affects the learning results significantly, since the kernel predictor is essential for learning the physical system without the effect of rotation symmetry. Neural Networks are well suited to data-driven modeling of physical systems because of their flexibility in approximating arbitrary functions. We only report the performance of Neural Networks and Random Forests, following previous work by Ling \cite{ling2016machine}. As described in Sec. \ref{sec4:order4:result}, the performance of Random Forests is limited compared to Neural Networks, and as a general trend across our experiments, Neural Networks usually reach better performance than Random Forests. In conclusion, we believe that using Neural Networks as the kernel predictor offers clear advantages over the other machine learning models considered.
We wish to further discuss another error evaluation of the rotation-equivariance property that we did not report in the case studies. Consider evaluating the rotation error of the model itself; the error $E_M$ is defined as:
\begin{equation}
E_M=\frac{1}{N}\sum_{i=1}^{N} \left[M^{\theta}(R_i(X_0))-R_i(M^{\theta}(X_0))\right]^2
\label{EM}
\end{equation}
The evaluation of this error is essentially trivial, since Theorem \ref{thm:2} already provides a rigorous proof of the rotation-equivariance property of RotEqNet. We applied this evaluation in the first two case studies, and the estimated error is around $10^{-16}$ in all of these cases.
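For completeness, this check can be written in a few lines; the sketch below assumes that \texttt{model} maps a tensor to a tensor and that \texttt{rotate} applies a rotation matrix to a tensor (e.g., the helper sketched earlier).
\begin{verbatim}
import numpy as np

def evaluate_EM(model, X0, rotations, rotate):
    # E_M: mean squared discrepancy between predicting on a rotated input
    # and rotating the prediction of the original input.
    errs = [np.mean((model(rotate(X0, R)) - rotate(model(X0), R)) ** 2)
            for R in rotations]
    return float(np.mean(errs))
\end{verbatim}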
For future work, there are three directions: a better definition of the standard position, application to other groups, and generalization to non-symmetric systems. First, with the current definition, the rotation invariant of odd-order tensors does not reach a performance equivalent to that of even-order tensors; revising the definition of the standard position for odd-order tensors would be worthwhile. Second, besides rotation symmetries, there are physical systems with other group-equivariance properties, such as scaling and translation. This work provides a method for solving problems with other groups, but the detailed design of the algorithm will differ from case to case. Third, the current work only deals with symmetric systems. For a general, non-symmetric system, there are ways to apply RotEqNet; a useful trick, for example, is to work with $PP^T$, where $P$ is a matrix. This provides an intuition for extending the current work to non-symmetric physical systems.
\section*{Acknowledgments}
G. Lin would like to acknowledge the support from National Science Foundation (DMS-1555072 and DMS-1736364).
\section{Appendix}
\renewcommand{\theequation}{A-\arabic{equation}}
\label{appendix:tensor}
\subsection{Lemma 2.1}
\begin{proof}
We will use column vector convention to represent vectors in $V$.
Let $v_1$ and $v_2$ be vectors in $V$. Then
\begin{equation}
M(v_1\otimes v_2) = M(v_1)\times M(v_2)^t
\end{equation}
Then,
\begin{align}
M(R(v_1\otimes v_2)) &= M(R(v_1) \otimes R(v_2))\\
& = M(R(v_1)) \times M(R(v_2))^t \\
& = M(R)\times M(v_1)\times M(v_2)^t\times M(R)^t \\
& = M(R)\times M(v_1\otimes v_2) \times M(R)^t
\end{align}
Therefore,
\begin{equation}
M(R(T)) = M(R)\times M(T) \times M(R)^t
\end{equation}
\end{proof}
\subsection{Lemma 2.2}
\begin{proof}
We will use column vector convention to represent vectors in $V$.
Let $v_1$ be vector in $V$.
Then
\begin{equation}
M(R(v_1)) = M(R)\times M(v_1)
\end{equation}
Therefore,
\begin{equation}
M(R(T)) = M(R)\times M(T)
\end{equation}
\end{proof}
\subsection{Lemma 2.3}
\begin{proof}
Since both $C(a, b)$ and $g$ are linear, we may assume that $T$ is of
the form $v_{i1}\otimes\cdots\otimes v_{in}$.
\begin{align}
C(a,b)(g(T)) & = C(a,b)(g(v_{i1})\otimes\cdots\otimes g(v_{in})) \\
& = \langle g(v_{ia}), g(v_{ib})\rangle g(v_{i1})\otimes \cdots
\check{g(v_{ia})}\cdots\check{g(v_{ib})}\cdots\otimes g(v_{in})
\end{align}
Since $g$ is a rotation, it preserves the inner product i.e.
\begin{equation}
\langle g(v_{ia}), g(v_{ib})\rangle = \langle v_{ia}, v_{ib} \rangle
\end{equation}
So
\begin{align}
C(a,b)(g(T)) & = C(a,b)(g(v_{i1})\otimes\cdots\otimes g(v_{in})) \\
& = \langle v_{ia}, v_{ib}\rangle g(v_{i1})\otimes \cdots
\check{g(v_{ia})}\cdots\check{g(v_{ib})}\cdots\otimes g(v_{in}) \\
& = g(C(a, b)(T))
\end{align}
\end{proof}
\subsection{Proof of Theorem 1}
\begin{proof}
\label{proof:thm2}
Since the position standardization algorithm defines the standard position differently for even and odd orders, we prove the even and odd cases separately.
Suppose $T$ has even order.
Let $\mathcal{C}$ be the sequence of contractions along the first two axes such that $\mathcal{C}(T) = T^2$, where $T^2$ is a second-order tensor as described in the algorithm.
Given an arbitrary even-order tensor $T=T^n$, we perform these contractions over the first two indices to obtain a second-order tensor $T^2$:
\begin{equation}
T^2 = \mathcal{C}(T^n)
\end{equation}
For $T^2$, using Lemma \ref{lemma2.1}, there exists a
rotation $R$ such that:
\begin{equation}
T^2=R(T^{2}_s),
\label{T_2}
\end{equation}
where $R(T^{2}_s) = R T^{2}_s R^t$. $T^2$ is diagonalizable because it is symmetric. Since $R$ is represented by an orthogonal matrix, $R^t = R^{-1}$.
By Lemma \ref{lemma2.3}, rotation commutes with contraction. Therefore, by the definition of $S$, the standard position $S(T)$ is defined as
\begin{equation}
S(T)=R^{-1}(T)
\end{equation}
Consider a rotation operation $P$ in its matrix form. When we act $P$ on $T$ we obtain a new tensor $P(T)$. For this new tensor, applying contraction we could have:
\begin{equation}
P(T^2)=\mathcal{C}(P(T^n))
\end{equation}
For $P(T^2)$, by Eqn. \ref{T_2} and Lemma \ref{lemma2.1}, \begin{equation}
P(T^2)=P(\mathcal{C}(T^n))=(P\times R)(T^2_{s})
\end{equation}
For its standard position $S(P(T))$, the algorithm uses the rotation $P\times R$ obtained above, so we have:
\begin{equation}
S(P(T))=(P\times R)^{-1}(P(T))=(R^{-1}\times P^{-1}\times P)(T)=R^{-1}(T)=S(T)
\end{equation}
To simplify, for a rotation operation $P$ acting on an even high order tensor $T$,
\begin{equation}
S(P(T))=S(T).
\end{equation}
This satisfies the definition of a rotation invariant. Therefore, for even orders, the standard position $S(T)$ is a rotation invariant.
Suppose $T$ has odd order.
Let $\mathcal{C}$ be the sequence of contraction
along the first two axes such that $C(T) = T^3$, where $T^3$ is
a third-order tensor as described in the algorithm.
\begin{equation}
T^3=\mathcal{C}(T^n)
\end{equation}
Let $V_1, V_2, V_3$ be the vectors obtained by contracting $T^3$ over different pairs of axes, \emph{i.e.},
\begin{equation}
V_1=C(2,3)(T^3), \qquad V_2=C(1,3)(T^3), \qquad V_3=C(1,2)(T^3).
\end{equation}
Based on $S$, we have
\begin{equation}
[V_1\;\;V_2\;\;V_3]=R_1\times U_1
\label{thm2:equ1}
\end{equation}
In this case,
\begin{equation}
S(T^n)=R_1^{-1}(T^n)
\label{thm2:equ5}
\end{equation}
Consider any rotation operation $P$ acting on $T^n$. We have,
\begin{equation}
P(T^3)=P(\mathcal{C}(T^n))
\end{equation}
Using QR-factorization,
\begin{equation}
[P\times V_1\;\;P\times V_2\;\;P\times V_3]=R_2\times U_2
\label{thm2:equ2}
\end{equation}
The standard position of $P(T^n)$ will be defined as:
\begin{equation}
S(P(T^n))=R_2^{-1}(P(T^n))
\label{thm2:equ6}
\end{equation}
Using Remark \ref{lemma2.2}, we could obtain
\begin{equation}
C(2,3)(P(T^3))=P\times C(2,3)(T^3)=P\times V_1
\end{equation}
Considering $V_2$ and $V_3$, for the same reason, we could know that
\begin{equation}
[P\times V_1\;\;P\times V_2\;\;P\times V_3]=P\times [V_1\;\;V_2\;\;V_3]
\label{thm2:equ3}
\end{equation}
By reorganizing \ref{thm2:equ1}, \ref{thm2:equ2}, and \ref{thm2:equ3},
\begin{equation}
[V_1\;\;V_2\;\;V_3] = R_1\times U_1=P^{-1}\times R_2\times U_2
\end{equation}
Since QR-factorization is unique \citep{golub2012matrix}, we should have that $U_1=U_2$. Therefore,
\begin{equation}
R_2 = P\times R_1
\label{thm2:equ4}
\end{equation}
Plugging \ref{thm2:equ4} into \ref{thm2:equ6} and comparing with \ref{thm2:equ5}, we have:
\begin{equation}
S(P(T^n))=R_2^{-1}(P(T^n))=(R_1^{-1}\times P^{-1} \times P)(T^n) = R_1^{-1}(T^n)=S(T^n)
\end{equation}
Here, we have shown that for any rotation operation $P$ acting on $T^n$ ($n$ odd), the position standardization algorithm $S$ always gives:
\begin{equation}
S(P(T^n))=S(T^n)
\end{equation}
This satisfies the definition of a rotation invariant. Therefore, for odd orders, the standard position $S(T)$ is a rotation invariant.
Combining the even and odd cases, we have shown that $S$ is a rotation invariant.
\end{proof}
\section{Introduction and results}
This note continues the theme of function theory on symplectic manifolds (albeit only in dimension two) and its relations to the theory of quasi-states, as initiated and developed, for example, in \cite{Buhovski_conv_rate_poiss_br}, \cite{Cardin_Viterbo_comm_hamiltonians}, \cite{EP_C_zero_rigidity_of_poiss_br}, \cite{EP_qs_sympl}, \cite{EPZ_qm_Poisson_br}, \cite{Zapolsky_qs_pbr_surf}, \cite{EPR_Poisson_br_qs_sympl_integr}.
Consider the following definition.
\begin{defin}
Let $M$ be a manifold of dimension $n$ and let $\Omega$ be a volume form on $M$. For $F_1,\dots,F_n \in C^\infty(M)$ define the bracket $\{F_1,\dots,F_n\} \in C^\infty(M)$ by the relation
$$\{F_1,\dots,F_n\}\Omega = dF_1\wedge \dots \wedge dF_n\,.$$
We say that the $F_i$ commute if the bracket vanishes.
\end{defin}
\begin{rem}In case $n=2$ the bracket is the same as the Poisson bracket with respect to the area form $\Omega$ (which is symplectic). Commutativity coincides with the linear dependence everywhere of the differentials $dF_i$. Although we are mainly interested in the Poisson bracket in dimension two, it makes sense to introduce this more general definition because the same method applies in order to obtain a statement which holds for the bracket on higher-dimensional manifolds as well.
\end{rem}
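Concretely, for $n=2$ and local coordinates $(x,y)$ in which $\Omega = dx\wedge dy$, one computes
$$dF\wedge dG = \Big(\frac{\partial F}{\partial x}\frac{\partial G}{\partial y}-\frac{\partial F}{\partial y}\frac{\partial G}{\partial x}\Big)\,dx\wedge dy\,,$$
which recovers the usual coordinate formula for the Poisson bracket associated with the area form $\Omega$.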
We use throughout the uniform, or $C^0$, norm, defined for a function $F\fc X \to \R$, where $X$ is a set, as $\|F\|:=\sup_{x \in X} |F(x)|$. For a compactly supported continuous function $F\fc M \to \R$, where $M$ is an $n$-dimensional manifold with a volume form $\Omega$ we also define the $L^1$ norm as $\|F\|_{L^1}:=\int_M |F|\Omega$. Note that for $F_1,\dots,F_n \in C^\infty(M)$ we have
$$\|\{F_1,\dots,F_n\}\|_{L^1} = \int_M |dF_1\wedge\dots\wedge dF_n|\,.$$
The main result is
\begin{thm}\label{thm_main_result}Let $M$ be a closed $n$-dimensional manifold with a volume form $\Omega$. Let $\ve \geq 0$. If $F_1,\dots,F_n \in C^\infty(M)$ satisfy
$$\|\{F_1,\dots,F_n\}\|_{L^1} \leq 2\ve\,,$$
then there are $F_1',\dots,F_n' \in C^\infty(M)$ with $\|F_i-F_i'\| \leq \ve^{1/n}$ and $\{F_1',\dots,F_n'\} \equiv 0$.
\end{thm}
\begin{rem}Note the constant $1$ before $\ve^{1/n}$. For a discussion of its sharpness see section \ref{section_disc_open_q}.
\end{rem}
Loosely rephrased, this theorem means that if $n$ smooth functions are almost commuting in the $L^1$ sense, then they can be approximated in the uniform norm by smooth functions which commute.
Let us point out some immediate consequences of this result. First, recall Cardin and Viterbo's definition of Poisson commuting continuous functions on a symplectic manifold, see \cite{Cardin_Viterbo_comm_hamiltonians}:
\begin{defin}Let $(M,\omega)$ be a symplectic manifold. Two continuous functions $F,G$ on $M$ are said to Poisson commute if there are $F_k,G_k \in C^\infty(M)$, $k\in \N$, such that $F_k \to F$, $G_k \to G$ and $\{F_k,G_k\} \to 0$ as $k \to \infty$, all in the uniform norm.
\end{defin}
We have
\begin{coroll}\label{coroll_Poisson_comm_cont_fcns}Let $(M,\omega)$ be a closed surface with an area form. Then two continuous functions $F,G \fc M \to \R$ Poisson commute if and only if there are $F_k,G_k \in C^\infty(M)$, $k \in \N$, such that $F_k\to F$, $G_k \to G$, as $k \to \infty$, in the uniform norm, and $\{F_k,G_k\} \equiv 0$ for all $k$.
\end{coroll}
That is, two continuous functions on a closed two-dimensional symplectic manifold Poisson commute if and only if they can be approximated, in the uniform norm, by Poisson commuting smooth functions.
To state the next corollary, we need to recall the notion of a quasi-state, due to Aarnes, \cite{Aarnes_quasi-states}. The reader is also referred to \cite{EPZ_qm_Poisson_br}, \cite{Zapolsky_qs_pbr_surf}, \cite{EPR_Poisson_br_qs_sympl_integr} for a connection with function theory on symplectic manifolds.
\begin{defin}
If $Z$ is a compact (Hausdorff) space, let $C(Z)$ denote the Banach algebra of all real-valued continuous functions on $Z$. Denote by $C(F)$ the closed subalgebra of $C(Z)$ generated by $F$, that is $C(F)=\{\phi\circ F\,|\,\phi\in C(\im F)\}$. A functional $\eta \fc C(Z) \to \R$ is called a quasi-state if it satisfies
\begin{enumerate}
\item $\eta(1) = 1$;
\item $\eta(F) \geq 0$ for $F \geq 0$;
\item for each $F \in C(Z)$ the restriction $\eta|_{C(F)}$ is linear.
\end{enumerate}
\end{defin}
In \cite{EP_qs_sympl}, Entov and Polterovich show that if $(M,\omega)$ is a closed surface with an area form, then a quasi-state on $M$ is linear on Poisson commutative subspaces of $C^\infty(M)$. We combine their result with corollary \ref{coroll_Poisson_comm_cont_fcns} to obtain
\begin{coroll}\label{coroll_qs_linear_comm_cont_fcns}Let $(M,\omega)$ be a closed surface with an area form. Then a quasi-state on $M$ is linear on Poisson commuting subspaces of $C(M)$.
\end{coroll}
Theorem \ref{thm_main_result} will be proved using the following lemma, which is of independent interest. First, for a map $\phi \fc \R^n \to \R^n$ define the $i$-th displacement function $\Delta_i\phi \fc \R^n \to [0,\infty)$, where $i = 1,\dots,n$, by $\Delta_i\phi(x) = |p_i(x) - p_i(\phi(x))|$, $p_i \fc \R^n \to \R$ being the projection on the $i$-th coordinate.
\begin{lemma}\label{lemma_geom_meas_thry}Let $K \subset \R^n$ be a compact set of measure $\leq \ve$. Then there is a smooth map $\phi \fc \R^n \to \R^n$ such that the displacement functions satisfy $\|\Delta_i\phi\| \leq \ve^{1/n}$, $i = 1,\dots,n$, and $\phi(K)$ has measure zero.
\end{lemma}
\begin{acknow}I would like to thank Barney Bramham for useful discussions, and Marco Mazzucchelli for listening to the preliminary version of the results, kindly proofreading the manuscript, and for useful comments.
\end{acknow}
\section{Proofs}
We begin by proving theorem \ref{thm_main_result}, assuming lemma \ref{lemma_geom_meas_thry}.
\begin{prf}[of theorem \ref{thm_main_result}]
Consider the evaluation map $\alpha \fc M \to \R^n$, $\alpha(x) = (F_1(x),\dots,F_n(x))$. Define $n_\alpha \fc \R^n \to \N \cup \{\infty\}$ by $n_\alpha(z) = \#\alpha^{-1}(z)$. The area formula (see for example \cite[theorem 3.2.3]{Federer_geom_meas}) states that $n_\alpha$ is almost everywhere real-valued and moreover
$$\int_{\R^n}n_\alpha\Omega_0 = \int_M \alpha^*\Omega_0\,.$$
Here $\Omega_0 = dx_1\dots dx_n$ is the standard density\footnote{A density on an $n$-dimensional manifold $M$ is a section of the bundle $\Lambda^nT^*M\otimes o(M)$, where $o(M)$ is the orientation line bundle of $M$.} on $\R^n$ and $\alpha^*\Omega_0$ is the pull-back density on $M$. Now $\alpha^*\Omega_0 = |dF_1\wedge \dots \wedge dF_n|$ and so
$$\int_{\R^n}n_\alpha\Omega_0 = \int_M |dF_1\wedge \dots \wedge dF_n| = \|\{F_1,\dots,F_n\}\|_{L^1}\,.$$
Denote $K = \im \alpha$. It is a compact subset of $\R^n$. Since $M$ is closed and $\R^n$ is non-compact, the degree of $\alpha$ is zero, hence zero modulo $2$, which means that $n_\alpha \geq 2$ almost everywhere on $K$. Consequently we obtain
$$2|K| = 2\int_{\im \alpha}\Omega_0 \leq \int_{\R^n} n_\alpha\Omega_0 = \|\{F_1,\dots,F_n\}\|_{L^1} \leq 2\ve\,,$$
where $|\cdot|$ is the Lebesgue measure. This shows that $|K| \leq \ve$. Lemma \ref{lemma_geom_meas_thry} yields a smooth map $\phi \fc \R^n \to \R^n$ with $\|\Delta_i\phi\| \leq \ve^{1/n}$ for all $i$ and $|\phi(K)|=0$. Define $\alpha' = \phi \circ \alpha \fc M \to \R^n$ and $F_i' = p_i \circ \alpha' \fc M \to \R$. Since $\|\Delta_i\phi\| \leq \ve^{1/n}$ for all $i$, we see that
\begin{align*}
\|F_i-F_i'\| &= \sup_M|F_i-F_i'|\\
&= \sup_M|p_i \circ \alpha - p_i \circ \phi \circ \alpha|\\
&= \sup_{\im \alpha}|p_i - p_i \circ\phi|\\
&\leq \sup_{\R^n}|p_i - p_i \circ\phi|\\
&= \|\Delta_i\phi\| \leq \ve^{1/n}\,.
\end{align*}
Moreover, since $\im \alpha' = \phi(K)$ has measure zero, the $dF_i'$ are everywhere linearly dependent, and so $\{F_1',\dots,F_n'\} \equiv 0$, as required. \qed
\end{prf}
We now prove corollary \ref{coroll_Poisson_comm_cont_fcns}.
\begin{prf}
The ``if'' part being clear, let us show the ``only if'' part. Without loss of generality assume $\int_M \omega = 1$. Suppose $F,G \in C(M)$ Poisson commute, so that there are $F_k,G_k \in C^\infty(M)$ for $k \in \N$ with $F_k \to F$, $G_k \to G$, $\{F_k,G_k\}\to 0$ in the uniform norm as $k \to \infty$. Denote $\ve_k = \frac 1 2\|\{F_k,G_k\}\|$. Then
$$\|\{F_k,G_k\}\|_{L^1} = \int_M|\{F_k,G_k\}|\omega \leq 2\ve_k\,.$$
Theorem \ref{thm_main_result} provides smooth functions $F_k',G_k'$ with $\|F_k-F_k'\|, \|G_k-G_k'\| \leq \sqrt{\ve_k}$ and $\{F_k',G_k'\} \equiv 0$. Now as $k \to \infty$,
$$\|F-F_k'\| \leq \|F-F_k\| + \|F_k-F_k'\| \leq \|F-F_k\| + \sqrt{\ve_k} \to 0\,,$$
and similarly for the $G_k'$. Thus $F_k' \to F,G_k' \to G$ as $k \to \infty$ in the uniform norm, and $\{F_k',G_k'\}\equiv 0$ for all $k$, as claimed. \qed
\end{prf}
For the proof of corollary \ref{coroll_qs_linear_comm_cont_fcns} recall that a quasi-state $\eta$ is Lipschitz with respect to the uniform norm, that is $|\eta(F) - \eta(G)| \leq \|F-G\|$ for continuous $F,G$, see \cite{Aarnes_quasi-states}.
\begin{prf}[of corollary \ref{coroll_qs_linear_comm_cont_fcns}]Denote the quasi-state by $\eta$. A quasi-state being homogeneous by definition, it suffices to show its additivity on Poisson commuting continuous functions. Thus let $F,G \in C(M)$ Poisson commute. Corollary \ref{coroll_Poisson_comm_cont_fcns} says there are $F_k,G_k \in C^\infty(M)$ such that $\{F_k,G_k\} \equiv 0$ for all $k$ and $F_k \to F, G_k \to G$ as $k \to \infty$ in the uniform norm. We have
$$|\eta(F+G) - \eta(F) - \eta(G)| = \lim_{k \to \infty}|\eta(F_k+G_k) - \eta(F_k) - \eta(G_k)| = 0\,,$$
where the first equality is due to the fact that $\eta$ is Lipschitz, while the second follows from the aforementioned result of Entov and Polterovich that a quasi-state on $M$ is linear on Poisson commuting subspaces of $C^\infty(M)$. \qed
\end{prf}
Introduce some notation. Let $\|\cdot \|$ be the Euclidean norm on $\R^n$. For $p \in \R^n$ and $\delta > 0$ let $B(p,\delta) \subset \R^n$ denote the open Euclidean ball of radius $\delta$ centered at $p$. For $\nu \in \Z^n$ we denote $C_\nu = \prod_{i=1}^n[\nu_i,\nu_i+1] \subset \R^n$, and call any such set an integer cube; also define $m_\nu = (\nu_1+\frac 1 2, \dots, \nu_n + \frac 1 2)$, which is the center of $C_\nu$.
For the proof of lemma \ref{lemma_geom_meas_thry} we need the following technical result.
\begin{lemma}\label{lemma_technical}For $\ve \in (0,\frac 1 6]$ there is a smooth map $\psi \fc \R^n \to \R^n$ which sends every integer cube to itself, is the identity on $\bigcup_{\nu \in \Z^n}B(m_\nu,\ve)$, and for every $\nu \in \Z^n$ maps $C_\nu - B(m_\nu, 2\ve)$ onto $\partial C_\nu$.
\end{lemma}
\begin{prf}[of lemma \ref{lemma_geom_meas_thry} assuming lemma \ref{lemma_technical}]
If $K$ has measure zero, the identity map does the job. Otherwise let $\gamma = |K|^{-1/n}$ and let $m_\gamma \fc \R^n \to \R^n$ be the dilation by $\gamma$, $m_\gamma(x) = \gamma x$. We have $|m_\gamma(K)| = 1$. Suppose we proved the claim of the lemma for sets of measure $1$, and let $\phi'$ be a map corresponding to $m_\gamma(K)$. Then $\phi=m_\gamma^{-1}\phi'm_\gamma$ satisfies the requirements of the lemma for $K$. Hence there is no loss of generality in assuming $|K|=1$.
If $K = C_\nu$ is an integer cube, we let $\Phi$ denote the time-$1$ map of the flow of the smooth vector field $X$ defined by $X(x) = 2\sqrt n \sigma(\|x-\nu\|)(\nu-x)/\|x-\nu\|$, where $\sigma \fc [0,\infty) \to [0,1]$ is a smooth function such that $\sigma|_{[0,\frac 1 {10}]} =0$, $\sigma|_{[\sqrt n + 2,\infty]} =0$, $\sigma|_{[\frac 1 9, \sqrt n + 1]}=1$. Then $\Phi(K) \subset C_\nu$ and $\Phi(K)$ avoids $\bigcup_{\nu' \in \Z^n}B(m_{\nu'},\frac 1 3)$.
Otherwise let $\Phi$ be the smooth map defined as follows. Denote by $\cC$ the collection of integer cubes meeting $K$, and for $C \in \cC$ let $\nu_C \in \Z^n$ be the unique integer $n$-tuple such that $C = C_{\nu_C}$. Since $K$ has measure $1$ and is not an integer cube, for each $C\in\cC$ there is $p_C \in \Int C$ and $\ve_C > 0$ such that $\ol{B(p_C,2\ve_C)} \subset \Int C$ and $B(p_C,2\ve_C) \cap K = \varnothing$. Let $\ve = \min(\frac 1 6, \min_{C \in \cC}\ve_C)$. Let $Z_C = \bigcup_{t\in[0,1]}B(tp_C+(1-t)m_{\nu_C},2\ve)$. Define the constant vector field $X_C$ on $Z_C$ via $X_C = m_{\nu_C} - p_C$ and extend it to a smooth field, still denoted by $X_C$, on $\R^n$ with compact support in $\Int C$. Let $X = \sum_{C \in \cC}X_C$ and let $\Phi$ be the time-$1$ map of the flow of $X$. Then $\Phi$ maps $B(p_C,2\ve)$ isometrically onto $B(m_{\nu_C}, 2\ve)$, and $\Phi(K) \cap \bigcup_{\nu \in \Z^n}B(m_\nu,2\ve) = \varnothing$.
Let $\psi$ be a map guaranteed by lemma \ref{lemma_technical} for $\ve$ defined as above, and put $\phi = \psi \circ \Phi$. It is easy to see that $\phi$ satisfies the requirements of the lemma.\qed
\end{prf}
Now it only remains to prove lemma \ref{lemma_technical}.
\begin{prf}
The required map is constructed in stages.
Start with a smooth map $a \fc [0,1]\to[0,1]$ which coincides with the identity map near $\frac 1 2$ and whose derivatives all vanish at $0$ and $1$. For example, $a$ can be defined by $a(t) = (t\rho(t)-1)\rho(1-t) + 1$ for $t \in (0,1)$, $a(0) = 0$, $a(1)=1$, where
$$\rho(t) = \frac{e^{-\Lambda/t}}{e^{-\Lambda/t}+e^{-\Lambda/(1-t)}}\,,$$
$\Lambda > 0$ being a sufficiently large number.
Next, define $b_n \fc [0,1]^n \to [0,1]^n$ by $b_n(x_1,\dots,x_n)=(a(x_1),\dots,a(x_n))$.
Now let $c \fc \partial [-\frac 1 2, \frac 1 2]^n \to \partial [-\frac 1 2, \frac 1 2]^n$ be defined as follows. If $F \subset \partial [-\frac 1 2, \frac 1 2]^n$ is an $(n-1)$-dimensional face, let $i \fc F \to [0,1]^{n-1}$ be an isometry, and let $c|_F:=i^{-1}\circ b_{n-1} \circ i$.
Let $p \fc \R^n - \{0\} \to S^{n-1}$ be the radial projection. Define $f \fc S^{n-1} \to \partial [-\frac 1 2, \frac 1 2]^n$ by $f = c \circ \big(p|_{\partial [-\frac 1 2, \frac 1 2]^n}\big)^{-1}$. Then $f$ is a smooth one-to-one and onto map from $S^{n-1}$ to $\partial [-\frac 1 2, \frac 1 2]^n$. It is a diffeomorphism when restricted to the preimage of any $(n-1)$-dimensional open face of $\partial[-\frac 1 2, \frac 1 2]^n$, and its critical values fill the complement of the union of the open faces.
Let us construct $\psi$. Let $\lambda \fc \R \to [0,1]$ be a smooth function such that $\lambda(t) = 0$ for $t \leq \ve$, $\lambda(t) = 1$ for $t \geq 2\ve$. For $x \in C_\nu-m_\nu$ put
$$\psi(x) := \big(1-\lambda\big(\|x-m_\nu\|\big)\big)x+\lambda\big(\|x-m_\nu\|\big)\Big(m_\nu + f\Big(\frac{x-m_\nu}{\|x-m_\nu\|}\Big)\Big)\,,$$
and $\psi(m_\nu):=m_\nu$. It is an exercise to check that $\psi$ is a well-defined smooth map. Since a cube is convex, $\psi$ maps every integer cube to itself. It also follows from its definition that it is the identity on $\bigcup_{\nu \in \Z^n}B(m_\nu,\ve)$ and maps the complement of $B(m_\nu,2\ve)$ in $C_\nu$ onto $\partial C_\nu$, as required. \qed
\end{prf}
\section{Discussion and open questions}\label{section_disc_open_q}
The result stated in theorem \ref{thm_main_result} can be viewed as complementary to the so-called rigidity of Poisson brackets as shown in \cite{Buhovski_conv_rate_poiss_br}, \cite{EP_C_zero_rigidity_of_poiss_br}. Rigidity means that the functional $C^\infty \times C^\infty \to [0,\infty)$, $(F,G) \mapsto \|\{F,G\}\|$ is lower semi-continuous in the $C^0$ topology, or more informally, that it is impossible to significantly reduce the $C^0$ norm of the Poisson bracket of two smooth functions by an arbitrarily small $C^0$ perturbation. Theorem \ref{thm_main_result} means that if two functions have small Poisson bracket, the two functions can be perturbed in the $C^0$ norm so that the new functions have vanishing bracket. In view of this it is natural to ask
\begin{question}Is an analog of theorem \ref{thm_main_result} true on higher-dimensional symplectic manifolds? More precisely, given a closed symplectic manifold $(M,\omega)$ is there a constant $C>0$ such that for functions $F,G \in C^\infty(M)$ with $\|\{F,G\}\| = 1$ there are functions $F',G' \in C^\infty(M)$ such that $\|F-F'\|,\|G-G'\| \leq C$ and $\{F',G'\}\equiv 0$? If not, what kind of obstruction prevents this from happening?
\end{question}
The constant $1$ appearing as the factor before $\ve^{1/n}$ in theorem \ref{thm_main_result} is conjecturally not sharp.
\begin{question}What is the sharp constant in theorem \ref{thm_main_result}?
\end{question}
We believe that it is $\frac 1 2$. It cannot be less than $\frac 1 2$ because, as it is fairly easy to show, for any closed connected manifold $M$ of dimension $n$ there is a map $\alpha \fc M \to \R^n$ having its image equal to $[0,1]^n$ and with function $n_\alpha$ equal almost everywhere to $2$ on $\im \alpha$. The intermediate value theorem implies that any continuous map $\alpha' \fc M \to \R^n$ satisfying $\|\Delta_i(\alpha - \alpha')\| < \frac 1 2$ for all $i$ has image of positive measure. In terms of the bracket it means that there are $n$ smooth functions on $M$ with the $L^1$-norm of the bracket equal to $2 = 2\cdot 1$ such that if they are perturbed in the uniform norm by less than $\frac 1 2$, the bracket of the new functions is not identically zero.
Lemma \ref{lemma_geom_meas_thry} reminds of the classical isoperimetric inequality, in that it relates a volume measurement, that is the measure of the set, to a linear measurement, that is the maximal displacement of a smooth map contracting it to a set of measure zero. If for a compact $K \subset \R^n$ we denote $\text{thickness}\,(K) = \inf \{\max_{i=1,\dots,n}\|\Delta_i\phi\|\,|\,\phi \fc \R^n\to\R^n \text{ smooth with } |\phi(K)|=0\}$, then lemma \ref{lemma_geom_meas_thry} states that
$$\big(\text{thickness}\,(K)\big)^n \leq |K|\,.$$
\titlespacing\section{0pt}{4pt plus 2pt minus 2pt}{1pt plus 0.5pt minus 0.5pt}
\titlespacing\subsection{0pt}{3pt plus 2pt minus 2pt}{1pt plus 0.5pt minus 0.5pt}
\titlespacing\subsubsection{0pt}{4pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\setlength{\abovecaptionskip}{1pt}
\setlength{\belowcaptionskip}{1pt}
\setlength{\dbltextfloatsep}{3pt plus 1pt minus 1pt}
\setlength{\textfloatsep}{3pt plus 1pt minus 1pt}
\setlength{\intextsep}{3pt plus 1pt minus 1pt}
\setlength{\belowdisplayskip}{2pt} \setlength{\belowdisplayshortskip}{2pt}
\setlength{\abovedisplayskip}{2pt} \setlength{\abovedisplayshortskip}{2pt}
\setlength{\topsep}{4pt plus 2pt minus 2pt}
\setlength{\skip\footins}{2pt plus 1pt minus 1pt}
\newtheorem{problem}{Problem}
\newtheorem{proposition}{Proposition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{observation}{Observation}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{theorem}{Theorem}[section]
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\SetKwProg{Fn}{Function}{}{}
\SetKwComment{Comment}{$\triangleright$\ }{}
\newcommand{\changed}[1]{\textcolor{black}{#1}}
\def\ch#1#2{\sout{#1}\,\textcolor{blue}{#2}}
\newcommand{\JY}[1]{\textcolor{red}{JY: #1}}
\newcommand{\TODO}[1]{\textcolor{red}{#1}}
\newcommand{\textsc{PnP}\xspace}{\textsc{PnP}\xspace}
\newcommand{\textsc{ORC}\xspace}{\textsc{ORC}\xspace}
\newcommand{\textsc{MCTS}\xspace}{\textsc{MCTS}\xspace}
\newcommand{\textsc{DIPN}\xspace}{\textsc{DIPN}\xspace}
\newcommand{\textsc{GN}\xspace}{\textsc{GN}\xspace}
\newcommand{\textsc{DQN}\xspace}{\textsc{DQN}\xspace}
\newcommand{\textsc{UCT}\xspace}{\textsc{UCT}\xspace}
\defgc-\textsc{VPG}\xspace{gc-\textsc{VPG}\xspace}
\defgo-\textsc{PGN}\xspace{go-\textsc{PGN}\xspace}
\def\textsc{VFT}\xspace{\textsc{VFT}\xspace}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\def\r#1{\textcolor{red}{#1}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\font\titlefont=ptmb at 13pt
\title{\titlefont
Visual Foresight Tree for Object Retrieval from Clutter with Nonprehensile Rearrangement
}
\author{Baichuan Huang\quad Shuai D. Han\quad Jingjin Yu\quad Abdeslam Boularias
\thanks{B. Huang, S. D. Han, J. Yu, and A. Boularias
are with the Department of Computer Science,
Rutgers, the State University of New Jersey, Piscataway, NJ, USA.
Emails: {\tt\small \{baichuan.huang, shuai.han, jingjin.yu,
abdeslam.boularias\}@rutgers.edu}.
}%
}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
This paper considers the problem of retrieving an object from a set of
tightly packed objects by using a combination of robotic pushing and
grasping actions.
Object retrieval from confined spaces that contain clutter is an important
skill for robots in order to effectively operate in households and everyday
environments.
The proposed solution, Visual Foresight Tree (\textsc{VFT}\xspace), cleverly rearranges the clutter
surrounding the target object so that it can be grasped easily.
Rearrangement with nested nonprehensile actions is challenging as it
requires predicting complex object interactions in a combinatorially
large configuration space of multiple objects.
We first show that a deep neural network can be trained to accurately
predict the poses of the packed objects when the robot pushes one of
them. The predictive network provides visual foresight and is used in
a tree search as a state transition function in the space of scene images.
The tree search returns a sequence of consecutive push actions that
result in the best arrangement of the clutter for grasping the target
object.
Experiments in simulation and using a real robot and objects show that the
proposed approach outperforms model-free techniques as well as model-based
myopic methods both in terms of success rates and the number of executed actions,
on several challenging tasks.
A video introducing \textsc{VFT}\xspace, with robot experiments, is accessible at \href{https://youtu.be/7cL-hmgvyec}{\texttt{\textcolor{OrangeRed}{https://youtu.be/7cL-hmgvyec}}}. Full source code will also be made available upon publication of this manuscript.
\end{abstract}
\section{Introduction}\label{sec:intro}
In many application domains, robots are tasked with retrieving objects that are
surrounded by multiple tightly packed objects.
As a result, the objects to retrieve cannot be directly grasped due to the lack
of free space for inserting a gripper around them.
Therefore, the robot needs to re-arrange the scene to create sufficient clearance
around the target object before attempting to grasp it. Scene rearrangement can
be achieved through a sequence of small horizontal {\it nested} push actions that
can move multiple objects simultaneously. In this paper, we address the problem
of finding the minimum number of push actions that lead to a scene where the
target object can be grasped and retrieved.
To solve the object retrieval problem, the robot must imagine how the scene would
look after any given sequence of pushing actions, and select the shortest
sequence that leads to a state where the target object can be grasped.
The huge combinatorial search space makes this problem computationally challenging,
hence the need for efficient planning algorithms, as well as fast predictive models
that can return the predicted future states in a few milliseconds.
Moreover, objects in clutter typically have unknown mechanical properties such as mass
and friction coefficients. While it is possible to utilize off-the-shelf physics engines
to simulate contacts and collisions of rigid objects in clutter, simulation is highly sensitive to the accuracy of the provided mechanical parameters. To overcome the
problem of manually specifying these parameters, and to enable full autonomy of the
robot, most recent works on object manipulation utilize machine learning techniques
to train predictive models from data~\cite{Hafner2020Dream,DBLP:journals/corr/abs-1812-00568,8207585}. The predictive
models take as input the state of the robot's environment and a control action, and
predict the state after applying the control action.
\begin{figure}[t]
\centering
\begin{minipage}{.63\linewidth}
\subfloat[Hardware setup]{\includegraphics[width = \linewidth, trim = 0 0 30 0, clip]{figures/hardware-setup.png}\label{fig:intro-setup}}
\end{minipage}
\begin{minipage}{.35\linewidth}
\subfloat[First push]{\includegraphics[width = \linewidth, trim = 0 20 0 20, clip]{figures/intro-push-1.png}\label{fig:intro-push}}
\subfloat[Second push]{\includegraphics[width = \linewidth, trim = 0 20 0 20, clip]{figures/intro-push-2.png}\label{fig:intro-push-2}}
\subfloat[Third push]{\includegraphics[width = \linewidth, trim = 0 20 0 20, clip]{figures/intro-push-3.png}\label{fig:intro-push-3}}
\subfloat[Grasp]{\includegraphics[width = \linewidth, trim = 0 20 0 20, clip]{figures/intro-grasp.png}\label{fig:intro-grasp}}
\end{minipage}
\caption{\label{fig:intro}
(a) The hardware setup for object retrieval in a clutter includes
a Universal Robots UR-5e manipulator
with a Robotiq 2F-85 two-finger gripper,
and an Intel RealSense D435 RGB-D camera.
The objects are placed in a square workspace.
(b)(c)(d) Three push actions (shown with green arrows)
are used to create space
accessing the target (purple) object.
The push directions are toward top-left, top-right,
and bottom-right, respectively.
(e) The target object is successfully grasped and retrieved.
}
\end{figure}
In this work, we propose to employ {\it visual foresight trees} (VFT) to address the computational and modeling challenges related to the object retrieval problem.
The core component of the proposed solution is a deep neural network that predicts future images of the clutter that result from multiple pushing actions. A second neural network is used to evaluate the graspability of the target object in predicted future images.
A Monte Carlo tree search utilizes the two neural networks to obtain the shortest sequence of pushing actions that lead to an arrangement where the target can be grasped.
To the best of our knowledge, the proposed technique is the first model-based learning solution to the object retrieval problem. Extensive experiments on the real robot and objects shown in
Fig.~\ref{fig:intro} demonstrate that the proposed approach succeeds in retrieving target objects with manipulation sequences that are shorter than model-free reinforcement learning techniques and a limited-horizon planning technique.
\section{Related works}\label{sec:related}
\input{texs/03-related-works}
\input{texs/02-problem}
\input{texs/04-outline}
\section{Visual Foresight Tree}\label{sec:method}
\textsc{VFT}\xspace contains three main components: a grasp network (\textsc{GN}\xspace)
for estimating grasp rewards, a push prediction network \textsc{DIPN}\xspace for
predicting push outcomes, and a Monte-Carlo Tree Search (\textsc{MCTS}\xspace) module for
sequential decision-making.
\subsection{Grasp Network}
The Grasp Network (\textsc{GN}\xspace), adapted from \cite{huang2020dipn}, takes the image $s_t$ as input, and outputs a pixel-wise reward prediction $R(s_t) = [R(s_t, a^1),\dots,R(s_t,a^n)]$ for grasps $a^1,\dots,a^n$. The table $R(s_t)$ is a one-channel image with the same size as the input image $s_t$ ($224\times224$ in our experiments), and each value $R(s_t, a^i)$ represents the
expected reward of executing the corresponding grasp action.
To train \textsc{GN}\xspace, we set the reward to be $1$ for grasps where the robot successfully picks up a single object, and $0$ otherwise.
GN is the reward estimator for states in VFT (in Section~\ref{subsec:vft}).
A grasp action $a^\text{grasp} = (x, y, \theta)$ specifies the grasp location and the end-effector angle.
\textsc{GN}\xspace is trained with the orientation
of the end-effector kept fixed relative to the support surface, while the poses of the objects are varied randomly. Therefore, \textsc{GN}\xspace assumes that the grasps are aligned to the principal axis of the input image.
To compute
reward $R$ for grasps with $\theta \neq 0$,
the input image is rotated by $\theta$ before passing it to \textsc{GN}\xspace.
As a result, for each input image, \textsc{GN}\xspace generates $16$ different grasp reward tables $R$, one per discrete angle.
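For concreteness, the rotation trick can be sketched as follows; this is an illustrative Python sketch rather than our exact implementation, and \texttt{gn\_forward} is a hypothetical stand-in for a forward pass of the trained \textsc{GN}\xspace.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def grasp_reward_tables(image, gn_forward, num_angles=16):
    """One axis-aligned reward table per discrete end-effector angle.

    image:      (224, 224, C) observation of the scene.
    gn_forward: hypothetical callable mapping an axis-aligned image
                to a (224, 224) grasp reward table.
    """
    tables = {}
    for i in range(num_angles):
        theta = i * 360.0 / num_angles          # grasp angle in degrees
        # Rotate the scene so that this grasp angle becomes axis-aligned.
        rotated = rotate(image, angle=theta, reshape=False, order=1)
        tables[theta] = gn_forward(rotated)     # (224, 224) rewards
    return tables
\end{verbatim}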
The training process of the \textsc{GN}\xspace used in this work is different from that of previous works~\cite{zeng2018learning, huang2020dipn}.
The objective in previous works is to grasp all the objects; the goal of \textsc{ORC}\xspace
is to retrieve the target.
One straightforward adaption to this new objective is to only give reward when the grasp center is inside the target object, which is the approach that was followed in~\cite{xu2021efficient}.
However, we found that by providing reward for successfully grasping any object, we can achieve a higher sample efficiency. The proposed training approach is similar in spirit to Hindsight Experience Replay (HER)~\cite{NIPS2017_453fadbd}.
To balance between exploration and exploitation, grasp actions are randomly sampled from
$P(s, a^\text{grasp}) \propto b R(s, a^\text{grasp})^{b-1}$ where $b$ is set to $3/2$ in the experiments.
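A minimal sketch of this sampling rule is given below; \texttt{reward\_table} holds the predicted rewards $R(s, a^{\text{grasp}})$ over candidate grasp pixels, and the small clipping constant is only there to keep the normalisation well defined.
\begin{verbatim}
import numpy as np

def sample_training_grasp(reward_table, b=1.5, rng=np.random.default_rng()):
    """Sample a grasp pixel with probability proportional to b * R^(b-1)."""
    r = np.clip(reward_table.ravel(), 1e-8, None)
    weights = b * r ** (b - 1.0)
    probs = weights / weights.sum()
    flat_idx = rng.choice(r.size, p=probs)
    return np.unravel_index(flat_idx, reward_table.shape)  # (row, col)
\end{verbatim}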
After training, \textsc{GN}\xspace can be used for selecting grasping actions in new scenes.
Since the network returns
a reward $R$ for all possible grasps, and not only for
those on the target object, the first post-processing step consists of selecting a
small set of grasps that focus on the target object. This is achieved by computing
the overlap between the surface of the target object and the projected footprint
of the robotic hand, and keeping only grasps that maximize the overlap.
The remaining grasps are then ranked by the values predicted by the trained
network, and the highest-ranked grasp that does not incur a collision
is selected for execution.
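The post-processing can be sketched as below; the geometric helpers \texttt{gripper\_footprint} and \texttt{in\_collision} are hypothetical stand-ins for the footprint projection and collision check, and the brute-force loop is written for clarity rather than speed.
\begin{verbatim}
import numpy as np

def select_target_grasp(tables, target_mask, gripper_footprint, in_collision):
    """Filter grasps by target overlap, then pick the best collision-free one.

    tables:       dict mapping angle -> (H, W) grasp reward table.
    target_mask:  boolean (H, W) mask of the target object.
    """
    candidates = []
    for theta, table in tables.items():
        for (y, x), reward in np.ndenumerate(table):
            footprint = gripper_footprint(x, y, theta)   # boolean (H, W) mask
            overlap = np.logical_and(footprint, target_mask).sum()
            candidates.append((overlap, reward, (x, y, theta)))
    best_overlap = max(c[0] for c in candidates)
    focused = [c for c in candidates if c[0] == best_overlap]
    # Rank remaining grasps by predicted reward; skip colliding ones.
    for _, reward, grasp in sorted(focused, key=lambda c: -c[1]):
        if not in_collision(*grasp):
            return grasp, reward
    return None, 0.0
\end{verbatim}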
\subsection{Push Prediction Network}
\textsc{DIPN}\xspace~\cite{huang2020dipn} is a network that takes an RGB-D image, 2D masks
of objects, center positions of objects, and a vector of the starting and
end points of a push action. It outputs predicted translations and rotations
for each input object.
The predicted poses of objects are then used to create a synthetic image.
Effectively, \textsc{DIPN}\xspace imagines what happens to the clutter if the robot executes
a certain push.
The de-cluttering tasks considered in~\cite{huang2020dipn} required only
single-step predictions. The \textsc{ORC}\xspace challenge requires highly accurate predictions
for multiple consecutive pushes in the future. To adapt \textsc{DIPN}\xspace for \textsc{ORC}\xspace, we
augmented its architecture, replacing ResNet-18 with ResNet-10~\cite{he2016deep}
while increasing the dimension of outputs from $256$ to $512$ to predict
motions of more objects simultaneously. The number of decoder MLP layers is
also increased to six, with sizes $[768, 256, 64, 16, 3, 3]$. Other augmentations
are reported in Section~\ref{sec:experiments}. Finally, we trained the
network with $200,000$ random push actions applied on various objects.
This number is higher than the $1,500$ actions used in~\cite{huang2020dipn}
as we aim for the accuracy needed for long-horizon visual foresight. Given a
sequence of candidate push actions, \textsc{DIPN}\xspace is able to predict complex interactions
such as the ones shown in Fig.~\ref{fig:predictions}.
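As a sketch of how such long-horizon foresight can be produced, single-step predictions are simply chained; \texttt{dipn\_predict} is a hypothetical wrapper around the trained network that returns the synthetic image predicted after one push.
\begin{verbatim}
def rollout_pushes(image, pushes, dipn_predict):
    """Chain single-step DIPN predictions over a sequence of pushes."""
    trajectory = [image]
    for push in pushes:                    # push = (x0, y0, x1, y1)
        image = dipn_predict(image, push)  # imagined next observation
        trajectory.append(image)
    return trajectory
\end{verbatim}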
\begin{figure}[ht!]
\centering
\includegraphics[width = \linewidth]{figures/predictions.pdf}
\caption{\label{fig:predictions}
Example of $4$ consecutive pushes showing that
\textsc{DIPN}\xspace can accurately predict push outcomes over a long horizon.
We use purple arrows to illustrate push actions.
The first and second columns are the predictions
and ground truth (objects' positions after executing the pushes)
in simulation.
The third and fourth columns show results on a real system.
The last column is the side view of the push result.
Each row represents the push outcome with the previous row
as the input observation.
}
\end{figure}
\subsection{Visual Foresight Tree Search (\textsc{VFT}\xspace)}\label{subsec:vft}
We introduce \textsc{DIPN}\xspace for predicting single-step push outcome and \textsc{GN}\xspace for
generating/rating grasps as building blocks for a multi-step procedure capable
of long-horizon planning. A natural choice is Monte-Carlo Tree Search
(\textsc{MCTS}\xspace)~\cite{mcts2012}, which balances scalability and optimality.
In essence, \textsc{VFT}\xspace fuses \textsc{MCTS}\xspace and \textsc{DIPN}\xspace to generate an optimal multi-step
push prediction, as graded by \textsc{GN}\xspace.
A search node in \textsc{VFT}\xspace corresponds to an input scene or one imagined by \textsc{DIPN}\xspace.
\textsc{MCTS}\xspace prioritizes the most promising states when expanding the search tree;
in \textsc{VFT}\xspace, such states are the ones leading to a successful target retrieval
in the least number of pushes.
In a basic search iteration, \textsc{MCTS}\xspace has four essential steps:
selection, expansion, simulation, and back-propagation.
First, the {\em selection} stage samples a search node and a push action based on a selection function.
Then, the {\em expansion} stage creates a child node of the selected node.
After that, the reward value of the new child node is determined by
a {\em simulation} from the node to an end state.
Finally, the {\em back-propagation} stage updates the estimated Q-values of
the parent nodes.
For describing \textsc{MCTS}\xspace with visual foresight, let $N(n)$ be the number of visits
to a node $n$, and let $Q(n) = \{r_1, \dots, r_{N(n)}\}$ be the estimated Q-values recorded over these visits.
We use $N_{max}$ to denote the number of iterations that \textsc{MCTS}\xspace performs;
an alternative computational budget may also be used to stop the search~\cite{mcts2012}.
The high-level workflow of our algorithm is depicted in Alg.~\ref{alg:vft}, and illustrated in Fig.~\ref{fig:pipeline}.
We will describe one iteration (line~\ref{alg:iteration}-\ref{alg:bac-end})
of \textsc{MCTS}\xspace in \textsc{VFT}\xspace along with the pseudo code in the remainder of this section.
\begin{figure*}[ht!]
\centering
\includegraphics[width = \linewidth]{figures/mcts-tree-small.pdf}
\caption{\label{fig:tree}
An overview of a complete \textsc{VFT}\xspace search tree, built with $150$ iterations on a real scene.
The size of each image-node $n$ is proportional to its corresponding reward $R(n)$.
}
\end{figure*}
\newcommand\mycommfont[1]{\footnotesize\normalfont\textcolor{gray}{#1}}
\SetCommentSty{mycommfont}
\begin{algorithm}
\small
\DontPrintSemicolon
\SetKwFunction{FMain}{VFT}
\SetKwFunction{FMCTS}{MCTS}
\Fn{\FMain{$s_t$}}{
\While{\normalfont there is a target object in workspace}{
$R(s_t) \gets \textsc{GN}\xspace(s_t)$\;
\If{\normalfont $\max_{a^{\text{grasp}}} R(s_t, a^{\text{grasp}}) > R_{g}^*$}{Execute $\argmax_{a^{\text{grasp}}}R(s_t, a^{\text{grasp}})$\tcp*[f]{Grasp}}
\lElse{Execute \FMCTS{$s_t$}\tcp*[f]{Push}}
}
}
\vspace*{3pt}
\SetKwProg{Pn}{Function}{:}{}
\Pn{\FMCTS{$s_t$}}{
Create root node $n_0$ with state $s_t$ \;
$N(\cdot) \gets 0$, $Q(\cdot) \gets \varnothing$\tcp*{Default $N$, $Q$ for a search node}
\For{$i \gets 1, 2, \dots, N_{max}$}{
$n_c \gets n_0$ \;\label{alg:iteration}
\Comment{\bf Selection}
\While{$n_c$ \normalfont is not expandable}{\label{alg:selection}
$n_c \gets \pi_{\text{tree}}(n_c)$\label{alg:tree-policy}
\tcp*{Use (\ref{equation:uct}) to find a child node}
}
\Comment{\bf Expansion}
$a^{\text{push}} \gets$ sample from untried push actions in $n_c$ \label{alg:expansion1}\;
$n_c \gets \textsc{DIPN}\xspace(n_c, a^{\text{push}})$\label{alg:expansion2}\tcp*{Generate node by push prediction}
\Comment{\bf Simulation}
$r \gets 0$, $d \gets 1$, $s \gets n_c.\text{state}$\label{alg:sim-start}\tcp*{$s$ is the state of $n_c$}
\While{$s$ \normalfont is not a terminal state}{
$a^{\text{push}} \gets$ randomly select a push action in $s$\label{alg:simulation-roll} \;
$s \gets \textsc{DIPN}\xspace(s, a^{\text{push}})$ \label{alg:simulation-pred}\tcp*{Simulate to next state}
$R(s) \gets \textsc{GN}\xspace(s)$\;
$r \gets \max\{r, \gamma^{d} \max_{a^{\text{grasp}}} R(s, a^{\text{grasp}})\}$\; \label{alg:sim-gn}
$d \gets d + 1$
} \label{alg:sim-end}
\Comment{\bf Back-propagation}
\While{$n_c$ \normalfont is not root\label{alg:bac-start}}{
$N(n_c) \gets N(n_c) + 1$ \;
$R(n_c\text{.state}) \gets \textsc{GN}\xspace(n_c\text{.state})$\;
$r \gets \max\{r, \max_{a^{\text{grasp}}} R(n_c\text{.state}, a^{\text{grasp}})\}$ \; \label{alg:bac-gn}
$Q(n_c) \gets Q(n_c) \cup \{r\}$\tcp*{Record the reward}
$r \gets r \cdot \gamma$\;
$n_c \gets \text{parent of } n_c$
} \label{alg:bac-end}
}
$n_{\text{best}} \gets \argmax_{n_i \in \text{children of } n_0}(\textsc{UCT}\xspace(n_i, n_0))$ \label{alg:best-action} \;
\Return push action $ a^{\text{push}}$ that leads to $n_{\text{best}}$ from the root
}
\caption{\label{alg:vft}
Visual Foresight Tree Search}
\end{algorithm}
\textbf{Selection.}
The first step of \textsc{MCTS}\xspace is to select an {\em expandable} search node
(line~\ref{alg:selection}-\ref{alg:tree-policy}) using a tree policy
$\pi_\text{tree}$.
Here, {\em expandable} means the node has push actions that
have not yet been tried via selection-expansion; more details of the push action space
are given later, in the expansion step.
To balance between exploration and exploitation,
when the current node $n_c$ is already fully expanded,
$\pi_\text{tree}$ uses Upper Confidence Bounds for Trees (\textsc{UCT}\xspace)~\cite{mcts2012}
to rank its child node $n_i$. We customize \textsc{UCT}\xspace as
\begin{equation}\label{equation:uct}
\textsc{UCT}\xspace(n_i, n_c) = \frac{Q^{m}(n_i)}{\min\{N(n_i), m\}} +
C \sqrt{\frac{\ln{N(n_c)}}{N(n_i)}}.
\end{equation}
Here, $C$ is an exploration weight.
In the first term of (\ref{equation:uct}), unlike typical \textsc{UCT}\xspace, whose first term is the average over all recorded returns in $Q(n_i)$,
we keep only the most promising rollouts of $n_i$:
denoting by $Q^m(n_i)$ the sum of the top $\min\{N(n_i), m\}$ returns recorded for $n_i$, the first term is the average return over these rollouts.
In our implementation, $m = 3$ and $C = 2$.
We also use (\ref{equation:uct}) with parameters $m = 1$ and $C = 0$
to find the best node, and thus the best push action to execute, after the search is completed, as shown in line~\ref{alg:best-action}.
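The customised selection score can be sketched as follows; \texttt{child\_returns} plays the role of $Q(n_i)$, and the first term averages only the top $\min\{N(n_i), m\}$ recorded returns, as in (\ref{equation:uct}).
\begin{verbatim}
import math

def uct_score(child_returns, child_visits, parent_visits, m=3, c=2.0):
    """Customised UCT: top-m average return plus the exploration bonus."""
    k = min(child_visits, m)
    top_returns = sorted(child_returns, reverse=True)[:k]
    exploit = sum(top_returns) / k
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore
\end{verbatim}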
\begin{wrapfigure}[6]{r}{0.95in}
\vspace*{-5pt}
\includegraphics[width=0.95in]{figures/action-space-small.png}
\end{wrapfigure}
\textbf{Expansion.}
Given a selected node $n$, we use \textsc{DIPN}\xspace to generate a child node
by randomly choosing an untried push action $a^\text{push}$
(line~\ref{alg:expansion1}-\ref{alg:expansion2}).
The action $a^\text{push}$ is sampled uniformly at random from the
selected node's action space,
which contains two types of push actions, shown as blue and red
arrows in the right figure:
\begin{enumerate*}
\item
For each object, we apply principal component analysis to
compute its feature axis.
For example, for a rectangle object, the feature axis will be parallel to
its long side.
Four push actions are then sampled with directions perpendicular
or parallel to the feature axis,
pushing the object from the outside to its center.
\item
To build a more complete action space, eight additional actions are evenly
distributed on each object's contour, with push direction also
towards the object's center.
\end{enumerate*}
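The two families of push candidates above can be sketched as follows for a single object; \texttt{mask\_points} is an $(N, 2)$ array of the object's pixel coordinates, and the approach offset is an illustrative constant rather than a value taken from our implementation.
\begin{verbatim}
import numpy as np

def push_candidates(mask_points, approach_offset=30.0, n_contour=8):
    """Generate pushes (start, end) aimed at the object's centre."""
    center = mask_points.mean(axis=0)
    # Feature axis from PCA: principal eigenvector of the point covariance.
    cov = np.cov((mask_points - center).T)
    _, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, -1]                    # eigenvector of largest eigenvalue
    normal = np.array([-axis[1], axis[0]])
    pushes = []
    # (1) Four pushes parallel/perpendicular to the feature axis.
    for d in (axis, -axis, normal, -normal):
        pushes.append((tuple(center + d * approach_offset), tuple(center)))
    # (2) Eight pushes spread evenly around the object, towards its centre.
    for ang in np.linspace(0.0, 2.0 * np.pi, n_contour, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])
        pushes.append((tuple(center + d * approach_offset), tuple(center)))
    return pushes
\end{verbatim}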
\textbf{Simulation.}
After generating a new node via expansion,
in lines~\ref{alg:sim-start}-\ref{alg:sim-end}
we estimate the node's Q-value by
selecting push actions uniformly at random (line~\ref{alg:simulation-roll})
and using \textsc{DIPN}\xspace to predict future states (line~\ref{alg:simulation-pred}),
until one of the following two termination criteria is met:
\begin{enumerate*}
\item The total number of push actions used to reach a simulated state
is larger than a constant $D^*$.
\item
The maximum predicted reward value of a simulated state
exceeds a threshold $R_{gp}^*$.
\end{enumerate*}
In line~\ref{alg:sim-gn}, when calculating $r$, a discount factor $\gamma$
is used to penalize long sequences of actions.
Here, $\max_{a^{\text{grasp}}} R(s, a^{\text{grasp}})$ denotes the maximum value in the grasp reward table of state $s$.
In our implementation, \textsc{GN}\xspace is called only once for each unique state,
and its output is cached in a hash map.
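The rollout itself can be sketched as below; \texttt{sample\_push}, \texttt{dipn\_predict}, and \texttt{max\_grasp\_reward} are hypothetical stand-ins for the random push sampler, the push prediction, and $\max_{a^{\text{grasp}}} R(s, a^{\text{grasp}})$, and the constants mirror $\gamma$, $D^*$, and $R_{gp}^*$.
\begin{verbatim}
def simulate(state, sample_push, dipn_predict, max_grasp_reward,
             gamma=0.8, max_depth=4, reward_threshold=1.0):
    """Random rollout; returns the best discounted grasp reward found."""
    r, d = 0.0, 1
    while True:
        push = sample_push(state)            # uniformly random push
        state = dipn_predict(state, push)    # imagined next state
        g = max_grasp_reward(state)          # max entry of the GN table
        r = max(r, gamma ** d * g)
        if d >= max_depth or g >= reward_threshold:
            return r                         # termination criteria above
        d += 1
\end{verbatim}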
\textbf{Back-propagation.} \label{back-propagation}
After simulation, the terminal grasp reward is back-propagated
(line~\ref{alg:bac-start}-\ref{alg:bac-end})
through its parent nodes to update their $N(n)$ and $Q(n)$.
Denote by $r_0$ the max grasp reward of a newly expanded
node $n_0$, and by $n_1, n_2, \dots, n_k$ the sequence of $n_0$'s parents in ascending order up to node $n_k$. With $Q(n_0) = \{r_0\}$, the Q-value of $n_k$ in this iteration
is then $\max_{0 \leq j < k} \gamma^{k - j}\max Q(n_j)$,
which corresponds to the max reward of states
along the path~\cite{DBLP:journals/corr/abs-1912-07024}.
Here, $\gamma$ is a discount factor to penalize a long sequence of actions.
As a result, for each parent $n_k$, $N(n_k)$ increases by $1$, and
$\max_{0 \leq j < k} \gamma^{k - j}\max Q(n_j)$
is added to $Q(n_k)$.
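A sketch of this update is given below; \texttt{node.reward} is assumed to cache the node's own maximum \textsc{GN}\xspace grasp reward, and the loop mirrors the back-propagation stage of Alg.~\ref{alg:vft}.
\begin{verbatim}
def backpropagate(node, r, gamma=0.8):
    """Propagate the (discounted) max reward from a leaf towards the root."""
    while node.parent is not None:       # the root itself is not updated
        node.visits += 1
        r = max(r, node.reward)          # node.reward: cached max GN reward
        node.q_values.append(r)          # record this iteration's return
        r *= gamma                       # discount one step per tree level
        node = node.parent
\end{verbatim}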
An overview of a complete search tree is plotted in Fig.~\ref{fig:tree}.
\section{Experimental Evaluation}\label{sec:experiments}
We performed an extensive evaluation of the proposed method, \textsc{VFT}\xspace, in simulation
and on the real hardware system illustrated in Fig.~\ref{fig:intro}. \textsc{VFT}\xspace is compared with multiple state-of-the-art
approaches~\cite{zeng2018learning, huang2020dipn, xu2021efficient}, with
necessary modifications for solving \textsc{ORC}\xspace, i.e., minimizing the number of actions
in retrieving a target.
The results convincingly demonstrate \textsc{VFT}\xspace to be robust and more efficient
than the compared approaches.
Both training and inference are performed on a machine with an Nvidia GeForce
RTX 2080 Ti graphics card, an Intel i7-9700K CPU, and 32GB of memory.
\begin{figure*}[ht!]
\centering
\includegraphics[width = \linewidth]{figures/testcases.png}
\caption{\label{fig:testcases}
22 test cases used in both simulation and real-world experiments.
The target objects are blue.
Images are zoomed in for better visualization.
}
\end{figure*}
\subsection{Experiment Setup}
The complete test case set includes
\begin{enumerate*}
\item the full set of $14$ test cases from \cite{xu2021efficient}, and
\item $18$ hand-designed and more challenging test cases where the objects are tightly packed.
\end{enumerate*}
All test cases are constructed using wood blocks with different shapes, colors,
and sizes.
We set the workspace's dimensions to $44.8 \text{cm} \times 44.8 \text{cm}$.
The size of the images is $224 \times 224$. Push
actions have a minimum $5$cm {\em effective push distance},
defined as the end-effector's moving distance after object contact.
Multiple planned push actions may be concatenated if they are in the
same direction and each action's end location is the same as the next
action's start location.
In all scenes, the target object is roughly at the center of the scene.
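The concatenation rule can be sketched as follows, assuming pushes are given as pixel coordinates; the tolerances are illustrative.
\begin{verbatim}
import numpy as np

def concatenate_pushes(pushes, tol=1e-6):
    """Merge consecutive pushes with the same direction and shared endpoint."""
    if not pushes:
        return []
    merged = [list(pushes[0])]
    for x0, y0, x1, y1 in pushes[1:]:
        px0, py0, px1, py1 = merged[-1]
        prev_dir = np.array([px1 - px0, py1 - py0], dtype=float)
        cur_dir = np.array([x1 - x0, y1 - y0], dtype=float)
        prev_dir /= np.linalg.norm(prev_dir)
        cur_dir /= np.linalg.norm(cur_dir)
        same_direction = np.allclose(prev_dir, cur_dir, atol=1e-3)
        chained = abs(px1 - x0) < tol and abs(py1 - y0) < tol
        if same_direction and chained:
            merged[-1][2:] = [x1, y1]    # extend the previous push
        else:
            merged.append([x0, y0, x1, y1])
    return [tuple(p) for p in merged]
\end{verbatim}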
The hyperparameters for \textsc{VFT}\xspace are set as follows. The number of iterations
$N_{max}= 150$. The discount factor $\gamma=0.8$. The maximum depth $D^*$ of the tree is capped at $4$. The terminal threshold of grasp reward $R_{gp}^* = 1.0$. Threshold $R_g^*$ that decides to grasp or to push is $0.8$ in the simulation experiments and $0.7$ in the real hardware experiments.
\subsection{Network Training Process}
\textsc{VFT}\xspace contains two deep neural networks: \textsc{GN}\xspace and \textsc{DIPN}\xspace. Both are trained in
simulation to capture the physical properties and dynamics of the environment.
No prior knowledge is given to the networks except the dimensions of the gripper
fingers.
\textsc{GN}\xspace is trained on-policy with \num{20000} grasp actions. Similar to
\cite{zeng2018learning, huang2020dipn, xu2021efficient}, randomly-shaped objects are uniformly dropped onto the workspace to construct the training scenarios.
\textsc{DIPN}\xspace \cite{huang2020dipn} is trained in a supervised manner with \num{200000}
random push actions from simulation. In the push data set, $20\%$ of the scenes contain randomly placed objects, and $80\%$ contain densely packed objects. A Huber loss of $2$ is used.
We note that a total of $2000$ actions ($500$ grasps and $1500$ pushes)
are sufficient for the networks to achieve fairly accurate results
(see, e.g.,~\cite{huang2020dipn}). Because training samples are readily
available from simulation, it is not necessary to skimp on training data.
We thus opted to train with more data to evaluate the full potential of \textsc{VFT}\xspace.
\subsection{Compared Methods and Evaluation Metrics}
\textbf{Goal-Conditioned VPG (gc-\textsc{VPG}\xspace).} Goal-conditioned VPG (gc-\textsc{VPG}\xspace) is a modified
version of Visual Pushing Grasping (VPG)~\cite{zeng2018learning}, which uses two
DQNs \cite{mnih2015human} for pushing and grasping predictions. VPG by itself does not
focus on specific objects; it was conditioned \cite{xu2021efficient} to focus on the
target object to serve as a comparison point, yielding gc-\textsc{VPG}\xspace.
\textbf{Goal-Oriented Push-Grasping.} In
\cite{xu2021efficient}, many modifications are applied to VPG to render the resulting
network more suitable for solving \textsc{ORC}\xspace, including adopting a three-stage training
strategy and an efficient labeling method \cite{NIPS2017_453fadbd}. For convenience,
we refer to this method as go-\textsc{PGN}\xspace (the authors of \cite{xu2021efficient} did not provide a
short name for the method).
\textbf{\textsc{DIPN}\xspace.}
As an ablation baseline for evaluating the utility of employing deep tree search, we replace
\textsc{MCTS}\xspace from \textsc{VFT}\xspace with a search tree of depth one. In this baseline, \textsc{DIPN}\xspace is used to evaluate
all candidate push actions. The push action whose predicted next state has the
highest grasp reward for the target object is then chosen. This is similar to
how \textsc{DIPN}\xspace is used in \cite{huang2020dipn}; we thus refer to it simply as \textsc{DIPN}\xspace.
In our evaluation, the main metric is the total number of push and grasp
actions used to retrieve the target object.
For a more complete comparison to \cite{zeng2018learning, xu2021efficient},
we also list \textsc{VFT}\xspace's grasp success rate, which is the ratio of successful grasps
in the total number of grasps during testing. The completion rate, i.e., the chance of eventually grasping the target object, is also reported. Similar to
\cite{huang2020dipn}, when \textsc{DIPN}\xspace is used, a $100\%$ completion rate is often reached.
We only collected evaluation data on \textsc{DIPN}\xspace and \textsc{VFT}\xspace. For the other two baselines,
gc-\textsc{VPG}\xspace and go-\textsc{PGN}\xspace, results are directly quoted from \cite{xu2021efficient} (at
the time of our submission, we could not obtain the trained model or the information
necessary for the reproduction of gc-\textsc{VPG}\xspace and go-\textsc{PGN}\xspace). While our hardware setup (robot, gripper, camera, and objects) is identical to that of~\cite{xu2021efficient}, and the poses of objects are also identical,
we note that there are some small
differences between the evaluation setups:
\begin{enumerate*}
\item We use PyBullet~\cite{coumans2019} for simulation,
while \cite{xu2021efficient} uses CoppeliaSim~\cite{6696520};
the physics engine is the same (Bullet).
\item \cite{xu2021efficient} uses an RD2 gripper in simulation and
a Robotiq 2F-85 gripper for real experiment;
all of our experiments use 2F-85.
\item \cite{xu2021efficient} has a $13$cm push distance,
while we only use a $5$cm effective distance
(the distance where fingers touch the objects).
\item \cite{xu2021efficient} uses extra top-sliding pushes which
expand the push action set.
\end{enumerate*}
At the same time, we confirm that these relatively minor differences do not
provide our algorithm any unfair advantage.
\subsection{Simulation Studies}
Fig.~\ref{fig:baseline-hist} and Table~\ref{tab:10table} show the evaluation results
of all algorithms on the $10$ simulation test cases from \cite{xu2021efficient}. Each experiment is repeated $30$ times, and the average number of actions until task completion in each experiment is reported.
Our proposed method, \textsc{VFT}\xspace, which uses an average of $2.00$ actions, significantly
outperforms the compared methods. Specifically, \textsc{VFT}\xspace uses one push action and one
grasp action to solve the majority of cases, except for one instance with a
half-cylinder shaped object, which is not included during the training of the
networks. Interestingly, when only one push is necessary, \textsc{VFT}\xspace, with its main
advantage as multi-step prediction, still outperforms \textsc{DIPN}\xspace due to its extra
simulation steps.
The algorithms with push prediction perform better than gc-\textsc{VPG}\xspace and go-\textsc{PGN}\xspace in all metrics.
\begin{figure}[ht!]
\centering
\includegraphics[width = .97\linewidth]{figures/baseline-hist.png}
\caption{\label{fig:baseline-hist}
Simulation results per test case for the $10$ problems from
\cite{xu2021efficient}.
The horizontal axis shows the average number of actions used to solve
a problem instance: the lower, the better.
}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c|c}
& Completion & Grasp Success & Number of Actions \\ \hline
gc-\textsc{VPG}\xspace \cite{xu2021efficient} & $89.3\%$ & $41.7\%$ & $5.78$ \\ \hline
go-\textsc{PGN}\xspace \cite{xu2021efficient} & $99.0\%$ & $90.2\%$ & $2.77$ \\ \hline
\textsc{DIPN}\xspace \cite{huang2020dipn} & $100\%$ & $100\%$ & $2.30$ \\ \hline
\textsc{VFT}\xspace (ours) & $100\%$& $100\%$ & $\mathbf{2.00}$ \\ \hline
\end{tabular}
\caption{Simulation results for the $10$ test cases from \cite{xu2021efficient}.}
\label{tab:10table}
\end{table}
To probe the limit of \textsc{VFT}\xspace's capability, we evaluated the methods on harder cases
demanding multiple pushes. The test set includes $18$ manually designed instances
and $4$ cases from \cite{xu2021efficient} (see Fig.~\ref{fig:testcases}).
As shown in Fig.~\ref{fig:bar-sim-22} and Table~\ref{tab:22table-sim}, \textsc{VFT}\xspace uses
fewer actions than \textsc{DIPN}\xspace as \textsc{VFT}\xspace looks further into the future. Though we could not
evaluate gc-\textsc{VPG}\xspace and go-\textsc{PGN}\xspace on these settings for a direct comparison (we could not obtain the information necessary to reproduce these systems),
we note that the average number of actions ($2.45$) used by \textsc{VFT}\xspace on the harder instances is
even smaller than the number of actions ($2.77$) that go-\textsc{PGN}\xspace used on the $10$ simpler cases.
\begin{figure}[ht!]
\centering
\includegraphics[width = .97\linewidth]{figures/bar-sim-22.png}
\caption{\label{fig:bar-sim-22}
Simulation result per test case for the $22$ harder problems
(Fig.~\ref{fig:testcases}).
The horizontal axis shows the average number of actions used to solve
a problem instance: the lower, the better.
}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c|c}
& Completion & Grasp Success & Num. of Actions \\ \hline
\textsc{DIPN}\xspace \cite{huang2020dipn} & $100\%$ & $98.3\%$ & $4.31$ \\ \hline
\textsc{VFT}\xspace (ours) & $100\%$ & $98.8\%$ & $\mathbf{2.45}$ \\ \hline
\end{tabular}
\caption{Simulation result for the $22$ test cases in
Fig.~\ref{fig:testcases}.}
\label{tab:22table-sim}
\end{table}
\subsection{Evaluation on a Real System}
We repeated the $22$ hard test cases on a real robot system (Fig.~\ref{fig:intro-setup}).
Both \textsc{VFT}\xspace and \textsc{DIPN}\xspace are evaluated. We also include the experimental results from
\cite{xu2021efficient} on its $4$ real test cases for comparison. All cases are
repeated at least 5 times to obtain the mean metrics. The results, shown in Fig.~\ref{fig:bar-real-22}, Table~\ref{tab:22table-real}, and Table~\ref{tab:4table},
closely match the results from simulation. We observe a slightly lower grasp
success rate due to the noisier depth images on the real system. The real
workspace's surface friction also differs from that in simulation. However, \textsc{VFT}\xspace and
\textsc{DIPN}\xspace can still generate accurate foresight.
\begin{figure}[ht!]
\centering
\includegraphics[width = .97\linewidth]{figures/bar-real-22.png}
\caption{\label{fig:bar-real-22}
Real experiment results per test case for the $22$ harder problems
(Fig.~\ref{fig:testcases}).
The horizontal axis shows the average number of actions used to solve
a problem instance: the lower, the better.
}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c|c}
& Completion & Grasp Success & Num. of Actions \\ \hline
\textsc{DIPN}\xspace \cite{huang2020dipn} & $100\%$ & $97.0\%$ & $4.78$ \\ \hline
\textsc{VFT}\xspace (ours) & $100\%$ & $98.5\%$ & $\mathbf{2.65}$ \\ \hline
\end{tabular}
\caption{Real experiment results for the $22$ test cases in
Fig.~\ref{fig:testcases}.}
\label{tab:22table-real}
\end{table}
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c|c}
& Completion & Grasp Success & Num. of Actions \\ \hline
go-\textsc{PGN}\xspace \cite{xu2021efficient} & $95.0\%$ & $86.6\%$ & $4.62$ \\ \hline
\textsc{DIPN}\xspace \cite{huang2020dipn} & $100\%$ & $100\%$ & $4.00$ \\ \hline
\textsc{VFT}\xspace (ours) & $100\%$ & $100\%$ & $\mathbf{2.60}$ \\ \hline
\end{tabular}
\caption{Real experiment results for cases $19$ to $22$ in
Fig.~\ref{fig:testcases}.}
\label{tab:4table}
\end{table}
We also explored our system on everyday objects (Fig.~\ref{fig:car-result}),
where we want to retrieve a small robotic vehicle surrounded by soap boxes.
Without having seen either the soap boxes or the small vehicle during training, the robot is
able to strategically push the soap boxes away in only two moves and retrieve the vehicle.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.24\linewidth]{figures/car-0.png}
\hfill
\includegraphics[width=0.24\linewidth]{figures/car-1.png}
\hfill
\includegraphics[width=0.24\linewidth]{figures/car-3.png}
\hfill
\includegraphics[width=0.24\linewidth]{figures/car-4.png}
\caption{\label{fig:car-result}
Test scenario with soap boxes and a masked 3D-printed vehicle,
solved with two push actions and one grasp action.
}
\end{figure}
We report that the running time to decide one push action is around
$2$ minutes when the number of \textsc{MCTS}\xspace iterations is set to $150$.
In this letter, our main focus is action optimization rather than computation time.
\section{Conclusion and Discussions}\label{sec:conclusion}
In conclusion, through an organic fusion of Deep Interaction Prediction Network (\textsc{DIPN}\xspace) and MCTS, the proposed Visual Foresight Tree (\textsc{VFT}\xspace) is able to make high quality multi-horizon prediction for optimized object retrieval
from dense clutter. The effectiveness of \textsc{VFT}\xspace is convincingly demonstrated
with extensive evaluation.
As to the limitations of \textsc{VFT}\xspace, because of the large \textsc{MCTS}\xspace tree that needs
to be computed, the time required is relatively long. This can be improved
with multi-threading because the rollouts have sufficient independence.
Currently, only a single thread is used to complete the \textsc{MCTS}\xspace.
It would also be interesting to develop a network for directly estimating
the reward for rollout policy, which would reduce the inference
time.
{\small
\def\url#1{}
\bibliographystyle{formatting/IEEEtran}
\section{Preliminaries}\label{sec:problem}
\subsection{Problem Statement}
The Object Retrieval from Clutter (\textsc{ORC}\xspace) challenge asks a robot manipulator to retrieve a target object from a set
of objects, densely packed together. The objects may have different shapes, sizes,
and colors.
Objects other than the target object are unknown a priori.
Focusing on a mostly planar setup, we make the following assumptions:
\begin{enumerate*}
\item The hardware setup (Fig.~\ref{fig:intro-setup}) contains a manipulator, a
planar workspace with a uniform background color, and a camera on top of the
workspace.
\item The objects are rigid and are amenable to the gripper's prehensile
and non-prehensile capabilities, limited to straight line planar push actions and top-down grasp actions.
\item The objects are confined to the workspace without overlapping. As a result,
the objects are visible to the camera.
\item The target object, to be retrieved, is visually distinguishable from the
others.
\end{enumerate*}
Under these assumptions, the \emph{objective} is to retrieve only the target
object, while minimizing the number of pushing/grasping actions that are used.
Each grasp or push is considered as one atomic action.
While a mostly planar setup is assumed in our experiments, the proposed data-driven solution is general and can be applied to arbitrary object shapes and arrangements.
Fig.~\ref{fig:intro} illustrates an example problem from the experiments and a
sequence of push and grasp actions that solves it.
\begin{figure*}[ht!]
\centering
\includegraphics[width = 0.98\linewidth]{figures/system-detail.pdf}
\caption{\label{fig:pipeline}
Overview of the proposed technique for object retrieval from clutter with nonprehensile rearrangement.
}
\end{figure*}
\subsection{Manipulation Motion Primitives}
\label{sec:primitives}
Similar to studies closely related to the \textsc{ORC}\xspace challenge, e.g.,~\cite{zeng2018learning, huang2020dipn, xu2021efficient}, we employ a set of pre-defined and parameterized
pushing/grasping manipulation primitives. The decision-making problem then entails
the search for the optimal order and parameters of these primitives.
A grasp action $a^\text{grasp} = (x, y, \theta)$ is defined as
a top-down overhead grasp motion at image pixel location $(x, y)$,
with the end-effector rotated about the world $z$-axis by $\theta$.
In our implementation, a grasp center $(x, y)$ can be any pixel in a
down-sampled $224 \times 224$ image of the planar scene, while rotation angle
$\theta$ can be one of $16$ values evenly distributed between $0$ and $2\pi$.
To perform a complete grasp action, the manipulator moves the open gripper
above the specified location, then moves the gripper downwards until a contact
with the target object is detected, closes the fingers, and transfers the
grasped object outside of the workspace.
When objects are densely packed, the target object is generally not directly
graspable due to collisions between the gripper and surrounding objects.
When this happens, non-prehensile push actions can be used to create
opportunities for grasping. For a push action $a^\text{push} = (x_0, y_0, x_1, y_1)$,
the gripper performs a quasi-static horizontal motion. Here, $(x_0, y_0)$ and
$(x_1, y_1)$ are the start and the end location of the gripper center,
respectively. The gripper's orientation is fixed along the motion direction during
a push maneuver.
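For illustration, the discretised action parameterisations described above can be encoded as follows; the pixel grid and the $16$ angles follow the text, while the specific data layout is just one possible choice.
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

GRID = 224                                                   # image resolution
ANGLES = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)   # grasp angles

@dataclass
class GraspAction:
    x: int        # pixel column of the grasp centre
    y: int        # pixel row of the grasp centre
    theta: float  # end-effector rotation about the world z-axis

@dataclass
class PushAction:
    x0: int       # start of the gripper-centre motion (pixels)
    y0: int
    x1: int       # end of the gripper-centre motion (pixels)
    y1: int

# Example: a grasp at pixel (112, 96) with the fifth discrete angle.
grasp = GraspAction(x=112, y=96, theta=float(ANGLES[4]))
\end{verbatim}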
\noindent {\bf Object Singulation.}
A closely related problem is the singulation of individual
items, which consists in separating objects in a pile of clutter by moving them into an arrangement where there is sufficient empty space between each pair of objects~\cite{6224575}.
Singulation is typically achieved through a combination of pushing and grasping actions.
In contrast with the present work, pushing actions for singulation are typically proposed by a neural network in a model-free manner, i.e. by using reactive policies without explicitly reasoning about future states~\cite{10.1007/978-3-030-28619-4_32,6385903}. In fact, singulation does not generally require long-horizon reasoning.
For instance, linear push policies were learned in~\cite{8560406} to increase grasp access for robot bin picking, by using model-free reinforcement learning. The tasks considered in~\cite{8560406} can be solved through single pushing actions because of the lower density of clutter compared to the tasks considered in our work.
\noindent {\bf Rearrangement Planning.}
Object retrieval in clutter is closely related to rearrangement planning.
The approach recently presented in~\cite{DBLP:journals/corr/abs-1912-07024} also uses a Monte Carlo tree search, but the objectives of rearrangement tasks are different from ours.
In object retrieval, we focus on finding the minimum number of pre-grasp pushing actions that lead to grasping a single target object. This objective requires highly accurate predictions of future poses of individual objects in clutter. In rearrangement planning, objects are divided into groups, and
exact locations of individual objects are less important than the locations of their corresponding groups. Moreover, the techniques presented in~\cite{8624977,zeng2020transporter,DBLP:journals/corr/abs-1912-07024} directly learn a reactive policy instead of a predictive network that can imagine future images and reason about them, as done in the present work.
\noindent {\bf Object Retrieval.}
Several other works also addressed the problem of retrieving a target object from clutter. Some of these works focus on online planning for object search under partial observability without learning~\cite{8793494}.
Other related works learn only the quality of pushing and grasping actions\cite{DBLP:journals/corr/abs-1903-01588}, without visual foresight, which is necessary for tightly packed clutter. Similarly, scene exploration and object search were learned using model-free reinforcement learning, based on active and interactive perception~\cite{DBLP:journals/corr/abs-1911-07482}, and teacher-aided exploration~\cite{kurenkov2020visuomotor}. A planning approach with a human-operator guiding a robot to reach for a target object in clutter was presented in~\cite{DBLP:journals/corr/abs-1904-03748}. In contrast to these approaches, ours is fully autonomous. The work presented in~\cite{xu2021efficient} is most related to ours, with a similar robotic setup and objects. It is however based on deep Q-learning, which is model-free and which does not predict future states. We show in Section~\ref{sec:experiments} that our model-based technique significantly outperforms the one from~\cite{xu2021efficient} on the same tasks considered in~\cite{xu2021efficient} as well as on more challenging ones.
\section{Overview of the Proposed Approach}\label{sec:outline}
When objects are tightly packed, the robot needs to carefully select an appropriate
sequence of pushes that create a sufficient volume of empty space around the target
object before attempting to grasp it.
In this work, we are interested in challenging scenarios where multiple push
actions may be necessary to de-clutter the surroundings of the target, and where the
location, direction, and duration of each push action should be carefully optimized
to minimize the total number of actions.
Collisions among multiple objects often occur while pushing a single object, further
complicating the matter.
To address the challenge, we propose a solution that uses a neural network
to forecast the outcome of a sequence of push actions in the future, and estimates
the probability of succeeding in grasping the target object in the resulting scene.
The optimal push sequence is selected based on the forecasts.
A high-level description of the proposed solution pipeline is depicted in Fig.~\ref{fig:pipeline}. At the start of a planning iteration, an RGB-D image
of the scene is taken, and Mask R-CNN~\cite{He_2017_ICCV} is used to classify
the objects as {\it unknown clutter} or {\it target object}.
With the target object located, a second network called Grasp Network (\textsc{GN}\xspace) predicts
the probability of grasping the target. \textsc{GN}\xspace is a Deep Q-Network (\textsc{DQN}\xspace) \cite{mnih2015human}
adopted from prior works~\cite{zeng2018learning, huang2020dipn} for \textsc{ORC}\xspace. It
takes the image input, and outputs the estimated grasp success probability for
each grasp action. If the maximum estimated grasp success probability is larger
than a threshold, the target object is considered as directly graspable and the
robot executes the corresponding optimal grasp action; otherwise, push actions
must be performed to create space for grasping.
When push actions are needed, the next action is selected using Monte-Carlo
Tree Search (\textsc{MCTS}\xspace). In our implementation, which we call the Visual Foresight
Tree (\textsc{VFT}\xspace), each search state corresponds
to an image observation of the workspace. Given a push action and a state, \textsc{VFT}\xspace
uses the Deep Interaction Prediction Network (\textsc{DIPN}\xspace)~\cite{huang2020dipn} as the
state transition function. Here, \textsc{DIPN}\xspace is a network that predicts the motions
of multiple objects and generates a synthetic image corresponding to the scene
after the imagined push. \textsc{VFT}\xspace uses \textsc{GN}\xspace to obtain a reward value for each search node and detect
whether the search terminates. Both \textsc{DIPN}\xspace and \textsc{GN}\xspace are trained offline on different
objects.
\section{Introduction}
\noindent
Statistical modelling allows for the learning of the relationship between two
variables, where the said relationship is responsible for generating the data
available on the variables. Thus, let $\boldsymbol{X}$ be a random variable that represents a behavioural or structural
parameter of the system, and $\boldsymbol{Y}$ be another variable that bears influence on
$\boldsymbol{X}$ s.t. $\boldsymbol{Y}=\boldsymbol{f}(\boldsymbol{X})$, where
the functional relation $\boldsymbol{f}(\cdot)$ that we seek to learn is itself a
random structure, endowed with
information about the error made in predicting the values of $\boldsymbol{Y}$ (or $\boldsymbol{X}$)
at which the noise-included measurement of $\boldsymbol{X}$ (or $\boldsymbol{Y}$) has been realised.
Such a function can be modelled as a realisation from an adequately chosen
stochastic process.
In general, either or both variables could be tensor-valued, such that, data
comprising measurements of either variable, is then shaped as a
hypercuboid. Typically, the structure/behaviour of a system is parametrised
using a set of scalar-valued parameters, (say $d$ number of such parameters),
which can, in principle be collated into a $d$-dimensional vector. Then $\boldsymbol{X}$
is typically, the system parameter vector. The other, observed variable
$\boldsymbol{Y}$, can be tensor-valued in general. There are hypercuboidally-shaped data
that show up in real-world applications, \ctp{mardia_book, bijma_face,
werner_cvs, theobald_covs, fuhrman}. For example, in computer vision, the
image of one person might be a matrix of dimensions $a\times b$, i.e. image
with resolution of $a$ pixels by $b$ pixels. Then, repetition across $n$
persons inflates the data to a cuboidally-shaped dataset. Examples of handling
high-dimensional datasets within computer vision exist \ctp{ian_face,
fu_thesis, pang, wang_book, qiang}. In health care, the $p$ number of health
parameters of $n$ patients, when charted across $k$ time-points, again
generates a high-dimensional data, which gets further enhanced, if the
experiment involves tracking for changes across $\ell$ groups of $n$ patients
each, where each such group is identified by the level of intervention
\ctp{chari, clarke, oberg, chari2, sarakar_phd, wang_healthcare, fan2107}.
Again, in ecological datasets, there could be $n$ spatial locations at each of
which, $p$ traits of $k$ species could be tracked, giving rise to a
high-dimensional dataset \ctp{leitao, warton, dunstanscott}.
It is a shortcoming of traditional modelling strategies that these groupings
in the data are treated as independent; indeed, even the
variation in parameter values of any one group across the $k$ time points is
ignored, and a mere snapshot of each group is traditionally considered, one at
a time. In this work, we advance a method for the consideration of parameters
across all relevant levels of measurement, within one integrated framework, to
enable the learning of correlations across all such levels, thus permitting
the prediction of the system parameter vector, with meaningful uncertainties,
and avoid information loss associated with categorisation of data.
While discussing the generic methodology that helps address the problem of
learning the inter-variable relationship $\boldsymbol{f}(\cdot)$, given general
hypercuboid-shaped data, we focus on developing such learning when this data
displays discontinuities. In such a learning exercise, the inter-variable functional relation
$\boldsymbol{f}(\cdot)$, needs to be modelled using a
high-dimensional stochastic process (a tensor-variate Gaussian Process, for
example), the covariance function of which is non-stationary. The correlation
between a pair of data slices, (defined by two such measured values of $\boldsymbol{Y}$,
each realised at two distinct values of the system parameter $\boldsymbol{X}$), is
sometimes parametrically modelled as a function of the distance between the
values of the system parameter at which these slices are realised,
i.e. ``similarity'' in values of $\boldsymbol{Y}$ can be modelled as a function of
``similarity'' in the corresponding $\boldsymbol{X}$ values. However, if there are
discontinuities in the data, then such a mapping between ``similarities'' in
$\boldsymbol{X}$ and $\boldsymbol{Y}$ no longer holds. Instead, discontinuities in data call
for a model of the correlation that adapts to the discontinuities in the
data. We present such correlation modelling in this paper, by modelling each
scalar-valued hyperparameter of the correlation structure of the
high-dimensional stochastic process, as a random function of the sample path
of that process; this random function then, can itself be modelled as a
realisation of a scalar-variate stochastic process--a scalar-variate Gaussian
Process (GP) for example (Section~\ref{sec:model}).
Thus, the learning of $\boldsymbol{f}(\cdot)$ is
double-layered, in which multiple scalar-variate GPs inform a high-dimensional
(tensor-variate) GP. Importantly, we show below (Section~\ref{sec:suffice}) that no more
than 2 such layers in the learning strategy suffice. Thus, the data on the
observable $\boldsymbol{Y}$ can be shown to be sampled from a compound tensor-variate and
multiple scalar-variate Gaussian Processes.
Acknowledgement of non-stationarity in correlation learning is not new
\ctp{paciorek}. In some approaches, a transformation of the space of the
input variable is suggested, to accommodate non-stationarity \ctp{sampson,
snoek, ohagan}. When faced with learning the dynamically varying covariance
structure of time-dependent data, others have resorted to
learning such a covariance, using Generalised Wishart Process
\ctp{wilson}. In another approach, latent parameters that bear information on
non-stationarity, have been modelled with GPs and learnt simultaneously with
the sought function \ctp{tolvanen}, while others have used multiple GPs to
capture the non-stationarity \ctp{gramacy, heinonen}. However, what has not
been presented is a template for including non-stationarity in
high-dimensional data, by nesting lower-dimensional Gaussian Processes with
distinct covariances, within a tensor-variate GP (Section~\ref{sec:model} and
Section~\ref{sec:kernel}), using a Metropolis-within-Gibbs inference scheme
(Section~\ref{sec:inference}), to perform with-uncertainties learning of a
high-dimensional function, given discontinuities that show up in the
hypercuboidally-shaped datasets in general, and illustration of the method on
a cuboidally-shaped, real-world dataset (Section~\ref{sec:application},
Section~\ref{sec:prediction}). This is what we introduce in this paper. Our
model is capacitated to learn the temporally-evolving covariance of
time-dependent data (Section~\ref{sec:temporal}), if such is the data at hand, but the focus of our
interest is to follow the learning of the sought tensor-valued functional
relation between a system parameter vector and a tensor-valued observable,
with inverse Bayesian prediction of the system parameter values, at which test
data on the observable is measured (Section~\ref{sec:results},
Section~\ref{sec:prediction}). Additionally, flexibility of our model design
permits both inverse and forward predictions. So we also
predict new data at chosen system parameter values given our model and
results, and perform model checking, by comparing such generated data against
the empirically observed data (Section~3 of Supplementary Materials).
\section{Model}
\label{sec:model}
\noindent
Let the system parameter vector $\boldsymbol{S}\in{\boldsymbol{\cal X}}\subseteq{\mathbb
R}^d$ affect the observable $\boldsymbol{V}$, where $\boldsymbol{V}$ is ($k-1$-th ordered)
tensor-valued in general, i.e. is $\boldsymbol{V}\in{\boldsymbol{\cal
Y}}\subseteq{\mathbb R}^{m_1\times m_2\times\ldots\times m_{k-1}}$,
$m_i\in{\mathbb Z},\forall\:i=1,\ldots,k-1$.
That $\boldsymbol{S}$ bears influence on $\boldsymbol{V}$ suggests
the relationship $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$
where $\boldsymbol{\xi}:{\boldsymbol{\cal X}}\subseteq{\mathbb R}^d\longrightarrow {\boldsymbol{\cal Y}}\subseteq{\mathbb R}^{m_1\times
m_2\times\ldots\times m_{k-1}}$.
\begin{definition}
\label{defn:defn1}
{We define functional relationship $\boldsymbol{\xi}(\cdot)$, between $\boldsymbol{S}$ and
$\boldsymbol{V}$, as a ``tensor-valued function'', with
$\displaystyle{\prod\limits_{i=1}^{k-1} m_i}$-number of component
functions, where these components suffer inter-correlations. Thus, the
learning of $\boldsymbol{\xi}(\cdot)$ is equivalent to learning the component
functions, inclusive of learning the correlation amongst these component
functions.
Inverse of
$\boldsymbol{\xi}(\cdot)$, is defined as the tensor-valued function of same
dimensionalities as $\boldsymbol{\xi}(\cdot)$, comprising inverses of each
component function of $\boldsymbol{\xi}(\cdot)$, assuming inverse of each component
function exists.}
\end{definition}
The learning of the sought function $\boldsymbol{\xi}(\cdot)$--where
$\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$--together with its inversion, allows for the forward prediction of $\boldsymbol{v}^{(new)}$ given a
measured value $\boldsymbol{s}^{(new)}$ of $\boldsymbol{S}$, as well as for the inverse prediction
of the value of $\boldsymbol{S}$ at which a given measurement of $\boldsymbol{V}$ is recorded.
It may be queried: why do we undertake the seemingly more difficult learning
of the tensor-valued $\boldsymbol{\xi}(\cdot)$ (that outputs the tensor $\boldsymbol{V}$), rather than of
the vector-valued $\boldsymbol{g}(\cdot)$ (that outputs the vector $\boldsymbol{S}$)? We do this,
because we want to retain the capacity of predicting both new data at a given
value of the system parameter ($\boldsymbol{S}$), as well as predict the system parameter
at which a new measurement of the observable $\boldsymbol{V}$ is realised.
\begin{remark}
{
If we had set ourselves the task of learning $\boldsymbol{g}(\cdot)$, where
$\boldsymbol{g}(\boldsymbol{V})=\boldsymbol{S}$, i.e. $\boldsymbol{g}(\cdot)$ is a ``vector-valued'' function,
and therefore lower dimensional with fewer number of
component functions than the tensor-valued $\boldsymbol{\xi}(\cdot)$--we could not have
predicted value of $\boldsymbol{V}$ at a given $\boldsymbol{s}$.
The $d$-dimensional vector-valued inverse function $\boldsymbol{g}^{-1}(\cdot)$ cannot yield a value of the $\displaystyle{\prod\limits_{i=1}^{k-1}
m_i}$ number of components of the tensor $\boldsymbol{V}$ at this given $\boldsymbol{S}$, if
$\displaystyle{\prod\limits_{i=1}^{k-1} m_i} > d$.
}
\end{remark}
The learning of the function $\boldsymbol{\xi}(\cdot)$, uses the training data
${\bf D}:=\{(\boldsymbol{s}_i,\boldsymbol{v}_i)\}_{i=1}^N$. Conventional prediction of
$\boldsymbol{S}=\boldsymbol{s}^{(test)}$, at which test data $\boldsymbol{v}^{(test)}$ on $\boldsymbol{V}$ is realised,
suggests: $\boldsymbol{s}^{(test)} := \boldsymbol{\xi}^{-1}(\boldsymbol{V})\vert_{\boldsymbol{v}^{(test)}}$.
\begin{itemize}
\item However, in such conventional prediction, there is no objective way to include the
uncertainties incurred in the learning of the function $\boldsymbol{\xi}(\cdot)$, i.e. to
propagate them into the
uncertainty of this prediction. This underpins an advantage of Bayesian
prediction of one variable, given test data on the other, subsequent to
learning of $\boldsymbol{\xi}(\cdot)$ using training data ${\bf D}$.
\item Conventional fitting
methods (such as fitting with splines, etc), also fumble
when measurements of both/either of the
r.v.s $\boldsymbol{S}$ and $\boldsymbol{V}$, are accompanied by measurement errors; in light of
this, it becomes difficult to infer the function that fits the data the
best. In fact, the uncertainty in the learning of the sought function is also
then difficult to quantify.
\item Further, there is no organic way of quantifying the smoothness of the
sought $\boldsymbol{\xi}(\cdot)$, in the conventional approach. Ideally, we would prefer
to learn this smoothness from the data itself. However, there is nothing
intrinsic to the fitting-with-splines/wavelets method that can, in principle,
quantify the smoothness of the curve, given a training data.
\item Lastly, when
$\boldsymbol{V}$ is an r.v. that is no longer a scalar, but higher-dimensional (say
tensor-valued in general), fitting with splines/wavelets starts to become
useless, since for such a sought tensor-valued function $\boldsymbol{\xi}(\cdot)$
(in general), the component functions of $\boldsymbol{\xi}(\cdot)$ are correlated, but
parametric fitting approaches cannot capture such
correlation, given the training data. As we have remarked above, such
correlation amongst the component functions of $\boldsymbol{\xi}(\cdot)$ is the same
correlation structure amongst the components of the tensor-valued $\boldsymbol{V}$--so in
principle, the sought correlation can be learnt from the training data.
\end{itemize}
In light of this, we identify a relevant Stochastic Process that can give a
general, non-restrictive description of the sought function $\boldsymbol{\xi}(\cdot)$--a
Gaussian Process for example. The joint probability density of a set of
realisations of a sampled $\boldsymbol{\xi}(\cdot)$, is then driven by the Process under
consideration, where each such realisation of the function, equals a value of
the output variable $\boldsymbol{V}$. Thus, the joint also represents the likelihood of
the Process parameters given the relevant set of values of $\boldsymbol{V}$, i.e. the
data. We impose judiciously chosen priors, to write the posterior probability
density of the Process parameters given the data. Generating samples from this
posterior then allows for the identification of the 95$\%$ HPD credible
regions on these Process parameters, i.e. on the learnt function
$\boldsymbol{\xi}(\cdot)$. It is possible to learn the smoothness of the function
generated from this Process, via kernel-based parameterisation of the
covariance structure of the GP under consideration. Thus, we focus on the
pursuit of adequate covariance kernel parametrisation.
\begin{proposition} {When possible, covariance matrices of the GP that is
invoked to model the sought function $\boldsymbol{\xi}(\cdot)$, are kernel-parametrised
using stationary-looking kernel functions, hyperparameters of
which are modelled as dependent on the sample paths (or rather sample
functions) of this GP. We show below (Lemma~\ref{lemma:1}) that such a
model can address the anticipated discontinuities in data.}
\end{proposition}
As the LHS of the equation $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$ is $k-1$-th ordered
tensor-valued, $\boldsymbol{\xi}(\cdot)$ is a tensor-valued function of equal
dimensionalities. So we model
$\boldsymbol{\xi}(\cdot)$ as a realisation from a tensor-variate GP.
\begin{definition}
{Modelling $\boldsymbol{\xi}(\cdot)$ as sampled from a tensor-variate GP, where the
$k-1$-th ordered tensor-valued variable $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$, we get that the
joint probability of the set of values of sampled function $\boldsymbol{\xi}(\cdot)$, at each of the $n$ design
points $\boldsymbol{s}_1,\ldots \boldsymbol{s}_n$ (that reside within the training data ${\bf
D}=\{(\boldsymbol{s}_i,\boldsymbol{v}_i)\}_{i=1}^n$), follows
the $k$-variate Tensor Normal distribution \ctp{kolda, richter, tensor,
manceur}:
$$[\boldsymbol{\xi}(\boldsymbol{s}_1),\ldots,\boldsymbol{\xi}(\boldsymbol{s}_n)]\sim{\cal TN}(\boldsymbol{M},\boldsymbol{\Sigma}_1,\ldots,\boldsymbol{\Sigma}_k),$$
where mean of this density is a $k$-th ordered mean tensor $\boldsymbol{M}$ of
dimensions $m_1\times\ldots\times m_k$, and $\boldsymbol{\Sigma}_j$ is the $m_j\times
m_j$-dimensional, $j$-th covariance matrix; $j=1,\ldots,k$. In other words,
likelihood of $\boldsymbol{M},\boldsymbol{\Sigma}_1,\ldots, \boldsymbol{\Sigma}_k$ given ${\bf D}$ is the
$k$-variate Tensor Normal density:
\begin{equation}
{\cal L}(\boldsymbol{M},\boldsymbol{\Sigma}_1,...,\boldsymbol{\Sigma}_k\vert {\bf D}) \propto \exp(-\Vert ({\bf
D}_{\boldsymbol{V}} -\boldsymbol{M})\times_1 \boldsymbol{A}_1^{-1} \times_2 \boldsymbol{A}_2^{-1} ... \times_k \boldsymbol{A}_k^{-1} \Vert^2/2),
\label{eqn:eqn1}
\end{equation}
where $n$ observed values of the $k-1$-th dimensional tensor-valued $\boldsymbol{V}$
are collated to form the $k$-th ordered tensor ${\bf D}_{\boldsymbol{V}}$. The notation
$\times_j$ in Equation~\ref{eqn:eqn1} presents the $j$-mode product of a
matrix and a tensor \ctp{oseledets2011tensor}. Here $\boldsymbol{A}_j$ is the unique
square-root of the positive definite covariance matrix $\boldsymbol{\Sigma}_j$, i.e.
$\boldsymbol{\Sigma}_j= \boldsymbol{A}_j \boldsymbol{A}^{ T}_j$.}
\end{definition}
One example of a computational algorithm that can be invoked to
realise such a square root of a matrix, is Cholesky decomposition\footnotemark.
\footnotetext{
The covariance tensor of this $k$-th order Tensor Normal distribution, has been
decomposed into $k$ different covariance matrices by Tucker decomposition,
\ctp{hoff_2011, manceur, kolda, Xu}, to yield the $k$ number of covariance matrices,
$\boldsymbol{\Sigma}_1,\ldots, \boldsymbol{\Sigma}_k$, where the $j$-th covariance matrix $\boldsymbol{\Sigma}_j$
is an $m_j\times m_j$-dimensional square matrix, $j=1,\ldots,k$.
As \ctn{Hoff, manceur} suggest, a $k$-th ordered random tensor $\boldsymbol{\Sigma} \in R^{ m_1 \times
m_2...\times m_k}$ can be decomposed to a $k$-th ordered tensor $\boldsymbol{Z}$
and $k$ number of covariance matrices $\boldsymbol{\Sigma}_1,\ldots, \boldsymbol{\Sigma}_k$ by
{Tucker product}, according to
$
\boldsymbol{\Sigma} = \boldsymbol{Z} \times_1 \boldsymbol{\Sigma}_1 \times_2 \boldsymbol{\Sigma}_2 ... \times_k \boldsymbol{\Sigma}_k$.
It can be proved that all tensors can be
decomposed into a set of covariance matrices \ctp{tucker}, though not uniquely. This may
cause difficulty in finding the correct combination of covariance matrices
that present the correlation structure of the data at hand. One way to solve
this problem is to use priors for the respective covariance parameters.}
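As an aside on computation: the following is a minimal numerical sketch (Python with \texttt{numpy} assumed; the dimensions, data and function names are placeholders, not part of our implementation) of how the norm in Equation~\ref{eqn:eqn1} can be evaluated, by successively taking $j$-mode products of the centred data tensor with $\boldsymbol{A}_j^{-1}$, where $\boldsymbol{A}_j$ is realised here as the Cholesky factor of $\boldsymbol{\Sigma}_j$.
\begin{verbatim}
import numpy as np

def mode_product(T, M, mode):
    # j-mode product: contract the `mode`-th axis of tensor T with the
    # columns of matrix M; the result's `mode`-th axis is indexed by M's rows.
    T_moved = np.moveaxis(T, mode, 0)
    shp = T_moved.shape
    flat = T_moved.reshape(shp[0], -1)
    out = (M @ flat).reshape((M.shape[0],) + shp[1:])
    return np.moveaxis(out, 0, mode)

def tensor_normal_loglike(D, M_mean, Sigmas):
    # Un-normalised log-likelihood of Eq. (eqn1):
    # -0.5 * || (D - M) x_1 A_1^{-1} x_2 ... x_k A_k^{-1} ||^2,
    # with A_j one admissible square root of Sigma_j (its Cholesky factor).
    R = D - M_mean
    for j, S in enumerate(Sigmas):
        A = np.linalg.cholesky(S)
        R = mode_product(R, np.linalg.inv(A), j)
    return -0.5 * np.sum(R ** 2)

# toy example: k = 3, dimensions 2 x 4 x 5, identity covariance matrices
rng = np.random.default_rng(0)
D = rng.normal(size=(2, 4, 5))
M_mean = np.zeros_like(D)
Sigmas = [np.eye(2), np.eye(4), np.eye(5)]
print(tensor_normal_loglike(D, M_mean, Sigmas))
\end{verbatim}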
We employ this likelihood in Equation~\ref{eqn:eqn1} to write the joint posterior
probability density of the mean tensor and covariance matrices, given the data. But prior to doing
that, we identify those
parameters--if any--that can be estimated in a pre-processing stage of the
inference, in order to reduce the computational burden of inference. Also,
it would be useful to find ways of
(kernel-based) parametrisation of the sought covariance matrices, thereby reducing
the number of parameters that we need to learn.
To this effect, we undertake the estimation of the mean tensor $\boldsymbol{M} \in {\mathbb R}^{ m_1 \times
m_2\times\ldots\times m_k}$. It is empirically estimated using the sample mean $\overline{\boldsymbol{v}}$ of the sample
$\{\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n\}$, s.t. $n$ repetitions of $\overline{\boldsymbol{v}}$ form
the value $\boldsymbol{m}$ of $\boldsymbol{M}$.
However, if necessary, the mean tensor itself can be regarded as a random variable and learnt from the data \ctp{chakrabarty2015bayesian}.
The modelling of the covariance structure of this GP is discussed in the following subsection.
Ultimately, we want to predict the value of one variable, at which a
new or test data on the other variable is observed.
\begin{proposition}
{
To perform inverse prediction of value $\boldsymbol{s}^{(test)}$ of the input variable
$\boldsymbol{S}$, at which test data $\boldsymbol{v}^{(test)}$ on $\boldsymbol{V}$ is realised, we will
\begin{enumerate}
\item[---]sample from the posterior probability density of $\boldsymbol{s}^{(test)}$
given the test data $\boldsymbol{v}^{(test)}$, and (modal) values of the unknowns that
parametrise the covariance matrices the high-dimensional GP invoked to model
$\boldsymbol{\xi}(\cdot)$, subsequent to learning the marginals of each such unknown
given the training data, using MCMC.
\item[---]sample from the joint posterior probability density
of $\boldsymbol{s}^{(test)}$ and all other unknown parameters of this
high-dimensional GP, given training, as well as test data, using MCMC.
\end{enumerate}}
\end{proposition}
Computational speed of the first approach is higher, as the marginal
distributions of the GP parameters are learnt separately. When the training
data is small, or if the training data is not representative of the test data
at hand, the learning of $\boldsymbol{s}^{(test)}$ via the second method may affect the
learning of the GP parameters.
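To make the first of these two routes concrete, the following is a schematic sketch (Python with \texttt{numpy} assumed; the function names and the toy log-posterior are placeholders, not our method's implementation) of the random-walk Metropolis step that generates samples of $\boldsymbol{s}^{(test)}$, once a callable returning the log-posterior of $\boldsymbol{s}^{(test)}$--given the test data and the modal values of the learnt GP parameters--is available.
\begin{verbatim}
import numpy as np

def rw_metropolis(log_post, s0, n_iter=5000, step=0.1, seed=0):
    # Random-walk Metropolis sampler for s_test, given a log-posterior callable.
    # In the first prediction route, log_post(s) would be the Tensor Normal
    # log-likelihood of the augmented (training + test) data, evaluated at the
    # modal values of the learnt covariance hyperparameters.
    rng = np.random.default_rng(seed)
    s = np.asarray(s0, dtype=float)
    lp = log_post(s)
    chain = []
    for _ in range(n_iter):
        prop = s + step * rng.normal(size=s.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            s, lp = prop, lp_prop
        chain.append(s.copy())
    return np.array(chain)

# toy illustration: a 2-d Gaussian stands in for the true posterior of s_test
toy_log_post = lambda s: -0.5 * np.sum((s - np.array([1.0, -2.0])) ** 2)
samples = rw_metropolis(toy_log_post, s0=np.zeros(2))
print(samples.mean(axis=0))   # close to (1, -2)
\end{verbatim}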
\subsection{3 ways of learning covariance matrices}
\noindent
Let the $ij$-th element of $p$-th covariance matrix $\boldsymbol{\Sigma}_p^{(m_p\times
m_p)}$ be $\sigma_{ij}^{(p)}$; $j,i=1,\ldots,m_p$, $p\in\{1,\ldots,
k\}$.
\begin{definition}
{At a given $p$, $\sigma_{ij}^{(p)}$ bears information about
covariance amongst the $i$-th and $j$-th slices of the $k$-th ordered data
tensor ${\boldsymbol{D}}_{\boldsymbol{V}}=(\boldsymbol{v}_1,\ldots,\boldsymbol{v}_{m_p})$,
s.t. $m_1\times\ldots\times m_{p-1}\times m_{p+1}\times\ldots\times
m_k$-dimensional $i$-th ``slice'' of data tensor
$\boldsymbol{D}_{\boldsymbol{V}}$ is measured value $\boldsymbol{v}_i$ of $k-1$-th ordered tensor-valued $\boldsymbol{V}$,
where the $i$-th slice is
realised at the $i$-th design point $\boldsymbol{s}_i$.
}
\end{definition}
The covariance between the $i$-th and $j$-th slices of data $\boldsymbol{D}_{\boldsymbol{V}}$
decreases as the slices get increasingly more disparate, i.e. with increasing
$\parallel \boldsymbol{s}_i - \boldsymbol{s}_j\parallel$. In fact, we can model $\sigma_{ij}^{(p)}$
as a decreasing function $K_p(\cdot,\cdot)$ of this disparity $\parallel\boldsymbol{s}_i
- \boldsymbol{s}_j \parallel$, where $K_p(\boldsymbol{s}_i,\boldsymbol{s}_j)$ is the covariance kernel
function, computed at the $i$-th and $j$-th values of input variable $\boldsymbol{S}$. In
such a model, the number of distinct unknown parameters involved in the
learning of $\boldsymbol{\Sigma}_p$ reduces from $m_p(m_p+1)/2$, to the
number of hyper-parameters that parametrise the kernel function
$K_p(\cdot,\cdot)$.
However, kernel parametrisation is not always possible. \\
--Firstly, this parametrisation may cause information loss and this may not be
acceptable \ctp{aston}. \\
--Again, we will necessarily avoid kernel parametrisation, when we cannot find
input parameters, at which the corresponding slices in the data are realised.
In such situations, \\
--we can learn the elements of the covariance matrix directly using MCMC,
though direct learning of all distinct elements of $\boldsymbol{\Sigma}_p$ is feasible only as
long as the total number of unknowns learnt by MCMC is $\lesssim 200$.\\
--we can use an empirical estimation for the covariance matrix $\boldsymbol{\Sigma}_p$.
We collapse each of the $m_p$ number of $k-1$-th ordered tensor-shaped slices
of the data, onto the $q$-th axis in the space ${\cal Y}$ of $\boldsymbol{V}$, where we
can choose any one value of $q$ from $\{1,\ldots,k-1\}$. This will reduce each
slice to a $m_q$-dimensional vector, so that $\sigma_{ij}^{(p)}$ is covariance
computed using the $i$-th and $j$-th such $m_q$-dimensional vectors.
Indeed such an empirical estimate of any covariance matrix is easily
generated, but it indulges in linearisation amongst the different
dimensionalities of the observable $\boldsymbol{V}$, causing loss of information about
the covariance structure amongst the components of these high-dimensional
slices. This approach is inadequate when the sample size is small
because the sample-based estimate will tend to be incorrect; indeed
discontinuities and steep gradients in the data, especially in small-sample
and high-dimensional data, will render such estimates of the covariance
structure incorrect. Importantly, such an approach does not leave any scope
for identifying the smoothness in the function $\boldsymbol{\xi}(\cdot)$ that represents
the functional relationship between the input and output variables. Lastly,
the uncertainties in the estimated covariance structure of the GP remain
inadequately known.
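Nonetheless, for reference, a brief illustrative sketch of such an empirical estimate follows (Python with \texttt{numpy} assumed; here ``collapsing'' a slice onto the $q$-th axis is implemented as averaging over the remaining axes, which is one possible reading of the procedure, and the array and its dimensions are placeholders).
\begin{verbatim}
import numpy as np

def empirical_sigma_p(D, p, q):
    # Empirical estimate of the m_p x m_p covariance matrix Sigma_p:
    # each of the m_p slices of the data tensor D (slices along axis p) is
    # collapsed onto axis q -- here, by averaging over all remaining axes --
    # giving m_p vectors of length m_q, whose mutual covariances form the estimate.
    Dm = np.moveaxis(D, p, 0)              # slice index now leads
    new_q = q + 1 if q < p else q          # track where axis q moved
    keep = (0, new_q)
    collapse_axes = tuple(a for a in range(Dm.ndim) if a not in keep)
    X = Dm.mean(axis=collapse_axes)        # shape (m_p, m_q)
    return np.cov(X)                       # m_p x m_p sample covariance

# toy: a 3 x 6 x 10 data tensor; estimate the 6 x 6 matrix for slices along axis 1
rng = np.random.default_rng(1)
D = rng.normal(size=(3, 6, 10))
Sigma_hat = empirical_sigma_p(D, p=1, q=0)
print(Sigma_hat.shape)                     # (6, 6)
\end{verbatim}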
\begin{proposition}
{
We model the covariance matrices as \\
--kernel parametrised,\\
--or empirically-estimated, \\
--or learnt directly using MCMC. }
\end{proposition}
An accompanying computational worry is the inversion of any of the covariance
matrices; for a covariance matrix that is an $m_p\times m_p$-dimensional
matrix, the computational order for matrix inversion is well known to be
${\cal O}(m^3_p)$ \ctp{FW92}.
\section{Kernel parametrisation}
\label{sec:kernel}
\begin{proposition}
{
Kernel parametrisation of a covariance matrix, when undertaken, uses
a Squared Exponential (SQE) covariance kernel
\begin{equation}
K(\boldsymbol{s}_i, \boldsymbol{s}_j) := A\left[\exp\left(-(\boldsymbol{s}_i - \boldsymbol{s}_j)^T \boldsymbol{Q}^{-1}
(\boldsymbol{s}_i-\boldsymbol{s}_j)\right)\right],\quad\forall \boldsymbol{s}_i,\boldsymbol{s}_j\in{\cal X},
\label{eqn:kernel2}
\end{equation}
where $\boldsymbol{Q}$ is a diagonal matrix, the diagonal elements of which are the
length scale hyperparameters $\ell_1,\ldots,\ell_d\in{\mathbb R}_{>0}$ that tell us how quickly
correlation fades away in each of the $d$-directions in input space ${\cal
X}$, s.t. the inverse matrix $\boldsymbol{Q}^{-1}$ is also diagonal, with
the diagonal elements given as
${\displaystyle{\frac{1}{\ell_1},\ldots,\frac{1}{\ell_d}}}$, where
$q_c:=1/\ell_c$ is the smoothness hyperparameter along the $c$-th direction in
${\cal X}$, $c=1,\ldots,d$. We learn these $d$ unknown parameters from the
data.
Here $A$ is the global amplitude, which is subsumed as a scale factor in one
of the other covariance matrices, the distinct elements of which are learnt directly using MCMC.}
\end{proposition}
\begin{remark}
{
We avoid using a model for the kernel in which amplitude depends on the
locations at which covariance is computed, i.e. the model:
$K(\boldsymbol{s}_i, \boldsymbol{s}_j) := a_{ij}\left[\exp\left(-(\boldsymbol{s}_i - \boldsymbol{s}_j)^T \boldsymbol{Q}^{-1}
(\boldsymbol{s}_i-\boldsymbol{s}_j)\right)\right]$, and use a model endowed with a global
amplitude $A$. This helps avoid learning a very large
number ($d(d+1)/2$) of amplitude parameters $a_{ij}$
directly from MCMC. }
\end{remark}
A loose interpretation of this amplitude modelling is that we have scaled all
local amplitudes $a_{ij}$ to be $\leq 1$ using the global factor $A$
($=\max\limits_{i,j}\{a_{ij}\}$), and these scaled local amplitudes are then
subsumed into the argument of the exponential in the RHS of the last equation,
s.t. the reciprocal of the correlation length scales, that are originally
interpreted as the elements of the diagonal matrix $\boldsymbol{Q}^{-1}$, are now
interpreted as the smoothing parameters modulated by such local
amplitudes. This interpretation is loose, since the same smoothness parameter
cannot accommodate all (scaled by a global factor) local amplitudes$\in(0,1]$,
for all $\boldsymbol{s}_i-\boldsymbol{s}_j$.
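For concreteness, a minimal sketch of the kernel of Equation~\ref{eqn:kernel2}--with the diagonal $\boldsymbol{Q}$ holding the $d$ length scales--computed over a set of design points, is the following (Python with \texttt{numpy} assumed; the design points and length scale values are placeholders).
\begin{verbatim}
import numpy as np

def sqe_covariance(S, ell, amplitude=1.0):
    # Covariance matrix over design points using the SQE kernel of Eq. (kernel2):
    # K(s_i, s_j) = A * exp(-(s_i - s_j)^T Q^{-1} (s_i - s_j)),
    # with Q = diag(ell_1, ..., ell_d).
    # S is an (n, d) array of design points; ell holds the d length scales.
    ell = np.asarray(ell, dtype=float)
    diff = S[:, None, :] - S[None, :, :]      # pairwise differences, (n, n, d)
    quad = np.sum(diff ** 2 / ell, axis=-1)   # (s_i - s_j)^T Q^{-1} (s_i - s_j)
    return amplitude * np.exp(-quad)

# toy: 5 design points in d = 2 dimensions
rng = np.random.default_rng(2)
S = rng.uniform(size=(5, 2))
K = sqe_covariance(S, ell=[0.5, 2.0])
print(np.round(K, 3))
\end{verbatim}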
\subsection{Including non-stationarity, by modelling hyperparameters of
covariance kernels as realisations of Stochastic Process}
\noindent
By definition of the kernel function we choose, (Equation~\ref{eqn:kernel2}),
all functions $\boldsymbol{\xi}(\cdot)$ sampled from the tensor-variate GP, are endowed with
the same length scale hyperparameters $\ell_1,\ldots,\ell_d$, and global
amplitude $A$. However, the data on the output variable $\boldsymbol{V}$ is not
continuous, i.e. similarity between $\boldsymbol{s}_i$ and $\boldsymbol{s}_j$ does not imply
similarity between $\boldsymbol{\xi}(\boldsymbol{s}_i)$ and $\boldsymbol{\xi}(\boldsymbol{s}_j)$, computed in a universal
way $\forall \boldsymbol{s}_i,\boldsymbol{s}_j\in{\cal X}$. Indeed, then a stationary definition of
the correlation for all pairs of points in the function domain, is wrong. One
way to generalise the model for the covariance kernel is to suggest that the
hyperparameters vary as random functions of the sample path.
\begin{theorem}
\label{th:1}
{
For ${\boldsymbol{V}}=\boldsymbol{\xi}(\boldsymbol{S})$, with $\boldsymbol{S}\in{\cal X}$ and $\boldsymbol{V}\in{\cal Y}$, if the map $\boldsymbol{\xi}:{\cal
X}\longrightarrow{\cal Y}$ is a Lipschitz-continuous map over the bounded
set ${\cal X}\subseteq{\mathbb R}^d$, where absolute
value of correlation
between $\boldsymbol{\xi}(\boldsymbol{s}_1)$ and $\boldsymbol{\xi}(\boldsymbol{s}_2)$ is
$$\vert corr(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2))\vert := \displaystyle{K\left(
\langle (\boldsymbol{s}_1 -\boldsymbol{s}_2),\boldsymbol{q}\rangle^2 \right)}, \quad\forall
\boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},$$
with $$K(\boldsymbol{s}_1,\boldsymbol{s}_2) := \displaystyle{\exp\left[-{\langle (\boldsymbol{s}_1
-\boldsymbol{s}_2),\boldsymbol{q}\rangle^2}\right]},$$
then the vector $\boldsymbol{q}$ of correlation hyperparameters is finite, and each element
of $\boldsymbol{q}$ is
$\boldsymbol{\xi}$-dependent, i.e.
$$\boldsymbol{q}(\boldsymbol{\xi}) = (q_1(\boldsymbol{\xi}),\ldots,q_d(\boldsymbol{\xi}))^T\in{\mathbb R}^d. $$
}
\end{theorem}
\begin{proof}
{
For $\boldsymbol{S}\in{\cal X}$, where ${\cal X}$ is a bounded subset of ${\mathbb R}^d$, and $\boldsymbol{V}\in{\cal Y}$, the mapping $\boldsymbol{\xi}:{\cal
X}\longrightarrow{\cal Y}$ is defined to be a Lipschitz-continuous map, i.e.
\begin{equation}
d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1), \boldsymbol{\xi}(\boldsymbol{s}_2)) \leq L_{\boldsymbol{\xi}} d_{\cal
X}(\boldsymbol{s}_1,\boldsymbol{s}_2),\quad \forall \boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},
\label{eqn:tensor_lip}
\end{equation}
--for constant $L_{\boldsymbol{\xi}}\in{\mathbb R}$, s.t. the infimum over all such
constants is the finite Lipschitz constant for $\boldsymbol{\xi}$;\\
--$({\cal X}, d_{\cal X})$ and $({\cal Y}, d_{\cal Y})$ are metric spaces.
Let metric $d_{\cal X}(\cdot,\cdot)$ be the $L_2$ norm:
$$d_{\cal X}(\boldsymbol{s}_1,\boldsymbol{s}_2) := \parallel \boldsymbol{s}_1-\boldsymbol{s}_2\parallel, \quad\forall\boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},$$
and the metric $d_{\cal Y}(\boldsymbol{\xi}(\cdot),\boldsymbol{\xi}(\cdot))$ be defined as (square
root of the
logarithm of) the inverse of
the correlation:
$$d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2)) := \sqrt{-\log\vert corr(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2))\vert},\quad\forall \boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},$$
--where correlation being a measure of affinity,
$\log\vert 1/corr(\cdot,\cdot)\vert$, transforms this affinity into a squared distance for this
correlation model; so the transformation $\sqrt{\log\vert
1/corr(\cdot,\cdot)\vert}$ to a metric is undertaken; \\
--and the given kernel-parametrised correlation is:
$$\vert corr(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2))\vert := \exp[-{\langle (\boldsymbol{s}_1-\boldsymbol{s}_2),\boldsymbol{q}\rangle}^2],
\quad\forall \boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},\:\boldsymbol{q}\in{\mathbb R}^d,$$
so that
$$d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2))={\langle
(\boldsymbol{s}_1-\boldsymbol{s}_2),\boldsymbol{q}\rangle}. $$
Then for the map $\boldsymbol{\xi}$ to be Lipschitz-continuous, we require:
\begin{equation}
\displaystyle{\sum\limits_{i=1}^d q_i^2(\boldsymbol{s}_1^{(i)}-\boldsymbol{s}_2^{(i)})^2} \leq
\displaystyle{L_{\boldsymbol{\xi}}^2\sum\limits_{i=1}^d (\boldsymbol{s}_1^{(i)}-\boldsymbol{s}_2^{(i)})^2},
\label{eqn:last}
\end{equation}
where the vector of correlation hyperparameters, $\boldsymbol{q}=(q_1,\ldots, q_d)^T$, is
finite given finite $L_{\boldsymbol{\xi}}$.
By choosing to define
\begin{equation}
q_{max}:= \max(q_1,\ldots,q_d),
\label{eqn:defn_qmax}
\end{equation}
and\\
$$(q_i^{'})^2:=\displaystyle{\left(\frac{q_i}{q_{max}}\right)^2}\leq 1,\forall i=1,\ldots,d,$$
inequation~\ref{eqn:last} is valid, if we choose the $\boldsymbol{\xi}$-dependent Lipschitz
constant $L_{\boldsymbol{\xi}}$ (that exists for this Lipschitz map) to be:
$$ L_{\boldsymbol{\xi}}^2 = q_{max}^2,$$
i.e. the map $\boldsymbol{\xi}$ is Lipschitz-continuous, if $q_{max}$ is
$\boldsymbol{\xi}$-dependent.\\
Then recalling the definition of $q_{max}$ from Equation~\ref{eqn:defn_qmax}, it
follows that in general, $q_i$ is $\boldsymbol{\xi}$-dependent, $\forall i=1,\ldots, d$.
}
\end{proof}
Given discontinuities in the data on $\boldsymbol{V}$, the function $\boldsymbol{\xi}(\cdot)$ is not
expected to obey the Lipschitz criterion defined in
inequation~\ref{eqn:tensor_lip} globally. We anticipate sample function
$\boldsymbol{\xi}(\cdot)$ to be locally or globally discontinuous.
\begin{lemma}
\label{lemma:1}
{Sample function $\boldsymbol{\xi}(\cdot)$ can be s.t.
\begin{enumerate}
\item[Case(I)] $\exists \boldsymbol{s}_2\in{\cal X}$, s.t. $\nexists$ finite Lipschitz
constant $L_{\boldsymbol{\xi}}^{(1,2)}>0$, for which $d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1), \boldsymbol{\xi}(\boldsymbol{s}_2)) \leq
L_{\boldsymbol{\xi}}^{(1,2)} d_{\cal X}(\boldsymbol{s}_1,\boldsymbol{s}_2)$.
Here the bounded set ${\cal X}\subset{\mathbb R}^d$.
\item[Case(II)] $\exists \boldsymbol{s}_2, \boldsymbol{s}_3\in{\cal X}$, with
$\parallel\boldsymbol{s}_2-\boldsymbol{s}_1\parallel\neq \parallel\boldsymbol{s}_3-\boldsymbol{s}_1\parallel$,
s.t. $d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1), \boldsymbol{\xi}(\boldsymbol{s}_2)) \leq L_{\boldsymbol{\xi}}^{(1,2)} d_{\cal
X}(\boldsymbol{s}_1,\boldsymbol{s}_2)$, but $d_{\cal Y}(\boldsymbol{\xi}(\boldsymbol{s}_1), \boldsymbol{\xi}(\boldsymbol{s}_3)) \leq
L_{\boldsymbol{\xi}}^{(1,3)} d_{\cal X}(\boldsymbol{s}_1,\boldsymbol{s}_3)$; $L_{\boldsymbol{\xi}}^{(1,2)}\neq L_{\boldsymbol{\xi}}^{(1,3)}$. In such a case, the Lipschitz
constant used for the sample function $\boldsymbol{\xi}(\cdot)$ is defined to be
$$L_{\boldsymbol{\xi}}=\max\{L_{\boldsymbol{\xi}}^{(i,j)}\}_{i\neq j; \boldsymbol{s}_i,\boldsymbol{s}_j\in{\cal X}}.$$
\end{enumerate}
If each function in the set $\{\boldsymbol{\xi}_1(\cdot),\ldots,\boldsymbol{\xi}_n(\cdot)\}$ is\\
--either globally Lipschitz, or is as described in Case~II, \\
--and Case~I does not hold true, then\\
$\forall \boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},\:\:$ $\exists$ a finite $L_{max}>0$, where
$$L_{max}:= \max\limits_{\boldsymbol{\xi}}\{L_{\boldsymbol{\xi}_1}, L_{\boldsymbol{\xi}_2},\ldots, L_{\boldsymbol{\xi}_n}\},$$
where $L_{\boldsymbol{\xi}_i}$ is the $i$-th Lipschitz constant defined for the $i$-th
sample function $\boldsymbol{\xi}_i(\cdot)$, $i=1,\ldots,n$, \\i.e. $\exists$ a
finite Lipschitz constant for all $n$ sample functions.\\
$\Longrightarrow \exists$ a universal correlation hyperparameter vector $\boldsymbol{q}_{max}$ for
all $n$ sample functions (=$L_{max}$, by Theorem~\ref{th:1}).
}
\end{lemma}
\begin{lemma}
{
Following on from Lemma~\ref{lemma:1},
if for any $\boldsymbol{\xi}_i(\cdot)\in\{\boldsymbol{\xi}_1(\cdot),\ldots,\boldsymbol{\xi}_n(\cdot)\}$
Case~I holds, $\Longrightarrow$ finite maxima of
$\boldsymbol{\xi}_i(\cdot)\in\{\boldsymbol{\xi}_1(\cdot),\ldots,\boldsymbol{\xi}_n(\cdot)\}$ does not exist, \\$\Longrightarrow\nexists$ a finite Lipschitz constant $L_{max}$
for all $n$ sample functions, \\$\Longrightarrow\nexists$ a universal correlation
hyperparameter vector $\boldsymbol{q}_{max}$, for all sample functions,\\
i.e. we need to model correlation hyperparameters to vary with the sample function.
}
\end{lemma}
\begin{remark}
Above, $q_1,\ldots,q_d$ are hyperparameters of the correlation kernel;
they are interpreted as the reciprocals of the length-scales
$\ell_1,\ldots,\ell_d$, i.e. $\ell_i=1/q_i, \forall i=1,\ldots,d$.
\end{remark}
\begin{remark}
\label{rem:2nd}
{If the map $\boldsymbol{\xi}:{\cal X}\longrightarrow{\cal Y}$ is Lipschitz-continuous,
(i.e. if hyperparameters $q_1,\ldots,q_d$ are $\boldsymbol{\xi}$-dependent, by
Theorem~\ref{th:1}), then by Kerkheim's Theorem \ctp{kerkheim}, $\boldsymbol{\xi}$ is
differentiable almost everywhere in ${\cal X}\subset{\mathbb R}^d$; this is a
generalisation of Rademacher's Theorem to metric differentials (see
Theorem 1.17 in \ctn{hazlas}). However, in our case, the function
$\boldsymbol{\xi}(\cdot)$ is not necessarily differentiable given discontinuities in the
data on the observable $\boldsymbol{V}\in{\cal Y}$, and therefore, is not necessarily
Lipschitz. }
\end{remark}
Thus, Theorem~\ref{th:1} and Lemma~\ref{lemma:1} negate
usage of a universal correlation length scale independent of sampled function
$\boldsymbol{\xi}(\cdot)$, in anticipation of discontinuities in the sample function.
\begin{proposition}
{
For ${\boldsymbol{V}}=\boldsymbol{\xi}(\boldsymbol{S})$, with $\boldsymbol{S}\in{\cal X}\subseteq{\mathbb R}^d$ and
$\boldsymbol{V}\in{\cal Y}\subseteq{\mathbb R}^{(m_1\times\ldots\times m_k)}$,
$$\vert corr(\boldsymbol{\xi}(\boldsymbol{s}_1),\boldsymbol{\xi}(\boldsymbol{s}_2))\vert := \displaystyle{\exp\left[-{\langle (\boldsymbol{s}_1
-\boldsymbol{s}_2),\boldsymbol{q}({\boldsymbol{\xi}})\rangle^2}\right]}, \quad\forall \boldsymbol{s}_1,\boldsymbol{s}_2\in{\cal X},$$
where $\boldsymbol{\xi}(\cdot)$ is a sample function of a tensor-variate GP.
Thus, in this updated model, $c$-th component $q_c=1/\ell_c$ of correlation
hyperparameter $\boldsymbol{q}({\boldsymbol{\xi}})$ is
modelled as randomly varying with the sample function, $\boldsymbol{\xi}(\cdot)$, of the
tensor-variate GP, $\forall c=1,\ldots,d$.\\
In the Metropolis-within-Gibbs-based inference that we undertake,
one sample function of the tensor-variate GP is generated in every iteration,
$\Longrightarrow q_c$ that we model above
$${\mbox{as randomly
varying with the sample path of the tensor-variate GP,}}$$
$$\equiv{\mbox{is randomly varying with the iteration number variable}}\quad
T\in\{0,1,\ldots,t_{max}\}\subset{\mathbb Z}_{\geq 0},$$
$$\Longrightarrow{\mbox{We model}}\quad\ell_c = g_c(t),\quad c=1,\ldots,d,$$ where
this scalar-valued random function $g_c:\{0,1,\ldots,t_{max}\}\subset{\mathbb Z}_{\geq 0}\longrightarrow
{\mathbb R}_{\geq 0}$, is modelled as a realisation from a
scalar-variate GP. }
\end{proposition}
The scalar-variate GP that $g_c(\cdot)$ is sampled
from, is independent of the GP that $g_{c'}(\cdot)$ is sampled from; $c\neq
c'; c,c'=1,\ldots,d$.
In addition, parameters that define the correlation function of the
generative scalar-variate GP can vary, namely the amplitude $A$ and scale
$\delta$ of one such GP might be different from another. Thus,
scalar-valued functions sampled from GPs with varying correlation parameters
$A$ and $\delta$--even for the same $c$ value--should be marked by
these descriptor variables $A>0$ and $\delta>0$.
\begin{proposition}
{
We update the relationship between iteration number $T$ and correlation length
scale hyperparameter $\ell_c$ in the $c$-th direction in input space to be: $$\ell_c = g_{c,\boldsymbol{x}}(t), \quad{\mbox{where
vector of descriptor variables is}}\quad \boldsymbol{X}:=(A,\delta)^T, \quad{with}$$
--$A_c$ the amplitude variable of the SQE-looking covariance function of
the scalar-variate GP that $g_{c,\boldsymbol{x}}(\cdot)$ is a realisation of.
$A_c$ takes the value $a_c\geq 0$; \\
--$\delta_c$ the length scale variable of the SQE-looking covariance
function of the scalar-variate GP that $g_{c,\boldsymbol{x}}(\cdot)$ is a realisation of;
$\delta_c \in{\mathbb R}_{> 0}$.\\
Then the scalar-variate
GPs that $g_{c,\boldsymbol{x}}(\cdot)$ and $g_{c,\boldsymbol{x}^{/}}(\cdot)$ are sampled from, have
distinct correlation functions if $\boldsymbol{x}\neq \boldsymbol{x}^{/}$. Here $c=1,\ldots,d$.}
\end{proposition}
\begin{proposition}
{
Current value of correlation length scale hyperparameter
$\ell_c$, acknowledges information on only the past $t_0$ number of
iterations as in:
\begin{eqnarray}
\ell_{c} &=& g_{c,\boldsymbol{x}}(t-t^{'}),\quad {\mbox{if}}\:\:t\geq t_0,\:\: c=1,\ldots,d;\: t^{'}=1,\ldots, t_0,\nonumber \\
\ell_{c} &=& \ell_c^{(const)},\quad {\mbox{if}}\:\:t =0,1,\ldots,t_0-1,\:\: \quad
c=1,\ldots,d,
\label{eqn:lenscl}
\end{eqnarray}
where $\ell_c^{(const)}$ is an unknown constant that we learn from the data,
during the first $t_0$ iterations. }
\end{proposition}
As $g_{c,\boldsymbol{x}}(t)$ is a realisation from a scalar-variate GP, the joint
probability distribution of $t_0$ number of values of the function
$g_{c,\boldsymbol{x}}(t)$--at a given $\boldsymbol{x}=(a, \delta)^T$--is Multivariate
Normal, with $t_0$-dimensional mean vector $\boldsymbol{M}_{c,\boldsymbol{x}}$ and $t_0\times
t_0$-dimensional covariance matrix $\boldsymbol{\Psi}_{c,\boldsymbol{x}}$, i.e.
\begin{equation}
[g_{c,\boldsymbol{x}}(t-1), g_{c,\boldsymbol{x}}(t-2),\ldots, g_{c,\boldsymbol{x}}(t-t_0)] \sim {\cal MN}(\boldsymbol{M}_{c,\boldsymbol{x}}, \boldsymbol{\Psi}_{c,\boldsymbol{x}}).
\label{eqn:multvar}
\end{equation}
\begin{definition}
{Here $t_0$ is the number of iterations that we look back at, to collect the
dynamically-varying ``look back-data'' ${\bf D}_{c,t}^{(orig)}
:= \{\ell_{c,t-t_0},\ldots,\ell_{c,t-1}\}$ that is employed to learn
parameters of the scalar-variate GP that $g_{c,\boldsymbol{x}}(\cdot)$ is modelled with.
\begin{enumerate}
\item[---]The mean vector $\boldsymbol{M}_{c,\boldsymbol{x}}$ is empirically estimated as the mean
of the dynamically varying look back-data, s.t.
at the $t$-th iteration it is estimated as a $t_0$-dimensional vector with
each component
${\hat{m}}_{c,\boldsymbol{x}}^{(t)}:=[\ell_{c,t-t_0}+\ldots+\ell_{c,t-1}]/t_0$.
\item[---]$t_0\times t_0$-dimensional covariance matrix is dependent on the
iteration-number and this is now acknowledged in the notation to state: $\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t)=\left[a_c\exp\left(
-\frac{(t_i-t_j)^2}{\delta_c^2}\right)\right],\:i,j=t-1,\ldots,t-t_0$.
\end{enumerate}
}
\end{definition}
In the $t$-th iteration, upon the empirical estimation of
the mean as given above, it is subtracted from the ``look back-data'' ${\bf D}_{c,t}^{(orig)}$
so that the subsequent
mean-subtracted look back-data is ${\bf D}_{c,t}
:= \{\ell_{c,t-t_0}-{\hat{m}}_{c,\boldsymbol{x}}^{(t)},\ldots,\ell_{c,t-1}-{\hat{m}}_{c,\boldsymbol{x}}^{(t)}\}$. It is
indeed this mean-subtracted sample that we use.
\begin{definition}
In light of this declared usage of the mean-subtracted ``look back-data''
${\bf D}_{c,t}$, we update the likelihood over what is declared in Equation~\ref{eqn:multvar}, to
\begin{equation}
[g_{c,\boldsymbol{x}}(t-1), g_{c,\boldsymbol{x}}(t-2),\ldots, g_{c,\boldsymbol{x}}(t-t_0)] \sim {\cal MN}(\boldsymbol{0},
\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t)),\quad \forall c=1,\ldots,d.
\label{eqn:multvar2}
\end{equation}
\end{definition}
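A compact numerical sketch of the likelihood in Equation~\ref{eqn:multvar2} is the following (Python with \texttt{numpy} and \texttt{scipy} assumed; the lookback values and hyperparameter values are placeholders; a small jitter is added to the covariance matrix purely for numerical stability, and the kernel follows the form in the Definition above).
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def lookback_loglike(ell_history, a_c, delta_c):
    # Log-likelihood of Eq. (multvar2): the last t_0 values of ell_c
    # (the 'look back-data'), mean-subtracted, are jointly zero-mean
    # Multivariate Normal with Psi[i, j] = a_c * exp(-(t_i - t_j)^2 / delta_c^2).
    ell = np.asarray(ell_history, dtype=float)
    t0 = ell.size
    d_centred = ell - ell.mean()            # mean-subtracted look back-data
    t = np.arange(t0, dtype=float)
    Psi = a_c * np.exp(-(t[:, None] - t[None, :]) ** 2 / delta_c ** 2)
    Psi += 1e-10 * np.eye(t0)               # jitter for numerical stability
    return multivariate_normal(mean=np.zeros(t0), cov=Psi).logpdf(d_centred)

# toy: t_0 = 10 past values of ell_c
history = 0.8 + 0.05 * np.sin(np.arange(10) / 3.0)
print(lookback_loglike(history, a_c=1.0, delta_c=4.0))
\end{verbatim}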
\subsection{Temporally-evolving covariance matrix}
\label{sec:temporal}
\begin{theorem} { The dynamically varying covariance matrix of the
Multivariate Normal likelihood in Equation~\ref{eqn:multvar2}, at
iteration number $t\geq t_0$, is $$\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t) \sim
\displaystyle{{\cal GWP}(d, \boldsymbol{G}_c, k(\cdot,\cdot))},\quad {\mbox{where:}}$$
the number of iterations we look back to is $t_0$; \\$k(\cdot,\cdot)$ is
the covariance kernel parametrising the covariance function of the
scalar-variate GP that generates the scalar-valued function
$g_{c,\boldsymbol{x}}(\cdot)$, at the vector $\boldsymbol{x}=(a_c,\delta_c)^T$ of descriptor
variables,
s.t. $k(t_i,t_j)=\exp\left(-\frac{(t_i-t_j)^2}{\delta_c^2}\right),\:\forall
t_i,t_j=t-1,\ldots,t-t_0$; \\$\boldsymbol{G}_c$ is a positive definite square scale
matrix $\boldsymbol{G}_c$ of dimensionality $t_0$, containing the amplitudes of this
covariance function; \\$c=1,\ldots,d$, with the space ${\cal X}$ of input
variable $\boldsymbol{S}$ $d$-dimensional. }
\end{theorem}
\begin{proof}
{
The covariance
kernel $k(\cdot,\cdot)$ that parametrises the covariance function of the
scalar-variate GP
that generates $g_{c,\boldsymbol{x}}(t)$, is s.t. $k(t_i,t_i)$=1 $\forall i=1,\ldots,t_0$.
In a general model, at each iteration, a new value of the vector $\boldsymbol{x}_c$ of
descriptor variables in the $c$-th direction in the space ${\cal X}$ of the
input variable $\boldsymbol{S}$, is generated,
s.t. in the $t-t_i$-th iteration, it
is $\boldsymbol{x}_{c,i}=(a_{c,i},\delta_{c,i})^T$; $t-t_i= t-1,\ldots,t-t_0$
$$\Longrightarrow{\mbox{at}}\quad T=t,\quad \{g_{c,\boldsymbol{x}_1}(t),\ldots, g_{c,\boldsymbol{x}_{t_0}}(t)\}
\quad{\mbox{is a sample of the random variable}}\quad g_{c,\boldsymbol{x}}(t).$$
Now, $corr(g_{c,\boldsymbol{x}}(t-t_i),g_{c,\boldsymbol{x}}(t-t_j)) =
k(t_i,t_j)\delta({c,c^{/}})\delta({\boldsymbol{x},\boldsymbol{x}^{/}})$, where $\delta({\cdot,\cdot})$ is the Delta function.\\
$\Longrightarrow$ sample estimate of $Cov(g_{c,\boldsymbol{x}}(t-t_i),g_{c,\boldsymbol{x}}(t-t_j))$ is
$$Cov(g_{c,\boldsymbol{x}}(t-t_i),g_{c,\boldsymbol{x}}(t-t_j))=
\displaystyle{\sum\limits_{k=1}^{t_0} a_{c,i} a_{c,j} g_{c,\boldsymbol{x}_k}(t-t_i) g_{c,\boldsymbol{x}_k}(t-t_j)},\quad \forall t-t_i, t-t_j = t-1,\ldots,t-t_0,$$
is the $ij$-th element of matrix $\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t)$.\\
This definition of the covariance holds since mean of the
r.v. $g_{c,\boldsymbol{x}}(t)$ is 0, as we have sampled the function from a zero-mean
scalar-variate GP.
$${\mbox{Let}}\quad \boldsymbol{g}_{c,\boldsymbol{x}_k}(t) := (g_{c,\boldsymbol{x}_k}(t-t_1),\ldots,
g_{c,\boldsymbol{x}_k}(t-t_0))^T,\quad k=1,\ldots,t_0.$$
Let $\boldsymbol{G}_c$ be a $t_0\times t_0$-dimensional diagonal matrix, the $i$-th diagonal element
of which is $a_{c,i}^2$. Then factorising the scale matrix $\boldsymbol{G}_c=\boldsymbol{L}_{G_c} \boldsymbol{L}_{G_c}^T$,
$\boldsymbol{L}_{G_c}$ is diagonal with the $i$-th diagonal element
$a_{c,i}$; $i=1,\ldots,t_0$. This is defined for every $c\in\{1,\ldots,d\}$.
Then at iteration number $T=t$, we define the current covariance matrix
$$\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t) :=
\displaystyle{\sum\limits_{k=1}^{t_0} \boldsymbol{L}_{G_c}\,\boldsymbol{g}_{c,\boldsymbol{x}_k}(t)\, \boldsymbol{g}_{c,\boldsymbol{x}_k}^T(t)\, \boldsymbol{L}_{G_c}^T}.$$
Then
$\boldsymbol{\Psi}_{c,\boldsymbol{x}}(t)$ is distributed according to the Wishart distribution with parameters
$\boldsymbol{G}_c$ and $d$ \ctp{eaton}, i.e. the dynamically-varying covariance matrix is:
$$ \boldsymbol{\Psi}_{c,\boldsymbol{x}}(t) \sim {\cal{GWP}}(d, \boldsymbol{G}_c, k(\cdot,\cdot)).$$
}
\end{proof}
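The construction in the proof above can be illustrated numerically as follows (a schematic sketch in Python with \texttt{numpy} assumed; the sampled vectors and amplitudes are placeholders).
\begin{verbatim}
import numpy as np

def gwp_covariance(g_samples, a_diag):
    # Construct the dynamically varying covariance matrix, as in the proof:
    # Psi(t) = sum_k L_G g_k g_k^T L_G^T, with L_G = diag(a_1, ..., a_{t0})
    # a square root of the diagonal scale matrix G, and g_k the k-th sampled
    # vector of zero-mean GP function values over the t_0 lookback iterations.
    L = np.diag(a_diag)
    Psi = np.zeros((len(a_diag), len(a_diag)))
    for g in g_samples:                     # g has length t_0
        v = L @ g
        Psi += np.outer(v, v)               # rank-one Wishart increment
    return Psi

# toy: t_0 = 4 lookback points, t_0 sampled GP vectors
rng = np.random.default_rng(3)
t0 = 4
samples = [rng.normal(size=t0) for _ in range(t0)]
Psi = gwp_covariance(samples, a_diag=np.ones(t0))
print(np.linalg.eigvalsh(Psi) >= -1e-12)    # positive semi-definite
\end{verbatim}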
\begin{remark}
{If interest lies in learning the covariance matrix at any time point,
we could proceed to inference from here, in an attempt to learn the
unknown parameters of this ${\cal{GWP}}$ process given the lookback-data ${\bf
D}_{c,t}$.
Our
learning scheme would then involve compounding a Tensor-Variate GP
and a ${\cal{GWP}}$.}
\end{remark}
The above delineates a route to recovering the temporal variation
in the correlation structure of time series data (as studied, for example, by \ctn{wilson}).
\begin{remark}
{In our study, the focus is on high-dimensional data that display
discontinuities, and on learning the relationship $\boldsymbol{\xi}(\cdot)$ between the
observable $\boldsymbol{V}$ that generates such data, and the system parameter
$\boldsymbol{S}$--with the ulterior aim being parameter value prediction. So learning the
time-varying covariance matrix $\Psi(t)$ is not the focus of our method
development. }
\end{remark}
We want to learn $\boldsymbol{\xi}(\cdot)$ given training data ${\bf D}$.
The underlying motivation is to sample a new $g_{c,\boldsymbol{x}}(\cdot)$ from
a scalar-variate GP, at new values of
$a_1,\ldots,a_d,\delta_1,\ldots,\delta_d$, to subsequently sample a new tensor-valued
function $\boldsymbol{\xi}(\cdot)$, from the
tensor-normal GP, at a new value of its $d$-dimensional correlation
length scale hyperparameter vector $\boldsymbol{\ell}$.
\subsection{2-layers suffice}
\label{sec:suffice}
\noindent
One immediate concern that can be raised is the reason for limiting the
layering of our learning scheme to only 2. It may be argued that just as
we ascribe stochasticity to the length scales $\ell_1,\ldots,\ell_d$ that
parametrise the correlation structure of the tensor-variate GP that models
$\boldsymbol{\xi}(\cdot)$, we need to do the same to the descriptor
variables $a,\:\delta$ that parametrise the correlation structure of the
scalar-variate GP that models $g_{c,\boldsymbol{x}}(t)$. Following this argument, we would
need to hold $a,\:\delta$--or at least model the scale
$\delta$--to be dependent on the sample path of the scalar-variate GP,
i.e. set $\delta$ to be dependent on $g_{c,\boldsymbol{x}}(\cdot)$.
However, we show below that a global choice of
$\delta$ is possible irrespective of the sampled function $g_{c,\boldsymbol{x}}(\cdot)$, given that
$g_{c,\boldsymbol{x}}:\{t-1,\ldots,t-t_0\}\subset{\mathbb Z}_{\geq 0}\longrightarrow{\mathbb R}_{\geq 0}$ is always
continuous (a standard result). In contrast, the function $\boldsymbol{\xi}(\cdot)$ not being
necessarily Lipschitz (see Remark~\ref{rem:2nd}), implies that the correlation
kernel hyperparameters $q_c$, are $\boldsymbol{\xi}$-dependent, $\forall c=1,\ldots,d$.
\begin{theorem}
\label{th:main1}
{
Given $\ell_c=g_{c,\boldsymbol{x}}(t)$, with $T\in{\cal N}\subset{\mathbb Z}_{\geq 0}$ and
$\ell_c\in{\mathbb R}$,
the map $g_{c,\boldsymbol{x}}:{\mathbb Z}_{\geq
0}\longrightarrow{\mathbb R}_{\geq 0}$ is a Lipschitz-continuous map,
$\forall c=1,\ldots,d$. Here ${\cal N}:=\{t-t_1,\ldots,t-t_0\}$.}
\end{theorem}
The proof of this standard theorem is provided in Section~4 of the
Supplementary Materials.
\begin{theorem}
\label{th:main2} { For any sampled function $g_{c,\boldsymbol{x}}:{\cal N}\longrightarrow{\mathbb R}_{\geq 0}$ realised from
a scalar-variate GP that has a covariance function that is kernel-parametrised with an SQE kernel function, parametrised by
amplitude and scale hyperparameters, the Lipschitz constant that defines
the Lipschitz-continuity of $g_{c,\boldsymbol{x}}(\cdot)$, is $g_{c,\boldsymbol{x}}$-dependent,
and is given by the reciprocal of the scale hyperparameter, s.t. the set of $t_0$
values of scale hyperparameters, for each of the $t_0$ samples
of $g_{c,\boldsymbol{x}}(\cdot)$ taken from the scalar-variate GP, admits a finite minimum. }
\end{theorem}
\begin{proof} { For $\ell_c=g_{c,\boldsymbol{x}}(T)$, $g_{c,\boldsymbol{x}}:{\cal N}\subset{\mathbb
Z}_{\geq 0}\longrightarrow{\cal G}\subset{\mathbb R}_{\geq 0}$ is a
Lipschitz-continuous map, (Theorem~\ref{th:main1}), with $T\in{\cal N}$
and $\ell_c\in{\cal G}$. (${\cal N}$ is defined in
Theorem~\ref{th:main1}). Distance between any $t-t_1,t-t_2\in{\cal N}$ is
given by metric $$d_{\cal N}(t-t_1,t-t_2) := \vert t_1-t_2\vert.$$ Distance
between $g_{c,\boldsymbol{x}}(t-t_1)$ and $g_{c,\boldsymbol{x}}(t-t_2)$ is given by metric $$d_{\cal
G}(g_{c,\boldsymbol{x}}(t-t_1),g_{c,\boldsymbol{x}}(t-t_2)) := \sqrt{-\log\vert
corr(g_{c,\boldsymbol{x}}(t-t_1), g_{c,\boldsymbol{x}}(t-t_2))\vert},$$ s.t. $d_{\cal
G}(g_{c,\boldsymbol{x}}(t-t_1),g_{c,\boldsymbol{x}}(t-t_2))\geq 0$, and is finite (since
$t-t_1,t-t_2$ live in a bounded set, and $g_{c,\boldsymbol{x}}(\cdot)$ is continuous).
The parametrised model of the correlation is
$$\vert corr(g_{c,\boldsymbol{x}}(t-t_1), g_{c,\boldsymbol{x}}(t-t_2))\vert := \displaystyle{K\left(
\frac{(t_1 - t_2)^2}{\delta_g^2} \right)} \equiv
\displaystyle{\exp\left[-\frac{(t_1
-t_2)^2}{\delta_g^2}\right]},$$
s.t. $\vert corr(g_{c,\boldsymbol{x}}(t-t_1), g_{c,\boldsymbol{x}}(t-t_2))\vert\in(0,1]$, where
$\delta_g > 0$ is the scale hyperparameter.
Now, Lipschitz-continuity of $g_{c,\boldsymbol{x}}(\cdot)$ implies
\begin{equation}
d_{\cal G}(g_{c,\boldsymbol{x}}(t-t_1),g_{c,\boldsymbol{x}}(t-t_2)) \leq L_g d_{\cal N}(t-t_1,t-t_2),
\label{eqn:lips}
\end{equation}
where the Lipschitz constant $L_g$ is $g_{c,\boldsymbol{x}}$-dependent (Theorem~\ref{th:1}). As $d_{\cal N}(t-t_1,t-t_2)\equiv \vert
t_1-t_2\vert \leq t_0$, where $t_0$ is a known finite integer, and as
$d_{\cal G}(\cdot,\cdot)$ is defined as $\vert t_1-t_2\vert/\delta_g,\:\delta_g>0$ (using
definition of $d_{\cal G}(\cdot,\cdot)$), $L_g$ exists for $t_1, t_2$, and is
finite. We get
\begin{equation}
L_g =\displaystyle{\frac{1}{\delta_g}}.
\label{eqn:lipdel}
\end{equation}
As $t-t_1,t-t_2$ is any point in ${\cal N}$, $L_g$ exists for all points in ${\cal N}$.
Let set $\boldsymbol{L}:=\{L_{g_1},\ldots,L_{g_{t_0}}\}$, where $L_{g_i}$
defines the Lipschitz-continuity condition
(inequation~\ref{eqn:lips}) for the
$i$-th sample function $g_i(\cdot)$ from a scalar-variate GP.
$$\exists
L_{max}:={\displaystyle{\max_{g}}}[\boldsymbol{L}]={\displaystyle{\max_{g}}}\{L_{g_1},\ldots,L_{g_{t_0}}\},\quad{\mbox{where}}\quad
L_{max}>0\quad{\mbox{and is finite.}}$$ Thus, $L_{max}$ is a Lipschitz
constant that defines the Lipschitz continuity for any sampled function in
$\{g_{c,\boldsymbol{x}}(t-t_1),\ldots,g_{c,\boldsymbol{x}}(t-t_0)\}$, at any
iteration number $t$ in a chain of finite and known number of iterations.\\
Then by Equation~\ref{eqn:lipdel}, $\exists \delta >0$, s.t.
$$\displaystyle{\frac{1}{\delta}}:={\displaystyle{\max_{g}}\left\{\frac{1}{\delta_{g_1}},\ldots,\frac{1}{\delta_{g_{t_0}}}\right\}},\quad{\mbox{i.e.}}\quad \delta=
{\displaystyle{\min_{g}}\{{\delta_{g_1}},\ldots,{\delta_{g_{t_0}}}\}};\quad{\mbox{where}}\:\delta_{g_i}>0\:\forall i=1,\ldots,t_0.$$
Here $L_{g_i} =\displaystyle{\frac{1}{\delta_{g_i}}}; \: i=1,\ldots,t_0$.
}
\end{proof}
\begin{theorem}
{
Given $\ell_c=g_{c,\boldsymbol{x}}(t)$, where $g_{c,\boldsymbol{x}}:{\cal N}\longrightarrow{\cal G}$
is a Lipschitz-continuous function, sampled from a scalar-variate GP, the
covariance function of which, computed at any 2 points $t-t_1,t-t_2$ in the input
space ${\cal N}$, is kernel parametrised as
$$Cov(t_1,t_2) = \displaystyle{a_c K\left(
\frac{(t_1 - t_2)^2}{\delta_c^2} \right)} \equiv
\displaystyle{a_c \left(\exp\left[-\frac{(t_1
-t_2)^2}{\delta_c^2}\right]\right)},$$
where $a_c$ is the amplitude hyperparameter, and the scale hyperparameter of this kernel is $\delta_c$, which is independent of the sample function $g_{c,\boldsymbol{x}}(\cdot)$; $c\in\{1,\ldots,d\}$.}
\end{theorem}
\begin{proof}
{
By Theorem~\ref{th:main2},
$\delta_c:={\displaystyle{\min_{g_c}}\{{\delta_{g_{c,1}}},\ldots,{\delta_{g_{c,n}}}\}}$
exists for any $c\in\{1,\ldots, d\}$. Then the scalar-variate GP that models the sample function
$g_{c,\boldsymbol{x}}(\cdot)$ has a covariance kernel that is marked by the finite
scale hyperparameter $\delta_c$, independent of the sample function.}
\end{proof}
\begin{remark}
\label{rem:distinguish}
{
That a stationary scale hyperparameter $\delta$ that is independent of the
sample path can define the covariance kernel of the scalar-variate GP that
$g_{c,\boldsymbol{x}}(\cdot)$ is sampled from, owes to the fact that any such sample
function $g_{c,\boldsymbol{x}}(\cdot)$ is continuous given the nature of the map (from
a subset of integers to reals). However, when the sample function from a GP is
not continuous, (such as $\boldsymbol{\xi}(\cdot)$ that is modelled with the
tensor-variate GP discussed above), a set of values of the sample function-dependent
scale hyperparameter(s) of the covariance kernel of the corresponding GP,
will not admit a minimum, and therefore, in such a case, a global scale
hyperparameter cannot be ascribed to the covariance kernel of the generating
GP. This is why we need to retain the correlation length scale hyperparameter
$\ell_c$ to be dependent on the tensor-valued sample function $\boldsymbol{\xi}(\cdot)$,
but the scale hyperparameter $\delta_c$ is no longer dependent on the
scalar-valued sample function $g_{c,\boldsymbol{x}}(\cdot)$. In other words, we do not
need to add any further layers to our learning strategy, beyond the two
layers discussed.}
\end{remark}
\subsection{Learning using a Compound Tensor-Variate $\&$ Scalar-Variate GPs}
\noindent
We find inference defined by a sequential sampling from the
scalar-variate GPs (for each of the $d$ directions of input
space), followed by that from the tensor-variate GP, directly relevant to our
interests. Thus our learning involves a Compound
tensor-variate and multiple scalar-variate GPs. To abbreviate,
we will refer below to such a Compound Stochastic Process, as a ``$nested-GP$'' model.
\begin{remark}
{
As $\delta_c, a_c$ are not stochastic, hereon, we absorb the dependence of the function $g(\cdot)$ on the direction
index, via the descriptor parameters, and refer to this function as
$g_{\boldsymbol{x}_c}(t)$; $c=1,\ldots,d$.}
\end{remark}
\begin{definition} {
$Nested-GP$ model:\\
for $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$,
$$\boldsymbol{\xi}(\cdot)\sim{\mbox{tensor-variate GP}},$$ s.t. joint
probability of $n$ observations of $k-1$-th ordered tensor-valued variable
$\boldsymbol{V}$ (that comprise training data ${\bf D}$), is $k$-th ordered Tensor Normal, with $k$ covariance matrices--which
are empirically estimated, or learnt directly using MCMC, or kernel
parametrised, s.t. the length scale parameters $\ell_1,\ldots,\ell_d$ of this
covariance kernel are each modelled as a dynamically varying function
$\ell_c=g_{\boldsymbol{x}_c}(t)$, where
$$g_{\boldsymbol{x}_c}(t)\sim\:c-{\mbox{th scalar-variate GP}},$$
$\Longrightarrow$joint probability of the last $t_0$ observations of $\ell_c$ (that
comprise ``lookback data'' ${\bf D}_{c,t}$), is
Multivariate Normal, the covariance function of which is parametrised by a
kernel indexed by the $c$-th, stationary descriptor parameter
vector $\boldsymbol{x}_c=(a_c,\delta_c)^T$, where $a_c$ is the amplitude and
$\delta_c$ the scale-length hyperparameter of the SQE-looking covariance
kernel; $c=1,\ldots,d$.}
\end{definition}
\begin{definition} {
$Nonnested-GP$ model:\\
for $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$,
$$\boldsymbol{\xi}(\cdot)\sim{\mbox{tensor-variate GP}},$$ s.t. joint
probability of observations of
$\boldsymbol{V}$ is $k$-th ordered Tensor Normal, with $k$ covariance matrices--which
are empirically estimated, or learnt directly using MCMC, or kernel
parametrised, s.t. the length scale parameters $\ell_1,\ldots,\ell_d$ of this
covariance kernel are each treated as a stationary unknown. All learning
is undertaken using training data ${\bf D}$.}
\end{definition}
\section{Inference}
\label{sec:inference}
\noindent
We undertake inference with Metropolis-within-Gibbs.
Below $\theta^{(t\star)}$ indicates proposed value of parameter $\theta$ in the $t$-th
iteration, while $\theta^{(t)}$ refers
to the value current in the $t$-th iteration.
\begin{itemize}
\item $Nested-GP$:
\begin{enumerate}
\item In $t>t_0$-th iteration, propose amplitude and
scale-length of $c$-th scalar-variate GP as:
$$ a_c^{(t\star)} \sim {\cal TN}(a_c^{(t-1)}, 0, v_a^{(c)}),\quad\forall
c=1,\ldots,d, $$
$$ \delta_c^{(t\star)} \sim {\cal TN}(\delta_c^{(t-1)}, 0,
v_\delta^{(c)}),\quad\forall c=1,\ldots,d,$$ where ${\cal N}(\cdot)$ is
Normal, and ${\cal TN}(\cdot,0,\cdot)$ is
a Truncated Normal density left-truncated at 0. \\$v_a^{(c)},
v_\delta^{(c)}$ refer to constant, experimentally-chosen variances.
\item As length scale hyperparameter $\ell_c=g_{\boldsymbol{x}_c}(t)\sim GP(0,
\exp\left(-(\cdot-\cdot)^2/ 2\delta_c^2\right))$,
probability of
the current lookback data ${\bf D}_{c,t}$ given parameters of this $c$-th
scalar-variate GP, is Multivariate Normal with mean vector $\boldsymbol{0}$ and a
current covariance matrix
$\boldsymbol{\Psi}_c^{(t-1)}:=\left[ a_c^{(t-1)} \exp\left(-\frac{(t_i
- t_j)^2}{2(\delta_c^{(t-1)})^2}\right)\right]; \quad
t_i,t_j=t-1,\ldots,t-t_0.$ Similarly, the likelihood of the proposed
parameters
can be defined. These enter computation of the acceptance ratio in the first
block of Metropolis-within-Gibbs.
\item At the updated parameters $\delta_c,a_c$, at $T=t$, length scale hyperparameters $\ell_1,\ldots,\ell_d$ are
rendered Normal variates s.t. $$\ell_c^{(t\star)}\sim{\cal N}(\ell_c^{(t-1)},
a_c^{(t\star)}),$$ under a Random Walk paradigm, where the mean of this
Gaussian distribution is the current value of the $\ell_c$ parameter; $\forall c=1,\ldots,d$.
\item The proposed and current values of $\ell_1,\ldots,\ell_d$ inform on the
acceptance ratio in the 2nd block of our inference, along with other,
directly learnt parameters, of the covariance structure of the tensor-variate
GP that $\boldsymbol{\xi}(\cdot)$ is sampled from.
\end{enumerate}
\item $Nonnested-GP$:
\begin{enumerate}
\item In the first block of Metropolis-within-Gibbs, $\ell_1,\ldots,\ell_d$
are updated, once proposed as Normal variates, with experimentally chosen
constant variance of the respective proposal density.
\item Updating of directly-learnt elements of relevant covariance matrices is
undertaken in the 2nd block, and the acceptance ratio that invokes the
tensor-normal likelihood, is computed to accept/reject these proposed
values, at the $\ell_c$ variable values that are updated in the first block
of Metropolis-within-Gibbs.
\end{enumerate}
\end{itemize}
Details on inference are presented in Section~1 of the Supplementary
Materials.
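A schematic, single-direction ($d=1$) sketch of one such sweep of the $Nested$-$GP$ sampler is given below (Python with \texttt{numpy} and \texttt{scipy} assumed; the stand-in data likelihood, variable names and numerical settings are placeholders, the Hastings correction for the truncated proposal densities and the updates of the directly-learnt covariance elements are omitted for brevity, and the positivity guard on $\ell$ is a practical choice of this sketch only).
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal, truncnorm

def mn_loglike(lookback, a_c, delta_c):
    # Zero-mean MVN log-likelihood of the mean-subtracted lookback data.
    x = np.asarray(lookback, dtype=float) - np.mean(lookback)
    t = np.arange(x.size, dtype=float)
    Psi = a_c * np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * delta_c ** 2))
    Psi += 1e-10 * np.eye(x.size)
    return multivariate_normal(np.zeros(x.size), Psi).logpdf(x)

def nested_gp_step(state, lookback, data_loglike, v_a=0.05, v_d=0.05, rng=None):
    # One Metropolis-within-Gibbs sweep for a single direction c (d = 1 here).
    # data_loglike(ell) stands in for the Tensor Normal likelihood of the
    # training data as a function of the length scale; the full sampler loops
    # over c = 1,...,d and also updates the directly-learnt covariance elements.
    rng = rng or np.random.default_rng()
    a, delta, ell = state["a"], state["delta"], state["ell"]

    # block 1: update (a, delta) of the scalar-variate GP, via the lookback data
    a_prop = truncnorm.rvs(-a / np.sqrt(v_a), np.inf, loc=a,
                           scale=np.sqrt(v_a), random_state=rng)
    d_prop = truncnorm.rvs(-delta / np.sqrt(v_d), np.inf, loc=delta,
                           scale=np.sqrt(v_d), random_state=rng)
    log_alpha = mn_loglike(lookback, a_prop, d_prop) - mn_loglike(lookback, a, delta)
    if np.log(rng.uniform()) < log_alpha:
        a, delta = a_prop, d_prop

    # block 2: update ell, proposed as Normal with variance a (per the text),
    # accepted or rejected on the data likelihood
    ell_prop = rng.normal(ell, np.sqrt(a))
    if ell_prop > 0 and np.log(rng.uniform()) < data_loglike(ell_prop) - data_loglike(ell):
        ell = ell_prop

    return {"a": a, "delta": delta, "ell": ell}

# toy run: a Gaussian stands in for the Tensor Normal data likelihood
toy_data_loglike = lambda ell: -0.5 * ((ell - 1.5) / 0.3) ** 2
state = {"a": 0.1, "delta": 3.0, "ell": 1.0}
lookback = [1.0, 1.1, 0.9, 1.05, 0.95]
rng = np.random.default_rng(4)
for _ in range(200):
    state = nested_gp_step(state, lookback, toy_data_loglike, rng=rng)
print(state)
\end{verbatim}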
\section{Application}
\label{sec:application}
\noindent
We illustrate our method using an application on astronomical data. In this
application, we are going to learn the location of the Sun in the
Milky Way modelled as a 2-dimensional disk.
The training data ${\bf D}$ is cuboidally-shaped, and is of
dimensionalities $m_1\times m_2\times m_3$, where $m_1=2, m_2=50, m_3\equiv
n=216$, i.e. the 3rd-ordered tensor $\boldsymbol{D}_{\boldsymbol{V}}$ comprises $n=216$ matrices
of dimension $50\times 2$, where the $i$-th value of the matrix-variate observable
$\boldsymbol{V}^{(50\times 2)}$ is realised at the $i$-th value of the system parameter vector $\boldsymbol{S}$, s.t.
${\bf D}=\{(\boldsymbol{s}_i,\boldsymbol{v}_i)\}_{i=1}^n$. The 3rd-ordered tensor $\boldsymbol{D}_{\boldsymbol{V}}^{(m_1\times m_2\times n)}$ is formed by collating the slices $\boldsymbol{v}_1,\ldots,\boldsymbol{v}_n$.
The training data comprises the $m_1=$2-dimensional velocity vectors of a
sample of $m_2=50$ real stars that exist around the Sun, in a model
Milky Way disk, where the matrix-variate r.v. $\boldsymbol{V}^{(50\times 2)}$ comprising
such velocity vectors of this chosen stellar sample, are generated via
numerical simulations conducted with $n=216$ different astronomical models of
the Galaxy, with each such model of the Galaxy distinguished by a value of the
Milky Way feature parameter vector $\boldsymbol{S}\in{\mathbb R}^d$, $d$=2
\ctp{dc2007}. Thus, $\boldsymbol{V}=\boldsymbol{v}_i$ at the $i$-th design point $\boldsymbol{s}_i$,
$i=1,\ldots,216$. As $\boldsymbol{V}$ is affected by $\boldsymbol{S}$, we write $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$, and
aim to learn the high-dimensional function $\boldsymbol{\xi}(\cdot)$, with the aim of
predicting value of either $\boldsymbol{V}$ or $\boldsymbol{S}$, at a given value of the other.
In particular, there exists the test data $\boldsymbol{v}^{(test)}$ that comprises the
$m_1=2$-dimensional velocity vectors of the 50 identified, stellar neighbours
of the Sun, as measured by the Hipparcos satellite \ctp{dc2007}. It is the
same 50 stars for which velocity vectors are simulated at each design
point. However, we do not know the real Milky Way feature parameter vector
$\boldsymbol{s}^{(test)}$ at which $\boldsymbol{V}=\boldsymbol{v}^{(test)}$ is realised.
Since we are observing velocities of stars around the Sun, the observed
velocities will be impacted by certain Galactic features. These features
include the location of the Sun, $\boldsymbol{S}$. Thus, the observed matrix $\boldsymbol{v}^{(test)}$
can be regarded as the result of the Galactic features (including the sought
solar location) bearing certain values. So, fixing all Galactic features other
than the location $\boldsymbol{S}$ of the Sun in the simulations that generate the
training data, the matrix $\boldsymbol{V}$ of stellar velocities is related to $\boldsymbol{S}$,
i.e. $\boldsymbol{V}=\boldsymbol{\xi}(\boldsymbol{S})$. The input variable $\boldsymbol{S}$ is then also the location
from which an observer on Earth (or equivalently the Sun, on Galactic length
scales), observes the 2-dimensional velocity vectors of $m_2$
(=50) of our stellar neighbours.
\ctn{dc2007} generated the training data by first placing a regular
2-dimensional polar grid on a chosen annulus in a 2-dimensional astronomical
model of the MW disk. In the centroid of each grid cell, an observer was
placed. There were $n$ grid cells, so, there were $n$ observers placed in this
grid, such that the $i$-th observer measures velocities of $m_{2i}$ stars
that land in her grid cell, at the end of a simulated evolution of a sample
of stars that are evolved in this model of the MW disk, under the influence
of the feature parameters that mark this MW model. We indexed the $m_{2i}$
stars by their location with respect to the observer inside the grid cell, and
took a sample of $m_2=50$ stars from this collection of $m_{2i}$ stars;
$i=1,\ldots,n=216$. Thus, each observer records a matrix (or sheet) of 2-dimensional velocity vectors of $m_2$ stars. The test data measured by the Hipparcos satellite is
then the 217-th sheet, except we are not aware of the value of $\boldsymbol{S}$ that this
sheet is realised at.
The solar location vector is 2-dimensional, i.e. $d$=2 since the Milky Way
disk is assumed to be 2-dimensional, i.e. $\boldsymbol{S}=(S_1,S_2)^T$, s.t. in this polar
grid, $S_1$ tells us about the radial distance between the Galactic centre and
the observer, while $S_2$ denotes the angular location of the observer in the
MW disk, w.r.t. a pre-fixed axis in the MW, namely, long axis of an elongated
bar of stars that lies pivoted at the Galactic centre, as per the astronomical
model of the MW that was used to generate the training data.
In \ctn{chakrabarty2015bayesian}, the matrix of velocities was vectorised, so
that the observable was then a vector. In our case, the observable is $\boldsymbol{V}$--a
matrix. The process of vectorisation causes the treatment of \ctn{chakrabarty2015bayesian} to
lose correlation information. Our work allows for clear
quantification of such covariances. More importantly, our work provides a
clear template for implementing methodology for learning given
high-dimensional data that comprise measurements of a tensor-valued
observable. As mentioned above, the empirical estimate of the mean tensor is
obtained, and used as the mean of the Tensor Normal density that represents
the likelihood.
To learn $\boldsymbol{\xi}(\cdot)$, we model it as a realisation from a high-dimensional
GP, s.t. the joint of $n$ values of $\boldsymbol{\xi}(\cdot)$--computed at
$\boldsymbol{s}_1,\ldots,\boldsymbol{s}_n$--is 3rd-order Tensor Normal, with 3 covariance matrices
that inform on:\\
--amongst-observer-location covariance ($\boldsymbol{\Sigma}_3^{(216\times216)}$),\\
--amongst-stars-at-different-relative-position-w.r.t.-observer covariance ($\boldsymbol{\Sigma}_2^{(50\times 50)}$), and \\
--amongst-velocity-component covariance ($\boldsymbol{\Sigma}_1^{(2\times 2)}$).
The elements of $\boldsymbol{\Sigma}_2$ are not learnt by MCMC.\\
--Firstly, there is no input space variable that can be identified, at which
the $ij$-th element of $\boldsymbol{\Sigma}_2$ can be considered to be realised;
$i,j=1,\ldots,50$, where this $ij$-th element gives the covariance between
the $i$-th and $j$-th $216\times 2$-dimensional matrices within the 3-rd
ordered tensor $\boldsymbol{D}_{\boldsymbol{V}}$. Effectively, the 41st star could have been
referred to as the 3rd star in this stellar sample, and vice versa, i.e.
there is no meaningful ordering in the labelling of the sampled stars with
these indices. Therefore, we cannot use these labels as values of an input
space variable, in terms of which, the covariance between the $i$-th and $j$-th
$216\times 2$-dimensional velocity matrices can be kernel-parametrised. \\
--Secondly, direct learning of the $50(51)/2=1275$ distinct elements of $\boldsymbol{\Sigma}_2$,
using MCMC, is ruled out, given that this is a large number. \\
--In light of this, we will perform empirical estimation of
$\boldsymbol{\Sigma}_2$.
\begin{definition}
{Covariance between the $216\times 2$-dimensional stellar velocity matrix
$\boldsymbol{W}_i:=[v^{(i)}_{pq}]$ of the sampled star labelled by index $i$, and the
matrix $\boldsymbol{W}_j:=[v^{(j)}_{pq}]$
of the star labelled as $j$, ($p=1,\ldots,216; q=1,2$), is estimated as ${\widehat{\sigma_{ij}^{(2)}}}$, where:\\
${\widehat{\sigma_{ij}^{(2)}}}=$
$$ \displaystyle{
\frac{1}{2-1} \times
\sum_{q=1}^2 \left[
\frac{1}{216} \times
\left(\sum_{p=1}^{216} (v^{(i)}_{pq} - \bar{v}^{(i)}_q)
\times
(v^{(j)}_{pq} - \bar{v}^{(j)}_q)
\right)\right]},$$
where $\bar{v}^{(i)}_q=\displaystyle{\frac{\left(\sum_{p=1}^{216} v^{(i)}_{pq}\right)}{216}}$ is the sample mean of the $q$-th column of the
matrix $\boldsymbol{W}_i=[v^{(i)}_{pq}]$. }
\end{definition}
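As an illustration of this estimator, a minimal NumPy sketch (our own array layout and names; not code accompanying this paper) is the following, in which the data tensor is stored with shape $(m_1,m_2,m_3)=(2,50,216)$.
\begin{verbatim}
import numpy as np

def empirical_sigma2(D_V):
    # D_V[q, i, p]: q-th velocity component of the i-th sampled star,
    #               realised at the p-th design point; shape (2, 50, 216)
    m1, m2, m3 = D_V.shape
    W  = np.transpose(D_V, (1, 2, 0))        # W[i] is the 216 x 2 matrix W_i
    Wc = W - W.mean(axis=1, keepdims=True)   # centre each column of each W_i
    Sigma2 = np.zeros((m2, m2))
    for i in range(m2):
        for j in range(m2):
            # (1/(m1-1)) * sum_q [ (1/m3) * sum_p centred cross-products ]
            Sigma2[i, j] = np.sum(Wc[i] * Wc[j]) / ((m1 - 1) * m3)
    return Sigma2
\end{verbatim}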
The 3 distinct elements of the $2\times 2$-dimensional covariance matrix
$\boldsymbol{\Sigma}_1$ are learnt directly via MCMC. These are the 2 diagonal
elements $\sigma_{11}^{(1)}$ and $\sigma_{22}^{(1)}$, and the correlation
$\rho:=\displaystyle{\frac{\sigma^{(1)}_{12}}{\sqrt{\sigma^{(1)}_{11}\sigma^{(1)}_{22}}}}$.
We perform kernel parametrisation of $\boldsymbol{\Sigma}_3$, using the SQE kernel such
that the $jp$-th element of $\boldsymbol{\Sigma}_3$ is kernel-parametrised as
$[\sigma_{jp}] = \displaystyle{\exp\left(-(\boldsymbol{s}_j-\boldsymbol{s}_p)^T \boldsymbol{Q}^{-1} (\boldsymbol{s}_j-\boldsymbol{s}_p)\right)}, j,p=1,\ldots,216.$
Since $\boldsymbol{S}$ is a 2-dimensional vector, $\boldsymbol{Q}$ is a 2$\times$ 2 square diagonal
matrix, the elements $\ell_{1}, \ell_{2}$ of which represent the
correlation length scales.
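A short sketch of this kernel parametrisation (again in NumPy, with names of our own choosing, and following the convention above that the diagonal of $\boldsymbol{Q}$ holds $\ell_1,\ell_2$ themselves) is:
\begin{verbatim}
import numpy as np

def sqe_sigma3(S, ell):
    # S:   (n, d) array of design points s_1,...,s_n; here n=216, d=2
    # ell: (d,) array holding the diagonal of Q, i.e. (l_1, l_2)
    diff = S[:, None, :] - S[None, :, :]                  # pairwise s_j - s_p
    quad = np.sum(diff**2 / ell[None, None, :], axis=-1)  # (s_j-s_p)^T Q^{-1} (s_j-s_p)
    return np.exp(-quad)                                  # [sigma_jp] of Sigma_3
\end{verbatim}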
Then in the ``$nonnested-GP$'' model, we learn the (modelled as stationary)
$\ell_1, \ell_2$, along with $\sigma_{11}^{(1)}$, $\sigma_{22}^{(1)}$ and
$\rho$.
Under the $nested-GP$ model, $\ell_c$ is modelled as $\ell_c=g_{\boldsymbol{x}_c}(t)$,
where, at iteration number $T=t$, $g_{\boldsymbol{x}_c}(t)$ is sampled from the $c$-th
zero-mean, scalar-variate GP, the amplitude $a_c$ and correlation length scale
$\delta_c$ of which we learn, for $c=1,2$, in addition to the parameters
$\sigma_{11}^{(1)}$, $\sigma_{22}^{(1)}$ and
$\rho$.
The likelihood of the training data given the covariance matrices of the
tensor-variate GP, is then given as per Equation~\ref{eqn:eqn3_bef}:
\begin{equation}
\begin{aligned}
&{\cal L}({\bf D}\vert
\ell_1,\ell_2,\sigma_{11}^{(1)},\sigma_{22}^{(1)},\rho)=(2\pi)^{-m/2}(\prod_{i=1}^{3}|\boldsymbol{\Sigma}_i|^{-m/2m_i})
\\ &\times \exp(-\Vert ({\boldsymbol{D}}_{\boldsymbol{V}}-\hat{\boldsymbol{M}})\times_1 {\boldsymbol{A}_1}^{-1}\times_2 {\hat{\boldsymbol{A}_2}}^{-1} \times_3 \boldsymbol{A}_3^{-1} \Vert^2/2).
\label{eqn:eqn3_bef}
\end{aligned}
\end{equation}
where $\boldsymbol{\Sigma}_p = \boldsymbol{A}_p \boldsymbol{A}^{T}_p$, $p=1,2,3$ and ${\hat{\boldsymbol{M}}}$ is the
empirical estimate of the mean tensor and $\hat{\boldsymbol{\Sigma}_2}$ is the empirical
estimate of the covariance matrix $\boldsymbol{\Sigma}_2$ such that ${\hat{\boldsymbol{\Sigma}_2}} =
{\hat{\boldsymbol{A}_2}} {\hat{\boldsymbol{A}_2}}^{T}$. Here $m_3=216$, $m_2=50$, $m_1=2$, and
$m=m_1 m_2 m_3$. One or more of the covariance matrices is kernel
parametrised, where the kernel is a function of pairs of values of the input
variable $\boldsymbol{S}$--this explains the dependence of the RHS of this equation on
the whole of ${\bf D}$, with the data tensor $\boldsymbol{D}_{\boldsymbol{V}}$ contributing partly
to training data ${\bf D}$.
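To fix ideas, the log of this likelihood can be evaluated by successive mode-wise multiplications of the centred data tensor with the inverted square roots of the three covariance matrices. A minimal sketch of such an evaluation follows (in NumPy; the names are our own, and we read the exponent $-m/2m_i$ as $-m/(2m_i)$).
\begin{verbatim}
import numpy as np

def mode_n_product(T, M, n):
    # multiply tensor T by matrix M along its n-th mode (0-indexed)
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def tensor_normal_loglik(D, M_hat, Sigmas):
    # D, M_hat: arrays of shape (m1, m2, m3) = (2, 50, 216)
    # Sigmas = [Sigma_1, Sigma_2_hat, Sigma_3], Sigma_k of size m_k x m_k
    m, R, logdet = D.size, D - M_hat, 0.0
    for k, S in enumerate(Sigmas):
        A = np.linalg.cholesky(S)                  # Sigma_k = A_k A_k^T
        R = mode_n_product(R, np.linalg.inv(A), k)
        logdet += (m / (2.0 * S.shape[0])) * np.linalg.slogdet(S)[1]
    return -0.5 * m * np.log(2.0 * np.pi) - logdet - 0.5 * np.sum(R**2)
\end{verbatim}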
This allows us to write the joint posterior probability density of the unknown
parameters given training data ${\bf D}$. We generate posterior samples from
it using Metropolis-within-Gibbs. To write this posterior, we impose
non-informative priors $\pi_0(\cdot)$ on each of our unknowns (Gaussian with
wide, experimentally chosen variances, and mean that is the arbitrarily chosen
seed value of $\ell_{\cdot}$; Jeffreys priors on $\boldsymbol{\Sigma}_1$). The posterior
probability density of our unknown GP parameters, given the training data is
then
\begin{equation}
\begin{aligned}
\pi(\ell_{1}, \ell_{2}, \sigma_{11}^{(1)},\sigma_{22}^{(1)}, \rho\vert{\bf D})
\propto {\cal L}({\boldsymbol{D}}_{\boldsymbol{V}} \vert \boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_3)\times \pi_0(\ell_{1}) \pi_0(\ell_{2}) \pi_0(\boldsymbol{\Sigma}_1).
\end{aligned}
\label{eqn:marginal_bef}
\end{equation}
The results of our learning and estimation of the mean and covariance
structure of the GP used to model this tensor-valued data, are discussed below
in Section~\ref{sec:results}.
\begin{definition}
{
The joint posterior probability density of the
unknown parameters given
the training data ${\bf D}$ that comprises the velocity tensor $\boldsymbol{D}_{\boldsymbol{V}}$, under the $nested-GP$
model is given by
\begin{equation}
\begin{aligned}
&\pi(\delta_{1}, \delta_{2}, a_1, a_2, \ell_1,\ell_2,
\sigma_{11}^{(1)},\sigma_{22}^{(1)}, \rho\vert {\bf D}) \propto
(2\pi)^{-m/2} \left(\prod_{i=1}^{3}|\boldsymbol{\Sigma}_i|^{-m/2m_i}\right)\\
&\times \exp(-\Vert ({\boldsymbol{D}}_{\boldsymbol{V}}-\hat{\boldsymbol{M}})\times_1 {\boldsymbol{A}_1}^{-1}\times_2
{\hat{\boldsymbol{A}_2}}^{-1} \times_3 \boldsymbol{A}_3^{-1} \Vert^2/2)\times\\
&\displaystyle{\prod\limits_{c=1}^2 \frac{1}{\sqrt{\det(2\pi\boldsymbol{\Psi}_{\boldsymbol{x}_c})}}
\exp\left[-\frac{1}{2}
(\boldsymbol{\ell}_c^{(t_0)})^T \left(\boldsymbol{\Psi}_{\boldsymbol{x}_c}\right)^{-1}(\boldsymbol{\ell}_c^{(t_0)})\right]}
\times \pi_0(\boldsymbol{\Sigma}_1),
\end{aligned}
\label{eqn:marginal_bef2}
\end{equation}
where $\boldsymbol{\ell}_c^{(t_0)}:=(\ell_c^{(t-t_0)},\ldots,\ell_c^{(t-1)})^T$, and
$ij$-th element of the covariance matrix $\boldsymbol{\Psi}_{\boldsymbol{x}_c}$ is
$\displaystyle{\left[{a_c\exp\left[-\frac{(t_i-t_j)^2}{2(\delta_c)^2}\right]}\right]}$,
$i,j=1,\ldots,t_0$. N.B. the $t$-dependence of the covariance matrix
$\boldsymbol{\Psi}_{\boldsymbol{x}_c}$ is effectively suppressed, given that this dependence comes in
the form $t-t_i -(t-t_j)$.
}
\end{definition}
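For completeness, the lookback-window covariance matrix $\boldsymbol{\Psi}_{\boldsymbol{x}_c}$ that appears in this density can be assembled as follows (a sketch with our own names; only differences of iteration indices enter, so the window may be indexed $0,\ldots,t_0-1$).
\begin{verbatim}
import numpy as np

def psi_matrix(t0, a_c, delta_c):
    # (i,j)-th element: a_c * exp( -(t_i - t_j)^2 / (2 delta_c^2) )
    idx  = np.arange(t0, dtype=float)
    diff = idx[:, None] - idx[None, :]
    return a_c * np.exp(-diff**2 / (2.0 * delta_c**2))
\end{verbatim}
The Gaussian factor in the posterior above is then the zero-mean multivariate Normal density of $\boldsymbol{\ell}_c^{(t_0)}$ with this covariance.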
We generate posterior
samples using MCMC, to identify the marginal posterior probability
distribution of each unknown. The marginal then allows for the computation of
the 95$\%$ HPD.
\section{Inverse Prediction--2 Ways}
\label{sec:prediction}
\noindent
We aim to predict the location vector $\boldsymbol{s}^{(test)}$ of the Sun in the Milky
Way disk, at which the real (test) data
$\boldsymbol{v}^{(test)}$--comprising the 2-dimensional velocity vectors of 50 identified stellar
neighbours of the Sun, measured by the {\it Hipparcos} satellite--is realised. We undertake
this subsequent to the learning of the relation $\boldsymbol{\xi}(\cdot)$ between the solar location
variable $\boldsymbol{S}$ and the stellar velocity matrix-valued variable $\boldsymbol{V}$, using
the astronomically-simulated training data.
\begin{definition}
{The tensor that includes both test and training data has dimensions of $217\times 50\times 2$. We call this augmented data
$\boldsymbol{D}^*=\{\boldsymbol{v}_1,...,\boldsymbol{v}_{216},\boldsymbol{v}^{(test)}\}$, to distinguish it from the
tensor $\boldsymbol{D}_{\boldsymbol{V}}$ that forms part of the training data. Here $\boldsymbol{v}_i$ is realised
at design point $\boldsymbol{s}_i$, but the $\boldsymbol{s}^{(test)}$ at which $\boldsymbol{v}^{(test)}$ is
realised, is not known.}
\end{definition}
\begin{remark}
{This 217-th sheet of (test) data is realised at the unknown value
$\boldsymbol{s}^{(test)}$ of $\boldsymbol{S}$, and upon its inclusion, the updated covariance
amongst the sheets generated at the different values of $\boldsymbol{S}$, is renamed
$\boldsymbol{\Sigma}_1^*$, which is now rendered $217\times 217$-dimensional. Then
$\boldsymbol{\Sigma}_1^*$ includes information about $\boldsymbol{s}^{(test)}$ via its
kernel parametrisation. The effect of inclusion of
the test data on the other covariance matrices is less; we refer to them as
(empirically estimated) ${\hat{\boldsymbol{\Sigma}_2^*}}$ and $\boldsymbol{\Sigma}_3^*$. The updated
(empirically estimated) mean tensor is ${\hat{\boldsymbol{M}}}^*$. }
\end{remark}
The likelihood for the augmented data is:
\begin{equation}
\begin{aligned}
{\cal L}(\boldsymbol{D}^*|\boldsymbol{s}^{(test)}, \boldsymbol{\Sigma}_1^*,\boldsymbol{\Sigma}_3^*) =&
\displaystyle{(2\pi)^{-m/2}\left(\prod\limits_{i=1}^{3}|\boldsymbol{\Sigma}_i^*|^{-m/2m_i}\right)}\times \\
&{\displaystyle{\exp\left[-\Vert (\boldsymbol{D}^*-{\hat{\boldsymbol{M}}}^*)\times_1 ({\boldsymbol{A}_1^*})^{-1} \times_2 ({\hat{\boldsymbol{A}_2^*}})^{-1} \times_3 ({\boldsymbol{A}_3^*})^{-1} \Vert^2/2\right]}}
\label{eqn:eqn4}
\end{aligned}
\end{equation}
where ${\hat{\boldsymbol{A}_2^*}}$ is the square root of ${\hat{\boldsymbol{\Sigma}_2^*}}$.
Here $m_1=217$, $m_2=50$, $m_3=2$, and $m=m_1
m_2 m_3$. Here $\boldsymbol{A}_1^*$ is the square root of $\boldsymbol{\Sigma}_1^*$ and
depends on $\boldsymbol{s}^{(test)}$.
The posterior of the unknowns given the test+training data is:
\begin{equation}
\begin{aligned}
\pi(s_1^{(test)},s_2^{(test)},\boldsymbol{\Sigma}_1^*,\boldsymbol{\Sigma}_3^*\vert \boldsymbol{D}^*) \propto &
{\cal L}(\boldsymbol{D}^*|s_1^{(test)},s_2^{(test)},\boldsymbol{\Sigma}_1^*,\boldsymbol{\Sigma}_3^*)\times\\
& \pi_0(s_1^{(test)})\pi_0(s_2^{(test)})\pi_0(q_{2}^{(*)})\pi_0(q_{1}^{(*)}) \pi_0(\boldsymbol{\Sigma}_3^*).
\end{aligned}
\label{eqn:marginal}
\end{equation}
\begin{remark}
{We use $\pi_0(s_p^{(test)})={\cal U}(l_p, u_p),\:p=1,2$, where $l_p$ and $u_p$
are chosen depending on the spatial boundaries of the fixed area of the Milky
Way disk that was used in the astronomical simulations by
\ctn{dc2007}. Recalling that the observer is located in a two-dimensional
polar grid, \ctn{dc2007} set the lower boundary on the value of the angular
position of the observer to 0 and the upper boundary is $\pi/2$ radians,
i.e. 90 degrees, where the observer's angular coordinate is the angle made by
the observer-Galactic centre line to a chosen line in the MW disk. The
observer's radial location is maintained within the interval [1.7, 2.3] in
model units, where the model units for length are related to galactic unit for
length, as discussed in Section~\ref{sec:astro}.}
\end{remark}
In the second method for prediction, we
infer $\boldsymbol{s}^{(test)}$ by
sampling from the posterior of $\boldsymbol{s}^{(test)}$ given the
test data and the modal values of the parameters
$q_{1}, q_{2}, \sigma_{11}^{(1)},
\rho,\sigma_{22}^{(1)}$ that were learnt using the training data.
Let the modal value of $\boldsymbol{\Sigma}_3$, learnt using ${\bf D}$, be
$[(\sigma_3^{(M)})_{jp}]_{j=1;p=1}^{217,217}$.
Similarly, the modal value $\boldsymbol{\Sigma}_1^{(M)}$ that was learnt using the
training data is used.
The posterior of $\boldsymbol{s}^{(test)}$, at learnt (modal) values is then
\begin{equation}
\begin{aligned}
&\pi(s_1^{(test)},s_2^{(test)}\vert \boldsymbol{D}^*,\boldsymbol{\Sigma}_1^{(M)},\boldsymbol{\Sigma}_3^{\star}) \propto \\
&{\cal L}(\boldsymbol{D}^*|s_1^{(test)},s_2^{(test)},\boldsymbol{\Sigma}_1^{(M)},\boldsymbol{\Sigma}_3^{\star})\times \pi_0(s_1^{(test)})\pi_0(s_2^{(test)})
\times \pi_0(q_{2}^{(M)})\pi_0(q_{1}^{(M)}) \pi_0(\boldsymbol{\Sigma}_3).
\end{aligned}
\label{eqn:marginalpred}
\end{equation}
where ${\cal L}(\boldsymbol{D}^*|s_1^{(test)},s_2^{(test)},\boldsymbol{\Sigma}_1^{(M)},\boldsymbol{\Sigma}_3^{\star})$ is as given in Equation~\ref{eqn:eqn3_bef}, with $\boldsymbol{\Sigma}_3$ replaced by $\boldsymbol{\Sigma}_3^{\star}$, and $\boldsymbol{\Sigma}_1$ replaced by its modal value $\boldsymbol{\Sigma}_1^{(M)}$. The priors on $s^{(test)}_1$ and $s^{(test)}_2$ are as discussed above.
For all parameters, we use Normal proposal densities that have experimentally chosen variances.
\begin{figure}[!t]
\begin{center}
{
\includegraphics[width=10cm]{hist_nohyperGP.eps}
}
\end{center}
\caption{Results from run done with training data ${\bf D}$ with the
$nonnested-GP$ model, are shown in grey (or red in the
electronic version), while results from run undertaken with
training and test data, $\boldsymbol{D}^{\star}$, in this $nonnested-GP$ model,
are depicted in black. Traces of the logarithm of the likelihood are
displayed from the two runs in the top left panel. Reciprocal of the
length scale parameters are the shown in the top middle and right
panels; here $q_c=\ell_c^{-1}, \: c=1,2$. Histograms representing
marginal posterior probability density of the learnt diagonal
elements $\sigma_{11}^{(1)}$ and $\sigma_{22}^{(1)}$, of the
covariance matrix $\boldsymbol{\Sigma}_1$, are shown in the mid-row, left and
middle panels (given respective data). Histograms representing marginals
of the parameter
$\rho=\displaystyle{\frac{\sigma_{12}}{\sqrt{\sigma_{11}^{(1)}\sigma_{22}^{(1)}}}}$
are displayed in the mid-row right
panel. Prediction of the values of the input parameter
$\boldsymbol{S}=(S_1,S_2)^T$ is possible only in the run performed with both
training and test data. Marginals of $S_1$ and $S_2$ values learnt via
MCMC-based sampling from the joint of all unknown parameters given
$\boldsymbol{D}^{\star}$, are shown in the lower panel, as approximated by histograms.}
\label{fig:nohypergp_with_wo_hist}
\end{figure}
\section{Results}
\label{sec:results}
\noindent
In this section, we present the results of learning the unknown
parameters of the 3rd-order tensor-normal likelihood, given the training as
well as the training+test data.
While Figure~1 of the Supplementary Materials and
Figure~\ref{fig:nohypergp_with_wo_hist} here depict results obtained from using the
$nonnested-GP$, in the following figures, results of the learning of all
relevant unknown parameters, using the $nested-GP$ model, are included.
Figures that depict results from the $nested-GP$
approach will include results of the learning of the amplitude parameters $a_c$ and
smoothing parameters $d_c:=1/\delta_c$. Also, our modelling
under the $nested-GP$ paradigm relies on a lookback-time $t_0$ which gives the
number of iterations over which we gather the generated
$\ell_c$ values.
\begin{figure}
\begin{center}
{
\includegraphics[width=10cm]{hist_hyperGP200.eps}
}
\end{center}
\caption{Results from run done with test+training data ${\boldsymbol{D}}^{\star}$
within the $nested-GP$ model, shown in black, as distinguished from the
results of learning given the same data, and the $nonnested-GP$
model depicted in grey (or red in the electronic
version). Here the value of $t_0$ used is 200 iterations. Histograms
approximating the marginal posterior probability densities of each
sought unknown are depicted. Here, sought hyperparameter values $a_c$ and
$\delta_c$ are relevant only to the $nested-GP$ model ($c=1,2$). Here, we
have undertaken sampling from the joint posterior of all parameters,
including the input parameter values $s_1^{(test)}$ and $s_2^{(test)}$,
at which the test data are realised. Histograms approximating marginal
posterior of each learnt unknown are presented. }
\label{fig:hyper200_nohyper_gp}
\end{figure}
\begin{figure}
\begin{center}
{
\includegraphics[width=10cm]{nested_vs_original_trace.eps}
}
\end{center}
\caption{ Traces of parameters learnt using the training data ${\bf D}$,
in the run performed with the $nonnested-GP$ model, are compared to
traces of the corresponding parameter obtained in the run performed
with the $nested-GP$ model. Traces of parameters learnt within the
$nonnested-GP$ model are in grey (or red in the e-version) while the
traces obtained using the $nested-GP$ model are shown in black. }
\label{fig:hyper200_nohyper_gp_nopred_trace}
\end{figure}
\subsection{Effect of discontinuity in the data, manifest in our results}
\noindent
One difference between the learning of parameters from the $nested-GP$, as
distinguished from the $nonnested-GP$ models is the quality of the inference,
in the sense that the uncertainty of parameters (i.e. the 95$\%$ HPDs) learnt
using the $nested-GP$ models, is less than that learnt using the $nonnested-GP$
models. This difference in the learnt HPDs is most marked for the learning of
values of $q_1$ and $S_1$, and of $S_2$ to a lesser extent.
We explain this, by invoking the discontinuity in the training
data--the distribution over $S_1$ in this data is sharply discontinuous, though
a less sharp discontinuity is noted in the distribution over $S_2$. We
refer to Figure~8 of \ctn{dc2007}, page 152. This figure is available at
{\url{https://www.aanda.org/articles/aa/pdf/2007/19/aa6677-06.pdf}}, and
corresponds to the base astronomical model used in the simulations that
generate the training data that we use here. This figure informs on the
distribution of location $\boldsymbol{S}$; compatibility of the stellar velocity matrix
$\boldsymbol{v} (=\boldsymbol{\xi}(\boldsymbol{s}))$ realised (in astronomical simulations) at a given $\boldsymbol{s}$, to
the test velocity matrix $\boldsymbol{v}^{(test)}$ (recorded by the {\it Hipparcos}
satellite), is parametrised, and this compatibility parameter plotted against
$\boldsymbol{s}$ in this figure. In fact, this figure is a contour plot of the distribution of such a
compatibility parameter, in the space ${\cal D}$, where $\boldsymbol{S}\in{\cal
D}\subset{\mathbb R}^2$. The 2 components of $\boldsymbol{S}$ are represented in
polar coordinates, with $S_1$ the radial and $S_2$ the angular component. We
see clearly from this figure, that the distribution across $S_1$ is highly
discontinuous, at given values of $S_2$ (i.e. at fixed angular bins). In fact,
this distribution is visually more discontinuous, than the distribution across
$S_2$, at given values of $S_1$, i.e. at fixed radial bins (each of which is
represented by the space between two bounding radial arcs). In other words,
the velocity matrices that are astronomically simulated at different $\boldsymbol{S}$
values, are differently compatible with a given reference velocity matrix
($\boldsymbol{v}^{(test)}$)--and, the distribution of velocity matrix variable $\boldsymbol{V}$, is
discontinuous across values of $\boldsymbol{S}$, and in fact, less smoothly distributed
at fixed $s_2$, than at fixed $s_1$. Thus, this figure brings forth the
discontinuity with respect to the input-space variable $\boldsymbol{S}$, in the data tensor
$\boldsymbol{D}_{\boldsymbol{V}}$ that is part of the training data.
Then, it is incorrect to use a stationary kernel to parametrise the covariance
$\boldsymbol{\Sigma}_3$, that informs on the covariance between velocity matrices
generated at different values of $\boldsymbol{S}$. Our implementation of the $nested-GP$
model tackles this shortcoming of the model. However, when we implement the
$nonnested-GP$ model, Metropolis needs to explore a
wider volume of the state space to accommodate parameter values, given the
data at hand--and even then, there is a possibility for incorrect inference
under the stationary kernel model. This explains the noted trend of higher
95$\%$ HPDs on most parameters learnt using the $nonnested-GP$ model, compared
to the $nested-GP$ model, as observed in comparison of results from runs done
with training data alone, or both training and test data; compare
Figure~\ref{fig:hyper200_nohyper_gp} to
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}, and note the comparison in
the traces as displayed in
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}. Indeed, this also explains
the bigger difference noted in these figures when we compare the learning of
$q_1$ over $q_2$, in runs that use the stationary model, as distinguished from
the non-stationary model. After all, the discontinuity across $S_1$ is
discussed above, to be higher than across $S_2$.
\subsection{Effect of varying lookback times, i.e. length of historical data}
\noindent
To check for the effect of the lookback time $t_0$, we
present traces of the covariance parameters and kernel hyperparameters
learnt from runs undertaken within the $nested-GP$ model, but different $t_0$
values of 50 and 100, in Figure~\ref{fig:50_100_hypergp}, which we can
compare to the traces obtained in runs performed under the $nested-GP$ model,
with $t_0=200$, as displayed in
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}.
\begin{figure}
\begin{center}
{
\includegraphics[width=10cm]{trace_50vs100.eps}
}
\end{center}
\caption{Comparison of traces of unknown smoothness parameters of
$\boldsymbol{\Sigma}_3$ and hyperparameters of GPs invoked to model these
parameters, obtained in runs performed with training data ${\bf D}$ and
$t_0=50$ (in grey, or red in the e-version) and $t_0=100$ (in black).
}
\label{fig:50_100_hypergp}
\end{figure}
It is indeed interesting to note the trends in the traces of the
smoothness parameters $q_c$ (the reciprocals of the $\ell_c$ parameters),
of the amplitudes ($a_1,
a_2$), and of the length scale hyperparameters
($\delta_1, \delta_2$), evidenced in
Figure~\ref{fig:50_100_hypergp} and in the results in black in
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}. A zeroth-order model for
these parameters, which are realisations from a non-stationary process, is a moving-average time-series model--$MA(t_0)$ to be precise.
We note the increase in
fluctuation amplitude of the traces, with
decreasing $t_0$. For smaller values of lookback time $t_0$, the average
covariance between $g_{\boldsymbol{x}_c}(t_1)$ and $g_{\boldsymbol{x}_c}(t_2)$ is higher, than when $t_0$ is
higher, where the averaging is performed over a $t_0$-iteration long interval
that has its right edge on the current iteration; here
$\boldsymbol{x}_c=(a_c,\delta_c)^T$, $c=1,2$ and as
introduced above, we model the length scale parameter of the kernel that
parametrises $\boldsymbol{\Sigma}_3$, as $\ell_c=g_{\boldsymbol{x}_c}(t)$. Here $g_{\boldsymbol{x}_c}(\cdot)$ is modelled as
a realisation from a scalar-variate GP with covariance kernel that is
itself kernel-parametrised using an SQE kernel with amplitude $a_c$ and
correlation-length $\delta_c$. Then higher covariances between values of
$g_{\boldsymbol{x}_c}(\cdot)$ at different $t$-values in general would suggest higher values of
the global amplitude of this parametrised kernel, and higher values of the
length-scales of this SQE kernel.
Indeed an important question is, what is the ``best'' $t_0$, given our
data. Such a question is itself of relevance, and is discussed intensively under
distributed lag models, often within Econometrics \ctp{shirley}. An interesting trend noted in the parameter traces presented in
Figure~\ref{fig:50_100_hypergp} for $t_0=50,100$, and to a lesser extent for
$t_0=200$, in the results in black in
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}, is the global near-periodic
existence of crests and troughs in these traces. This periodic fluctuation is
more marked for smoothness $q_1$ (=$1/\ell_1$) and the hyperparameters of the
scalar-variate GP used to model $g_{\boldsymbol{x}_1}(\cdot)$, than for $q_2$ (and $a_2$
and $\delta_2$).
From the point of view of a polynomial (of order $t_0$) model for the lag
operator--that transfers information from the
past $t_0$ realisations from a stochastic process to the current
iteration--the shape of the trace will be dictated by parameters of this
model. If this polynomial admits complex roots, then coefficients of the relevant
lag terms will behave like a damped sine function with iterations. For a
different value of $t_0$, such a pronounced oscillatory trend might not be
equally apparent. Loosely
speaking, the value of $\ell_c$ in any iteration, represented by a moving
average, will manifest the result of superposition of the different
(discontinuous) modal neighbourhoods present in the data. The more multimodal
the data, i.e. the larger the number of ``classes'' (by correlation-length scales)
of the functional form $\boldsymbol{\xi}(\cdot)$ sampled from the tensor-variate GP, the more the
superposition of the sample paths will wash out the effect of
the different modes, and the less prominent the global trend that is manifest in the
traces. However, for data that is globally bimodal, the superposition of the
two ``classes'' of sampled functions $\boldsymbol{\xi}(\cdot)$ will create a periodicity
in the global trend of the generated $\ell_c$ values (and thereby of the
smoothness parameter values $q_c$, where $q_c=\ell_c^{-1}$).
Again, the larger the value $t_0$ of the lookback-time parameter, the larger
the number of samples over which the moving average is computed, and hence the greater the
washing-out effect. Thus, depending on the discontinuity in the data, it is
anticipated that there is a range of optimal lookback-time values, for which,
the global periodicity is most marked. This is what we might be noticing in
the trace of $q_1$ at $t_0=100$ displaying the global periodicity more
strongly than that at $t_0=200$ (see Figure~\ref{fig:50_100_hypergp} and
Figure~\ref{fig:hyper200_nohyper_gp_nopred_trace}).
Another point is that the strength of this global periodicity will be stronger
for the correlation-length scale along that direction in input-space, the
discontinuity along which is stronger. Indeed, as we have discussed above, the
discontinuity in the data with varying $S_1$ is anticipated to be higher than
with $S_2$. So we would expect a more prominent periodic trend in the trace
of $q_1$ than of $q_2$. This is indeed what we note in
Figure~\ref{fig:50_100_hypergp}. A simulation study can be undertaken to
explore the effects of empirical discontinuities.
The arguments above qualitatively explain the observed trends in the traces of
the hyperparameters, obtained from runs using different $t_0$. That in spite
of discrepancies in $a_c$ and $\delta_c$, with $t_0$, values of the length
scale parameter $\ell_c$ (and therefore its reciprocal $q_c$) are concurrent
within the 95$\%$ HPDs, is testament to the robustness of
inference. Stationarity of the traces betrays the achievement of convergence
of the chain.
We notice that the reciprocal correlation length scale $q_{1}$ is a
couple of orders of magnitude higher than $q_{2}$; correlation between
values of the sampled function $\boldsymbol{\xi}(\cdot)$, at 2 different $S_1$
values (at the same $s_2$), then wanes more quickly than correlation
between sampled functions computed at same $s_1$ and different $S_2$
values. Here $\boldsymbol{s}=(s_1,s_2)^T$ and given that $\boldsymbol{S}$ is the location of
the observer who observes the velocities of her neighbouring stars on
a two-dimensional polar grid, $S_1$ is interpreted as the radial
coordinate of the observer's location in the Galaxy and $S_2$ is the
observer's angular coordinate. Then it appears that the velocities
measured by observers at different radial coordinates, but at the same
angle, are correlated over shorter radial-length scales than
velocities measured by observers at the same radial coordinate, but
different angles. This is understood to be due to the astro-dynamical
influences of the Galactic features included by \ctn{dc2007}
in the simulation that generates the training data that we use
here. This simulation incorporates the joint dynamical effect of the
Galactic spiral arms and the elongated Galactic bar (made of stars)
that rotate at different frequencies (as per the astronomical model
responsible for the generation of our training data), pivoted at the
centre of the Galaxy. An effect of this joint handiwork of the bar and
the spiral arms is to generate distinctive stellar velocity
distributions at different radial (i.e. along the $S_1$ direction)
coordinates, at the same angle ($s_2$). On the other hand, the stellar
velocity distributions are more similar at different $S_2$ values, at
the same $s_1$. This pattern is borne by the work by \ctn{chakrabarty05}, in
which the radial and angular variation of the standard deviations of
these bivariate velocity distributions are plotted. Then it is
understandable why the correlation length scales are shorter along the
$S_1$ direction, than along the $S_2$ direction.
Furthermore, for the correlation parameter $\rho$, physics suggests that the
correlation will be zero between the two components of a velocity vector. These
two components are after all, the components of the velocity vector in a
2-dimensional orthogonal basis. However, the MCMC chain shows that there is a
small (negative) correlation between the two components of the stellar
velocity vector.
\subsection{Predicting $\boldsymbol{s}^{(test)}$}
\noindent
Figure~\ref{fig:nohypergp_with_wo_hist}, displays histogram-representations of
marginal posterior probability densities of the solar location coordinates
$s^{(test)}_1$, $s^{(test)}_2$; $q_{1}^{*}$ and $q_{2}^{*}$ that get updated
once the test data is added to augment the training data, and parameters
$\sigma_{11}^{(1)*}$, $\sigma_{22}^{(1)*}$ and $\rho^*$. 95$\%$ HPD credible
regions computed on each parameter in this inference scheme, are displayed in
Table~1 of Supplementary Materials. These figures display these parameters in the $nonnested-GP$ model. When the $nested-GP$ model is used, histogram-representations of the marginals of the aforementioned parameters, are displayed in Figure~\ref{fig:hyper200_nohyper_gp}.
Prediction of $\boldsymbol{s}^{(test)}$ using the $nested-GP$ models gives rise to similar results as when the $nonnested-GP$ models are used, (see Figure~\ref{fig:hyper200_nohyper_gp} that compares the marginals of the solar location parameters sampled from the joint of all unknowns, given all data, in $nested-GP$ models, against those obtained when $nonnested-GP$ models are used).
The marginal distribution of $s_1^{(test)}$ indicates that the
marginal is unimodal and well converged, with mode at about 2 in model units.
The distribution of $s_2^{(test)}$ on the other hand is quite
strongly skewed towards values of $s_2^{(test)}\lesssim 1$ radians,
i.e. $s_2^{(test)}\lesssim 57$ degrees, though the probability mass in
this marginal density falls sharply after about 0.4 radians,
i.e. about 23 degrees. These values tally quite well with previous
work \ctp{chakrabarty2015bayesian}. In that earlier work, using the training data that we use in this work
(constructed using the astronomical model $sp3bar3{\_}18$
discussed by \ctn{chakrabarty2015bayesian}), the marginal distribution of
$s_1^{(test)}$ was learnt to be bimodal, with modes at about 1.85 and
2, in model units. The
distribution of $s_2^{(test)}$ found by \ctn{chakrabarty2015bayesian} is however more
constricted, with a sharp mode at about 0.32 radians (i.e. about 20
degrees). We do notice a mode at about this value in our inference,
but unlike in the results of \ctn{chakrabarty2015bayesian}, we do not find the
probability mass declining to low values beyond about 15 degrees. One
possible reason for this lack of compatibility could be that in
\ctn{chakrabarty2015bayesian}, the matrix of velocities $\boldsymbol{V}$ was vectorised, so that
the training data then resembled a matrix, rather than a 3-tensor as
we know it to be. Such vectorisation could have led to some loss of correlation information, which may explain the difference between their results and ours.
Model checking of our models and results is undertaken in Section~3 of the
Supplementary Materials.
\subsection{Astronomical implications}
\label{sec:astro}
\noindent
The radial coordinate of the observer in the Milky Way, i.e. the solar radial
location, is dealt with in model units, but will need to be scaled to real
galactic unit of distance, which is kilo parsec (kpc). Now, from independent
astronomical work, the radial location of the Sun is set as 8 kpc. Then our
learnt value of $S_1^{(test)}$ is to be scaled to 8 kpc, which gives 1 model
unit of length to be ${{m}}:=\displaystyle{\left(\frac{8 \mbox{kpc}}{\mbox{learnt value of\:\:}S_1^{(test)}}\right)}$. Our main interest in learning the solar location is to find the frequency $\Omega_{bar}$ with which the Galactic bar is rotating, pivoted at the galactic centre, (loosely speaking). Here $\Omega_{bar}=\displaystyle{\frac{v_0}{\mbox{1 model unit of length}}=\frac{v_0}{{m}}}$, where $v_0=220$ km/s (see \ctn{dc2007} for details). The solar angular location being measured as the angular distance from the long-axis of the Galactic bar, our estimate of $S_2$ actually tells us the angular distance between the Sun-Galactic centre line and the long axis of the bar. These estimates are included in Table~\ref{tab:tab3}.
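As an illustrative order-of-magnitude check of this conversion (using a single representative modal value rather than the full posterior summarised in Table~\ref{tab:tab3}): if the learnt modal value of $S_1^{(test)}$ is about 2 model units, then
$${{m}} = \frac{8\,\mbox{kpc}}{2} = 4\,\mbox{kpc per model unit}, \qquad \Omega_{bar} = \frac{v_0}{{m}} = \frac{220\,\mbox{km/s}}{4\,\mbox{kpc}} = 55\,\mbox{km/s/kpc},$$
which indeed lies within the 95$\%$ HPD intervals on $\Omega_{bar}$ reported in Table~\ref{tab:tab3}.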
\begin{table*}[!h]
\caption{$95\%$ HPD on each Galactic feature parameter derived from the solar
location coordinates learnt using the two predictive inference schemes listed
above, and as reported in a past paper for the same training and test data.}
\label{tab:tab3}
\centering
\begin{tabular}{|l|l|l|}
\hline
& $95\%$ HPD for $\Omega_{bar}$ (km/s/kpc)& for angular distance of\\
& &bar to Sun (degrees)\\ \hline
{\mbox{from posterior predictive}} & $[48.11,57.73]$ & $[4.53,43.62]$
\\ \hline
{\mbox{from joint posterior}} & $[48.25,57.244]$ & $[2.25,46.80]$
\\ \hline
{\mbox{from Chakrabarty et. al (2015)}} & $[46.75, 62.98]$ & $[17.60, 79.90]$
\\ \hline
\end{tabular}
\end{table*}
Table~\ref{tab:tab3} displays the Galactic feature parameters that are derived
from the learnt solar location parameters, under the different inference
schemes using the $nonnested-GP$ model, namely, sampling from the joint posterior
probability of all parameters given all data, and from the posterior
predictive of the solar location coordinates given test data and GP parameters
already learnt from training data alone. The derived Galactic feature
parameters are the bar rotational frequency $\Omega_{bar}$ in the real
astronomical units of km/s/kpc and the angular distance between the bar and
the Sun, in degrees. The table also includes the corresponding results from \ctn{chakrabarty2015bayesian}.
\section{Conclusions}
\noindent
Our work presents a method for learning tensor-valued functional relations
between a system parameter vector, and a tensor-valued observable, multiple
measurements of which build up hypercuboidally-shaped data that is in
general not continuous, thus demanding a non-stationary covariance structure
of the invoked GP. We clarify the need for generalising a stationary
covariance to one in which the hyperparameters (correlation length scales
along each direction of the space of the system parameter vector) need to be
treated as dependent on the sample function of the invoked GP. We address this
need by modelling the sought tensor-valued function with a tensor-variate GP,
each parameter of the covariance function of which, is modelled as a
dynamically varying, scalar-valued function that is treated as a realisation
from a scalar-variate GP with distinct covariance structure, that we
parametrise. We employ Metropolis-within-Gibbs-based inference, which allows
comprehensive and objective uncertainties on all learnt unknowns. Subsequent
to the learning of the sought tensor-valued function, we make an inverse
Bayesian prediction of the system parameter values at which test data on the
observable is realised. While in this work we focussed on the learning given
discontinuous data, the inclusion of non-stationarity in the covariance is a
generic cure for non-stationary data; we will consider an application to a
temporally varying, econometric dataset in a future contribution.
\renewcommand\baselinestretch{1.}
\small
\bibliographystyle{ECA_jasa}
|
2,869,038,156,795 | arxiv | \section{Modeling Directed Graphs with Undirected Graphs}\label{sec:construction}
To prove Theorem~\ref{thm:directed} we must show how to construct, for a given protected directed graph $G$, a corresponding undirected protected graph $H$ with the same cop number, roughly the same capture time, and not ``too many more'' vertices. Since this construction is rather involved, we devote this section to an explanation of the construction, leaving the actual proof of Theorem~\ref{thm:directed} for Section~\ref{sec:main}.
Let $G$ be a protected directed graph and let $k = \cnumdir(G)$. We construct a reflexive protected undirected graph $H$ as follows. The vertex set of $H$ is comprised of sets $S$, $T_0$, $T_1$, $T_2$, $C_0$, $C_1, C_2, C^*, R_0, R_1, R_2$, and $R^*$. We refer to vertices in $C_0$, $C_1$, and $C_2$ as {\em cop vertices}, while vertices in $R_0$, $R_1$, and $R_2$ are {\em robber vertices}. $S \cup T_0 \cup T_1 \cup T_2$ is referred to as the {\em reset clique}, with $S$ itself comprising the {\em core} and $T_i$ comprising the {\em $i$th wing} of the clique. Vertices in $C^*$ are {\em cop starter vertices}, while vertices in $R^*$ are {\em robber starter vertices}. (See Figure~\ref{fig:directed_overview}.) All vertices in the reset clique are protected, while all other vertices in $H$ are unprotected. Throughout the construction, the indices on the $C_i$ and $R_i$ should be taken modulo 3 when necessary; for example, when $i=2$, the expression ``$C_{i+1}$'' refers to $C_0$.
\begin{figure}[hb]
\begin{center}
\includegraphics{fig1.ps}
\end{center}
\caption{Overview of the construction of $H$: cop vertices and robber vertices. Jagged edges are protected. A thick edge indicates that all possible edges of this sort are present.}\label{fig:directed_overview}
\end{figure}
The core of the reset clique contains $4k$ vertices, namely $s_0, s_1, \dots, s_{4k-1}$. In addition, for $i \in \{0,1,2\}$, the set $T_i$ contains $k$ vertices, namely $t_0^i, t_1^i, \dots, t_{k-1}^i$. Every pair of vertices in the reset clique is joined by a protected edge. Under the proper circumstances, the reset clique will permit the robber to ``reset'' the game back to its initial state. As with the $C_i$ and $R_i$, indices on vertices within the $T_i$ should be taken modulo $k$ and indices on vertices in $S$ should be taken modulo $4k$.
The sets $C_0$, $C_1$, and $C_2$ each contain $k$ copies of every vertex in $G$. For $v \in V(G)$, $i \in \{0, 1, 2\}$, and $j \in \{0, 1, \dots, k-1\}$, we denote by $\kappa(v;i,j)$ the $j$th copy of $v$ belonging to $C_i$. Within each $C_i$, the copies of a given vertex form a clique; that is, for all $v, i, j,$ and $j'$, the vertices $\kappa(v;i,j)$ and $\kappa(v;i,j')$ are joined by an unprotected edge. Aside from these edges and loops, there are no edges with both endpoints in any $C_i$. The sets $R_0$, $R_1$, and $R_2$ each contain one copy of every vertex in $G$, and each $R_i$ is independent. We denote by $\rho(v;i)$ the copy of $v$ in $R_i$. Our intent is that for the bulk of the game, the cops occupy vertices within the $C_i$ (that is, ``cop vertices''), while the robber occupies a vertex within one of the $R_j$ (that is, a ``robber vertex'').
We seek to construct $H$ so that play of the game on $H$ mirrors play on $G$, in the sense that we can equate a cop occupying $\kappa(v;i,j)$ in $H$ with a cop occupying $v$ in $G$; likewise, the robber occupying $\rho(v;i)$ in $H$ is analogous to the robber occupying $v$ in $G$. We also aim to greatly restrict the flexibility that both players enjoy. In particular, our intent is that under any optimal cop strategy, if the cops are positioned within $C_i$ after some cop turn, then the robber must be positioned within $R_i$. Moreover, we aim to force the cops to move from $C_0$ to $C_1$ to $C_2$ to $C_0$ and so on; forcing the cops to keep moving in the ``forward direction'' among the $C_i$ will allow us to simulate playing on the directed graph $G$. Likewise, we aim to force the robber to move from $R_0$ to $R_1$ to $R_2$ to $R_0$ and so on.
We now add edges among the cop and robber vertices. For all $\overrightarrow{uv} \in E(G)$, all $i \in \{0, 1, 2\}$, and all $j \in \{0, 1, \dots, k-1\}$, add unprotected edges joining $\kappa(u;i,j)$ to $\kappa(v;i+1,j)$ and joining $\rho(u;i)$ to $\rho(v;i+1)$. (Note that $u$ and $v$ need not be distinct.) These edges ensure that each ``forward'' movement by a cop or robber (from $C_i$ to $C_{i+1}$ or from $R_i$ to $R_{i+1}$) corresponds to following an edge from $G$ in the forward direction. If $\overrightarrow{uv}$ is unprotected, then we also add an unprotected edge joining $\kappa(u;i,j)$ to $\rho(v;i+1)$. These edges allow a cop to capture the robber in $H$ provided that she would be able to do so in $G$. (See Figure~\ref{fig:crvertices_overview}.)
\begin{figure}[hb]
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics{fig2.ps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
Each $R_i$ contains one copy of each vertex in $G$; each $C_i$ contains $k$ copies.
\medskip\medskip
For $\overrightarrow{uv} \in E(G)$, every copy of $u$ in $C_i$ is adjacent to every copy of $v$ in $C_{i+1}$, and the copy of $u$ in $R_i$ is adjacent to the copy of $v$ in $R_{i+1}$.
\medskip\medskip\medskip\medskip
Not pictured:
\medskip
\begin{itemize}
\item For $\overrightarrow{uv} \in E(G)$, each copy of $u$ in $C_i$ is adjacent to the copy of $v$ in $R_{i+1}$.
\medskip
\item Every vertex in $C_i$ is adjacent to all vertices in $R_{i-1} \cup R_i$.
\medskip
\item The $j$th copy of $v$ in $C_i$ is adjacent to $t_j^i$, $s_{4j+i}$, $s_{4j+i+1}$, $s_{4j+i+2}$, and $s_{4j+i+3}$.
\medskip
\item The copy of $v$ in $R_i$ is adjacent to all of $S \cup T_i$ along protected edges.
\end{itemize}
\end{minipage}
\caption{Cop vertices and robber vertices.}\label{fig:crvertices_overview}
\end{figure}
Next, for $i \in \{0,1,2\}$, add protected edges joining each vertex of $R_i$ to each vertex of $S \cup T_i$. These edges permit the robber to ``escape'' to the reset clique, should the cops fail to adequately defend it. Additionally, add unprotected edges joining each $\kappa(v;i,j)$ to $s_{4j+i}$, $s_{4j+i+1}$, $s_{4j+i+2}$, $s_{4j+i+3}$, and $t_{j}^i$. These edges permit cops in $C_i$ to defend the core and $i$th wing of the reset clique, but only by positioning themselves in very special ways (see below). We also add unprotected edges joining all vertices of $C_i$ to all vertices of $R_i \cup R_{i-1}$. These edges force the robber to keep moving ``forward'' from $R_0$ to $R_1$ to $R_2$ to $R_0$ and so forth, in order to stay one step ``ahead'' of the cops.
The set $C^*$ of cop starter vertices contains $k$ vertices, namely $c^*_0, c^*_1, \dots, c^*_{k-1}$. For $j \in \{0, 1, \dots, k-1\}$, we add unprotected edges joining $c^*_j$ to $s_{4j+3},s_{4j+4},s_{4j+5},s_{4j+6}, t_j^0,t_j^1$, and $t_j^2$. We also add unprotected edges joining every cop starter vertex to every cop vertex and every robber vertex. The set $R^*$ of robber starter vertices contains three vertices, namely $r^*_0$, $r^*_1$, and $r^*_2$. Finally, add unprotected edges joining all robber starter vertices to all cop vertices and joining each $r^*_i$ to all vertices in $R_{i+1}$, along with protected edges joining each pair of robber starter vertices and protected edges joining each $r^*_i$ to all vertices in the core and $i$th wing of the reset clique. These edges ensure that when the cops choose to start the game by occupying all of the cop starter vertices, the robber must start on one of the robber starter vertices. (In fact, under optimal play the cops must start on the cop starter vertices, but this is less clear; see Section~\ref{sec:main}.)
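To make the construction concrete, we include a schematic sketch of it in Python (the data representation and all identifiers are our own choices, loops at the vertices of the reflexive graph $H$ are omitted, and the listing is an illustration rather than code accompanying this paper); it assembles the vertex set of $H$ together with the protected and unprotected edge families described above.
\begin{verbatim}
from itertools import combinations, product

def build_H(V, E_prot, E_unprot, k):
    # V: vertices of G; E_prot / E_unprot: sets of protected / unprotected arcs (u, v) of G;
    # k: the (protected, directed) cop number of G.  Returns (vertices, protected edges,
    # unprotected edges), each undirected edge stored as a frozenset of its two endpoints.
    E = set(E_prot) | set(E_unprot)
    S  = [('s', a) for a in range(4 * k)]
    T  = {i: [('t', i, j) for j in range(k)] for i in range(3)}
    C  = {i: [('C', v, i, j) for v in V for j in range(k)] for i in range(3)}
    R  = {i: [('R', v, i) for v in V] for i in range(3)}
    Cs = [('c*', j) for j in range(k)]
    Rs = [('r*', i) for i in range(3)]
    reset = S + T[0] + T[1] + T[2]
    prot   = {frozenset(e) for e in combinations(reset, 2)}          # reset clique
    unprot = set()
    for i in range(3):
        for v in V:                                                   # copies of v in C_i form a clique
            unprot |= {frozenset({('C', v, i, j1), ('C', v, i, j2)})
                       for j1, j2 in combinations(range(k), 2)}
        for (u, v) in E:                                              # "forward" edges mirroring arcs of G
            unprot |= {frozenset({('C', u, i, j), ('C', v, (i + 1) % 3, j)}) for j in range(k)}
            unprot.add(frozenset({('R', u, i), ('R', v, (i + 1) % 3)}))
        for (u, v) in E_unprot:                                       # capture edges
            unprot |= {frozenset({('C', u, i, j), ('R', v, (i + 1) % 3)}) for j in range(k)}
        prot |= {frozenset({r, x}) for r in R[i] for x in S + T[i]}   # robber escape edges
        for v in V:
            for j in range(k):                                        # cop vertices defending the reset clique
                c = ('C', v, i, j)
                unprot |= {frozenset({c, ('s', (4 * j + i + a) % (4 * k))}) for a in range(4)}
                unprot.add(frozenset({c, ('t', i, j)}))
        unprot |= {frozenset({c, r})                                  # cops cover R_i and R_{i-1}
                   for c, r in product(C[i], R[i] + R[(i - 1) % 3])}
    for j in range(k):                                                # cop starter vertices
        unprot |= {frozenset({('c*', j), ('s', (4 * j + 3 + a) % (4 * k))}) for a in range(4)}
        unprot |= {frozenset({('c*', j), ('t', i, j)}) for i in range(3)}
    all_C, all_R = C[0] + C[1] + C[2], R[0] + R[1] + R[2]
    unprot |= {frozenset({c, x}) for c in Cs for x in all_C + all_R}
    unprot |= {frozenset({r, c}) for r in Rs for c in all_C}          # robber starter vertices
    for i in range(3):
        unprot |= {frozenset({('r*', i), x}) for x in R[(i + 1) % 3]}
        prot   |= {frozenset({('r*', i), x}) for x in S + T[i]}
    prot |= {frozenset(e) for e in combinations(Rs, 2)}
    return reset + all_C + all_R + Cs + Rs, prot, unprot
\end{verbatim}
Counting the vertices assembled above recovers $\size{V(H)} = 4k + 3k + 3k\size{V(G)} + 3\size{V(G)} + k + 3 = (3k+3)\size{V(G)} + 8k + 3$, the total cited in Theorem~\ref{thm:directed}.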
\section{Main Results}\label{sec:main}
We are now ready to prove Theorems~\ref{thm:directed} and \ref{thm:main}. To prove Theorem~\ref{thm:directed}, we must show that $\cnumprot(H) = \cnumdir(G) = k$ and that $\captprot(H)$ is roughly equal to $\captdir(G)$ (where $G$ is any given protected directed graph and $H$ is the protected undirected graph constructed from $G$ in Section~\ref{sec:construction}). We begin by describing how we ``expect'' the game on $H$ to be played.
To simplify the analysis, we introduce some additional terminology. For all $v \in V(G)$ and $i \in \{0,1,2\}$, we refer to $\kappa(v;i,j)$ as a {\em $j$-vertex}; $c^*_j$ is also considered a $j$-vertex. In the game on $H$, we say that the cops occupy a {\em stable position} if either the cops occupy all $k$ cop starter vertices, or all cops occupy vertices within $C_i$ for some $i$ with one cop on a $j$-vertex for all $j \in \{0, 1, \dots, k-1\}$. We say that the game is in a {\em canonical configuration} if the cops occupy a stable position in some $C_i$ and either:
\begin{itemize}
\item It's the cops' turn and the robber occupies a vertex in $R_{i+1}$, or
\item It's the robber's turn and the robber occupies a vertex in $R_i$.
\end{itemize}
As we will see below, the cops will want to prevent the robber from ever reaching the reset clique. To do this, they must ensure that they always defend the robber's neighbors in the clique; this requires them to position themselves carefully. Recall that we say a cop {\em defends} a vertex $v$ when there is an unprotected edge joining the cop's current vertex with $v$.
\begin{lemma}\label{lem:stable}
The cops defend all vertices of the core of the reset clique in $H$ if and only if they occupy a stable position. Moreover, if the cops occupy a stable position, then they defend the $i$th wing of the reset clique if and only if they occupy either $C_i$ or the cop starter vertices.
\end{lemma}
\begin{proof}
It is clear from the construction of $H$ that if the cops occupy a stable position, then they defend all vertices of the core. Suppose now that the cops defend all vertices of the core. Every cop vertex and cop starter vertex defends exactly four vertices within the core, while no other vertices in the graph defend any vertices of the core. Since there are $4k$ vertices in the core and only $k$ cops, all $k$ cops must occupy cop vertices or cop starter vertices and, moreover, no two cops can have a common neighbor in the core. By construction, if any two cops both occupy a $j$-vertex for some $j$, then they have a common neighbor (namely $s_{4j+3}$); consequently, we must have one cop on a $j$-vertex for all $j \in \{0, 1, \dots, k-1\}$. Finally, by symmetry, it suffices to show that if one cop occupies a vertex of $C_0$, then so must the other $k-1$ cops. Suppose otherwise, and choose $\ell$ so that the cop occupying an $\ell$-vertex sits in $C_0$, while the cop occupying an $(\ell-1)$-vertex does not. By construction, these two cops both defend $s_{4\ell}$, so the cops cannot defend the entire core.
The second half of the claim is clear.
\end{proof}
An important consequence of Lemma~\ref{lem:stable} is that when the robber resides in the reset clique, the cops can force him to leave only by occupying all $k$ cop starter vertices. When they do so, the robber must move to one of the robber starter vertices, lest he be captured on the cops' next turn. Thus motivated, we define an {\em initial configuration} to be a game state in which the cops occupy all $k$ cop starter vertices, the robber occupies a robber starter vertex, and it is the cops' turn.
\begin{lemma}\label{lem:copvx}
Suppose that the cops occupy a stable position and it is the robber's turn. If the robber moves to any cop vertex, then the cops can capture him within the two subsequent rounds.
\end{lemma}
\begin{proof}
Suppose the robber moves to a $j$-vertex within $C_i$. In response, the cop currently on a $j$-vertex moves to any $j$-vertex in $C_i$, while any other cop moves to a cop starter vertex. (Note that this is always possible because each vertex in $G$ has at least one in-neighbor and one out-neighbor, so no matter which cop vertex a cop occupies, she always has at least one $j$-vertex neighbor in each of $C_0$, $C_1$, and $C_2$.) The cop on the cop starter vertex now defends all cop and robber vertices, while the cop on a $j$-vertex in $C_i$ defends all robber starter vertices and the robber's neighbors in the reset clique. Thus, no matter how the robber moves, the cops can win on their next turn.
\end{proof}
\begin{lemma}\label{lem:initial}
Suppose that the game is in an initial configuration with the robber on $r^*_i$. Under optimal play by both players, either the robber loses within the next three rounds, or:
\begin{description}
\item[(1)] After one more round, the game reaches a canonical configuration with the cops in $C_i$ and the robber in $R_{i+1}$, and
\item[(2)] For the remainder of the game, the robber never moves to an undefended vertex in the reset clique.
\end{description}
Moreover, if the game reaches a canonical configuration as in (1), then the robber may occupy whichever vertex of $R_{i+1}$ he chooses.
\end{lemma}
\begin{proof}
We begin with claim (2). If the robber ever moves into the reset clique, then by Lemma~\ref{lem:stable} and the ensuing discussion, the game must eventually return to an initial configuration. All initial configurations are equivalent up to symmetry, so the rounds leading up to this second initial configuration have served no purpose for the cops. Thus, under optimal play by the cops, the game never returns to an initial configuration, so the robber must never reach the reset clique.
We next turn to claim (1). If the cops all remain on the cop starter vertices, then the robber can simply remain on $r^*_i$; this is clearly suboptimal for the cops. Otherwise, if the cops do not move to a stable position in $C_i$, then by Lemma~\ref{lem:stable}, they leave some vertex $v$ of $S \cup T_i$ undefended. Thus the robber can, on his ensuing turn, move to the reset clique; by claim (2), this cannot happen under optimal play. Suppose therefore that the cops move to a stable position within $C_i$. By Lemma~\ref{lem:copvx}, the robber cannot move to a cop vertex without being captured in short order. The cops defend all of the robber's neighbors in the reset clique, so the robber cannot move there, either; likewise, he cannot remain in place or move to a different robber starter vertex. The only remaining option is for the robber to move to $R_{i+1}$, resulting in a canonical configuration of the desired type; since $r^*_i$ is adjacent to all of $R_{i+1}$, the robber may occupy any vertex of $R_{i+1}$ he chooses.
\end{proof}
Recall that one of our main goals in constructing $H$ is to greatly restrict the freedom enjoyed by both players. Claim (2) of the lemma above shows that the cops cannot allow the robber to safely enter the reset clique; this is the primary means by which we restrict the movements of the cops. Conversely, the robber's movements are restricted by the threat of capture. As we next show, these restrictions are severe enough that (under optimal play) the game is nearly always in a canonical configuration.
\begin{lemma}\label{lem:canonical}
Suppose that, on a cop turn, the game is in a canonical configuration with the cops in $C_{\ell}$ and the robber in $R_{\ell+1}$. Under optimal play, either the robber loses within the next three rounds, or the game proceeds to a canonical configuration with the cops in $C_{\ell+1}$ and the robber in $R_{\ell+2}$.
\end{lemma}
\begin{proof}
If some cop currently defends the robber's vertex, then the cops win immediately; suppose otherwise. Since the robber occupies a vertex in $R_{\ell+1}$, his current vertex is adjacent to all of $S \cup T_{\ell+1}$. Thus, by Lemma~\ref{lem:stable}, unless the cops move to a stable position in $C_{\ell+1}$, the robber can safely move to a vertex in the reset clique. By Lemma~\ref{lem:initial}, this cannot happen under optimal play.
We may thus suppose that the cops move to a stable position in $C_{\ell+1}$. The cops now defend all of $R_{\ell}$, all of $R_{\ell+1}$, the robber starter vertices, and all of the robber's neighbors in the reset clique. Moreover, by Lemma~\ref{lem:copvx}, if the robber moves to a cop vertex, then he can be captured within the following two rounds. The robber's only remaining option is to move into $R_{\ell+2}$, resulting in a canonical configuration of the desired form.
\end{proof}
We are finally ready to prove Theorem~\ref{thm:directed}.
\begin{theorem}
\label{thm:directed}
Fix $k \ge 2$, and let $G$ be a protected (not necessarily reflexive) directed graph with $\cnumdir(G) = k$ such that every vertex in $G$ has at least one in-neighbor and one out-neighbor. If $H$ is constructed from $G$ as specified above, then $\cnumprot(H) = k$ and $\captdir(G) + 1 \le \captprot(H) \le \captdir(G) + 2$. In addition, $\size{V(H)} = (3k+3)\size{V(G)} + 8k + 3$.
\end{theorem}
\begin{proof}
It is clear from the construction of $H$ that $\size{V(H)} = (3k+3)\size{V(G)} + 8k + 3$. To establish the rest of the claim, we show that the cops can win the game on $H$ within $\captdir(G)+2$ rounds and that the robber can evade capture on $H$ for at least $\captdir(G)$ turns. As shown in Lemmas~\ref{lem:initial} and \ref{lem:canonical}, optimal play ensures that the game on $H$ will typically be in a canonical configuration. We associate each canonical configuration in $H$ with a configuration of the game in $G$ in the natural way. Each of the $k$ cops in $H$ occupies some cop vertex $\kappa(v;i,j)$; we view this cop as occupying the vertex $v$ in $G$. Likewise, when the robber occupies some vertex $\rho(w;i)$ in $H$, we imagine that he occupies vertex $w$ in $G$.
We begin by giving a cop strategy to capture the robber on $H$ within $\captdir(G) + 2$ rounds. Throughout the game on $H$, the cops will imagine playing a game on $G$ and use their strategy on $G$ to guide their play on $H$. The cops begin by occupying all $k$ cop starter vertices. To avoid immediate capture, the robber must then occupy one of the robber starter vertices, say (without loss of generality) $r^*_0$. The cops now turn to the game on $G$ and choose their starting positions in that game. For convenience, index the cops from 0 to $k-1$. If cop $j$ occupies vertex $v$ in $G$, then she moves to vertex $\kappa(v;0,j)$ in $H$. (This is always possible because each cop starter vertex is adjacent to all vertices in $C_0$.) Thus, in $H$, the cops move to a stable position within $C_0$ that corresponds to their choice of initial positions on $G$. As argued in Lemma~\ref{lem:initial}, if the robber is to survive for more than three more rounds, then he must occupy some vertex $\rho(w;1)$. The cops now imagine that, in the game on $G$, the robber has chosen to occupy vertex $w$.
The game on $H$ has now entered a canonical configuration. The cops imagine their next move (under optimal play) in $G$ and mirror this move in $H$, while simultaneously moving to a stable position within $C_1$. In particular, if cop $j$ moves from $v$ to $w$ in $G$, then she moves from $\kappa(v;0,j)$ to $\kappa(w;1,j)$ in $H$. (Note that since $\overrightarrow{vw} \in E(G)$, vertices $\kappa(v;0,j)$ and $\kappa(w;1,j)$ are adjacent in $H$.) As argued in Lemma~\ref{lem:canonical}, either the robber moves to $R_2$ or he loses within the following two rounds. Suppose the latter. Since the game on $G$ has not yet ended, it has lasted at most $\captdir(G)-1$ rounds. One round was played in the game on $H$ before the game on $G$ even began and as many as two more rounds might yet be played. In total, the game on $H$ lasts at most $\captdir(G)+2$ rounds. Now suppose instead that the robber moves from his current position $\rho(x;1)$ to some vertex $\rho(y;2)$ in $R_2$. By construction, vertices $\rho(x;1)$ and $\rho(y;2)$ are adjacent in $H$ only if $\overrightarrow{xy} \in E(G)$. Thus, in the game on $G$, the cops may imagine that the robber has moved from $x$ to $y$. The game on $H$ remains in a canonical configuration and, moreover, this configuration corresponds to the configuration of the game on $G$.
The game on $H$ continues in this manner until either the robber fails to move to a canonical configuration as outlined in Lemma~\ref{lem:canonical} or until the cops capture the robber on $G$. In the former case, as argued above, the game on $H$ lasts at most $\captdir(G)+2$ rounds. In the latter case, some cop $j$ has followed an unprotected edge in $G$ from her vertex $v$ to the robber's vertex $w$ (where $v$ and $w$ are not necessarily distinct). (Recall that in the game of Cops and Robbers with Protection, this is the only way that the game can end; in particular, unlike in ordinary Cops and Robbers, the game does not end if the robber moves to the cop's current vertex.) In this case, in $H$, cop $j$ presently occupies $\kappa(v;i,j)$ for some $i$, while the robber occupies $\rho(w;i+1)$. Since $\overrightarrow{vw}$ is an unprotected edge in $G$, these two vertices are joined in $H$ by an unprotected edge, so cop $j$ may proceed to capture the robber in $H$. Since at most $\captdir(G)$ rounds have elapsed in $G$ and one additional round was played in $H$ before the game in $G$ even began, in total at most $\captdir(G)+1$ rounds have been played in $H$.
To show that $\capt(H) \ge \captdir(G)+1$, we use a similar argument, except that this time we give a strategy for the robber. We assume throughout that the cops play optimally on $H$. At the outset of the game on $H$, there are two possibilities: either the cops begin by occupying all $k$ cop starter vertices, or they don't. In the latter case, by Lemma~\ref{lem:stable}, some vertex of the reset clique remains undefended; the robber chooses to begin there. The robber can henceforth remain in the clique until the cops do occupy all $k$ cop starter vertices, at which point the robber moves to $r^*_0$. If instead the cops begin by occupying the cop starter vertices, the robber simply begins on $r^*_0$; since this is clearly a more efficient line of play for the cops, we may suppose that this is what happens.
Thus, the game begins in an initial configuration with the robber on $r^*_0$. By Lemma~\ref{lem:initial}, the cops must always defend all of the robber's neighbors in the reset clique. They can do this only by moving to a stable position within $C_0$. As above, we can associate this stable position with an initial position for the cops in the game on $G$. The robber now considers the game on $G$ and chooses his initial position in that game; say he decides to begin on vertex $v$. In the game on $H$, the robber moves to $\rho(v;1)$. The game on $H$ has now entered a canonical configuration that corresponds to the current configuration of the game on $G$.
It is now the cops' turn. The cops all occupy vertices of the form $\kappa(u;0,j)$, while the robber occupies some vertex $\rho(v;1)$. By construction, these two vertices are adjacent along an unprotected edge if and only if $\overrightarrow{uv}$ is an unprotected edge in $G$. Thus, some cop can capture the robber on their ensuing turn on $H$ if and only if she can do so on $G$. Otherwise, to prevent the robber from reaching the reset clique, the cops must move to a stable position in $C_1$. As before, this cop movement in $H$ corresponds to a legal cop movement in $G$. The robber imagines that the cops have played thus on $G$, decides which vertex to move to in that game, and moves (in $H$) to the corresponding vertex in $R_2$. As above, the game continues in this manner, with each player's moves in $H$ corresponding to legal moves in $G$. Eventually, the cops capture the robber in $H$. By construction, the cops cannot capture the robber on $H$ until such time as they can also capture him on $G$. Since the robber plays optimally in $G$, this takes at least $\captdir(G)$ rounds; since one additional round was played in $H$, we have $\capt(H) \ge \captdir(G)+1$, as claimed.
\end{proof}
Armed with Theorem~\ref{thm:directed}, we are ready to prove Theorem \ref{thm:main}.
\begin{theorem}\label{thm:main}
For fixed $k \ge 2$, the maximum capture time of an $n$-vertex graph with cop number $k$ is $\Theta(n^{k+1})$.
\end{theorem}
\begin{proof}
It follows from Proposition \ref{prop:main_upper} that the capture time of an $n$-vertex graph with cop number $k$ is $O(n^{k+1})$, so it suffices to establish a matching lower bound. In particular, we will show that there exist arbitrarily large graphs $H$ with cop number $k$ and capture time at least $\left( \frac{\size{V(H)}}{40k^4}\right )^{k+1}$.
We first show how to construct a protected directed graph $G$ with $\cnumdir(G) = k$ and $\captdir(G) \ge \left(\frac{n}{2k}\right)^{k+1}$, where $n = \size{V(G)}$. By Theorem~\ref{thm:directed}, it then follows that there exists a protected reflexive undirected graph $G'$ with $\cnumprot(G') = k$, $\captprot(G') \ge \left(\frac{n}{2k}\right)^{k+1},$ and $\size{V(G')} = (3k+3)n+8k+3$. Finally, Lemma~\ref{lem:protected} implies the existence of a reflexive undirected graph $H$ with $\cnum(H) = k$, $\capt(H) \ge \left(\frac{n}{2k}\right)^{k+1}$, and $\size{V(H)} < 4k^2 \size{V(G')} < 20k^3n$ (for sufficiently large $n$). Thus, $\capt(H) \ge \left( \frac{\size{V(H)}}{40k^4}\right )^{k+1}$, as claimed.
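For concreteness, the constants compose as follows (a rough accounting; nothing here is optimized): $\size{V(H)} < 4k^2\size{V(G')} = 4k^2[(3k+3)n + 8k+3] < 20k^3 n$ for $n$ sufficiently large, since $4k^2(3k+3) = 12k^3 + 12k^2 \le 20k^3$ when $k \ge 2$. In particular $\frac{n}{2k} > \frac{\size{V(H)}}{40k^4}$, which is precisely what is needed to pass from $\capt(H) \ge \left(\frac{n}{2k}\right)^{k+1}$ to the bound claimed above.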
Our goal in constructing $G$ is to restrict the cops' actions so greatly that they have only one reasonable line of play -- a line of play that happens to take a long time to resolve. This will greatly simplify the analysis of the game.
The vertex set of $G$ consists of five parts: $S$, the {\em reset clique}; $C_0, C_1, \dots, C_{k-1}$, the {\em cop tracks}; $R$, the {\em robber track}; $X$, the set of {\em escape vertices}; and one special vertex $\omega$. The reset clique consists of the $k$ vertices $s_0, s_1, \dots, s_{k-1}$. Each cop track $C_i$ consists of the $q_i$ vertices $c_{i,0}, c_{i,1}, \dots, c_{i,q_i-1}$, where the $q_i$ will be specified later. Likewise, the robber track consists of the $p-1$ vertices $r_0, \dots, r_{p-2}$ (where $p$ will be specified later) along with the $k$ vertices $r_{p-1}^0, r_{p-1}^1, \dots, r_{p-1}^{k-1}$. The set $X$ consists of the $k$ escape vertices $x_0, x_1, \dots, x_{k-1}$. The vertices in the reset clique are reflexive and protected, $\omega$ is reflexive and unprotected, and all other vertices are irreflexive.
Between each pair of vertices $s,t$ in the reset clique, we add edges $\vec{st}$ and $\vec{ts}$, both protected. For each $i \in \{0, 1, \dots, k-1\}$, we add an unprotected edge from $c_{i,0}$ to $s_i$. On the cop tracks, we add unprotected edges from $c_{i,j}$ to $c_{i,j+1}$ for all $j$ (where $j$ is taken modulo $q_i$ where appropriate). On the robber track, we add unprotected edges from $r_j$ to $r_{j+1}$ for $j \in \{0, 1, \dots, p-3\}$, along with unprotected edges from $r_{p-2}$ to $r_{p-1}^0, r_{p-1}^1, \dots, r_{p-1}^{k-1}$ and from each of $r_{p-1}^0, r_{p-1}^1, \dots, r_{p-1}^{k-1}$ to $r_0$. We also add unprotected edges from every vertex on the robber track to every escape vertex, from every escape vertex to every vertex in the reset clique, and from every vertex in the reset clique to $r_0$. Finally, we add unprotected edges from every vertex in $C_i$ to the escape vertex $x_i$, from each $c_{i,q_i-1}$ to $r_{p-1}^i$, and from $\omega$ to all vertices {\em except} those in the reset clique. (Refer to Figure~\ref{fig:construction}.)
\begin{figure}[hb]
\begin{center}
\includegraphics{fig3.ps}
\end{center}
\caption{The graph $G$ (with $k=3$). Jagged edges are protected. A thick edge from one set of vertices to another indicates that all possible edges of this sort are present.}\label{fig:construction}
\end{figure}
Before proceeding, we make four observations. First: the cops can defend the reset clique only by occupying all of $c_{0,0}, c_{1,0}, \dots, c_{k-1,0}$. Second: a cop can enter $\omega$ only by beginning the game there; once she leaves, she can never return. Third: the cops defend all $k$ escape vertices if and only if either some cop occupies $\omega$, or each cop track contains a cop. When each cop track contains a cop, we say that the cops occupy a {\em stable position}. Fourth: from a stable position, if the cop on $C_i$ ever leaves $C_i$, then she can never return; consequently, the cops can never again occupy all of $c_{0,0}, c_{1,0}, \dots, c_{k-1,0}$, so they can never again defend the reset clique. Thus if, on the robber's turn, the cops ever fail to occupy a stable position (and no cop occupies $\omega$), then the robber can move to some undefended escape vertex and subsequently to the reset clique, whence he can never be captured. Thus the cops must occupy a stable position on every robber turn or else lose the game.
At this point, we remark that because $k$ cops are needed to defend the reset clique, we have $\cnumdir(G) \ge k$. We explain later why $\cnumdir(G) \le k$.
We are now ready to outline the robber's strategy for surviving ``long enough'' against $k$ cops. At the beginning of the game, if the cops' initial placement leaves any vertex of the reset clique undefended, then the robber begins on some such vertex; he moves between undefended vertices in the clique until the cops occupy all of $c_{0,0}, c_{1,0},\dots, c_{k-1,0}$, at which point he moves to $r_0$. Otherwise, the cops must have begun the game on precisely these vertices, so the robber begins on $r_0$. In either case, the game reaches a configuration in which the cops occupy $c_{0,0}, c_{1,0}, \dots, c_{k-1,0}$, the robber occupies $r_0$, and it is the cops' turn; we refer to this as the {\em initial configuration} of the game. In an initial configuration, the cops occupy a stable position. If, on any robber turn, the cops {\em do not} occupy a stable position, then (as explained above) the robber moves to an undefended escape vertex, moves from there to the reset clique, and forever after remains safely in the reset clique. Thus, so long as the robber remains on the robber track, the cops must always maintain a stable position (unless, of course, some cop can capture the robber with her next move). Conversely, so long as the cops always occupy a stable position, they defend all $k$ escape vertices, so the robber cannot leave the robber track.
In a stable position, we have one cop in $C_i$ for each $i$ -- that is, we have one cop in each cop track. To maintain a stable position, each cop must remain within her track, and consequently must move ``forward'' on each turn. That is, the cop who begins on $c_{i,0}$ must first move to $c_{i,1}$, then to $c_{i,2}$, and so forth. Likewise, since the cops always occupy a stable position, the robber must move from $r_0$ to $r_1$, then to $r_2$, and so forth. Once cop $i$ reaches $c_{i,q_i-1}$, there are two reasonable possibilities for her next move. If the robber occupies $r_{p-1}^i$ at that time, then cop $i$ may (and surely will) capture him. If instead the cops cannot capture the robber, then to maintain a stable position, cop $i$ must return to $c_{i,0}$. Similarly, once the robber reaches $r_{p-2}$, he has some flexibility. Assuming the cops occupy a stable position, the robber cannot leave the robber track. Thus, if any vertex $r_{p-1}^i$ is undefended, the robber moves there, and the game continues as before -- with the robber moving next to $r_0$, then to $r_1$, and so forth. If instead all of the $r_{p-1}^i$ are defended, then the robber moves to $r_{p-1}^0$, where he will be captured.
This process can end only with the robber's capture. This occurs if, and only if, on some robber turn, cop $i$ occupies vertex $c_{i,q_i-1}$ for all $i$, while the robber occupies $r_{p-2}$. We refer to this as a {\em terminal configuration}. Suppose that the game first reaches a terminal configuration in the $T$th round after reaching the initial configuration. Since each cop must walk along her track over and over, never leaving and never pausing, we see that $T$ must be congruent to $-1$ modulo $q_i$ for all $i$. Likewise, $T$ must be congruent to $-1$ modulo $p$. We now choose $p, q_0, \dots, q_{k-1}$. Fix an arbitrary positive integer $r$. Let $p$ be the $r$th smallest prime number, $q_0$ the next-smallest, $q_1$ the next-smallest after that, and so forth. Since $p$ and the $q_i$ are all prime, $T$ must be congruent to $-1$ modulo $pq_0q_1\dots q_{k-1}$, hence $T \ge pq_0q_1\dots q_{k-1}-1$.
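(As a toy illustration of the congruence argument only: with $k=2$ and $r=1$, we would have $p=2$, $q_0=3$, and $q_1=5$, and the conditions $T \equiv -1 \pmod{2}$, $T \equiv -1 \pmod{3}$, and $T \equiv -1 \pmod{5}$ force $T \equiv 29 \pmod{30}$, so $T \ge 29 = pq_0q_1 - 1$. The actual argument, of course, takes $r$ large.)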
At this point, we note that $\cnumdir(G) \le k$. If the cops all start on vertex $\omega$, then the robber must start in the reset clique. The cops can now easily force the game into an initial configuration. Henceforth, if the cops continue to follow the cop tracks, then the robber can never leave the robber track, so the game reaches a terminal configuration after $T$ rounds -- at which point the cops win.
Returning to the matter of $\captdir(G)$, recall that the $r$th-smallest prime number lies between $r(\log r + \log \log r - 1)$ and $r(\log r + \log \log r)$. Thus,
$$\captdir(G) \ge T \ge [r(\log r + \log \log r - 1)]^{k+1}.$$
In addition,
\begin{align*}
n = \size{V(G)} &= \size{S} + \sum_{i=0}^{k-1}\size{C_i} + \size{R} + \size{X} + 1\\
&\le k + k(r+k)(\log (r+k) + \log \log (r+k)) + r(\log r + \log \log r) + k + 1\\
&\le 2kr(\log r + \log \log r - 1),
\end{align*}
for $r$ sufficiently large relative to $k$. Thus,
$$\captdir(G) \ge [r(\log r + \log \log r - 1)]^{k+1} \ge \left (\frac{n}{2k}\right )^{k+1},$$
as claimed.
\end{proof}
\section{Directed Graphs}\label{sec:directed}
While our primary goal in this paper was to construct undirected graphs with large capture time, the tools we have established enable us to say a few things about directed graphs. Most notably, Theorem~\ref{thm:directed} shows that the game of Cops and Robbers played on directed graphs is very closely connected to the game played on undirected graphs. This is significant because Cops and Robbers on directed graphs is not well understood; very little work has been done in the area. It is our hope that the techniques used in Theorem~\ref{thm:directed}, if not the theorem itself, can be used to establish new results on Cops and Robbers in the directed setting.
What work has been done on this topic -- most notably in \cite{FKL12}, \cite{GR95}, and \cite{LO17} -- has focused on strongly-connected directed graphs. Theorem~\ref{thm:main} implies that for $k \ge 2$, there exist $n$-vertex strongly-connected directed graphs with cop number $k$ and capture time $\Theta(n^{k+1})$: simply construct an undirected graph with these properties and replace each undirected edge $uv$ with the two edges $\overrightarrow{uv}$ and $\overrightarrow{vu}$. Moreover, the argument used to prove Proposition~\ref{prop:main_upper} can be applied to directed graphs, giving an $O(n^{k+1})$ bound on the capture time.
However, one thing is not immediately clear: how large can the capture time be for an $n$-vertex strongly-connected directed graph with cop number 1? One cannot simply apply the argument that Bonato et al.~\cite{BGHK09} used to show that an undirected graph with cop number 1 has capture time $O(n)$; it does not extend to this more general setting. There is good reason for this: in fact there exist $n$-vertex strongly-connected directed graphs with cop number 1 and capture time $\Omega(n^2)$, as we next show.
While Theorem~\ref{thm:main} requires $k \ge 2$, this is only needed to satisfy the hypotheses of Theorem~\ref{thm:directed}: when $k=1$, the main construction does in fact create a directed graph with cop number 1 and capture time $\Theta(n^2)$. (Note that this shows that the bound on $k$ in Theorem~\ref{thm:main} is best possible: one cannot hope to adjust the construction so that it works when $k=1$.) It is not hard to adjust this construction so that the graph produced is also reflexive and strongly-connected. Our construction uses protected directed graphs; we remark that Lemma~\ref{lem:protected} can be extended to the directed graph setting by making the natural adjustments to the proof (\cite{Mam13}, Lemma 3.1).\footnote{
In particular:
\begin{itemize}
\item Take $P$ to be a doubly-directed incidence graph of a projective plane;
\item Add an edge from $(i,p)$ to $(j,q)$ if and only if $\overrightarrow{pq} \in E(P)$ or $i=j$;
\item $\overrightarrow{vw} \in E(G')$ when there is an unprotected edge from $\pi(v)$ to $\pi(w)$ in $G$ or when $\overrightarrow{vw} \in E(H)$ and either $\pi(v) = \pi(w)$ or there is a protected edge from $\pi(v)$ to $\pi(w)$ in $G$.
\end{itemize}}
\begin{theorem}\label{thm:directed_capture}
The maximum capture time among $n$-vertex strongly-connected directed graphs with cop number 1 is $\Theta(n^2)$.
\end{theorem}
\begin{proof}
As noted above, the argument used to prove Proposition~\ref{prop:main_upper} shows that every $n$-vertex directed graph with cop number 1 has capture time $O(n^2)$. To establish the matching lower bound, we need only construct a strongly-connected protected directed graph $G$ with cop number 1 and capture time $\Omega(n^2)$; we may then apply Lemma~\ref{lem:protected} to obtain a corresponding protected undirected graph.
We use a construction similar to that used in Theorem~\ref{thm:main} (with $k=1$), with only a few modifications. (Since $k=1$, to simplify the notation, we write $c_i$, $r_{p-1}$, $q$, and $s$ in place of $c_{i,0}$, $r_{p-1}^0$, $q_0$, and $x_0$, respectively.) First, we add protected loops at every vertex of $G$. Next, we add a new vertex $\psi$, unprotected edges from every vertex on the cop track to $\psi$, an unprotected edge from $\psi$ to $\omega$, and an unprotected edge from $\omega$ to $\psi$. (These new edges allow the cop to return to $\omega$, but she must take two steps to do so; this gives the robber time to escape back to the reset clique.) We also add an unprotected edge from the escape vertex $s$ to $\omega$. The graph is now strongly-connected: from any vertex on the cop track, one can reach $\omega$ by way of $\psi$; from a vertex on the robber track, one can reach $\omega$ by way of $s$; from any vertex in the reset clique, one can reach $\omega$ by first entering the robber track. From $\omega$, one can proceed directly to any vertex except the single vertex in the reset clique, which we can reach from $s$. Thus for all vertices $u$ and $v$ in $G$ there is a path from $u$ to $\omega$ to $v$, so $G$ is strongly-connected.
Additionally, for each vertex $c_i$ on the cop track, we add a second vertex $c'_{i}$ with the same in-neighbors and out-neighbors as $c_{i}$. Likewise, for each vertex $r_i$, we add a twin vertex $r'_i$. Finally, for all $i \in \{0, \dots, q-1\}$ and all $j \in \{0, \dots p-1\}$ we add unprotected edges from $c_{i}$ to $r_j$ and from $c'_{i}$ to $r'_j$. These edges will let the cop force the robber to move forward in his track, rather than following loops. Note that by construction of the $c'_i$ and $r'_i$, there are also edges from $c_{q-1}$ to $r'_{p-1}$ and from $c'_{q-1}$ to $r_{p-1}$; as in Theorem~\ref{thm:main}, these edges will allow the cop to eventually capture the robber.
It is clear that $G$ is reflexive and strongly-connected. To see that $G$ has cop number 1, we explain how one cop can capture the robber. The cop begins on $\omega$. To avoid immediate capture, the robber must begin in the reset clique. The cop now moves to $c_0$, forcing the robber to leave the clique. The robber must move to $r'_0$ (note that moving to $r_0$ would result in capture). The cop next moves to $c'_1$. The robber cannot remain where he is, nor can he move to the escape vertex $s$ or to $r'_1$; his only option is to move to $r_1$. In response the cop moves to $c_2$, and so on. As in the proof of Theorem~\ref{thm:main}, this process continues until the cop occupies $c_{q-1}$ (respectively, $c'_{q-1}$) while the robber occupies $r_{p-2}$ (resp. $r'_{p-2}$) on the robber's turn; the robber has no way to avoid capture on the cop's next turn.
Finally, we give a robber strategy showing that $G$ has capture time $\Omega(n^2)$. The robber begins in the reset clique and remains there until the cop moves to either $c_0$ or $c'_0$. The robber then moves to either $r'_0$ or $r_0$, respectively. If the cop moves to $c_1$ then the robber moves to $r'_1$; if the cop moves to $c'_1$ then the robber moves to $r_1$; if the cop remains in place, then so does the robber; if the cop moves anywhere else, then the robber moves to $s$ and subsequently back to the reset clique. The robber plays similarly on all future turns. Under optimal play by the cop, the robber can never return to the reset clique, so we may suppose both players remain on their tracks. Moreover, under optimal play the cop clearly never remains in place, since this simply wastes a turn. Thus the cop and robber both keep moving forward along their tracks until the cop occupies $c_{q-1}$ (respectively, $c'_{q-1}$) while the robber occupies $r_{p-2}$ (resp. $r'_{p-2}$) on the robber's turn, after which the cop's win is ensured. As in Theorem~\ref{thm:main}, choosing $p$ and $q$ to be sufficiently large consecutive primes now yields the desired conclusion.
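(To see the quadratic growth concretely: the modified graph has $2p + 2q + O(1)$ vertices, the doubled tracks accounting for nearly all of them, while under the strategies above capture can occur only when the track positions align, which as in Theorem~\ref{thm:main} requires at least $pq - 1$ rounds; for consecutive primes $p$ and $q$ of comparable size this is roughly $(n/4)^2$.)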
\end{proof}
\section{Computational Complexity}\label{sec:complexity}
We close the paper by mentioning an interesting corollary of Theorem~\ref{thm:directed} in the area of computational complexity. Let $\textsc{C\&R}(G,k)$ denote the decision problem associated with determining whether $k$ cops can capture a robber on the undirected graph $G$. Goldstein and Reingold \cite{GR95} conjectured in 1995 that $\textsc{C\&R}$ is complete for the complexity class EXPTIME -- the class of decision problems solvable in time $O(2^{p(n)})$ for some polynomial $p$ (where, as usual, $n$ denotes the size of the input). This conjecture was recently affirmed by the author using a rather involved argument (see~\cite{Kin15}). As it turns out, Theorem \ref{thm:directed} yields a proof that is considerably shorter and, arguably, more elegant.
In addition to $\textsc{C\&R}$, we will need to refer to the following decision problems corresponding to variants of Cops and Robbers:
\begin{itemize}
\item $\textsc{C\&Rp}(G,k)$, wherein $G$ is a protected undirected graph;
\item $\textsc{C\&Rpd}(G,k)$, wherein $G$ is a protected directed (not necessarily reflexive) graph in which each vertex has at least one in-neighbor and one out-neighbor;
\item $\textsc{C\&Rdsc}(G,k)$, wherein $G$ is a strongly-connected directed reflexive graph.
\end{itemize}
While Goldstein and Reingold could not resolve the complexity of $\textsc{C\&R}$, they did show that $\textsc{C\&Rdsc}$ is EXPTIME-complete (\cite{GR95}, Theorem 4). This result, in conjunction with Theorem~\ref{thm:directed} and Mamino's result that $\textsc{C\&Rp}$ reduces to $\textsc{C\&R}$, yields a short proof that $\textsc{C\&R}$ is EXPTIME-complete.
\begin{cor}
$\textsc{C\&R}$ is EXPTIME-complete.
\end{cor}
\begin{proof}
$\textsc{C\&R}$ is easily seen to belong to EXPTIME, so it suffices to show that it is EXPTIME-hard. $\textsc{C\&Rdsc}$ trivially reduces to $\textsc{C\&Rpd}$, since an unprotected directed graph can be viewed as a protected directed graph in which each edge happens to be unprotected. Theorem~\ref{thm:directed} shows that $\textsc{C\&Rpd}$ reduces to $\textsc{C\&Rp}$. Finally, $\textsc{C\&Rp}$ reduces to $\textsc{C\&R}$ (\cite{Mam13}, Lemma 3.1). Since $\textsc{C\&Rdsc}$ is EXPTIME-hard (\cite{GR95}, Theorem 4), it follows that $\textsc{C\&R}$ is also EXPTIME-hard.
\end{proof}
\newcommand{\sect}[1]{\section{#1}}
\newcommand{\subsect}[1]{\subsection{#1}}
\newcommand{\subsubsect}[1]{\subsubsection{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\def\footnote{\footnote}
\footskip 1.0cm
\def\sxn#1{\bigskip\bigskip \sect{#1} \medskip}
\def\subsxn#1{\bigskip \subsect{#1} \medskip}
\def\subsubsxn#1{\bigskip \subsubsect{#1} \medskip}
\begin{document}
\thispagestyle{empty}
\setcounter{page}{0}
\begin{flushright}
PSU/TH/156 \\
UFIFT-HEP-95-16\\
SU-4240-598 \\
September 1995
\end{flushright}
\vspace*{15mm}
\centerline {\LARGE Chern-Simons Duality and the Quantum Hall Effect}
\vspace*{15mm}
\centerline {\large A. P. Balachandran$^{1}$, L. Chandar$^{1,2}$,
B. Sathiapalan$^{3}$}
\vspace*{5mm}
\centerline {\it $^{1}$ Department of Physics, Syracuse University,}
\centerline {\it Syracuse, NY 13244-1130, U.S.A.}
\centerline {\it $^{2}$ Department of Physics, University of Florida,}
\centerline {\it Gainesville, FL 32611, U.S.A.}
\centerline {\it $^{3}$ Department of Physics, Pennsylvania State University,}
\centerline {\it 120, Ridge View Drive, Dunmore, PA 18512, U.S.A.}
\vspace*{25mm}
\normalsize
\centerline {\bf Abstract}
\vspace*{5mm}
In previous work on the quantum Hall effect on an annulus, we used
$O(d,d;{\bf Z})$ duality transformations
on the action describing edge excitations to generate the Haldane hierarchy of
Hall conductivities. Here we generate the corresponding hierarchy of ``bulk
actions'' which are associated with Chern-Simons (CS) theories, the connection
between the bulk
and edge arising from the requirement of anomaly cancellation.
We also find a duality transformation for the CS
theory exactly analogous to the $R\rightarrow \frac{1}{R}$ duality of the
scalar field theory at the edge.
\baselineskip=24pt
\setcounter{page}{1}
\newpage
\sxn{Introduction}
Chern Simons (CS) gauge theories
are known to be particularly appropriate for describing the quantum Hall
effect (QHE) \cite{heff1,heff2}.
The low energy
effective action for the electromagnetic vector potential,
obtained by integrating out the electronic
degrees of freedom in a Hall system, is known to have a CS term.
The coefficient of this term is proportional to the Hall conductivity.
This fact is easily shown as follows. Let us assume that the effective action
(apart from the usual Maxwell term) is
\begin{equation}
S_{eff}[A] =-\frac{1}{2}\sigma _{XY}\int d^{3}x \epsilon
^{\mu\nu\lambda}A_{\mu}\partial
_{\nu}A_{\lambda} \label{One}
\end{equation}
Then we get for the expectation value of the
current,
\begin{equation}
< j ^{\mu} _{em} >_{A} =
- \frac {\delta}{\delta A _{\mu}} S_{eff}[A] = \sigma _{XY}
\epsilon ^{\mu \nu \rho}
\partial _{\nu} A_{\rho}
\end{equation}
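Writing out, say, the $\mu =1$ component (with the convention $\epsilon ^{012}=1$ and writing $E_{i}\equiv \partial _{i}A_{0}-\partial _{0}A_{i}$ for the electric field), this reads
\[
< j ^{1} _{em} >_{A}=\sigma _{XY}(\partial _{2}A_{0}-\partial _{0}A_{2})=\sigma _{XY}E_{2} .
\]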
We thus see that there is a current in the $X$-direction when there is an electric
field in the $Y$-direction. This is the Hall effect, the Hall conductivity
being $\sigma _{XY}$.
In studying the Hall effect, we will be interested in CS theories
involving several vector potentials (one of which is the electromagnetic
field). These comprise the ``statistical'' gauge fields and
the fields describing excitations in the bulk
\cite{heff1,heff2,zee,froh1,frad}.
The former are introduced for the purpose of changing the statistics
of the excitation fields in the action while the
latter represent collective degrees of freedom such as vortices
or other quasiparticles and can describe both bosons and fermions.
Experimentally, the Hall conductivity in certain systems is
quantized in integers or in certain
definite fractions \cite{qhe} corresponding respectively to integer and
fractional QHE (IQHE and FQHE). Several scenarios have been
proposed to explain such quantizations.
The hierarchy schemes of Haldane \cite{hald} and Jain \cite{jain} are perhaps
the most
attractive, in that they explain most of the observed experimental
fractions. CS theories of the type mentioned above, involving
several vector potentials, lend themselves naturally to these
schemes \cite{zee,lee}. The Jain scheme has already been written in this form
\cite{zee} while we will work out a similar description of the
Haldane scheme in this paper.
However, the CS action is not gauge invariant on a manifold with
a boundary (like an annulus), such a manifold being the appropriate geometry
for
a physical Hall sample. One has to
include non-trivial dynamical degrees of freedom at the edge
\cite{froh1,wil,stone,renn,froh2} to restore gauge invariance.
In this way, one predicts the
existence of edge states, which nicely corroborates completely different
arguments showing the existence of chiral edge curents in a Hall sample
\cite{halperin}. These states have been studied in detail by
many authors \cite{froh1,wil,stone,renn,froh2,edge,dim,dhe}. In \cite{dhe},
we described them by a conformal field theory of massless chiral
scalar fields taking values on a torus. The most general action for these
scalar fields contains a symmetric matrix $G_{ij}$ and an antisymmetric
matrix $B_{ij}$. In \cite{dhe}, we saw that the Hall
conductivity depends on $G_{ij}$. In this paper, we will show, in
detail, how
the anomaly cancellation
argument enables us to relate this matrix to a corresponding
matrix in the bulk CS theories. Thus, once we implement the
hierarchy arguments in the CS theory, we have a rationale for
particular choices of this matrix made in \cite{dhe}.
Now, as in string theories having torally compactified spatial dimensions,
there are
certain duality transformations of the edge theory that leave the spectrum
invariant \cite{dua}.
These transformations change $G_{ij}$ and $B_{ij}$ in well-defined
ways and hence also change the Hall conductivity \cite{dhe}.
It was shown in \cite{dhe} that one can reproduce most of
the conductivity fractions
of the Haldane and Jain schemes by means of these
generalized duality transformations. The connection of the edge theory with
the CS theory in the bulk then suggests that similar
transformations can be implemented in the bulk CS theory also.
This conjecture turns out to be at least partly realizable in that one can
implement a duality transformation of the type $R \rightarrow 1/R$ in the bulk.
The demonstration of this result is a generalization of the one that has been
used in \cite{bus} to implement duality in scalar theories. We think
that both this proof and the result are interesting
and could have implications in many other areas as well.
This paper is organized as follows: In Section 2 we
describe how to implement the Haldane construction using CS
theories. In Section 3 we describe the connection between the bulk and
edge actions. Finally, in Section 4 we show how to implement duality in the CS
theory.
\sxn{The Haldane Hierarchy and CS Theory}
In this Section we would like to describe Haldane's construction
using CS gauge fields. Let us first recall the physical arguments.
The Haldane
approach exploits the superfluid analogy and treats
the Hall fluid as a bosonic condensate.
We have a system with $N_{e}$ electrons per unit area in a magnetic
field of strength $B$. The number of flux quanta per unit
area is $Be /2 \pi $ (in units where $\hbar = c =1$), which we denote
by $N_{\phi}$. In the usual integer effect with one filled Landau level,
we have the equality
\begin{equation}
N_{\phi} = N_{e} \label{iqhe1}
\end{equation}
which tells that the degeneracy of the Landau levels is exactly equal to the
number of flux quanta piercing the Hall sample. This equation is a consequence
of solving the Landau level problem for {\em fermions}. The
incompressibility follows from the existence of a gap between
Landau levels and the fermionic nature of the electrons that
fixes the number of electrons that one can place in one Landau level.
There is another way to think of the same system. According to
(\ref{iqhe1}) the number of flux quanta is equal to the number
of electrons.
It is therefore like
attaching one flux quantum to each electron.
The composite object behaves like a boson and can Bose condense.
The resulting superfluid is the Hall fluid. The energy gap
follows from the usual arguments for superfluidity due to Feynman
\cite{feynman}, where he showed, using the bosonic nature of the condensate,
that the only low energy excitations are long wavelength density fluctuations.
However, in the Hall fluid, since the density is tied to the fixed external
magnetic field by (\ref{iqhe1}), there are no density fluctuations.
Thus we have no massless excitations (in the bulk) at all, that is, the fluid
is incompressible.
A simple generalization of the above arguments can be applied to
a system that obeys
\begin{equation}
N_{\phi}=m N_{e}\;\; , \; m\in 2{\bf Z}+1 .\label{fqhe1}
\end{equation}
It can be interpreted as the attachment of
an odd number ($m$) of flux tubes to each electron. Since
the composite will be bosonic, there will be bose condensation
and one can again invoke arguments from superfluidity.
Thus we have a new incompressible
state at the filling fraction $1/m$. These are the Laughlin fractions.
Thus in the Haldane approach, the final
dynamical degrees of freedom are bosonic objects, a circumstance which suggests
that we rewrite the original action, which describes
fermions (electrons) in a magnetic field, in terms of a new set
of variables that describe these dynamical excitations.
Thus we will re-express the fermionic electron field in terms
of a bosonic field and a statistical gauge field.
As we shall see below, one can implement, in this way, the
ideas described in previous paragraphs,
in terms of a low energy effective field theory.
One can also proceed to generalize these ideas to get other filling
fractions. The system admits Nielsen-Olesen vortices \cite{nielsen} as
excitations or quasiparticles.
As the magnetic field is changed, it is energetically
favourable for the excess or deficit magnetic field to organize itself as
flux tubes threading vortices in the condensate, so that quasiparticles
which
are one of these Nielsen-Olesen vortices are formed. At a certain point
a large number of these quasiparticles form and condense, so
that we now have a finite number density of quasiparticles
and a new ground state is also created.
If one were to think of these quasiparticles or vortices as carrying a new form
of charge, then the gauge field to which they couple is in
fact the dual of the Goldstone (phase) mode of the
condensate \cite{zee,lee}. The way one shows this point \cite{zee,lee} is by
noticing that if the electron current is represented in a dual representation
using a one-form, then the ``electric field'' associated to this one-form has a
behaviour, outside a vortex, identical to that of a usual electric field
outside an ordinary electric charge.
Thus the flux
quanta, in this dual representation, are the electrons
themselves. The quasiparticles are bosons. Clearly, their bosonic nature will
be maintained if an even number (say $2p_{1}$) of dual flux quanta (that is,
electrons)
get attached to each of these quasiparticles. These statements can be
summarized by the following two equations:
\begin{equation}
N_{\phi} = m N_{e} + N^{(1)} \label{ad}
\end{equation}
\begin{equation}
N_{e} = 2 |p_{1} N^{(1)}| \label{mod}
\end{equation}
Here $N^{(1)}$ is the number density of quasiparticles. Unlike $N_e$, it can
have either sign
depending on whether the
associated fluxons point in a direction parallel or antiparallel to
the original magnetic field.
If we make $p_1$ negative whenever $N^{(1)}$ is, then we can omit the
modulus sign in equation (\ref{mod}). In that case, we can solve these
equations to get the filling fraction:
\begin{equation}
\frac {N_{e}}{N_{\phi}} = \frac{1}{m+ \frac{1}{2p_{1}}}
\end{equation}
This is the second level of the hierarchy.
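For instance, $m=3$ and $p_{1}=1$ gives $\nu =\frac{1}{3+\frac{1}{2}}=\frac{2}{7}$, while $m=3$ and $p_{1}=-1$ gives $\nu =\frac{2}{5}$, both of which appear among the observed fractions.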
We can next imagine that there are new quasiparticle excitations
over the ground state as we increase the magnetic field further.
These new quasiparticles can have ``flux tubes'' attached to them and
can in turn condense. Now the new flux quanta are dual representations
of the quasiparticles of the first level. This process can be iterated
as many times as one wants and it generates a series of filling fractions.
The equations describing this process are the hierarchy equations of
\cite{zee}:
\begin{equation} \label{eq:hier1}
N_{\phi} = m N_{e} + N^{(1)} ,
\end{equation}
\begin{equation} \label{eq:hier2}
N_{e} = 2 |p_{1} N^{(1)}| + N^{(2)} ,
\end{equation}
\begin{equation}
N^{(1)} = 2 |p_{2} N^{(2)}| + N^{(3)} ,\label{eq:hier3}
\end{equation}
\[
.......
\]
In equations (\ref{eq:hier2}), as in (\ref{ad}), the
quasiparticle density $N^{(2)}$ can be less than zero, but should still be such
that $N_e$ itself does not become negative.
We can in fact choose to omit the modulus signs in these equations if we allow
the $p_i$'s also to be less than zero whenever the $N^{(i)}$'s are, so that their
products are always non-negative. We shall assume that this
is done in the
following, where we implement these ideas using CS fields. The basic
techniques are described in \cite{zee}.
Following \cite{zee}, we describe the electron by a scalar field
coupled to a
statistical gauge field, $a_\mu$. Furthermore if this bosonic order parameter
develops an expectation value, then we have a massless Goldstone boson
$\eta$ -
the phase of the original scalar field. It fulfills the equation $\partial
_{\mu}\partial ^{\mu} \eta =0$, being massless. The electron current
$\partial ^\mu
\eta$ can be represented by a dual vector field $\alpha_\mu$ defined by
$\partial ^\mu \eta = \epsilon^{\mu \nu \lambda} \partial_\nu
\alpha_\lambda$. The field equation of $\eta$ turns into an identity in this
dual representation. We can also implement a minimal coupling to the external
electromagnetic vector potential $A_\mu$. The action thus far is
\begin{eqnarray}
&&\int _{(D\backslash H)\times {\bf R^{1}}}d^{3}x\; [-eJ^\mu ( A_\mu -
a_\mu )-
\frac{e^2}{4\pi}\epsilon ^{\mu \nu \lambda}a_\mu \partial_\nu a_\lambda ] ,
\nonumber\\
&&J^\mu =\epsilon ^{\mu \nu \lambda} \partial_\nu \alpha_\lambda
\label{4.1}
\end{eqnarray}
\[ D\backslash H \equiv \mbox{ Disk D with a hole H removed (or an
annulus)} \]
where the last term is an abelian CS term for the statistical gauge field
$a_\mu$ and ${\bf R^{1}}$ accounts for time. Its coefficient has been
chosen to ensure that it converts the
boson to a fermion as may be seen in the following way: On varying (\ref{4.1})
with respect to $\alpha_\mu$, we get
\begin{equation}
\epsilon ^{\mu\nu\lambda}\partial _{\nu}A_{\lambda}= \epsilon
^{\mu\nu\lambda}\partial _{\nu}a_{\lambda} .\label{cancellation}
\end{equation}
On varying with respect to $a_\mu$, we get
\begin{equation}
\epsilon ^{\mu\nu\lambda}\partial_\nu \alpha_\lambda = \frac{e}{2\pi} \epsilon
^{\mu\nu\lambda}\partial_\nu a_\lambda .
\label{4.2}
\end{equation}
so that
\begin{equation}
\epsilon ^{\mu\nu\lambda}\partial_\nu \alpha_\lambda = \frac{e}{2\pi} \epsilon
^{\mu\nu\lambda}\partial_\nu A_\lambda .\label{4.3}
\end{equation}
This equation relates the number density $J^0 =N_e $ of electrons to the
number
density $N_\phi =\frac{e}{2\pi}B$ of flux quanta $\frac{2\pi}{e}$. In fact it
says that $N_e = N_\phi$. Thus there is one flux quantum per electron which
converts the latter to a fermion.
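This is just the familiar flux-attachment counting: exchanging two composites, each carrying charge $e$ and flux $\Phi$, produces an extra Aharonov-Bohm phase $\frac{e\Phi}{2}$, which for $\Phi =\frac{2\pi}{e}$ equals $\pi$, so that the composite of the boson and one flux quantum indeed has Fermi statistics.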
The filling fraction $\nu$ is 1 for (\ref{4.1}) since $N_{\phi}=N_e$. It thus
describes the
IQHE (see (\ref{iqhe1})). We can also eliminate $\alpha$ and
$a$ to get an effective action dependent only on the electromagnetic gauge
field. Thus the electromagnetic current $-eJ^{\mu}$ of (\ref{4.1}) is equal to
$-\frac{e^{2}}{2\pi}\epsilon ^{\mu\nu\lambda}\partial _{\nu}A_{\lambda}$ by
(\ref{4.3}) and this current is reproduced by
\begin{equation}
S= -\frac{e^2}{4\pi} \int _{M}d^{3}x\;\epsilon^{\mu \nu \lambda} A_\mu
\partial_\nu A_\lambda ,\label{4.4}
\end{equation}
\[
M= (D \backslash H) \times {\bf R^{1}} .
\]
This is the electromagnetic CS term (and a signature of the Hall effect)
for the Hall conductivity $\sigma_H =\frac{e^2}{2\pi}$.
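(In conventional units this is the familiar conductance quantum: since $\hbar =1$ here, $\frac{e^{2}}{2\pi}=\frac{e^{2}}{2\pi \hbar}=\frac{e^{2}}{h}$.)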
One can immediately generalize (\ref{4.1}) to obtain the Laughlin fractions
by
changing the coefficient $\frac{e^2}{4\pi}$ to $\frac{e^2}{4\pi m}$ with
$m$ odd: \begin{equation}
S^{(0)}= \int _{M}d^{3}x\; [-eJ^\mu ( A_\mu - a_\mu )-
\frac{e^2}{4\pi m}\epsilon ^{\mu \nu \lambda}a_\mu \partial_\nu a_\lambda ]
, \;\; m\in 2{\bf Z}+1 .\label{4.1tr}
\end{equation}
This changes (\ref{4.2},\ref{4.3}) to
\begin{equation}
\epsilon ^{\mu\nu\lambda}\partial_\nu \alpha_\lambda = \frac{e}{2\pi m}
\epsilon
^{\mu\nu\lambda}\partial_\nu a_\lambda ,\label{4.5}
\end{equation}
\begin{equation}
\epsilon ^{\mu\nu\lambda}\partial_\nu \alpha_\lambda = \frac{e}{2\pi m}
\epsilon ^{\mu\nu\lambda}\partial_\nu A_\lambda .\label{4.6}
\end{equation}
Equation (\ref{4.5}) says that $N_e = \frac{1}{m} N_\phi$. Since $m$ is
odd, this
is the same as (\ref{fqhe1}) and therefore implies that the composite is
bosonic, as it should be for this description of the electron to be consistent.
The filling fraction now is $\nu = \frac{1}{m}$ while (\ref{4.4}) is changed to
\begin{equation}
\bar{S}^{(0)}= -\frac{e^2}{4\pi m}\int _{M}d^{3}x\; \epsilon^{\mu \nu
\lambda} A_\mu
\partial_\nu
A_\lambda
\label{4.7}
\end{equation}
This is the CS action giving the first level of the Haldane hierarchy.
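The corresponding Hall conductivity is $\sigma _{H}=\frac{e^{2}}{2\pi m}=\nu \frac{e^{2}}{h}$; for $m=3$, for example, this is the Laughlin $\nu =\frac{1}{3}$ plateau.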
Next, we modify (\ref{4.1tr}) by adding a coupling of the quasiparticle current
$J^{(1)\mu}$ to the gauge field $\alpha_\mu$. Thus we have the action
\begin{equation}
\int _{M}d^{3}x\;[-eJ^\mu ( A_\mu - a_\mu )- \frac{e^2}{4\pi
m}\epsilon^{\mu \nu
\lambda} a_\mu \partial_\nu a_\lambda + 2 \pi J^{(1)\mu}\alpha_\mu ]
\label{4.11}
\end{equation}
The choice of the coefficient $2\pi$ in the last term can be motivated as
follows. Suppose that there is a vortex localised at $z$ so that
$J^{(1)0}(x)=\delta ^{2}(x-z)$ while the electron density $J^{0}$ is some
smooth function. Then since $J^{0}=\frac{e}{2\pi m}\epsilon ^{0ij}\partial
_{i}a_{j}$ by the equations of motion, $\epsilon ^{0ij}\partial _i a_j $ is also
smooth. Now variation of $\alpha$ gives $\frac{2\pi}{e}J^{(1)0}=\epsilon
^{0ij}(\partial _i A_j -\partial _i a_j )$ so that the magnetic flux attached
to the vortex is the flux quantum $\frac{2\pi}{e}$. As this is the unit of
magnetic flux we want to attach to the vortex, the choice of $2\pi$ is seen to
be correct.
Suppose next that the quasiparticles condense. Then we can write
$J^{(1)\mu} = \partial^\mu \eta^{(1)}$
where $\eta^{(1)}$ is the Goldstone boson phase degree of freedom. As before,
$\eta^{(1)}$ being massless and hence $\partial _\mu \partial ^\mu \eta
^{(1)}=0$, one can write a dual version of the current by
defining a field $\beta_\mu$ according to
\begin{equation}
J^{(1)\mu}= \partial^\mu \eta^{(1)} = \epsilon^{\mu \nu \lambda} \partial_\nu
\beta_\lambda .\label{4.12}
\end{equation}
We can also introduce a statistical gauge field $b_\mu$ and attach flux tubes
of $b$ to the quasiparticle.
Since the quasiparticles correspond to vortices which are assumed to be
bosonic, here we attach an even number of the elementary $b$ flux tubes to
each vortex to
preserve the bosonic nature. Bearing this in mind, we add some more CS terms
to (\ref{4.11}) to get
\begin{equation}
S^{(1)}= \int _{M}[-e ( A - a )d\alpha -
\frac{e^2}{4\pi m}
a da + 2 \pi \alpha d\beta - e bd \beta - \frac{e^2}{4\pi (2 p_1)} b db ]
,\;\; m\in 2{\bf Z}+1,\;\; p_i \in {\bf Z}.\label{4.13}
\end{equation}
[Here, we have used the form notation to save writing the antisymmetric symbol
repeatedly. A symbol $\xi =A,\alpha ,\beta ,a$ or $b$ now denotes the
one-form $\xi _\mu dx^\mu $.]
The equations of motion from (\ref{4.13}) are
\begin{eqnarray}
&& \frac{e}{2\pi}dA=\frac{e}{2\pi}da +d\beta ,\nonumber\\
&& md\alpha =\frac{e}{2\pi}da ,\nonumber\\
&& d\alpha = \frac{e}{2\pi}db ,\nonumber\\
&& d\beta = -\frac{e}{2 \pi (2 p_1)} db
\label{4.14}
\end{eqnarray}
The equations for $\alpha$ and $\beta$ here are seen to be precisely the
hierarchy equations (\ref{eq:hier2}) and (\ref{eq:hier3}) [with $N^{(2)}=0$] on
eliminating $a$ and $b$.
Now these equations for $\alpha$ and $\beta$ are reproduced also by
\begin{eqnarray}
\bar{S}^{(1)}& = & \int _{M}[-e A d\alpha + \pi m
\alpha d
\alpha + 2\pi \alpha d\beta + \pi (2p_1 ) \beta d
\beta ]\label{4.15} \\
& = & \int _{M}[-e A d\alpha + \pi ( \alpha \;\; \beta
)\left(
\begin{array}{cc} m & 1 \\
1 & 2p_1 \end{array}\right)\left( \begin{array}{c} d\alpha \\
d\beta \end{array} \right ) ].\label{4.16}
\end{eqnarray}
We have here used matrix notation to display the form of the ``metric'' in the
CS theory.
The generalization to higher levels is as follows: Introduce $d$ vector
fields $\alpha_I$; $I=1, \cdots, d$. [In the above
example, $d=2$, $\alpha_1 = \alpha$, $\alpha_2 = \beta$.] Then consider
the Lagrangian form
\begin{equation}
{\cal L} = -e A d\alpha_1 + \pi \alpha_I K^{IJ} d\alpha_J
\label{4.17}
\end{equation}
with
\begin{eqnarray}
&&\alpha _I =\alpha _{I\mu}dx^{\mu}, \nonumber\\
&&K^{IJ} = \left( \begin{array}{cccccc} m & 1 & 0& \cdot & \cdot & \cdot \\
1 & 2p_1 & 1 & 0 & 0 & \cdot \\
0 & 1 & 2p_2 & 1 & 0 & \cdot \\
\cdot & 0 & 1 & 2p_3 & 1 & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot& \cdot \end{array} \right) .
\label{4.18}
\end{eqnarray}
The equation of motion for $\alpha_1$ gives
\begin{equation}
e dA =2\pi K^{1J} d\alpha_J
\label{4.19}
\end{equation}
while the equations of motion for the remaining $\alpha _{I}$'s give
\begin{equation}
K^{IJ}d\alpha _{J}=0 \mbox{ for }I\neq 1.\label{ineq}
\end{equation}
These are the hierarchy equations. We can solve for the $d\alpha_I$'s:
\begin{equation}
d\alpha_I = \frac{e}{2\pi} (K ^{-1})_{I1} dA .
\label{4.20}
\end{equation}
Substitute back into (\ref{4.17}) to get
\begin{equation}
\bar{\cal L} = -\frac{e^2}{4 \pi} A (K^{-1})_{11} dA .
\label{4.21}
\end{equation}
This is the CS Lagrangian form that gives rise to the Haldane hierarchy. Its
filling fraction $\nu$ is just $(K^{-1})_{11}$ where $K^{IJ}$ is given
by (\ref{4.18}). $\nu$ is in fact the continued fraction obtained in the
Haldane hierarchy:
\begin{equation}
\nu =\frac{1}{m-\frac{1}{2p_1 -\frac{1}{2p_2 -\frac{1}{2p_3 -\ldots }}}} .
\label{hierarh}
\end{equation}
We can prove (\ref{hierarh}) easily. Let
\begin{equation}
\Delta (\xi _1 ,\xi _2 ,\ldots \xi _n ) =\mbox{det} \left[ \begin{array}{ccccc}
\xi _1 & 1& 0 & \cdot & \cdot \\
1 &\xi _2 & 1 & 0 & \cdot \\
0 & 1 & \cdot & \cdot & \cdot \\
\cdot &\cdot & \cdot & \cdot &\cdot \\
\cdot & \cdot & \cdot & 1& \xi _n \end{array} \right] .\label{deter}
\end{equation}
Then
\begin{equation}
\Delta (\xi _1 ,\xi _2 ,\ldots \xi _n )= \xi _1 \Delta (\xi _2 ,
\ldots \xi _n ) -\Delta (\xi _3,\ldots \xi _n ) ,\label{determ}
\end{equation}
and
\begin{equation}
\nu =\frac{\Delta (2p_1 ,2p_2 ,\ldots 2p_n )}{\Delta
(m,2p_1 ,\ldots 2p_n )}. \label{determi}
\end{equation}
We get (\ref{hierarh}) from (\ref{determ}) and (\ref{determi}).
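As a check, at the second level (\ref{determi}) gives
\[
\nu =\frac{\Delta (2p_{1})}{\Delta (m,2p_{1})}=\frac{2p_{1}}{2mp_{1}-1}=\frac{1}{m-\frac{1}{2p_{1}}} ,
\]
which is just $(K^{-1})_{11}$ for the $2\times 2$ matrix appearing in (\ref{4.16}).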
\sxn{Anomaly Cancellation and the Bulk-Edge Connection}
In this section we will show that the requirement of gauge
invariance forces the ``metric'' $K^{IJ}$ introduced in the
previous section to be the same as the inverse of the target space ``metric''
$G_{IJ}$
of the scalar theory describing the edge excitations. In our
previous work \cite{dhe} on edge excitations, we had assumed
the form (2.25) for $(G^{-1})^{IJ}$. The results
of this and the previous section provide the necessary motivation
for this assumption.
Let us consider the CS action
\begin{equation} \label{cs}
S = \frac{1}{2} \int _{{\tilde M}} \alpha d \alpha
\end{equation}
without any electromagnetic coupling.
If ${\tilde M}$ has a closed (compact and boundaryless) spatial slice, this
action has the gauge invariance
\begin{equation}
\alpha \rightarrow \alpha + d \Lambda \label{gtr}
\end{equation}
If ${\tilde M}$ is a manifold such as $M$ where the spatial slice
$\Sigma$ has a boundary like an annulus $D\backslash H$, then the
gauge variation results in a surface term. For $M=D\backslash H \times
{\bf R^1}$, we have, for the variation of the action,
\begin{equation} \label{st}
\delta S = \frac{1}{2} \int _{\partial D \times R^{1}}\Lambda d \alpha -\frac{1}{2}\int
_{\partial H \times R^{1}}\Lambda d\alpha ,
\end{equation}
where as usual we assume that $\Lambda $ vanishes in the infinite past
and future.
One can recover gauge invariance at the boundary by adding to the
action the following two-dimensional action containing a new
scalar field $\phi$ :
\begin{equation}
\Delta S = -\frac{1}{2}\int _{\partial M} d\phi \alpha +\frac{1}{4}
\int _{\partial M}d^{2}x (\tilde{D}_{\mu}\phi )(\tilde{D}^{\mu}\phi )
\label{ba}
\end{equation}
Here the gauge transformation law for $\phi$ is
\begin{equation}
\phi \rightarrow \phi - \Lambda \label{phitr}
\end{equation}
so that $\tilde{D}_{\mu}\phi$ is $\partial _{\mu}\phi +\alpha _{\mu}$. [The
coefficient
$\frac{1}{4}$ outside the kinetic energy term in (\ref{ba}) is determined by
requiring that the edge current be chiral, that is, that we can impose the
following condition consistently with the equations of motion:
\begin{equation}
\tilde{D}_{-}\phi \equiv (\tilde{D}_{0}-\tilde{D}_{\theta})\phi =0 \label{coef}
\end{equation} (see \cite{dhe}).]
The combined action $S+\Delta S$ is gauge invariant.
A more formal way of justifying the above procedure to recover
gauge invariance is to first
look at the generators of the ``edge'' gauge transformations in the absence of
the scalar field action at the boundary.
The operator that generates the transformation (\ref{gtr})
at a fixed time
with $\Lambda
|_{\partial D} \neq 0$ and $\Lambda |_{\partial H}=0$ ($\Lambda $ being a
function on the annulus $D\backslash H$, the choice $\Lambda |_{\partial H}=0$
being made for simplicity) is
\begin{equation}
Q(\Lambda ):= \int _{D\backslash H} d\Lambda \alpha .\label{Chargel}
\end{equation}
The algebra generated by these operators is specified by (\cite{csb})
\begin{equation}
{[} Q(\Lambda ),Q(\Lambda ') ]=-i\int _{\partial D}\Lambda d\Lambda '
.\label{KM}
\end{equation}
If one tries to impose the gauge invariance condition
$Q(\Lambda )|\cdot \rangle =0$ on physical
states $|\cdot \rangle$, one is led to a contradiction because the commutator
of two $Q$'s acting on a (physical) state would also have to vanish,
whereas (\ref{KM}) specifies the value of this commutator to be a
non-zero $c$-number.
However,
if we now augment this action by the above action (\ref{ba}) describing
new degrees of freedom at the boundary, the generators of the ``edge'' gauge
transformations get modified. The modification is by the terms
\begin{equation}
q(\Lambda )= \int _{\partial D} \Lambda (\Pi _{\phi}-\frac{1}{2}\phi '),
\label{chargel}
\end{equation}
where $\Pi _{\phi}:=\frac{1}{2}(\tilde{D}_{0}\phi +\alpha _{\theta } )$ is the
canonical momentum conjugate to $\phi$ and obeys the usual commutation
relations.
$q(\Lambda )$ generates the transformations
\begin{eqnarray}
\phi &\rightarrow & \phi -\Lambda \nonumber\\
\Pi _{\phi} &\rightarrow & \Pi _{\phi} +\frac{1}{2}\partial _{\theta }\Lambda
\label{gtrl}
\end{eqnarray}
The algebra generated by the $q(\Lambda )$'s is given by
\begin{equation}
{[} q(\Lambda ),q(\Lambda ')] =i\int _{\partial D} \Lambda d\Lambda '
.\label{km}
\end{equation}
Thus the new
generators $\tilde{Q}(\Lambda ):= Q(\Lambda )+q(\Lambda )$ now commute amongst
themselves and can be chosen to annihilate the physical states.
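Explicitly, since $Q(\Lambda )$ and $q(\Lambda ')$ are constructed from independent fields and therefore commute with each other, (\ref{KM}) and (\ref{km}) give
\[
{[}\tilde{Q}(\Lambda ),\tilde{Q}(\Lambda ')]=-i\int _{\partial D}\Lambda d\Lambda ' +i\int _{\partial D}\Lambda d\Lambda ' =0 ,
\]
so the two central terms cancel.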
Let us now attempt to couple electromagnetism to the action $S$ in (\ref{cs}).
$*d \alpha$ represents
a current so that the obvious coupling is
\begin{equation}
S^{1} =- q\int _{M} A d \alpha \label{min}
\end{equation}
Here $A$ is a background electromagnetic field.
However, we run into a problem when we consider the equation
of motion implied by $S + S^{1}$.
On varying with respect to $\alpha $, the equation of motion that we get in the
bulk is
\begin{equation} \label{eqnbul}
d\alpha = qdA
\end{equation}
while on the boundary, it is
\begin{equation} \label{eqnbou}
\frac{1}{2}\alpha = qA .
\end{equation}
(\ref{eqnbul}) and (\ref{eqnbou}) are incompatible. (\ref{eqnbou})
implies a relation between the values of the field strengths of
$\alpha$ and $A$ on the boundary that differs by a factor of two
from that implied by (\ref{eqnbul}) in the bulk whereas by continuity
they should be equal.
There is, however, the following simple modification of the minimal
coupling (\ref{min})
that gives a consistent set of equations. Consider the action
\begin{equation} \label{modmin}
S^{2} = -\frac{1}{2}\int q(Ad\alpha + \alpha dA)
\end{equation}
With this action, the boundary equation (\ref{eqnbou}) is modified to
\begin{equation} \label{meqnbou}
\alpha =qA .
\end{equation}
Thus (\ref{eqnbul}) and (\ref{meqnbou}) together say that
$\alpha = qA$ everywhere classically, up to gauge transformations that
vanish on the boundary.
Gauge transformations that do not vanish on the boundary
and consistent with the equations of motion have the form
\begin{equation} \label{ngtr}
\alpha \rightarrow \alpha + qd\Lambda \, ,\, A \rightarrow A + d \Lambda
\end{equation}
But
while we have achieved consistency of the equations of motion
at the edge and in the bulk, the action $S+ S^{2}$ given
by (\ref{cs}) and (\ref{modmin}) is no longer gauge
invariant under (\ref{ngtr}).
This is very similar to what
happens at the edge \cite{dhe}, where gauge invariance and chirality
are incompatible with the equations of motion. The cure
there (see \cite{dhe} and references therein)
was to introduce a coupling to the bulk. Similarly,
here, the cure is to couple to degrees of freedom living only at the
boundary, just as was done in the beginning of this section for the action
(\ref{cs}) (see (\ref{cs})-(\ref{ba})).
Thus we need to introduce a
scalar field $\phi $ with a boundary action of the form
\begin{equation} \label{ba1}
\Delta S ^{2} = \frac{q}{2}\int _{\partial M} d\phi A +\frac{1}{4}\int _{\partial
M} d^2 x (D_{\mu}\phi )^2 ,
\end{equation}
\[ D_{\mu}\phi =\partial _{\mu}\phi -qA_{\mu} \]
to maintain invariance under (\ref{ngtr}), namely the
electromagnetic gauge transformations.
Here $\phi $ transforms under
(\ref{ngtr}) in the following way:
\begin{equation} \label{U1em}
\phi \rightarrow \phi + q\Lambda
\end{equation}
[Once again, we can justify this addition by noting as before that with this
addition, the generators of the ``edge'' gauge transformations can be
required to annihilate the states.]
The full action ${\cal S}=S + S^{2} + \Delta S^{2} $ is thus gauge invariant
under the electromagnetic gauge transformations. It is also easy to see that
it gives
equations of motion in the bulk and the boundary that are compatible with each
other.
The final action is thus
\begin{equation} \label{fina}
{\cal S}=\int _{D\backslash H \times R^{1}} \{ \frac{1}{2}\alpha d \alpha -
\frac{q}{2}(A d \alpha + \alpha dA) \} +\frac{q}{2}
\int_{\partial D \times R^{1}} d\phi A +\frac{1}{4}\int _{\partial D\times
R^1}d^2 x (D_{\mu}\phi )^2
\end{equation}
Let us summarize what we have done with one CS field before we
generalize to the case of $d$ fields. We began with a CS action
for a gauge field $\alpha$, where $d\alpha $ represents the current of
electrons or quasiparticles.
We then introduced a coupling to a background
electromagnetic field.
Naively,
this action has a gauge invariance even without
introducing any edge degrees of freedom.
However there
is an inconsistency between the bulk
and the boundary equations. When the naive
coupling is modified to
restore consistency, the action is no longer gauge invariant.
The solution is
to introduce a scalar degree of freedom at the edge. The final
action is then given by (\ref{fina}).
It is now straightforward to extend this to the case with
$d$ CS fields and the action
\begin{equation} \label{dcs}
S = \pi K^{IJ} \int _{M}\alpha _{I} d \alpha _{J}
\end{equation}
We have introduced
the ``metric" $K^{IJ}$ that we had in the last section.
This theory has $d$ U(1) gauge invariances:
\begin{equation} \label{dgtr}
\alpha _{I} \rightarrow \alpha _{I} + d \Lambda _{I}
\end{equation}
We now introduce $d$ background gauge fields $A^{I}$, one of which
represents the physical electromagnetic field and the rest
are fictitious. They can be used, for instance,
to calculate correlations
between the different quasiparticle currents. Thus once we
integrate out the quasiparticles from the theory, the resultant
action will depend on these gauge fields. Functional differentiation
with respect to these fields then gives the correlators of the currents
(the connected Green functions).
They are, thus, a
means of keeping track of the information in the original action
after integrating out the $\alpha $ exactly, much as the ``sources''
of conventional field theory. Following the same procedure as earlier of
first
introducing a coupling as in (\ref{modmin}) and then introducing edge scalar
fields
for restoring gauge invariance, the final action becomes
\begin{eqnarray}
&&{\cal S}= \int _{M} \{ -\frac{1}{2}(A^{I} d \alpha _{I} + \alpha _{I} d A ^{I}) + \pi
K^{IJ} \alpha _{I} d \alpha _{J} \} +\int _{\partial M}
\frac{1}{4\pi} (K^{-1})_{IJ} \phi ^{I} A ^{J} \nonumber\\
&&+\frac{1}{8\pi} \int _{\partial M} (K^{-1})_{IJ} D _{\mu} \phi ^{I}
D^{\mu} \phi ^{J}
\end{eqnarray}
As in (\ref{ba}), here too the coefficient of the kinetic term is fixed by
requiring consistency between the chirality of the edge currents (\ref{coef})
and the equations of motion \cite{dhe}.
We can also specialize to the case where only the
electromagnetic background is non-zero. Then we can
set $A^{I} = q^{I} A _{em}$ and get the expression
for the Hall conductivity used in \cite{dhe}. The expression
used in the last section is obtained by further specializing
to the case $q^{1}=e$ and $q^{2}=q^{3}=...q^{d}=0$.
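As a schematic side remark (up to the sign and normalization conventions fixed in \cite{dhe}),
integrating out the $\alpha _{I}$ with this coupling leaves an effective Chern-Simons action
$\sim \frac{1}{4\pi }(K^{-1})_{IJ}\int A^{I}dA^{J}$ for the background fields, so that with
$A^{I}=q^{I}A_{em}$ the Hall conductivity is controlled by the combination
\begin{equation}
\sigma _{H}\propto q^{I}(K^{-1})_{IJ}q^{J}.
\end{equation}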
\sxn{``$T$-Duality'' in CS Theory}
Let us first review Buscher's duality argument \cite{bus} for the scalar field
in 1+1 dimensions. We shall later see that a straightforward generalization
works for the CS theory.
Consider the action
\begin{equation}
S=\frac{R^{2}}{4\pi}\int _{S^{1}\times {\bf R}}d^{2}x\partial _{\mu}\phi
\partial ^{\mu}\phi \label{or}
\end{equation}
with
$\phi$ identified with $\phi + 2\pi$:
\begin{equation}
\phi \approx \phi +2\pi.
\label{phieq}
\end{equation}
The translational invariance $\phi \rightarrow \phi +\alpha$ of
(\ref{or})
can be gauged to
arrive at the action
\begin{equation}
\tilde{S} =\frac{R^{2}}{4\pi}\int d^{2}x(\partial _{\mu}\phi +W_{\mu})^{2}
\label{org}
\end{equation}
where $W_{\mu}$ transforms according to
\begin{equation}
W_{\mu} \rightarrow W_{\mu} -\partial _{\mu}\alpha \label{ek}
.\end{equation}
We now introduce a Lagrange multiplier field $\lambda$ which constrains
$W$ in the following way:
\begin{eqnarray}
F\equiv dW&=&0 ,\label{curv}\\
\oint _{C}W &\in &2\pi {\bf Z} .\label{hol}
\end{eqnarray}
Here $C$ is any closed loop, space-like or time-like. [This latter possibility
arises if we identify $t=-\infty$ with $t=+\infty$ in the functional
integral so that the transition amplitude is between an initial state and a
final state obtained after transport around a {\em closed} path in the
configuration space (of fields other than the lagrange multiplier field
$\lambda$). In this case our manifold
can be thought of as $T^{2}=S^{1}\times S^{1}$.] If $W$ satisfies the
conditions
(\ref{curv}) and (\ref{hol}), then it has no observable effects on any other
fields and so it can be ``gauged away'' \cite{bus}
from (\ref{org}) to get back $S$.
Let us thus consider
\begin{equation}
S'=\frac{R^{2}}{4\pi}\int d^{2}x
(\partial _{\mu}\phi +W_{\mu})^{2} +
\frac{1}{2\pi}\int d\lambda W ,\label{Spr}
\end{equation}
\begin{equation}
(\partial _{\mu}\phi +W_{\mu})^{2} \equiv
(\partial _{\mu}\phi +W_{\mu}) (\partial ^{\mu}\phi +W^{\mu})
\end{equation}
where $\lambda$ is a function.
It follows from the equations of motion of (\ref{Spr})
that the Lagrange multiplier field $\lambda$ constrains $F(=dW)$ to
be zero. The condition (\ref{hol}) on the holonomies to be
quantized also follows \cite{lag} if we {\em require} that $\lambda$ be
identified
with $\lambda +2\pi$ just as $\phi$ itself was. (In the Appendix, we show
that this latter condition is in fact necessary for the theory to be
consistent).
An alternative derivation of
the quantization of the holonomies uses the functional integral approach
and is as follows. Consider the path integral \begin{equation}
Z_{\lambda}:=\int {\cal D}\lambda e^{\frac{i}{2\pi}\int d\lambda W }
\label{zlam}
\end{equation}
(which is the part of the full path integral that involves $\lambda$).
Since $\lambda \approx \lambda +2\pi$, we can expand $d\lambda$ according
to
\begin{equation}
d\lambda =\sum _{n}\alpha _{n}d\lambda _{n}^{(0)} +n_{x}\omega _{x}+n_{t}\omega
_{t},\;\; \alpha _{n}\in {\bf R},\;\; n_{x},n_{t}\in {\bf Z}.\label{explam}
\end{equation}
Here $\lambda _{n}^{(0)}$ is a complete set of single-valued functions on
$T^{2}$ while $\omega _{x}$ and $\omega _{t}$ are one-forms
such that
\begin{equation}
\oint _{x}\omega _{x}=\oint _{t}\omega _{t}=2\pi ,\label{normomeg}
\end{equation}
\begin{equation}
\oint _{t}\omega _{x}=\oint _{x}\omega _{t}=0 \nonumber
\end{equation}
where $\oint _{x}$ and $\oint _{t}$ refer respectively to the integrals along
the circles in $x$ and $t$ directions. Thus
\begin{eqnarray}
Z_{\lambda} &=&\sum _{n_x ,n_t }\int \prod _{n}d\alpha _{n}
e^{\frac{i}{2\pi}[\alpha _{n}\int d\lambda _{n}^{(0)}W+n_x \int \omega _{x}W
+n_t \int \omega _{t}W ]}\nonumber\\
&\sim & \prod _{n}\delta [\int d\lambda _{n}^{(0)}W]\sum _{n_x ,n_t
}e^{\frac{i}{2\pi}[n_x \int \omega _{x}W +n_{t}\int \omega _{t}W ]}.
\label{zlamb}
\end{eqnarray}
Now
\begin{equation}
\sum _{n}e^{inX} =2\pi \sum _{m}\delta (X-2\pi m).\label{Two}
\end{equation}
Therefore
\begin{equation}
Z_{\lambda}\sim \delta [dW] \sum _{m_1 ,m_2 }\delta (\int \frac{\omega
_{x}}{2\pi}W -2\pi m_1 )\delta (\int \frac{\omega _{t}}{2\pi}W -2\pi m_2 )
\label{zl}
\end{equation}
(where, to get the first delta functional, we have done a partial integration
of the corresponding term in (\ref{zlamb}) and used the fact that $\lambda
_{n}^{(0)}$ forms a basis).
On using (\ref{normomeg}), we now get
\begin{equation}
Z_{\lambda}\sim \delta [dW] \sum _{m_1 ,m_2 }\delta (\oint _{t}W -2\pi m_1
)\delta (\oint _{x}W-2\pi m_2 ). \label{zfin}
\end{equation}
Here we have used the fact that
$\omega _x$ ($\omega _t$) can be chosen to be independent of $t$ ($x$) by
adding an exact form. This addition does not affect the values of the
integrals
in (\ref{zl}) because of the multiplying delta functional $\delta [dW]$.
Thus the conditions (\ref{curv}) and (\ref{hol}) follow.
Under these conditions, we can therefore gauge away $W$ to get back the
original action (\ref{or}).
If on the other hand, we decide to integrate out
the $W$ field first, then
\begin{eqnarray}
Z_{W}&=&\int {\cal D}We^{i[\frac{R^{2}}{4\pi} \int (\partial _{\mu}\phi
+W_{\mu})^{2} +\frac{1}{2\pi}\int \epsilon ^{\mu\nu}\partial _{\mu}\lambda
W_{\nu}]} \nonumber\\
&\sim & e^{i[-\frac{R^{2}}{4\pi}\int (\partial _{\mu}\phi-\frac{1}{R^{2}}
\epsilon ^{\mu\nu}\partial _{\nu}\lambda )^{2} +\frac{R^{2}}{4\pi}\int
(\partial
_{\mu}\phi )^{2}]}\nonumber\\
&\sim & e^{i[\frac{1}{2\pi }\int d\phi d\lambda +\frac{1}{4\pi R^{2}}\int
(\partial _{\mu}\lambda )^{2}]} \nonumber\\
&\sim & e^{\frac{i}{4\pi R^{2}}\int (\partial _{\mu}\lambda )^{2}}.
\label{Three}
\end{eqnarray}
We have used the fact here that $e^{\frac{i}{2\pi}\int d\phi d\lambda}=1$
which is a consequence of the identification
$\phi \approx \phi +2\pi$ and $\lambda \approx
\lambda +2\pi$.
Thus the theory we get now has the ``dual'' action
\begin{equation}
S_{d}=\frac{1}{4\pi R^{2}}\int (\partial _{\mu}\lambda )^{2}. \label{daction}
\end{equation}
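Comparing (\ref{or}) and (\ref{daction}), the dual theory is again a compact scalar, but with
the radius inverted:
\begin{equation}
R\rightarrow \frac{1}{R}
\end{equation}
(in the units implicit in (\ref{or})), which is the usual $T$-duality.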
This completes our review of the duality argument for the scalar field theory.
We will now repeat this argument for the CS case. To begin with we
have the action
\begin{equation}
S=\frac{k}{2\pi}\int _{M}\alpha d\alpha ,\label{in}
\end{equation}
$M$ being an oriented three-manifold with an annulus (say) as its spatial slice,
and with time compactified to a circle. This latter condition is equivalent
to assuming that the fields at $t=\pm \infty$ take the same values so that the
path integral (restricted to the Lagrange multiplier field that will be
introduced shortly) leads
to the transition amplitude between states after
transport around a closed loop in the configuration space (consisting of fields
other than the Lagrange multiplier field).
As with the scalar field, here too we need an extra
condition on the $\alpha$'s which disallows {\em arbitrary rescalings} of the
$\alpha$. Without such a condition, $k$ can be changed to $\lambda
^{2} k $ by changing $\alpha $ according to the scheme
$\alpha \rightarrow \alpha \lambda $ , $\lambda $ being a real number.
The condition that we impose is
\begin{equation}
\oint _{C\in \partial M}\alpha \in 2\pi {\bf Z},\label{topo}
\end{equation}
where $C$ is any closed loop on the boundary $\partial M$ of the manifold.
This condition is to be thought of as a generalization of the condition $\phi
\approx \phi +2\pi$ on the scalar field.
Under the transformation
\begin{equation}
\alpha \rightarrow \alpha + \omega \label{nonumber}
\end{equation}
on $\alpha$ where $\omega $ is a closed one-form, the Lagrangean three-form
is not invariant, but changes by an exact three-form:
\begin{equation}
\alpha d \alpha \rightarrow \alpha d \alpha - d ( \omega \alpha )
\nonumber
\end{equation}
We can make it exactly invariant by introducing a ``connection"
one-form $A$, transforming according to
\begin{equation}
A \rightarrow A - \omega \label{trA}
\end{equation}
and ``gauging'' $S$ to obtain
\begin{equation}
\tilde{S}=\frac{k}{2\pi}\int \alpha d\alpha +\frac{k}{2\pi}\int Ad\alpha .
\label{ing}
\end{equation}
But the action $\tilde{S}$ is obviously not equivalent to the action $S$ because
the equations of motion are different.
We therefore introduce a Lagrange multiplier one-form $\lambda$ as before to
constrain $A$ by the equations
\begin{eqnarray}
dA&=&0,\label{curva}\\
\oint _{C\in \partial M}A&\in &2\pi {\bf Z}\label{holo}.
\end{eqnarray}
When $A$ fulfills (\ref{curva}) and
(\ref{holo}), we can redefine $\alpha$
using the transformation (\ref{nonumber}) and get back (\ref{in})
and (\ref{topo}).
We thus write
\begin{equation}
S'=\frac{k}{2\pi}\int \alpha d\alpha +\frac{k}{2\pi}\int Ad\alpha
+\frac{1}{2\pi}\int d\lambda A,\label{ingl}
\end{equation}
where
\begin{equation} \label{Four}
\oint _{C\in \partial M}\lambda \in 2\pi {\bf Z}.
\end{equation}
Consider
\begin{equation}
Z_{\lambda}=\int {\cal D}\lambda e^{\frac{i}{2\pi}\int d\lambda A}. \label{Zla}
\end{equation}
to see how (\ref{curva}) and (\ref{holo}) emerge when we integrate out
$\lambda$.
If now, each connected component of the boundary $\partial M$ of $M$, denoted
by $(\partial M)_a$, contains $p_a$ cycles
$C_{ai}$ which can serve to define the generators of its first homology group,
then there exist also $p_a$ closed
one-forms $\omega _{ai}$ (for each $a$) on $\partial M$ such that
\begin{equation}
\oint _{C_{aj}}\omega _{a'i}=2\pi \delta _{ij} \delta _{aa'}\;\; ,\;
i,j=1,2,\ldots ,p_a .\label{omegas}
\end{equation}
[We assume that the above homology group is torsion-free.]
If $M$ is compact, as we assume,
$(\partial M)_a$ is compact and has no boundaries. As $M$
is oriented, $\partial M$ too is oriented. Hence each connected component
$(\partial M)_a$ is a sphere with handles and its homology group has an even
number of generators \cite{?}. [When the spatial slice is an
annulus (say), $\partial M$ is $T^2 \sqcup T^2$.] Hence $p_a$
has to be even. In this case we can order the $\omega _{ai}$'s such that
\cite{?}
\begin{equation}
\int _{\partial M}\omega _{a,2l-1}\omega _{a'j}=4\pi ^{2}\delta _{2l,j} \delta
_{aa'} \;\;\; l=1,2,\ldots ,\frac{p_a}{2}.\label{twod}
\end{equation}
Given any such $\omega$ on $\partial M$, we can associate an $\omega$ on $M$
by requiring \cite{cour}
\begin{equation}
\nabla ^{2}\omega =0. \label{harm}
\end{equation}
Here $\nabla ^{2}$ is the Laplacian operator on one-forms on
$M$ defined using some Euclidean metric on $M$. [The pull-back of this $\omega$
to $\partial M$ must of course agree with the $\omega$ given there.]
A choice of $\omega _{\bot}$ (the component of $\omega$ perpendicular to
$\partial M$) needs to be made for solving (\ref{harm}).
We can choose it to be zero.
Now, using (\ref{Four}), we can write
\begin{equation}
\lambda =\lambda ^{(0)}+\sum _{a,i}n_{ai}\omega _{ai}\;\;\; , n_{ai}\in {\bf Z}
\label{dec}
\end{equation}
where $\lambda ^{(0)}$ is a one-form (on $M$) such that
\begin{equation}
\oint _{C_{aj}}\lambda ^{(0)}=0.\label{lambze}
\end{equation}
Now, given any three-manifold $M$, the operator $*d*d$ (defined by
choosing some Euclidean metric on $M$) on the space of one-forms $\gamma$
admits the following boundary condition compatible with the
self-adjointness of
$*d*d$ (the inner product being defined using the same Euclidean metric)
\cite{mcs}:
\begin{equation}
\mbox{Pull-back of } \gamma \mbox{ to }\partial M \equiv
\gamma |_{\partial M}=0. \label{sae}
\end{equation}
This means that the one-form $\lambda ^{(0)}$ of equation (\ref{dec}) can be
expanded in a basis of one-forms $\gamma _{n}$
which satisfy the above boundary condition (as in a Fourier expansion so that
the convergence is only in the ``mean-square'' sense).
Therefore
\begin{equation}
\lambda =\sum _{n}\beta _{n}\gamma _{n}+\sum _{a,i}n_{ai}\omega _{ai},
\label{decom}
\;\;\;\; \gamma _{n} | _{\partial M} =0.
\end{equation}
Thus
\begin{eqnarray}
Z_{\lambda}&=&\sum _{n_{ai}}\int \prod _{n}d\beta _{n}e^{\frac{i}{2\pi}[\sum
_{n}\beta _{n}\int _{M}d\gamma _{n}A +\sum _{a,i}n_{ai}\int _{M}d\omega
_{ai}A]}\nonumber\\
&\sim &\prod _{n}\delta (\int d\gamma _{n}A)\sum _{n_{ai}}e^{\frac{i}{2\pi}\sum
_{a,i}n_{ai}\int _{M}d\omega _{ai}A} \nonumber\\
&\sim & \delta [dA] \sum _{n_{ai}}e^{\frac{i}{2\pi}\sum _{a,i}n_{ai}\int
_{\partial M}\omega _{ai}A}. \label{simpli}
\end{eqnarray}
[In arriving at the delta functional here, we have done a partial integration
and
used the completeness of the $\gamma _{n}$'s while to
arrive at the integral in the exponent, we have again done a partial
integration and then neglected the bulk term. The latter is justified
owing to the multiplying delta functional.]
As before (see (\ref{Two})), this means that
\begin{equation}
Z_{\lambda}\sim \delta [dA] \prod _{a,i}(\sum _{m_{ai}}\delta
(\int _{\partial
M}\frac{\omega _{ai}}{2\pi}A-2\pi m_{ai})),\;\;\; m_{ai}\in {\bf Z}.
\label{Zfi}
\end{equation}
Since the delta functional above implies that $A$ is a closed one-form, we
can expand $A$ on the boundary $\partial M$ as
\begin{equation}
A|_{\partial M}=d\xi +\sum _{a,i}r_{ai}\omega _{ai} \label{Five}
\end{equation}
where $\xi$ is a function on $\partial M$ and $r_{ai}$ are valued in reals.
Substituting (\ref{Five}) in the second delta function in (\ref{Zfi}), and
using
(\ref{twod}) and the fact that $\int _{\partial M}\omega _{ai}d\xi =0$ (
$\omega _{ai}$'s being closed one-forms {\em at the boundary}), we
finally get
\begin{equation}
Z_{\lambda}\sim \delta [dA]\prod _{a,i}(\sum _{m_{ai}}\delta (r_{ai}-m_{ai})).
\label{ZFi}
\end{equation}
Thus, integrating out $\lambda$ gives exactly the conditions (\ref{curva}) and
(\ref{holo}) that we wanted and shows that $S'$ is equivalent to the
original action (\ref{in}).
If on the contrary, we choose to integrate out the $A$ field from the action
$S'$ in (\ref{ingl}), we get
\begin{eqnarray}
Z_{A}&=&\int {\cal D}Ae^{i[\frac{k}{2\pi}\int \alpha d\alpha
+\frac{k}{2\pi}\int
Ad\alpha +\frac{1}{2\pi}\int d\lambda A]}\nonumber\\
&\sim &\delta (\frac{k}{2\pi}d\alpha -\frac{1}{2\pi}d\lambda
)e^{i\frac{k}{2\pi}\int \alpha d\alpha }.\label{zforn}
\end{eqnarray}
Since the delta functional here implies that
\begin{equation}
d\alpha =\frac{1}{k}d\lambda ,\label{condn}
\end{equation}
we have
\begin{equation}
\alpha =\frac{1}{k}\lambda +\omega ^{(1)} ,\label{stoke}
\end{equation}
$\omega ^{(1)}$ being a closed one-form on $M$.
Thus $Z_{A}$ can be simplified to
\begin{equation}
Z_{A}\sim \delta (\frac{k}{2\pi}d\alpha -\frac{1}{2\pi}d\lambda )e^{\frac{i}{2
\pi k}\int _{M}\lambda d\lambda +\frac{i}{2\pi}\int _{M}\omega ^{(1)}d\lambda
}\label{simplif}
\end{equation}
The last term in the exponent above is a surface term because $\omega
^{(1)}$ is a closed one-form.
Hence the ``dual" action obtained by integrating out $A$ is
\begin{equation}
S_{d}=\frac{1}{2\pi k}\int _{M}\lambda d\lambda -\frac{1}{2\pi}\int _{\partial
M}\omega ^{(1)}\lambda \label{Dual}
\end{equation}
where $\lambda $ is subject to the condition
\begin{equation}
\oint _{C\in \partial M}\lambda \in 2\pi {\bf Z}, \;\; C=\mbox{ any cycle on
}\partial M.
\end{equation}
Since the second term in (\ref{Dual}) is a surface term, it does not contribute
to the equations of motion. Moreover, on using the equations of motion
$d\lambda =0$ arising from the first term, we see that the second term
vanishes.
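Comparing (\ref{in}) and (\ref{Dual}), the net effect of the dualization is thus the inversion
of the Chern-Simons coefficient,
\begin{equation}
\frac{k}{2\pi }\rightarrow \frac{1}{2\pi k}\;,\qquad \mbox{i.e.}\quad k\rightarrow \frac{1}{k}\, ,
\end{equation}
the CS analogue of the $R\rightarrow 1/R$ duality above.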
Although we have worked in this Section first with a single scalar field and
then with a single CS field, these considerations generalize (in a sense to be
made precise below) to the case with many scalar fields coupled by a
matrix $G_{ij}$ (as in \cite{dhe}) and the case with many CS fields coupled by
a
matrix $K^{IJ}$ as in the previous Sections.
The procedure to get the dual theory
is always as follows \cite{lag}:
(1) Introduce a gauge field $A$ for some particular transformation
that is a ``symmetry" of the action
(be it the scalar field theory or the CS theory). (2) Introduce a
Lagrange multiplier field which constrains $A$ by the conditions
$dA=0$ and $\oint _{C}A\in 2\pi
{\bf Z}$. (3) Integrate out the original
gauge fields to obtain the ``dual'' theory containing the Lagrange
multiplier field.
We get different dual
theories,
depending on the ``symmetries'' we choose to gauge. It should however be noted
that the duality group that we get using this procedure is still not the full
$O(d,d;{\bf Z})$ \cite{dua} group, because we do not have a method of
incorporating antisymmetric matrices (which are needed for the $O(d,d;{\bf Z})$
transformations) in this approach.
\sxn{Concluding Remarks}
There is a prevalent point of view that
CS theories involving several vector
potentials are quite effective in reproducing the Hall effect. In
this paper we have provided further evidence in support of this
viewpoint by showing that
the Haldane hierarchy can be implemented using a sequence of
such CS theories.
The connection of these CS theories to chiral scalar field theories at the edge
has also been demonstrated. The argument consisted of three stages. The first
is that the algebra of observables is the same for these two theories. The
second is that both give rise to the same Hall conductivity in the bulk. The
third (and perhaps the most important part of the argument) is due to the
fundamental requirement of gauge invariance. The CS theory when gauged gives
rise to an effective CS theory for the electromagnetic potential. This is not
gauge invariant and requires a surface action to restore gauge invariance. The
gauged chiral scalar field theory at the boundary serves precisely as this
surface action.
Another interesting result we have in this paper has to do with a
generalization of the duality transformations for scalar field theories
\cite{dua,bus}. In a previous work \cite{dhe}, we showed how such duality
transformations relate Hall conductivities at different fractions. There we
worked purely with a chiral scalar field theory at the edge to arrive at
this result. It is therefore satisfying to note that analogous duality
transformations exist also for the CS theory in the bulk. At this point, we
have only an analogue of the $R\rightarrow 1/R$ duality for the CS theory. It
would be interesting to check if we can also obtain an $O(d,d;{\bf Z})$ duality
for the CS theory with $d$ CS fields.
\centerline{ {\bf Acknowledgements}}
\nopagebreak
We thank T.R. Govindarajan, V. John, G. Jungman, A. Momen and S. Vaidya for
several discussions. The work of A.P.B. and L.C. was supported by a grant from
DOE, USA under contract number DE-FG02-85ER40231. The work of L.C. was
supported also by the DOE grant DE-FG05-86ER-40272.
\section{Introduction}
In quantum mechanics, the rarity of the potentials which are exactly
solvable in closed-form (most of them belonging to the class of
shape-invariant potentials \cite{cooper,Dutt,Gendenshtein}) gives an
undeniable importance to the research of new families of such potentials. A
possible way to generate new solvable potentials is to start from the known
ones and to construct regular rational extensions of them. Although the procedure
has a long history, important progress has been made in this direction in recent
years \cite{gomez,gomez2,gomez3,gomez4,gomez5,quesne1,quesne,quesne2,quesne3,odake,sasaki,ho,odake2,sasaki2,dutta}.
In a recent work \cite{grandati2} we proposed an approach allowing one to
generate such regular extensions starting from every translationally
shape-invariant potential (TSIP) of the second category (as defined in \cite{grandati}). For this, we use regularized excited states Riccati-Schr\"{o}dinger
(RS) functions as superpotentials in a generalized SUSY partnership.
The regularization scheme corresponds to a "spatial Wick rotation" which
eliminates the singularities from the real axis, a device already suggested
by Shnol' \cite{shnol'} in 1994 as a way to generate rational extensions of
the harmonic potential. In the following years, this suggestion was
developed by Samsonov and Ovcharov \cite{samsonov} and Tkachuk \cite{tkachuk}. Recently Fellows and Smith \cite{fellows} rediscovered this
technique in the case of the harmonic oscillator, the second rational
extension of which is the so-called CPRS potential \cite{carinena}. In
\cite{grandati2}, we have extended the procedure to cover the whole set of
TSIP belonging to the second category. For the isotonic oscillator, we
recovered the $L1$ family of rational extensions discovered by Gomez-Ullate,
Kamran and Milson \cite{gomez,gomez2,gomez3,gomez4,gomez5}, Quesne \cite{quesne1,quesne,quesne2,quesne3} and Odake, Sasaki et al \cite{odake,odake2,sasaki2}. For the other second category potentials, the
infinite set of regular quasi-rational extensions that we obtain coincides
with the $J1$ family \cite{gomez,gomez2,gomez3,gomez4,quesne1,quesne,quesne2,quesne3,odake,odake2,sasaki2}.
In the present article, combining the finite difference B\"{a}cklund
algorithm with new regularization schemes which are based on specific
symmetries of the isotonic potential, we show how the extension of the SUSY
QM partnership to excited states allows one to generate the three infinite sets
$L1$, $L2$ and $L3$ of regular rationally solvable extensions of the isotonic
potential (as well as the singular $L0$ and $L3$ ones) in a direct and
systematic way. This approach leads to a simple and transparent proof of the
shape-invariance of the potentials of the $L1$ and $L2$ series.
The paper is organized as follows. We first recall how the generalization of
the SUSY partnership based on excited states leads to a series of singular
rational extensions of the initial potential. We then introduce basic
elements concerning the finite difference B\"{a}cklund algorithm viewed as a
set of covariance transformations for the class of Riccati-Schr\"{o}dinger
equations and we interpret the generalized SUSY partnership in this
perspective. In the third and fourth sections, we recapitulate some results
concerning the isotonic oscillator, its connection with the confluent
hypergeometric equation and the Kienast-Lawton-Hahn theorem, which
describes the distribution of the zeros of the Laguerre functions on the
real axis. The fifth section is devoted to presenting the set of parameter
transformations which are discrete symmetries of the isotonic potential. Using
them as regularization transformations, we then show that the finite
difference B\"{a}cklund algorithm based on the corresponding regularized RS
functions generates directly the three series $L1$, $L2$ and $L3$ of regular
rationally solvable extensions of the isotonic potential. In the last
section, we prove the shape-invariance of the potentials of the $L1$ and $L2$
series.
\section{Generalized SUSY partnership based on excited states: $L0$ series
of rational extensions}
Consider a family of closed-form exactly solvable hamiltonians
$H(a)=-d^{2}/dx^{2}+V(x;a),\ a\in \mathbb{R}^{m},\ x\in I\subset \mathbb{R}$,
the associated bound-state spectrum of which is given by $\left(
E_{n}(a),w_{n}(x;a)\right) $, where $w_{n}(x;a)=-\psi _{n}^{\prime
}(x;a)/\psi _{n}(x;a)$ is the Riccati-Schr\"{o}dinger (RS) function
associated to the $n^{th}$ bound state eigenfunction $\psi _{n}(x;a)$. The
Riccati-Schr\"{o}dinger (RS) equation \cite{grandati} for the level
$E_{n}(a)$ is then
\begin{equation}
-w_{n}^{\prime }(x;a)+w_{n}^{2}(x;a)=V(x;a)-E_{n}(a), \label{edr4}
\end{equation}
where we suppose $E_{0}(a)=0$. The RS function presents $n$ real
singularities associated to the $n$ simple nodes of the eigenstates $\psi
_{n}(x;a)$. As it is well known \cite{robnik,klippert}, $H(a)$ admits
infinitely many different factorizations of the form
\begin{equation}
H(a)-E_{n}(a)=A^{+}\left( w_{n}\right) A\left( w_{n}\right) ,
\end{equation}
where
\begin{equation}
A\left( w_{n}\right) =d/dx+w_{n}(x;a), \label{opA}
\end{equation}
with, in particular
\begin{equation}
A\left( w_{n}\right) \psi _{n}(x;a)=0.
\end{equation}
This allows to associate to $H(a)$ or $V(x;a)$ an infinite family of
partners given by
\begin{equation}
H^{\left( n\right) }(a)-E_{n}(a)=A\left( w_{n}\right) A^{+}\left(
w_{n}\right) =-d^{2}/dx^{2}+V^{\left( n\right) }(x;a),
\end{equation}
with
\begin{equation}
V^{\left( n\right) }(x;a)=V(x;a)+2w_{n}^{\prime }(x;a).
\end{equation}
For $n\geq 1$, these potentials are all singular at the nodes of $\psi
_{n}(x;a)$ and are defined on open intervals only.
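More precisely, near a simple node $x_{i}$ of $\psi _{n}(x;a)$ one has
$w_{n}(x;a)\simeq -1/(x-x_{i})$, so that each node produces a double pole in the partner
potential,
\begin{equation}
V^{\left( n\right) }(x;a)\simeq \frac{2}{(x-x_{i})^{2}}+\mathrm{regular},\quad x\rightarrow x_{i}.
\end{equation}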
On these domains, $H^{\left( n\right) }(a)$ is (quasi)isospectral to $H(a)$.
Indeed, writing\bigskip
\begin{equation}
\psi _{k}^{\left( n\right) }(x;a)=A\left( w_{n}\right) \psi _{k}(x;a),
\label{fopartner}
\end{equation}
it is easy to verify that we have for any $k$
\begin{equation}
H^{\left( n\right) }(a)\psi _{k}^{\left( n\right) }(x;a)=E_{k}(a)\psi
_{k}^{\left( n\right) }(x;a), \label{hampart}
\end{equation}
that is, $\psi _{k}^{\left( n\right) }(x;a)$ is an eigenstate of $H^{\left(
n\right) }(a)$ associated to the eigenvalue $E_{k}(a)$. We write symbolically
\begin{equation}
V^{\left( n\right) }(x;a)\underset{iso}{\equiv }V(x;a),
\end{equation}
where $\underset{iso}{\equiv }$ means "isospectral to". Defining
\begin{equation}
w_{n,k}(x;a)=-\frac{\psi _{k}^{\left( n\right) \prime }(x;a)}{\psi
_{k}^{\left( n\right) }(x;a)}, \label{RSpart1}
\end{equation}
Eq.(\ref{hampart}) gives, for $k>n$
\begin{equation}
-w_{n,k}^{\prime }(x;a)+w_{n,k}(x;a)^{2}=V^{\left( n\right) }(x;a)-E_{k}(a).
\end{equation}
This scheme generalizes the SUSY QM partnership, by using the excited state
RS functions $w_{n}$ as superpotentials. However, only for the ground state
$n=0$, the factorization and then the partner potential $V^{\left( 0\right)
}(a)=V(x;a)+2w_{0}^{\prime }(x;a)$ are non singular and we recover the usual
SUSY QM partnership \cite{cooper,Dutt}.
\section{Finite difference B\"{a}cklund algorithm}
We can consider the preceding partnership in a different way which gives a
prominent role to the covariance transform of the RS equations class.
\subsection{Invariance group of the Riccati equations}
As established by Cari\~{n}ena et al. \cite{carinena2,Ramos}, the \bigskip
finite-difference B\"{a}cklund algorithm is a consequence of the invariance
of the set of Riccati equations under a subset of the group $\mathcal{G}$ of
smooth $SL(2,\mathbb{R})$-valued curves $Map(\mathbb{R},SL(2,\mathbb{R}))$.
For any element $A\in $ $\mathcal{G}$ characterized by the matrix:
\begin{equation}
A(x)=\left(
\begin{array}{cc}
\alpha (x) & \beta (x) \\
\gamma (x) & \delta (x)
\end{array}
\right) ,\quad \det A(x)=\alpha (x)\delta (x)-\beta (x)\gamma (x)=1,
\label{matrice}
\end{equation}
the action of $A$ on $Map(\mathbb{R},\overline{\mathbb{R}})$ is given by:
\begin{equation}
w(x)\overset{A}{\rightarrow }\widetilde{w}(x)=\frac{\alpha (x)w(x)+\beta (x)
}{\gamma (x)w(x)+\delta (x)}. \label{transfo}
\end{equation}
If $A$ acts on a solution of the Riccati equation:
\begin{equation}
w^{\prime }(x)=a_{0}(x)+a_{1}(x)w(x)+a_{2}(x)w^{2}(x), \label{edrg}
\end{equation}
we obtain a solution of a new Riccati equation:
\begin{equation}
\widetilde{w}^{\prime }(x)=\widetilde{a}_{0}(x)+\widetilde{a}_{1}(x)
\widetilde{w}(x)+\widetilde{a}_{2}(x)\widetilde{w}^{2}(x), \label{edr2}
\end{equation}
the coefficients of which are given by
\begin{equation}
\overrightarrow{\widetilde{a}}(x)=M(A)\overrightarrow{a}(x)+\overrightarrow{W
}(x),\quad \overrightarrow{u}(x)=\left(
\begin{array}{c}
u_{2}(x) \\
u_{1}(x) \\
u_{0}(x)
\end{array}
\right) , \label{transfocoeff1}
\end{equation}
where:
\begin{equation}
M(A)=\left(
\begin{array}{ccc}
\delta ^{2}(x) & -\gamma (x)\delta (x) & \gamma ^{2}(x) \\
-2\beta (x)\delta (x) & \alpha (x)\delta (x)+\beta (x)\gamma (x) & -2\alpha
(x)\gamma (x) \\
\beta ^{2}(x) & -\alpha (x)\beta (x) & \alpha ^{2}(x)
\end{array}
\right) ,\quad \overrightarrow{W}(x)=\left(
\begin{array}{c}
W(\gamma ,\delta ;x) \\
W(\delta ,\alpha ;x)+W(\beta ,\gamma ;x) \\
W(\alpha ,\beta ;x)
\end{array}
\right)
\end{equation}
$(W(f,g;x)=f(x)g^{\prime }(x)-f^{\prime }(x)g(x)$ is the wronskian of $f(x)$
and $g(x)$ in $x$). As noted in \cite{carinena2}, Eq.(\ref{transfocoeff1})
defines an affine action of $\mathcal{G}$ on the set of general Riccati
equations.
\subsection{Particular case of the RS equations and finite difference B\"{a}cklund algorithm}
The most general elements of $\mathcal{G}$ preserving the subset of RS
equations has been determined in \cite{carinena2}. Among them we find in
particular the elements of the form:
\begin{equation}
A(\phi )=\frac{1}{\sqrt{\lambda }}\left(
\begin{array}{cc}
\phi (x) & \lambda -\phi ^{2}(x) \\
-1 & \phi (x)
\end{array}
\right) ,\ \lambda >0, \label{transfo2}
\end{equation}
where $\phi (x)$ satisfies an RS equation with the same potential as in Eq.(\ref{edr4}) but with a shifted energy:
\begin{equation}
-\phi ^{\prime }(x)+\phi ^{2}(x)=V(x)-\left( E-\lambda \right) .
\end{equation}
With this choice $\widetilde{w}(x)$ satisfies the RS equation:
\begin{equation}
-\widetilde{w}^{\prime }(x)+\widetilde{w}^{2}(x)=\widetilde{V}_{\phi
}(x)-\lambda ,
\end{equation}
where $\widetilde{V}_{\phi }(x)=V(x)+2\phi ^{\prime }(x)$.
Consequently, starting from a given RS function of the discrete spectrum
w_{n}(x;a)$, for every value of $k$ such that $E_{k}>E_{n}$ , we can build
an element $A\left( w_{n}\right) \in \mathcal{G}$ of the form:
\begin{equation}
A\left( w_{n}\right) =\frac{1}{\sqrt{E_{k}(a)-E_{n}(a)}}\left(
\begin{array}{cc}
w_{n}(x;a) & E_{k}(a)-E_{n}(a)-w_{n}{}^{2}(x;a) \\
-1 & w_{n}(x;a)
\end{array}
\right) \label{transfoback}
\end{equation}
which transforms $w_{k}$ as:
\begin{equation}
w_{k}(x;a)\overset{A\left( w_{n}\right) }{\rightarrow }w_{k}^{\left(
n\right) }(x;a)=-w_{n}(x;a)+\frac{E_{k}(a)-E_{n}(a)}{w_{n}(x;a)-w_{k}(x;a)},
\label{transfoback2}
\end{equation}
where $w_{k}^{\left( n\right) }$ is a solution of the RS equation:
\begin{equation}
-w_{k}^{\left( n\right) \prime }(x;a)+\left( w_{k}^{(n)}(x;a)\right)
^{2}=V^{\left( n\right) }(x;a)-E_{k}(a), \label{eqtransform}
\end{equation}
with the same energy $E_{k}(a)$ as in Eq(\ref{edr4}) but with a modified
potential
\begin{equation}
V^{\left( n\right) }(x;a)=V(x;a)+2w_{n}^{\prime }(x;a).
\end{equation}
This is the content of the finite-difference B\"{a}cklund algorithm \cite{carinena2,Ramos,Fernandez,Mielnik,Adler1,Adler2}. It transposes at the
level of the RS equations the covariance of the set of Schr\"{o}dinger
equations under Darboux transformations \cite{darboux,luban,matveev}. In the
following we call $A\left( w_{n}\right) $ a Darboux-B\"{a}cklund
Transformation (DBT).
To $V(x;a)$, $A\left( w_{n}\right) $ associates the (quasi)isospectral
partner $V^{\left( n\right) }(x;a)$. Among the $A\left( w_{n}\right) $, only
$A\left( w_{0}\right) $ leads to the regular, usual SUSY QM partner
$V^{\left( 0\right) }(x;a)$. The correspondence between the eigenvalues of
$V(x;a)$ and $V^{\left( n\right) }(x;a)$ is direct. We also have from Eq(\ref{fopartner}) and Eq(\ref{opA})
\begin{equation}
\psi _{k}^{\left( n\right) }(x;a)\sim \left( w_{n}(x;a)-w_{k}(x;a)\right)
\psi _{k}(x;a),
\end{equation}
that is (see Eq(\ref{RSpart1})),
\begin{equation}
w_{n,k}(x;a)=w_{k}^{\left( n\right) }(x;a).
\end{equation}
Then, the finite difference B\"{a}cklund algorithm generates exactly the RS
functions corresponding to the spectrum of the generalized SUSY partner
$V^{\left( n\right) }$ of $V$.
Note that for shape invariant potentials (SIP) \cite{cooper,Dutt,Gendenshtein}, $A\left( w_{0}\right) $ is in fact an invariance
transformation of the RS equations associated to the considered family of
potentials (indexed by the multiparameter $a$), since in this case
\begin{equation}
V^{\left( 0\right) }(x;a)=V(x;a_{1})+R(a)
\end{equation}
and
\begin{equation}
w_{k}^{\left( 0\right) }(x,a)=w_{k-1}(x,a_{1}),
\end{equation}
where $a_{1}=f(a)$ and $R(a)$ are two given functions of the multiparameter
$a$.
As we noted before, starting from the RS function $w_{n}$ of a regular
excited bound state which has $n$ nodes on the real domain of definition $I$
of $V(x;a)$, we generate via $A\left( w_{n}\right) $ a generalized SUSY
partner which presents $n$ singularities on this domain. Nevertheless, the
finite difference B\"{a}cklund algorithm can be applied by replacing $w_{n}$
by any other solution of the same RS equation Eq(\ref{edr4}), even if this
solution does not correspond to a physical state. Knowing $w_{n}(x,a)$, the
general solution of Eq(\ref{edr4}) is given by
\begin{equation}
W_{n}(x;a,W_{0})=w_{n}(x;a)-\frac{e^{2\int_{x_{0}}^{x}w_{n}(s;a)ds}}{
W_{0}+\int_{x_{0}}^{x}dse^{2\int_{x_{0}}^{s}w_{n}(t;a)dt}}, \label{RSgensol}
\end{equation}
where $W_{0}$ is an arbitrary real parameter. We could then use the DBT
$A\left( W_{n}\right) $ to build a generalized SUSY partner potential
$V^{\left( n\right) }(x;a,W_{0})=V(x;a)+2W_{n}^{\prime }(x;a,W_{0})$ and look
for values of $W_{0}$ for which $W_{n}$ and $V^{\left( n\right) }$ are not
singular. For some potentials it is nevertheless possible, by using specific
symmetries, to build directly the desired regular RS\ functions.
symmetries exist in particular for the isotonic oscillator.
\section{The isotonic oscillator}
As shown in \cite{grandati}, the primary translationally shape invariant
potentials (TSIP), for which $a_{1}=a+\alpha $, can be classified into two
categories in which the potential can be brought into a harmonic or isotonic
form respectively, using a change of variables which satisfies a constant
coefficient Riccati equation.
The first element of the second category is the isotonic oscillator
potential itself (i.e. the radial effective potential for a three-dimensional
isotropic harmonic oscillator with zero ground-state energy) defined on the
positive real half line
\begin{equation}
V(x;\omega ,a)=\frac{\omega ^{2}}{4}x^{2}+\frac{a(a-1)}{x^{2}}+V_{0}(\omega
,a),\ x>0, \label{isotpot}
\end{equation}
with $a=l+1\geq 1$ and $V_{0}(\omega ,a)=-\omega \left( a+\frac{1}{2}\right)
$. The shape invariance property of $V(x;\omega ,a)$ is expressible as
\begin{equation}
V^{\left( 0\right) }(x;\omega ,a)=V(x;\omega ,a_{1})+2\omega \label{VSIP1}
\end{equation}
and its spectrum is given by
\begin{equation}
E_{n}\left( \omega \right) =2n\omega ,\ \psi _{n}\left( x;\omega ,a\right)
\sim \exp \left( -\int w_{n}\left( x;\omega ,a\right) dx\right) ,
\label{spectrisot}
\end{equation}
where the excited state Riccati-Schr\"{o}dinger function (RS function)
$w_{n}\left( x;\omega ,a\right) $ can be written as a terminating continued
fraction as
\begin{equation}
w_{n}(x;\omega ,a)=w_{0}(x;\omega ,a)+R_{n}(x;\omega ,a),
\label{RS functions Isot}
\end{equation}
with
\begin{equation}
w_{0}(x;\omega ,a)=\frac{\omega }{2}x-\frac{a}{x} \label{RS functions Isot2}
\end{equation}
and
\begin{eqnarray}
R_{n}(x;\omega ,a) &=&-\frac{E_{n}\left( \omega \right) }{w_{0}(x;\omega
,a)+w_{n-1}(x;\omega ,a_{1})} \label{RS functions Isot3} \\
&=&\frac{-2n\omega }{\omega x-\left( 2a+1\right) /x-}\Rsh ...\Rsh \frac{
2\left( n-j+1\right) \omega }{\omega x-\left( 2\left( a+j\right) -1\right)
/x-}\Rsh ...\Rsh \frac{2\omega }{\omega x-\left( 2\left( a+n\right)
-1\right) /x}. \notag
\end{eqnarray}
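As an illustration, specializing (\ref{RS functions Isot3}) to $n=1$ gives the simple closed form
\begin{equation}
w_{1}(x;\omega ,a)=\frac{\omega }{2}x-\frac{a}{x}-\frac{2\omega x}{\omega x^{2}-\left( 2a+1\right) },
\end{equation}
whose pole at $\omega x^{2}=2a+1$ is precisely the node of $\psi _{1}\left( x;\omega ,a\right) $.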
\bigskip As it is well known, the isotonic oscillator eigenstates can be
also expressed in terms of Generalized Laguerre Polynomials (GLP) $\mathit{L}_{n}^{\left( \lambda \right) }$ as
\begin{equation}
\psi _{n}\left( x;\omega ,a\right) \sim x^{a}e^{-\omega x^{2}/4}\mathit{L}
_{n}^{\left( a-1/2\right) }\left( \omega x^{2}/2\right) . \label{foOI}
\end{equation}
\bigskip This implies that we have
\begin{equation}
R_{n}(x;\omega ,a)=-\left( \log \left( \mathit{L}_{n}^{\left( a-1/2\right)
}\left( \omega x^{2}/2\right) \right) \right) ^{\prime }=\omega x\frac{
\mathit{L}_{n-1}^{\left( a+1/2\right) }\left( \omega x^{2}/2\right) }{
\mathit{L}_{n}^{\left( a-1/2\right) }\left( \omega x^{2}/2\right) },
\label{RSLaguerre}
\end{equation}
which is singular at the nodes of $\psi _{n}\left( x;\omega ,a\right) $,
that is, at the zeros of $\mathit{L}_{n}^{\left( a-1/2\right) }\left( \xi
\right) $. Concerning the latter, we have a classical result of
Kienast, Lawton and Hahn \cite{szego,magnus,erdelyi}:
\emph{Kienast-Lawton-Hahn's Theorem }
Suppose that $\alpha \notin -\mathbb{N}$. Then $\mathit{L}_{n}^{\left(
\alpha \right) }\left( z\right) $ admits
\ \ \ \ \ \ \ 1) $n$ positive zeros if $\alpha >-1$
\ \ \ \ \ \ \ 2) $n+\left[ \alpha \right] +1$ positive zeros if $-n<\alpha
<-1$ ($\left[ \left\vert \alpha \right\vert \right] $ means the integer part
of $\alpha $)
\ \ \ \ \ \ \ 3) No positive zero if $\alpha <-n$
The number of negative zeros is always $0$ or $1$.
\ \ \ \ \ \ \ 1) $0$ if $\alpha >-1$
\ \ \ \ \ \ \ 2) $0$ if $-2k-1<\alpha <-2k$ and $1$ if $-2k<\alpha <-2k+1$,
with $-n<\alpha <-1$
\ \ \ \ \ \ \ 3) $0$ if $n$ is even and $1$ if $n$ is odd, with $\alpha <-n$
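As a quick numerical illustration of this counting (a side check which is not used in the
argument; the script and its helper names below are ours), one can build
$\mathit{L}_{n}^{\left( \alpha \right) }$ from its series expansion and count its real zeros
directly:
\begin{verbatim}
# Count the real zeros of L_n^{(alpha)} for non-integer alpha.
import numpy as np
from math import factorial
from scipy.special import binom

def laguerre_coeffs(n, alpha):
    # L_n^{(alpha)}(x) = sum_k (-1)^k C(n+alpha, n-k) x^k / k!,
    # highest-degree coefficient first (numpy.roots ordering).
    return np.array([(-1) ** k * binom(n + alpha, n - k) / factorial(k)
                     for k in range(n, -1, -1)])

def count_zeros(n, alpha, tol=1e-8):
    roots = np.roots(laguerre_coeffs(n, alpha))
    real = roots[np.abs(roots.imag) < tol].real
    return int(np.sum(real > 0)), int(np.sum(real < 0))

# n = 3, alpha = -3.2 < -n: the theorem predicts no positive zero
# and, n being odd, exactly one negative zero.
print(count_zeros(3, -3.2))
\end{verbatim}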
This theorem confirms in particular that, for positive values of $a$, the
RS function $w_{n}(x;\omega ,a)$ corresponding to a physical bound state and
hence the associated generalized SUSY partner $V^{\left( n\right) }(x;\omega
,a)$ of $V(x;\omega ,a)$ always present $n$ singularities on the positive
half axis. This family of singular rational extensions of $V(x;\omega ,a)$
will be called the $L0$ series.
\section{\protect\bigskip Confluent hypergeometric equation and isotonic
oscillator}
The confluent hypergeometric equation
\begin{equation}
zy^{\prime \prime }(z;\alpha ,\lambda )+(\alpha +1-z)y^{\prime }(z;\alpha
,\lambda )+\lambda y(z;\alpha ,\lambda )=0 \label{hypergeo}
\end{equation}
on the positive real half line can always be reduced to a Schr\"{o}dinger
equation for an isotonic oscillator. Indeed, if we put $z=\omega x^{2}/2$
and $\phi \left( x;\alpha ,\lambda \right) =y(z;\alpha ,\lambda )$ in
Eq.(\ref{hypergeo}), we obtain the following equation for $\phi \left( x;\alpha
,\lambda \right) $:
\begin{equation}
\phi ^{\prime \prime }(x;\alpha ,\lambda )+(\frac{2\alpha +1}{x}-\omega
x)\phi ^{\prime }(x;\alpha ,\lambda )+2\omega \lambda \phi (x;\alpha
,\lambda )=0
\end{equation}
Then
\begin{equation}
\psi (x;\alpha ,\lambda )\sim \phi \left( x;\alpha ,\lambda \right) \exp (
\frac{1}{2}\int dx(\frac{2\alpha +1}{x}-\omega x))=x^{\alpha +1/2}e^{-\omega
x^{2}/4}\phi \left( x;\alpha ,\lambda \right)
\end{equation}
satisfies
\begin{equation}
-\psi ^{\prime \prime }(x;\alpha ,\lambda )+\left( \frac{\omega ^{2}x^{2}}{4}
+\frac{\left( \alpha +1/2\right) \left( \alpha -1/2\right) }{x^{2}}-\omega
(\alpha +1)\right) \psi (x;\alpha ,\lambda )=2\lambda \omega \psi \left(
x;\alpha ,\lambda \right) .
\end{equation}
If we define $a=\alpha +1/2$, $\psi (x;a-1/2,\lambda )=\psi _{\lambda }(x;a)$
and $E_{\lambda }(\omega )=2\lambda \omega $, we obtain
\begin{equation}
H(\omega ,a)\psi _{\lambda }(x;a)=E_{\lambda }(\omega )\psi _{\lambda }(x;a),
\label{hypergeomod}
\end{equation}
where
\begin{equation}
\psi _{\lambda }(x;a)\sim x^{a}e^{-\omega x^{2}/4}y(\omega
x^{2}/2;a-1/2,\lambda ),
\end{equation}
$H(\omega ,a)$ being the usual isotonic hamiltonian (see Eq.(\ref{isotpot}))
\begin{equation}
H(\omega ,a)=-\frac{d^{2}}{dx^{2}}+V(x;\omega ,a).
\end{equation}
Eq.(\ref{hypergeomod}) is the Schr\"{o}dinger equation \ for the isotonic
oscillator, where for physical bound states we must have $\lambda =n$. In
this case, the confluent hypergeometric equation
\begin{equation}
zy^{\prime \prime }(z;a-1/2,n)+(a+1/2-z)y^{\prime
}(z;a-1/2,n)+ny(z;a-1/2,n)=0,
\end{equation}
admits the regular solution
\begin{equation}
y(z;a-1/2,n)=L_{n}^{\left( a-1/2\right) }(z)
\end{equation}
and we have
\begin{equation}
\psi _{n}(x;a)\sim x^{a}e^{-\omega x^{2}/4}L_{n}^{\left( a-1/2\right)
}(\omega x^{2}/2).
\end{equation}
This is exactly the physical state for the isotonic oscillator at the energy
$E_{n}=2n\omega $.
In fact, as shown by Erdelyi \cite{erdelyi,magnus,gomez,gomez5}, Eq.(\ref{hypergeo}) admits quasi-rational solutions built from GLP in four sectors
of the values of the parameters $\alpha $ and $\lambda $
\begin{equation}
\left\{
\begin{array}{c}
\lambda =n:\ y_{0}(z;\alpha ,n)=L_{n}^{\left( \alpha \right) }(z) \\
\lambda =-n-\alpha -1:\ y_{1}(z;\alpha ,\alpha +1+n)=e^{z}L_{n}^{\left(
\alpha \right) }(-z) \\
\lambda =n-\alpha :\ y_{2}(z;\alpha ,n-\alpha )=z^{-\alpha }L_{n}^{\left(
-\alpha \right) }(z) \\
\lambda =-n-1:\ y_{3}(z;\alpha ,-n-1)=z^{-\alpha }e^{z}L_{n}^{\left( -\alpha
\right) }(-z)
\end{array}
\right.
\end{equation}
They correspond to the following four eigenfunctions
\begin{equation}
\left\{
\begin{array}{c}
\psi _{n}(x;a)\sim x^{a}e^{-\omega x^{2}/4}L_{n}^{\left( a-1/2\right)
}(\omega x^{2}/2),\quad E_{n}(\omega )=2n\omega \\
\psi _{-n-\alpha -1/2}(x;a)\sim x^{a}e^{\omega x^{2}/4}L_{n}^{\left(
a-1/2\right) }(-\omega x^{2}/2),\quad E_{-n-\alpha -1/2}(\omega )=-2\left(
n+\alpha +1/2\right) \omega \\
\psi _{n-a+1/2}(x;a)\sim x^{1-a}e^{-\omega x^{2}/4}L_{n}^{-\left(
a-1/2\right) }(\omega x^{2}/2),\quad E_{n-a+1/2}(\omega )=2\left(
n-a+1/2\right) \omega \\
\psi _{-n-1}(x;a)\sim x^{1-a}e^{\omega x^{2}/4}L_{n}^{-\left( a-1/2\right)
}(-\omega x^{2}/2),\quad E_{-n-1}(\omega )=-2\left( n+1\right) \omega
\end{array}
\right. \label{secteurs}
\end{equation}
The last three cases do not correspond to physical states and physical energies.
\section{Discrete symmetries of the isotonic RS equation}
Since the isotonic oscillator is shape invariant, the $A\left( w_{0}\right) $
DBT is an invariance transformation for the RS\ equations associated to the
family of isotonic oscillators indexed by the couple of parameters $(\omega
,a)$ (see Eq.(\ref{isotpot})). But this family of RS\ equations is covariant
under other specific transformations which act on the parameters of the
isotonic potentials and preserve their functional class. As we will see, the
connections between the quasi rational sectors of the confluent
hypergeometric equation admit a very simple interpretation in terms of
covariance transformations of the isotonic potential.
\subsection{\protect\bigskip Inversion of the parameter $\protect\omega $}
The first covariance transformation for $V(x;\omega ,a)$ acts on the $\omega
$ parameter as
\begin{equation}
\omega \overset{\Gamma _{\omega }}{\rightarrow }\left( -\omega \right)
,\left\{
\begin{array}{c}
V(x;\omega ,a)\overset{\Gamma _{\omega }}{\rightarrow }V(x;\omega ,a)+\omega
(2a+1) \\
w_{n}(x;\omega ,a)\overset{\Gamma _{\omega }}{\rightarrow }v_{n}(x;\omega
,a)=w_{n}(x;-\omega ,a)
\end{array}
\right.
\end{equation}
$v_{n}(x;\omega ,a)$ satisfying ($E_{n}\left( -\omega \right) =-E_{n}\left(
\omega \right) =E_{-n}\left( \omega \right) $)
\begin{equation}
-v_{n}^{\prime }(x;\omega ,a)+v_{n}^{2}(x;\omega ,a)=V(x;\omega
,a)-E_{-\left( n+a+1/2\right) }\left( \omega \right) . \label{oregpot}
\end{equation}
From Eq.(\ref{RS functions Isot}), Eq.(\ref{RS functions Isot2}) and Eq.(\ref{RS functions Isot3}), writing
\begin{equation}
v_{n}(x;\omega ,a)=v_{0}(x;\omega ,a)+Q_{n}(x;\omega ,a), \label{oregRSfct}
\end{equation}
we deduce
\begin{equation}
v_{0}(x;\omega ,a)=-\frac{\omega }{2}x-\frac{a}{x} \label{oregRSfct2}
\end{equation}
and
\begin{eqnarray}
Q_{n}(x;\omega ,a) &=&\frac{E_{n}(\omega )}{v_{0}(x;\omega
,a)+v_{n-1}(x;\omega ,a_{1})} \label{oregRSfct3} \\
&=&-\frac{2n\omega }{\omega x+\left( 2a+1\right) /x+}\Rsh ...\Rsh \frac{
2\left( n-j+1\right) \omega }{\omega x+\left( 2\left( a+j\right) -1\right)
/x+}\Rsh ...\Rsh \frac{2\omega }{\omega x+\left( 2\left( a+n\right)
-1\right) /x} \notag \\
&=&-\left( \log \left( \mathit{L}_{n}^{\left( a-1/2\right) }\left( -\omega
x^{2}/2\right) \right) \right) ^{\prime }. \notag
\end{eqnarray}
Clearly, for $a\geq 1$ $(l\geq 0)$, $v_{n}(x;\omega ,a)$ does not present
any singularity on the positive real half line. This result is consistent with
the above-mentioned Kienast-Lawton-Hahn theorem since the argument of the
GLP $\mathit{L}_{n}^{\left( a-1/2\right) }$ in the expression of $Q_{n}$ is
now a strictly negative value.
Note that we recover exactly the same results if we use the "spatial Wick
rotation" \cite{shnol',samsonov,tkachuk,fellows,grandati2
\begin{equation}
w_{n}(x;\omega ,a)\rightarrow v_{n}(x;\omega ,a)=iw_{n}(ix;\omega ,a).
\end{equation}
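For instance, at $n=0$ this is immediate:
\begin{equation}
iw_{0}(ix;\omega ,a)=i\left( \frac{\omega }{2}(ix)-\frac{a}{ix}\right) =-\frac{\omega }{2}x-\frac{a}{x}=v_{0}(x;\omega ,a).
\end{equation}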
This means that the $\Gamma _{\omega }$ transformation sends the
singularities of $w_{n}$, which are initially all on the real axis, to the
imaginary axis.
present any singularity on the real line. Finally, comparing Eq.(\ref{secteurs}) to Eq.(\ref{oregRSfct}),
Eq.(\ref{oregRSfct2}) and Eq.(\ref{oregRSfct3}), we see that $\Gamma _{\omega }$ transforms an eigenfunction
of the first sector into an eigenfunction of the second sector and then
coincides with the Kummer's transformation \cite{erdelyi}.
\subsection{Inversion of the parameter $a$}
The second covariance transformation acts on the $a$ parameter as
\begin{equation}
a\overset{\Gamma _{a}}{\rightarrow }1-a,\left\{
\begin{array}{c}
V(x;\omega ,a)\overset{\Gamma _{a}}{\rightarrow }V(x;\omega ,a)+\omega (2a-1)
\\
w_{n}(x;\omega ,a)\overset{\Gamma _{a}}{\rightarrow }u_{n}(x;\omega
,a)=w_{n}(x;\omega ,1-a)
\end{array}
\right.
\end{equation}
$u_{n}(x;\omega ,a)$ satisfying
\begin{equation}
-u_{n}^{\prime }(x;\omega ,a)+u_{n}^{2}(x;\omega ,a)=V(x;\omega
,a)-E_{n+1/2-a}\left( \omega \right) . \label{aregpot}
\end{equation}
From Eq.(\ref{RS functions Isot}), Eq.(\ref{RS functions Isot2}) and Eq.(\ref{RS functions Isot3}) we deduce
\begin{equation}
u_{n}(x;\omega ,a)=u_{0}(x;\omega ,a)+P_{n}(x;\omega ,a), \label{aregRSfct}
\end{equation}
where
\begin{equation}
u_{0}(x;\omega ,a)=\frac{\omega }{2}x+\frac{a-1}{x} \label{aregRSfct2}
\end{equation}
and
\begin{eqnarray}
P_{n}(x;\omega ,a) &=&-\frac{E_{n}(\omega )}{u_{0}(x;\omega
,a)+u_{n-1}(x;\omega ,a_{-1})} \label{aregRSfct3} \\
&=&\frac{-2n\omega }{\omega x+\left( 2a-3\right) /x-}\Rsh ...\Rsh \frac{
2\left( n-j+1\right) \omega }{\omega x+\left( 2\left( a-j\right) -1\right)
/x-}\Rsh ...\Rsh \frac{2\omega }{\omega x+\left( 2\left( a-n\right)
-1\right) /x} \notag \\
&=&-\left( \log \left( \mathit{L}_{n}^{-\left( a-1/2\right) }\left( \omega
x^{2}/2\right) \right) \right) ^{\prime }. \notag
\end{eqnarray}
In this case the argument of the GLP in the right-hand member is strictly
positive, but the associated parameter $\alpha =-\left( a-1/2\right) $ is
strictly negative. In accordance with the Kienast-Lawton-Hahn theorem, by
taking $a$ sufficiently large, we can decrease the number of real zeros and
in particular we can eliminate all the positive zeros. Thus, if $a>n+1/2$,
$\mathit{L}_{n}^{-\left( a-1/2\right) }\left( \omega x^{2}/2\right) $ is
strictly positive for any value of $x$. This means that $P_{n}(x;\omega ,a)$
and $u_{n}(x;\omega ,a)$ are not singular on $\left] 0,+\infty \right[ $
when $a=n+m+1/2$, with $m>0$.
Note that
\begin{equation}
P_{1}(x;\omega ,a)=\frac{-2\omega }{\omega x+\left( 2a-3\right) /x}
=Q_{1}(x;\omega ,a-2).
\end{equation}
Finally, comparing Eq.(\ref{secteurs}) to Eq.(\ref{aregRSfct}), Eq.(\ref{aregRSfct2})
and Eq.(\ref{aregRSfct3}), we see that $\Gamma _{a}$
transforms an eigenfunction of the first sector into an eigenfunction of the
third sector.
\subsection{Inversion of both parameters $\protect\omega $ and $a$}
Finally, we can also act simultaneously on both parameters as
\begin{equation}
(\omega ,a)\overset{\Gamma _{a}\circ \Gamma _{\omega }}{\rightarrow }
(-\omega ,1-a)\left\{
\begin{array}{c}
V(x;\omega ,a)\overset{\Gamma _{a}\circ \Gamma _{\omega }}{\rightarrow }
V(x;\omega ,a)+2\omega \\
w_{n}(x;\omega ,a)\overset{\Gamma _{a}\circ \Gamma _{\omega }}{\rightarrow }
r_{n}(x;\omega ,a)=w_{n}(x;-\omega ,1-a)
\end{array}
\right.
\end{equation}
$r_{n}(x;\omega ,a)$ satisfying
\begin{equation}
-r_{n}^{\prime }(x;\omega ,a)+r_{n}^{2}(x;\omega ,a)=V(x;-\omega
,1-a)-E_{n}(-\omega )=V(x;\omega ,a)-E_{-\left( n+1\right) }(\omega ).
\label{oaregpot}
\end{equation}
From Eq.(\ref{RS functions Isot}), Eq.(\ref{RS functions Isot2}) and Eq.(\ref{RS functions Isot3}) we have
\begin{equation}
r_{n}(x;\omega ,a)=r_{0}(x;\omega ,a)+T_{n}(x;\omega ,a), \label{oaregRSfct}
\end{equation}
where
\begin{equation}
r_{0}(x;\omega ,a)=-\frac{\omega }{2}x+\frac{a-1}{x}=-w_{0}(x;\omega ,a-1)
\label{oaregRSfct2}
\end{equation}
and
\begin{eqnarray}
T_{n}(x;\omega ,a) &=&\frac{E_{n}(\omega )}{r_{0}(x;\omega
,a)+r_{n-1}(x;\omega ,a_{-1})} \label{oaregRSfct3} \\
&=&\frac{2n\omega }{\omega x-\left( 2a-3\right) /x+}\Rsh ...\Rsh \frac{
2\left( n-j+1\right) \omega }{\omega x-\left( 2\left( a-j\right) -1\right)
/x+}\Rsh ...\Rsh \frac{2\omega }{\omega x-\left( 2\left( a-n\right)
-1\right) /x} \notag \\
&=&-\left( \log \left( \mathit{L}_{n}^{-\left( a-1/2\right) }\left( -\omega
x^{2}/2\right) \right) \right) ^{\prime }. \notag
\end{eqnarray}
In this case, the argument of the GLP in the right hand member and the
associated $\alpha =-\left( a-1/2\right) $ parameter are both strictly
negative. In accordance with the Kienast-Lawton-Hahn theorem, by taking $a$
sufficiently large, we have no zero on the negative half line if $n$ is
even ($n=2l$) and one if $n$ is odd. Thus, if $n=2l$ and $a>2l+1/2$,
$\mathit{L}_{2l}^{-\left( a-1/2\right) }\left( -\omega x^{2}/2\right) $ is
strictly positive for any value of $x$. This means that $T_{2l}(x;\omega ,a)$
and $r_{2l}(x;\omega ,a)$ are not singular on $\left] 0,+\infty \right[ $
when $a=2l+m+1/2$, with $m>0$.
Note that
\begin{equation}
T_{1}(x;\omega ,a)=\frac{-2\omega }{\omega x-\left( 2a-3\right) /x}
=R_{1}(x;\omega ,a-2).
\end{equation}
Finally, comparing Eq.(\ref{secteurs}) to Eq.(\ref{oaregRSfct}), Eq.(\ref{oaregRSfct2})
and Eq.(\ref{oaregRSfct3}), we see that $\Gamma _{a}\circ
\Gamma _{\omega }$ transforms an eigenfunction of the first sector into an
eigenfunction of the fourth sector and corresponds also to a Kummer's
transformation \cite{erdelyi}.
\section{Regular rational extensions of the isotonic oscillator}
Since the transformations considered above are covariance transformations
for the family of isotonic potentials which regularize the RS functions, we
can use these regularized RS\ functions in the finite difference B\"{a}cklund
algorithm and generate regular isospectral partners for the isotonic
potential.
\subsection{\protect\bigskip Rational extension of the $L1$ series}
$w_{k}$ and $v_{n}$ are associated to the same potential but with different
eigenvalues (cf Eq(\ref{oregpot}))
\begin{equation}
\left\{
\begin{array}{c}
-v_{n}^{\prime }(x;\omega ,a)+v_{n}^{2}(x;\omega ,a)=V(x;\omega
,a)-E_{-\left( n+a+1/2\right) }\left( \omega \right) \\
-w_{k}^{\prime }(x;\omega ,a)+w_{k}^{2}(x;\omega ,a)=V(x;\omega
,a)-E_{k}\left( \omega \right)
\end{array}
\right.
\end{equation}
which means that we can use $v_{n}$ to build a DBT $A\left( v_{n}\right) $
and apply it to $w_{k}$ as
\begin{equation}
w_{k}(x;\omega ,a)\overset{A\left( v_{n}\right) }{\rightarrow }w_{k}^{\left(
n\right) }(x;\omega ,a)=-v_{n}(x;\omega ,a)+\frac{E_{k}(\omega )-E_{-\left(
n+a+1/2\right) }(\omega )}{v_{n}(x;\omega ,a)-w_{k}(x;\omega ,a)},
\label{backL1}
\end{equation}
where $w_{k}^{\left( n\right) }(x;\omega ,a)$ satisfies
\begin{equation}
-w_{k}^{\left( n\right) \prime }(x;\omega ,a)+\left( w_{k}^{\left( n\right)
}(x;\omega ,a)\right) ^{2}=V^{\left( n\right) }(x;\omega ,a)-E_{k}\left(
\omega \right) , \label{oregRSeq}
\end{equation}
with
\begin{equation}
V^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a)+2v_{n}^{\prime }(x;\omega
,a). \label{oregSUSYpart}
\end{equation}
For every $n\geq 0$, $V^{\left( n\right) }(x;\omega ,a)$ is regular on the
positive half line and isospectral to $V(x;\omega ,a)$
\begin{equation}
V^{\left( n\right) }(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\label{oregSUSYpart2}
\end{equation}
Clearly, $w_{-}^{\left( n\right) }(x;\omega ,a)=-v_{n}(x;\omega ,a)$ is also
a solution of Eq(\ref{oregRSeq}) associated to the eigenvalue $E_{-\left(
n+a+1/2\right) }(\omega )<E_{0}\left( \omega \right) =0$. Nevertheless, its
asymptotic behaviour is similar to the one of $w_{-}^{\left( 0\right)
}(x;\omega ,a)=-\omega x/2-a/x$ and consequently
\begin{equation}
\psi _{-}^{\left( n\right) }(x;\omega ,a)\sim \exp \left( -\int
w_{-}^{\left( n\right) }(x;\omega ,a)dx\right)
\end{equation}
cannot satisfy the boundary condition associated to the physically allowed
eigenstates.
All the physical eigenfunctions of $H^{\left( n\right) }(\omega
,a)=-d^{2}/dx^{2}+V^{\left( n\right) }(x;\omega ,a)$ are then of the form
\begin{equation}
\psi _{k}^{\left( n\right) }(x;\omega ,a)=\frac{1}{\sqrt{E_{k}(\omega
)-E_{-\left( n+a+1/2\right) }(\omega )}}A\left( v_{n}\right) \psi
_{k}(x;\omega ,a),\ k\geq 0
\end{equation}
and $H^{\left( n\right) }$ is strictly isospectral to $H$.
Since (cf Eq(\ref{oregRSfct2}))
\begin{equation}
V(x;\omega ,a)+2v_{0}^{\prime }(x;\omega ,a)=V(x;\omega ,a_{1}),
\end{equation}
Eq(\ref{oregSUSYpart}) and\ Eq(\ref{oregSUSYpart2}) can still be written as
\begin{equation}
V^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a_{1})+2Q_{n}^{\prime
}(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\end{equation}
For instance, we have for $n=1$
\begin{equation}
V^{\left( 1\right) }(x;\omega ,a)=V(x;\omega ,a_{1})+\frac{4\omega }{\omega
x^{2}+2a+1}-\frac{8\omega \left( 2a+1\right) }{\left( \omega
x^{2}+2a+1\right) ^{2}}
\end{equation}
and we recover the first rationally-extended radial oscillator obtained by
Quesne \cite{quesne}. For $n=2$, we have immediately from Eq.(\ref{oregRSfct}),
Eq.(\ref{oregRSfct2}) and Eq.(\ref{oregRSfct3})
\begin{equation}
-v_{2}(x;\omega ,a-1)=\frac{\omega }{2}x+\frac{a-1}{x}+\frac{4\omega x\left(
\omega x^{2}+\left( 2a+1\right) \right) }{\left( \omega x^{2}+\left(
2a+1\right) \right) ^{2}-2\left( 2a+1\right) },
\end{equation}
which corresponds to the superpotential associated to the second
rationally-extended radial oscillator of the $L1$ series obtained by Quesne
\cite{quesne}.
More generally, we have
\begin{equation}
Q_{n}(x;\omega ,a)=-\left( \log \left( \mathit{L}_{n}^{\left( a-1/2\right)
}\left( -\omega x^{2}/2\right) \right) \right) ^{\prime }.
\end{equation}
In Odake-Sasaki's approach \cite{odake,odake2,sasaki2}, this corresponds to
a prepotential of the form
\begin{equation}
W_{n}\left( x;\omega ,a\right) =-\frac{\omega }{4}x^{2}+a\log x+\log \left(
\mathit{L}_{n}^{\left( a-1/2\right) }\left( -\omega x^{2}/2\right) \right)
\end{equation}
and we recover (up to a shift in $a\rightarrow a+n-2$) the result obtained
in \cite{odake,odake2,sasaki2} and \cite{grandati2} for the potentials
associated to the L1 exceptional orthogonal polynomials.
\subsection{\protect\bigskip Rational extension of the $L2$ series}
As in the preceding case (cf Eq(\ref{aregpot})), we can use $u_{n}$ to build
a DBT $A\left( u_{n}\right) $
\begin{equation}
w_{k}(x;\omega ,a)\overset{A\left( u_{n}\right) }{\rightarrow }w_{k}^{\left(
n\right) }(x;\omega ,a)=-u_{n}(x;\omega ,a)+\frac{E_{k}(\omega
)-E_{n+1/2-a}(\omega )}{u_{n}(x;\omega ,a)-w_{k}(x;\omega ,a)},
\label{backL2}
\end{equation}
where $w_{k}^{\left( n\right) }(x;\omega ,a)$ satisfies
\begin{equation}
-w_{k}^{\left( n\right) \prime }(x;\omega ,a)+\left( w_{k}^{\left( n\right)
}(x;\omega ,a)\right) ^{2}=U^{\left( n\right) }(x;\omega ,a)-E_{k}(\omega ),
\label{aregRSeq}
\end{equation}
with
\begin{equation}
U^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a)+2u_{n}^{\prime }(x;\omega
,a). \label{aregSUSYpart}
\end{equation}
If $a>n+1/2$, $U^{\left( n\right) }(x;\omega ,a)$ is regular on the positive
half line and isospectral to $V(x;\omega ,a)$
\begin{equation}
U^{\left( n\right) }(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\label{aregSUSYpart2}
\end{equation}
In this case, as for the $L1$ series, we see immediately that $w_{-}^{\left(
n\right) }(x;\omega ,a)=-u_{n}(x;\omega ,a)$ is another solution of
Eq(\ref{aregRSeq}) associated to the eigenvalue $E_{n+1/2-a}(\omega
)<E_{0}\left( \omega \right) =0$. But here again, the asymptotic behaviour
of $w_{-}^{\left( n\right) }(x;\omega ,a)$ is similar to the one of
$w_{-}^{\left( 0\right) }(x;\omega ,a)=-\omega x/2-(a-1)/x$ and consequently
\begin{equation}
\psi _{-}^{\left( n\right) }(x;\omega ,a)\sim \exp \left( -\int
w_{-}^{\left( n\right) }(x;\omega ,a)dx\right)
\end{equation}
cannot satisfy the boundary condition associated to the physically
acceptable eigenstates.
All the physical eigenfunctions of $H^{\left( n\right) }(\omega
,a)=-d^{2}/dx^{2}+U^{\left( n\right) }(x;\omega ,a)$ are then of the form
\begin{equation}
\psi _{k}^{\left( n\right) }(x;\omega ,a)=\frac{1}{\sqrt{E_{k}(\omega
)-E_{n+1/2-a}(\omega )}}A\left( u_{n}\right) \psi _{k}(x;\omega ,a),\ k\geq 0
\end{equation}
and in the $L2$ series, $H^{\left( n\right) }$ is also strictly isospectral
to $H$.
Since (cf Eq.(\ref{aregRSfct2}))
\begin{equation}
V(x;\omega ,a)+2u_{0}^{\prime }(x;\omega ,a)=V(x;\omega ,a_{-1}),
\end{equation}
using Eq.(\ref{aregSUSYpart}) and Eq.(\ref{aregSUSYpart2}), we obtain
\begin{equation}
U^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a_{-1})+2P_{n}^{\prime
}(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\end{equation}
Note that, since $P_{1}(x;\omega ,a)=Q_{1}(x;\omega ,a-2)$, the first
rational extension of this family has the same functional form as the
first rational extension of the preceding family.
For instance, we have for $n=2$
\begin{equation}
P_{2}(x;\omega ,a)=-\frac{4\omega x\left( \omega x^{2}+\left( 2a-5\right)
\right) }{\left( \omega x^{2}+\left( 2a-5\right) \right) ^{2}+2(2a-5)},
\end{equation}
which corresponds to Quesne's \cite{quesne} second rational extension of the
$L2$ series.
We have also, by redefining $a\rightarrow n+a$
\begin{equation}
V(x;\omega ,a_{n-1})\underset{iso}{\equiv }V(x;\omega ,a_{n})+2P_{n}^{\prime
}(x;\omega ,a_{n}),
\end{equation}
where
\begin{eqnarray}
P_{n}(x;\omega ,a_{n}) &=&-\frac{2n\omega }{\omega x+\left( 2n+2a-3\right)
/x-}\Rsh ...\Rsh \frac{2\left( n-j+1\right) \omega }{\omega x+2\left( \left(
n+a-j\right) -1\right) /x-}\Rsh ...\Rsh \frac{2\omega }{\omega x+\left(
2a-1\right) /x} \\
&=&-\left( \log \left( \mathit{L}_{n}^{-\left( a+n-1/2\right) }\left( \omega
x^{2}/2\right) \right) \right) ^{\prime } \notag
\end{eqnarray}
is regular on the positive half line for $a>0$. In Sasaki et al's \cite{sasaki,sasaki2} formulation, we recover the associated prepotential via
\begin{equation}
W_{n}\left( x;\omega ,a\right) =-\int u_{n}(x;\omega ,a+n)dx=-\frac{\omega }{4}x^{2}-\left( a+n-1\right) \log x+\log \left( \mathit{L}_{n}^{-\left( a+n-1/2\right)
}\left( \omega x^{2}/2\right) \right)
\end{equation}
and the family of regular rational extensions obtained is exactly the $L2$
one.
\subsection{Rational extension of the $L3$ series}
Finally, $w_{k}$ and $r_{n}$ being also associated to the same potential but
with different eigenvalues (cf Eq(\ref{oaregpot})), here again we can use
$r_{n}$ to build a DBT $A\left( r_{n}\right) $ and apply it to $w_{k}$
\begin{equation}
w_{k}(x;\omega ,a)\overset{A\left( r_{n}\right) }{\rightarrow }w_{k}^{\left(
n\right) }(x;\omega ,a)=-r_{n}(x;\omega ,a)+\frac{E_{k}(\omega )-E_{-\left(
n+1\right) }(\omega )}{r_{n}(x;\omega ,a)-w_{k}(x;\omega ,a)},
\label{backL3}
\end{equation}
where $w_{k}^{\left( n\right) }(x;\omega ,a)$ satisfies
\begin{equation}
-w_{k}^{\left( n\right) \prime }(x;\omega ,a)+\left( w_{k}^{\left( n\right)
}(x;\omega ,a)\right) ^{2}=W^{\left( n\right) }(x;\omega ,a)-E_{k}(\omega ),
\label{oaregRSeq}
\end{equation}
with
\begin{equation}
W^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a)+2r_{n}^{\prime }(x;\omega
,a). \label{oaregSUSYpart}
\end{equation}
If $n=2l$ and $a>2l+1/2$, $W^{\left( 2l\right) }(x;\omega ,a)$ is regular on
the positive half line and isospectral to $V(x;\omega ,a)$
\begin{equation}
W^{\left( 2l\right) }(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\label{oaregSUSYpart2}
\end{equation}
As for the eigenfunctions of $H^{\left( 2l\right) }(\omega
,a)=-d^{2}/dx^{2}+W^{\left( 2l\right) }(x;\omega ,a)$ generated from those
of $H$ by the DBT Eq(\ref{backL3}), they are given by
\begin{equation}
\psi _{k}^{\left( 2l\right) }(x;\omega ,a)=\frac{1}{\sqrt{E_{k}(\omega
)-E_{-\left( 2l+1\right) }(\omega )}}A\left( r_{2l}\right) \psi
_{k}(x;\omega ,a),\ k\geq 0
\end{equation}
and constitute physically allowed eigenstates. But for the $L3$ series the
isospectrality is no longer strict, in contrast to the preceding series. Indeed,
Eq(\ref{oaregRSeq}) is evidently satisfied by the regular RS function
\begin{equation}
w_{-}^{\left( 2l\right) }(x;\omega ,a)=-r_{2l}(x;\omega ,a),
\end{equation}
whose asymptotic behaviour is identical to that of
$w_{-}^{\left( 0\right) }(x;\omega ,a)=-r_{0}(x;\omega ,a)$. Then
\begin{equation}
\psi _{-}^{\left( 0\right) }(x;\omega ,a)=\exp \left( \int dxw_{-}^{\left(
0\right) }(x;\omega ,a)\right) \sim \psi _{0}(x;\omega ,a)
\end{equation}
and, contrary to the preceding cases,
\begin{equation}
\psi _{-}^{\left( 2l\right) }(x;\omega ,a)=\exp \left( \int dxw_{-}^{\left(
2l\right) }(x;\omega ,a)\right) \label{oaregfond}
\end{equation}
is a physical state associated to the eigenvalue $E_{-\left( n+1\right)
}(\omega )<0$, that is, the fundamental state of the hamiltonian $H^{\left(
2l\right) }(\omega ,a)$. Consequently, $H^{\left( 2l\right) }$ and $H$ are
only quasi-isospectral in this series, $H^{\left( 2l\right) }$ admitting a
supplementary energy level lower than those of $H$.
Since (cf Eq.(\ref{oaregRSfct2}))
\begin{equation}
V(x;\omega ,a)+2r_{0}^{\prime }(x;\omega ,a)=V(x;\omega ,a_{-1})-2\omega ,
\end{equation}
using Eq.(\ref{oaregSUSYpart}) and Eq.(\ref{oaregSUSYpart2}), we obtain
\begin{equation}
W^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a_{-1})-2\omega
+2P_{n}^{\prime }(x;\omega ,a)\underset{iso}{\equiv }V(x;\omega ,a).
\end{equation}
Since $T_{1}(x;\omega ,a)=R_{1}(x;\omega ,a-2)$, the first rational
extension of this family has the same functional form as the first
rational extension of the $L0$ family.
For $n=2$, we have
\begin{equation}
T_{2}(x;\omega ,a)=\frac{-4\omega x\left( \omega x^{2}-\left( 2a-3\right)
\right) }{\left( \omega x^{2}-\left( 2a-3\right) \right) ^{2}+2(2a-3)},
\end{equation}
which is regular if $a\geq 2$ ($l\geq 1$) and corresponds to Quesne's \cite{quesne} second rational extension of the $L3$ series.
If we redefine $a\rightarrow 2l+1/2+a$,
\begin{equation}
T_{2l}(x;\omega ,2l+a+1/2)=-\left( \log \left( \mathit{L}_{2l}^{-\left( a+2l\right)
}\left( -\omega x^{2}/2\right) \right) \right) ^{\prime }
\end{equation}
and $W^{\left( 2l\right) }(x;\omega ,a+2l+1/2)$ are regular on the positive
half line for $a>0$.
\section{Shape invariance properties of the extensions of the isotonic
oscillator}
As observed initially by Quesne \cite{quesne1,quesne} on the $n=1$ and $n=2$
examples, the rationally extended potentials of the $L1$ and $L2$ series
inherit the shape invariance properties of the isotonic potential.
Several general proofs of this result have been recently proposed
\cite{odake2,sasaki2}, in particular by Gomez-Ullate et al \cite{gomez5}. In the
present approach, these shape invariance properties can be derived in a very
direct and transparent manner.
\subsection{Shape invariance of the extended potentials of the $L1$ series}
The superpartner of a potential of the $L1$ series $V^{\left( n\right)
}(x;\omega ,a)=V(x;\omega ,a)+2v_{n}^{\prime }(x;\omega ,a)$ is defined as
\begin{equation}
\widetilde{V}^{\left( n\right) }(x;\omega ,a)=V^{\left( n\right) }(x;\omega
,a)+2w_{0}^{\left( n\right) \prime }(x;\omega ,a),\ n\geq 0,
\label{SUSYpartL11}
\end{equation}
$w_{0}^{\left( n\right) }(x;\omega ,a)$ (see Eq.(\ref{backL1})) being the RS
function associated to the ground level of $V^{\left( n\right) }$
($E_{0}\left( \omega \right) =0$).
We then have
\begin{eqnarray}
\widetilde{V}^{\left( n\right) }(x;\omega ,a) &=&V^{\left( n\right)
}(x;\omega ,a)-2v_{n}^{\prime }(x;\omega ,a)-2\left( \frac{E_{-\left(
n+a+1/2\right) }(\omega )}{v_{n}(x;\omega ,a)-w_{0}(x;\omega ,a)}\right)
^{\prime } \label{SUSYpartL12} \\
&=&V(x;\omega ,a)-2\left( \frac{E_{-\left( n+a+1/2\right) }(\omega )}{v_{n}(x;\omega ,a)-w_{0}(x;\omega ,a)}\right) ^{\prime }. \notag
\end{eqnarray}
Using Eq(\ref{oregRSfct2}), the shape invariance property of $V(x;\omega ,a)$
in Eq.(\ref{VSIP1}) can also be formulated as
\begin{equation}
V(x;\omega ,a)+2v_{0}^{\prime }(x;\omega ,a)=V(x;\omega ,a_{1}).
\label{VSIP2}
\end{equation}
Inserting Eq(\ref{VSIP2}) in Eq(\ref{SUSYpartL12}), we obtain
\begin{eqnarray}
\widetilde{V}^{\left( n\right) }(x;\omega ,a) &=&V(x;\omega ,a_{1})-2\left(
\frac{E_{-\left( n+a+1/2\right) }(\omega )}{v_{n}(x;\omega
,a)-w_{0}(x;\omega ,a)}+v_{0}(x;\omega ,a)\right) ^{\prime }
\label{SUSYpartL13} \\
&=&V^{\left( n\right) }(x;\omega ,a_{1})-2\left( \Delta _{n}^{1}\right)
^{\prime }, \notag
\end{eqnarray}
where
\begin{equation}
\Delta _{n}^{1}=\frac{E_{-\left( n+a+1/2\right) }(\omega )}{v_{n}(x;\omega
,a)-w_{0}(x;\omega ,a)}+v_{0}(x;\omega ,a)+v_{n}(x;\omega ,a_{1}).
\label{delt1}
\end{equation}
As an example, consider the special case $n=1$. Using Eq(\ref{oregRSfct3}),
we can write
\begin{eqnarray}
\Delta _{1}^{1} &=&-2\omega \left( a+3/2\right) \frac{1}{\frac{E_{1}(\omega )}{v_{0}(x;\omega ,a)+v_{0}(x;\omega ,a_{1})}-\omega x}-\omega x-\frac{2a+1}{x}+\frac{E_{1}(\omega )}{v_{0}(x;\omega ,a_{1})+v_{0}(x;\omega ,a_{2})} \\
&=&\left( 2a+3\right) \frac{\omega x+\frac{2a+1}{x}}{\omega x^{2}+\left(
2a+3\right) }-\omega x-\frac{2a+1}{x}-\frac{2\omega x}{\omega x^{2}+\left(
2a+3\right) }=-\omega x. \notag
\end{eqnarray}
We obtain
\begin{equation}
\widetilde{V}^{\left( 1\right) }(x;\omega ,a)=V^{\left( 1\right) }(x;\omega
,a_{1})+2\omega ,
\end{equation}
which implies that $V^{\left( 1\right) }(x;\omega ,a)$ has the same shape
invariance properties as $V(x;\omega ,a)$.
More generally, using Eq(\ref{oregRSfct3}) and defining $z=-\omega x^{2}/2$
and $\alpha =a+1/2$, we obtain
\begin{eqnarray}
\Delta _{n}^{1} &=&E_{-\left( a+n+1/2\right) }(\omega )\frac{1}{Q_{n}(x;\omega ,a)-\omega x}+\left( v_{0}(x;\omega ,a)+v_{0}(x;\omega
,a_{1})\right) +Q_{n}(x;\omega ,a_{1}) \label{delt1n} \\
&=&\frac{2\alpha +2}{x}\frac{\mathit{L}_{n}^{\left( \alpha -1\right) }\left(
z\right) }{\mathit{L}_{n-1}^{\left( \alpha \right) }\left( z\right) +\mathit{L}_{n}^{\left( \alpha -1\right) }\left( z\right) }-\omega x\frac{\mathit{L}_{n-1}^{\left( \alpha +1\right) }\left( z\right) +\mathit{L}_{n}^{\left(
\alpha \right) }\left( z\right) }{\mathit{L}_{n}^{\left( \alpha \right)
}\left( z\right) }-\frac{2\alpha }{x}. \notag
\end{eqnarray}
But the generalized Laguerre polynomials satisfy the identity
\begin{equation}
\mathit{L}_{n}^{\left( \alpha \right) }\left( z\right) +\mathit{L}_{n-1}^{\left( \alpha +1\right) }\left( z\right) =\mathit{L}_{n}^{\left(
\alpha +1\right) }\left( z\right) , \label{recLag1}
\end{equation}
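As an elementary check, for $n=1$ one has $\mathit{L}_{1}^{\left( \alpha \right) }\left( z\right) =1+\alpha -z$ and $\mathit{L}_{0}^{\left( \alpha +1\right) }\left( z\right) =1$, whose sum is indeed $2+\alpha -z=\mathit{L}_{1}^{\left( \alpha +1\right) }\left( z\right) $.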
Inserting Eq(\ref{recLag1}) into Eq(\ref{delt1n}) gives
\begin{equation}
\Delta _{n}^{1}=-\omega x\frac{\left( \alpha +n\right) \mathit{L}_{n}^{\left( \alpha -1\right) }\left( z\right) +z\mathit{L}_{n}^{\left(
\alpha +1\right) }\left( z\right) -\alpha \mathit{L}_{n}^{\left( \alpha
\right) }\left( z\right) }{z\mathit{L}_{n}^{\left( \alpha \right) }\left(
z\right) }.
\end{equation}
The other fundamental recurrence
\begin{equation}
\left( n+\alpha \right) \mathit{L}_{n-1}^{\left( \alpha \right) }\left(
z\right) -z\mathit{L}_{n}^{\left( \alpha +1\right) }\left( z\right) -\left(
n-z\right) \mathit{L}_{n}^{\left( \alpha \right) }\left( z\right) =0,
\label{recLag2}
\end{equation}
combined with Eq(\ref{recLag1}), then gives directly
\begin{equation}
\Delta _{n}^{1}=-\omega x,
\end{equation}
that is,
\begin{equation}
\widetilde{V}^{\left( n\right) }(x;\omega ,a)=V^{\left( n\right) }(x;\omega
,a_{1})+2\omega . \label{SUSYpartL14}
\end{equation}
Consequently $V^{\left( n\right) }(x;\omega ,a)$ inherits the shape
invariance properties of $V(x;\omega ,a)$ for every value of $n$.
\subsection{Shape invariance of the extended potentials of the $L2$ series}
The superpartner of a potential $U^{\left( n\right) }(x;\omega
,a)=V(x;\omega ,a)+2u_{n}^{\prime }(x;\omega ,a)$ of the $L2$ series is
defined as
\begin{equation}
\widetilde{U}^{\left( n\right) }(x;\omega ,a)=U^{\left( n\right) }(x;\omega
,a)+2w_{0}^{\left( n\right) \prime }(x;\omega ,a),\ n\geq 0,
\label{SUSYpartL21}
\end{equation}
$w_{0}^{\left( n\right) }(x;\omega ,a)$ (see Eq.(\ref{backL2})) being the RS
function associated to the ground level of $U^{\left( n\right) }$. Then
\begin{equation}
\widetilde{U}^{\left( n\right) }(x;\omega ,a)=V(x;\omega ,a)-2\left( \frac{E_{n+1/2-a}(\omega )}{u_{n}(x;\omega ,a)-w_{0}(x;\omega ,a)}\right) ^{\prime
}. \label{SUSYpartL22}
\end{equation}
Using, as before, the shape invariance properties of $V(x;\omega ,a)$, this
gives
\begin{eqnarray}
\widetilde{U}^{\left( n\right) }(x;\omega ,a) &=&V(x;\omega ,a_{1})-2\left(
\frac{E_{n+1/2-a}(\omega )}{u_{n}(x;\omega ,a)-w_{0}(x;\omega ,a)}+v_{0}(x;\omega ,a)\right) ^{\prime } \label{SUSYpartL23} \\
&=&U^{\left( n\right) }(x;\omega ,a_{1})-2\left( \Delta _{n}^{2}\right)
^{\prime }, \notag
\end{eqnarray}
where
\begin{equation}
\Delta _{n}^{2}=E_{n+1/2-a}(\omega )\frac{1}{u_{n}(x;\omega
,a)-w_{0}(x;\omega ,a)}+v_{0}(x;\omega ,a)+u_{n}(x;\omega ,a_{1}).
\label{delt2}
\end{equation}
Using Eq(\ref{oregRSfct3}) and defining $z=\omega x^{2}/2$ and $\alpha
=1/2-a$, this becomes
\begin{equation}
\Delta _{n}^{2}=\frac{\left( 2n-2a+1\right) \omega }{P_{n}(x;\omega ,a)+\frac{2a-1}{x}}+P_{n}(x;\omega ,a_{1})=\omega x\left( \frac{\mathit{L}_{n}^{\left( \alpha \right) }\left( z\right) }{\mathit{L}_{n-1}^{\left(
\alpha -1\right) }\left( z\right) }+\frac{(n+\alpha )\mathit{L}_{n}^{\left(
\alpha \right) }\left( z\right) }{-\alpha \mathit{L}_{n}^{\left( \alpha
\right) }\left( z\right) +z\mathit{L}_{n-1}^{\left( \alpha +1\right) }\left(
z\right) }\right) , \label{delt2n}
\end{equation}
But the generalized Laguerre polynomials satisfy the identity
\begin{equation}
z\mathit{L}_{n-1}^{\left( \alpha +1\right) }\left( z\right) =(n+\alpha )\mathit{L}_{n-1}^{\left( \alpha \right) }\left( z\right) -n\mathit{L}_{n}^{\left( \alpha \right) }\left( z\right) , \label{recLag4}
\end{equation}
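For instance, at $n=1$ this identity reduces to $z\mathit{L}_{0}^{\left( \alpha +1\right) }\left( z\right) =z$ on the left-hand side and to $(1+\alpha )\mathit{L}_{0}^{\left( \alpha \right) }\left( z\right) -\mathit{L}_{1}^{\left( \alpha \right) }\left( z\right) =(1+\alpha )-(1+\alpha -z)=z$ on the right-hand side.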
Combining Eq(\ref{recLag4}) with Eq(\ref{recLag1}) then gives
\begin{equation}
\Delta _{n}^{2}=-\omega x.
\end{equation}
Then $U^{\left( n\right) }(x;\omega ,a)$ has the same shape invariance
properties as $V(x;\omega ,a)$ for every value of $n$, that is
\begin{equation}
\widetilde{U}^{\left( n\right) }(x;\omega ,a)=U^{\left( n\right) }(x;\omega
,a_{1})+2\omega . \label{SUSYpartL24}
\end{equation}
\subsection{SUSY partners of the $L3$ series extended potentials}
In this case, the superpartner of the extended potential $W^{\left( n\right)
}(x;\omega ,a)=V(x;\omega ,a)+2r_{n}^{\prime }(x;\omega ,a)$ is defined as
\begin{equation}
\widetilde{W}^{\left( n\right) }(x;\omega ,a)=W^{\left( n\right) }(x;\omega
,a)+2\left( -r_{n}^{\prime }(x;\omega ,a)\right) =V(x;\omega ,a),\ n\geq 1
\label{SUSYpartL31}
\end{equation}
since $-r_{n}(x;\omega ,a)$ is the RS function associated to the ground
level of $W^{\left( n\right) }$.
The SUSY partner of $W^{\left( n\right) }(x;\omega ,a)$ is nothing but the
initial potential $V(x;\omega ,a)$ itself and the DBT $A\left( r_{n}\right) $
is the reciprocal of a SUSY partnership.
\section{Conclusion and perspectives}
In this article, a new method to generate the regular rational extensions of
the isotonic oscillator associated to the $L1$ and $L2$ families of
exceptional Laguerre polynomials is presented. It is based on first order
Darboux-B\"{a}cklund Transformations which are built from excited states RS
functions regularized by using specific symmetries of the isotonic
potential. Starting from this primary shape invariant potential and using
the combination of these symmetries and DBT (as covariance transformations),
we generate four towers of secondary potentials, the four series $L0$, $L1$,
$L2$ and $L3$. Among them, the potentials belonging to the $L1$ and $L2$
series are regular as well as half of the potentials of the $L3$ series, the
other ones being singular on the positive half line. The secondary
potentials of the $L1$ and $L2$ series inherit the same translational
shape invariance properties as the primary isotonic potential.
Once these new potentials are obtained, it is still possible to use the
Krein-Adler theorem \cite{krein,adler} and its subsequent extension obtained
by Samsonov \cite{samsonov2} to generate other secondary potentials by
applying some particular $n^{th}$ order DBT.
A similar study can be conducted for the other second category potentials
(Darboux-P\"{o}schl-Teller or Scarf hyperbolic and trigonometric) but also
for the first category potentials. The latter \cite{grandati} include the
well known case of the one-dimensional harmonic oscillator
\cite{shnol',samsonov,tkachuk,gomez,fellows,grandati2} but also the Morse
potential (the regular algebraic deformations of which have already been
obtained by Gomez-Ullate et al \cite{gomez}), the effective radial
Kepler-Coulomb potential and the Rosen-Morse potentials. This work is in
progress and will be the object of a forthcoming paper.
\section{Acknowledgments}
I would like to thank A.\ B\'{e}rard, R.\ Milson and C.\ Quesne for
stimulating discussions and very interesting suggestions.
|
2,869,038,156,798 | arxiv | \section{Introduction}
\label{sec:intro}
Anomaly detection has received widespread attention in diverse domains, such as industrial defect inspection \cite{MVTec, PaDiM, PatchSVDD, SPADE} and medical lesion detection \cite{KDAD, DRA}. Most previous anomaly detection methods \cite{GANomaly, ALAD, AnoGAN, fast-AnoGAN, GAN1, GAN2, RandNet, SSIM, AutoEncoder2, AutoEncoder3, deepSVDD, FCDD, PatchSVDD, Geom, DeepKNN, SPADE, PaDiM, PaDiM1} are unsupervised and pay much attention to normal samples while overlooking the anomalies, because it is difficult to collect all kinds of anomalies. However, learning only from normal samples may leave the discriminative performance of the model insufficient. Without sufficient discriminability, the detector's generalization ability is limited, and many false negatives and false positives can be induced, especially for unseen anomalies. As illustrated in Figure~\ref{fig:activation}(a), without anomalies, the decision boundaries are generally implicit and are not discriminative enough. The \emph{insufficient discriminability} issue is common in unsupervised anomaly detection due to the lack of knowledge about anomalies. To address this issue, anomalies should be exploited as much as possible; fortunately, a few labeled anomalies are usually available in real-world applications.
Recently, methods which can be called semi-supervised AD \cite{SAD, HSC} or AD with outlier exposure \cite{OE, OE2} have begun to focus on such available anomalies. These methods attempt to learn knowledge from anomalies by one-class classification with anomalies as negative samples \cite{SAD, HSC} or by supervised binary classification \cite{OE, OE2}. They show that the detection performance can be improved significantly even with a few anomalies. However, the known anomalies cannot represent all kinds of anomalies. These methods may be biased by the known anomalies and fail to generalize to unseen anomalies. Thus, our purpose in this paper is to learn a more discriminative anomaly detection model by exploiting a few anomalies effectively, with the objective of improving detection performance on known anomalies and generalizing well to unseen anomalies.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{motivation.pdf}
\caption{Conceptual illustration of our method. (a) The ambiguous anomaly score distribution with implicit decision boundaries, the boundaries are far from the ideal decision boundary. The left boundary will cause many false negatives, and the right boundary will cause many false positives. (b) The normalized normal feature distribution, and we can find explicit boundaries close to the normal distribution. (c) The unambiguous anomaly score distribution guided by explicit separating boundaries.}
\label{fig:activation}
\end{figure}
To achieve the above objective, we propose a novel and more discriminative anomaly detection model: \textbf{B}oundary \textbf{G}uided \textbf{A}nomaly \textbf{D}etection with a \textbf{F}ew \textbf{A}bnormal \textbf{S}amples, termed as \textbf{BGAD-FAS}. Our model has two core designs as illustrated in Figure\ref{fig:activation}: explicit boundary generating and boundary guided optimizing.
$\bullet$ \textbf{Explicit Boundary Generating.} We first employ normalizing flow to learn normalized normal feature distribution (see Figure\ref{fig:activation}(b)), and obtain an explicit separating boundary close to the distribution edge. The obtained explicit and compact separating boundary only relies on the normal distribution and has no relation with the abnormal samples, thus the bias problem caused by insufficient known anomalies can be mitigated. The detailed description is in sec\ref{sec:distribution_normalizing} and \ref{sec:boundary_selection}.
$\bullet$ \textbf{Boundary Guided Optimizing.} After obtaining the explicit separating boundary, we then propose a boundary guided semi-push-pull (BG-SPP) loss to exploit anomalies for learning more discriminative features. With the BG-SPP loss, only normal features whose log-likelihoods are smaller than the boundary are pulled together to form a more compact normal feature distribution (semi-pull); while abnormal features whose log-likelihoods are larger than the boundary are pushed apart from the boundary (semi-push). The detailed description is in sec\ref{sec:bg_sppc}.
In this way, our model can form a more explicit and discriminative separating boundary and also a reliable margin region for distinguishing anomalies more effectively (see Figure~\ref{fig:activation}(c)). Furthermore, the rarity of anomalies is a critical problem and may make feature learning inefficient. We thus propose RandAugmented CutPaste, which can simulate anomalies by creating local irregularities in normal samples, to tackle the rarity challenge.
In summary, the main contributions of this work are three-fold:
1. We propose a novel anomaly detection model termed as BGAD-FAS, in which an explicit separating boundary is found to guide the further discriminative feature learning.
2. To exploit a few existing anomalies effectively, we propose a BG-SPP loss to pull together normal features while pushing abnormal features apart from the separating boundary, thus more discriminative features can be learned.
3. We achieve new state-of-the-art results on the widely-used MVTecAD benchmark, with the performance of 98.8\% image-level AUROC and 99.4\% pixel-level AUROC.
\section{Overview and Notations}
\label{sec:overview}
Different from the general unsupervised anomaly detection setting, the training set in this paper is composed of normal images and a few existing anomalies, denoted as $\mathcal{I}_{train}=\mathcal{I}_n \cup \mathcal{I}_a$, where $\mathcal{I}_n = \{I_i\}_{i=1}^{N_0}$ and $\mathcal{I}_a = \{I_j\}_{j=1}^{M_0}$ indicate the collection of normal samples and abnormal samples, respectively.
Figure \ref{fig:framework} overviews our proposed method. The model consists of four parts: Feature Extractor $f: \mathcal{I} \rightarrow \mathcal{X}$, Conditional Normalizing Flow (CNFlow) $\varphi_\theta: \mathcal{X} \rightarrow \mathcal{Z}$, Explicit Boundary Generating, and Boundary Guided Optimizing. We refer to the features extracted by the feature extractor as input features for CNFlow, and denote these features as $\mathcal{X} = \mathcal{X}_n \cup \mathcal{X}_a$, where $\mathcal{X}_n=\{x_i\}_{i=1}^N$ and $\mathcal{X}_a=\{x_j\}_{j=1}^M$ are normal and abnormal features, respectively. The training procedure can be divided into two phases as shown in Figure \ref{fig:framework}: explicit boundary generating and boundary guided optimizing.
In the testing procedure, the CNFlow can assign corresponding log-likelihoods for input features, and the log-likelihoods can be converted to anomaly scores (see Anomaly Scoring in sec\ref{sec:boundary_selection}). We denote our \textbf{B}oundary \textbf{G}uided \textbf{A}nomaly \textbf{D}etection model without and with a \textbf{F}ew \textbf{A}bnormal \textbf{S}amples as \textbf{BGAD} and \textbf{BGAD-FAS}, respectively.
\section{Our Proposed Method}
\label{sec:method}
\subsection{Learning Normal Feature Distribution by Normalizing Flow}
\label{sec:distribution_normalizing}
In order to find an anomaly-independent separating boundary, a normalized distribution of normal features should first be learned. A normalizing flow is employed to learn the normal feature distribution in our method.
\textbf{Conditional Normalizing Flow.} Formally, we denote $\varphi_\theta: \mathcal{X} \in \mathbb{R}^d \rightarrow \mathcal{Z} \in \mathbb{R}^d$ as our normalizing flow. It is built as a composition of coupling layers \cite{NICE} such that $\varphi_\theta=\varphi_L\circ\dots\circ\varphi_2\circ\varphi_1$, where $\theta$ denotes the trainable parameters and $L$ is the total number of layers. Defining the $d$-dimensional input and output features of the normalizing flow as $y_0=x\in \mathcal{X}$ and $y_L=z \in \mathcal{Z}$, the latents can be computed as $y_l=\varphi_l(y_{l-1})$, where $\{y_l\}_{l=1}^{L-1}$ are the intermediate outputs. The input distribution $p_\theta(x)$ estimated by the model can be calculated according to the change of variables formula as follows \cite{NICE, realNVP}:
\begin{equation}
\label{eq:log_likelihood1}
{\rm log}p_{\theta}(x)={\rm log}p_\mathcal{Z}(\varphi_\theta(x))+\sum\nolimits_{l=1}^{L}{\rm log}\big|{\rm det}J_{\varphi_l}(y_{l-1})\big|
\end{equation}
where $J_{\varphi_l}(y_{l-1}) = \frac{\partial \varphi_l(y_{l-1})}{\partial y_{l-1}}$ is the Jacobian matrix of the transformation $\varphi_l$ at $y_{l-1}$. The normalizing flow can approximate the feature distribution $p_{\mathcal{X}}$ with $p_\theta(x)$. The set of parameters $\theta$ is obtained by optimizing the log-likelihoods across the training distribution $p_{\mathcal{X}}$:
\begin{equation}
\label{eq:maximum_optimization}
\theta^* = \mathop{{\rm argmin}}\limits_{\theta \in \Theta}\mathbb{E}_{x \sim p_{\mathcal{X}}}[-{\rm log}p_\theta(x)]
\end{equation}
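For illustration, a single coupling layer $\varphi_l$ can be realized as an affine coupling transformation in the spirit of \cite{realNVP}. The following PyTorch-style sketch is only illustrative (the layer width and other details are assumptions and not our exact implementation); it shows how the per-layer log-determinant in Eq.(\ref{eq:log_likelihood1}) is obtained:
\begin{verbatim}
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One coupling layer: the first half of the feature vector is left
    # unchanged and predicts a scale s and translation t for the second
    # half, so that log|det J| = sum(s) for each sample.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)               # keep the scale bounded for stability
        z2 = y2 * torch.exp(s) + t      # transform the second half
        log_det = s.sum(dim=1)          # per-sample log|det J| of this layer
        return torch.cat([y1, z2], dim=1), log_det
\end{verbatim}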
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{framework.pdf} \\
\caption{Overview of our model. The extracted feature maps are transformed into a latent space using a conditional normalizing flow (CNFlow), which is then used to generate an anomaly score for each input feature. The training procedure can be divided into two phases: explicit boundary generating and boundary guided optimizing. In the first phase, only normal samples and ML loss (Eq.\ref{eq:ml_loss}) are utilized for model training to obtain a relatively stable normal log-likelihood distribution, and then one explicit separating boundary can be obtained based on the learned distribution. In the second phase, with the explicit boundary and the BG-SPP loss (Eq.\ref{eq:bg_sppc}), both normal and abnormal samples are utilized for model training, so as to learn more discriminative features.}
\label{fig:framework}
\end{figure}
The coupling layers in normalizing flow are generally implemented by fully connected layers, so the spatial position relationship will be destroyed because the 2D feature maps are flattened to 1D. To preserve the positional information, we add 2D-aware position embeddings to the feature maps \cite{CFLOW}.
\textbf{Normalizing Normal Feature Distribution.} We then employ normalizing flow to learn normal feature distribution by maximum likelihood optimization. The latent variable distribution $p_\mathcal{Z}(z), z \in \mathbb{R}^d$ can generally be assumed to obey the multivariate Gaussian distribution \cite{CFLOW, DifferNet} as follows:
\begin{equation}
\label{eq:gaussian}
p_\mathcal{Z}(z) = (2\pi)^{-\frac{d}{2}}{\rm det}(\Sigma^{-\frac{1}{2}}){\rm e}^{-\frac{1}{2}(z-\mu)^T\Sigma^{-1}(z-\mu)}
\end{equation}
When training normal features, the latent variables for normal features can be assumed to obey $\mathcal{N}(0, \mathbf{I})$ for further simplicity. By replacing $p_\mathcal{Z}(z)=(2\pi)^{-\frac{d}{2}}{\rm e}^{-\frac{1}{2}z^Tz}$ in formula\ref{eq:log_likelihood1}, the optimization objective in the formula \ref{eq:maximum_optimization} can be rewritten as:
\begin{equation}
\theta^* = \mathop{{\rm argmin}}\limits_{\theta \in \Theta}\mathbb{E}_{x \sim p_\mathcal{X}}\Big[\frac{1}{2}\varphi_\theta(x)^T\varphi_\theta(x)-\sum\nolimits_{l=1}^{L}{\rm log}\big|{\rm det}J_{\varphi_l}(y_{l-1})|+\frac{d}{2}{\rm log}(2\pi)\Big]
\end{equation}
The maximum likelihood loss function for optimizing normal features can be defined as:
\begin{equation}
\label{eq:ml_loss}
\mathcal{L}_{ml}= \mathbb{E}_{x \in \mathcal{X}_n}\Big[\frac{1}{2}\varphi_\theta(x)^T\varphi_\theta(x)-\sum\nolimits_{l=1}^{L}{\rm log}\big|{\rm det}J_{\varphi_l}(y_{l-1})|+\frac{d}{2}{\rm log}(2\pi)\Big]
\end{equation}
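In practice, the loss in Eq.(\ref{eq:ml_loss}) can be computed directly from the flow output $z=\varphi_\theta(x)$ and the accumulated log-determinants of the coupling layers. A minimal sketch (with assumed variable names, for illustration only) is:
\begin{verbatim}
import math
import torch

def ml_loss(z, log_det_sum):
    # z: (B, d) latent features; log_det_sum: (B,) accumulated log|det J|
    d = z.shape[1]
    nll = 0.5 * (z ** 2).sum(dim=1) - log_det_sum \
          + 0.5 * d * math.log(2.0 * math.pi)
    return nll.mean()
\end{verbatim}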
\subsection{Finding an Explicit and Compact Separating Boundary}
\label{sec:boundary_selection}
With the learned normal distribution, one explicit and compact separating boundary can be found, and then be used as the guidance for further contrastive learning.
\textbf{Anomaly Scoring.} The advantage of normalizing flow is that we can estimate the exact log-likelihood for each input feature $x$ as follows:
\begin{equation}
\label{eq:log_likelihood2}
{\rm log}p(x) = -\frac{1}{2}\varphi_\theta(x)^T\varphi_\theta(x)+\sum\nolimits_{l=1}^{L}{\rm log}\big|{\rm det}J_{\varphi_l}(y_{l-1})\big|-\frac{d}{2}{\rm log}(2\pi)
\end{equation}
where ${\rm log}p(x)$ denotes the log-likelihood of $x$. With the estimated log-likelihood, we can convert it to a likelihood via the exponential function. As we maximize log-likelihoods for normal features in Eq.(\ref{eq:ml_loss}), the likelihood can directly measure the normality. Thus, we can generate the anomaly score by subtracting each likelihood value from the maximum likelihood as follows:
\begin{equation}
s(x) = \mathop{{\rm max}}\limits_{x^\prime \in \mathcal{X}}({\rm exp}({\rm log}p(x^\prime))) - {\rm exp}({\rm log}p(x))
\end{equation}
where $s(x)$ denotes the anomaly score of $x$. Because the exponential function is monotonic, the log-likelihood can be equivalently converted to the anomaly score. Thus, the separating boundary in the log-likelihood distribution is equivalent to the separating boundary in the anomaly score distribution.
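A minimal sketch of this conversion from log-likelihoods to anomaly scores (variable names are illustrative) is:
\begin{verbatim}
import torch

def anomaly_scores(log_p):
    # log_p: per-feature log-likelihoods estimated by CNFlow
    likelihood = torch.exp(log_p)
    return likelihood.max() - likelihood   # larger score = more anomalous
\end{verbatim}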
\textbf{Finding Explicit Separating Boundaries.} We then obtain separating boundaries based on the log-likelihood distribution. We build the boundaries through the following two steps:
\textbf{1. Building the normal log-likelihood distribution.} We can employ the log-likelihood estimation formulation in Eq.(\ref{eq:log_likelihood2}) to obtain all log-likelihoods of normal features $\mathcal{P}_n=\{{\rm log}p_i\}_{i=1}^N$. The $\mathcal{P}_n$ can be used to approximate the log-likelihood distribution of all normal features.
\textbf{2. Finding explicit normal and abnormal boundaries.} Considering that normal features are relatively sufficient, the closer the separating boundary is to the normal log-likelihood distribution, the more conducive it is to distinguishing the anomalies. However, if we set the boundary too close to the distribution center, the samples in the tail of the normal distribution are more likely to be misclassified as abnormal. Thus, we define a position hyperparameter $\beta$ to control the distance from the boundary to the center. We select the $\beta$-th percentile (\emph{e.g.} $\beta=5$) of the sorted normal log-likelihood distribution as the normal boundary $b_n$, which also means that the upper bound of the false positive rate on normal samples is $\beta\%$. To make the feature learning more robust, we further introduce a margin hyperparameter $\tau$ and define an abnormal boundary $b_a = b_n - \tau$. The boundaries are shown in Figure~\ref{fig:framework}.
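The boundary selection itself is a simple percentile computation over the normal log-likelihoods; a minimal sketch (with assumed variable names) is:
\begin{verbatim}
import numpy as np

def find_boundaries(normal_log_p, beta=5.0, tau=0.1):
    # normal_log_p: log-likelihoods of normal training features
    # beta: percentile in [0, 100]; tau: margin between the two boundaries
    b_n = np.percentile(normal_log_p, beta)   # normal boundary
    b_a = b_n - tau                           # abnormal boundary
    return b_n, b_a
\end{verbatim}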
The obtained explicit and compact (close to the normal distribution edge) separating boundary only relies on the normal distribution and has no relation with the abnormal samples, thus our method can avoid the bias problem caused by insufficient known anomalies (as validated in sec\ref{sec:ablation}).
\subsection{Learning More Discriminative Features by Boundary Guided Semi-Push-Pull}
\label{sec:bg_sppc}
With the explicit normal and abnormal boundaries, we propose a boundary guided semi-push-pull (BG-SPP) loss for more discriminative feature learning. Our BG-SPP loss utilizes the boundary $b_n$ as the contrastive object (boundary guided), and only pulls together normal features whose log-likelihoods are smaller than $b_n$ (semi-pull), while pushing abnormal features whose log-likelihoods are larger than $b_a$ apart from $b_n$, at least beyond the margin $\tau$ (semi-push). The formulation of the BG-SPP loss is defined as:
\begin{align}
\label{eq:bg_sppc}
\mathcal{L}_{bg-spp}^0 &= ||{\rm min}((\mathcal{P}_n-\mathcal{B}_n),0)||_0 + ||{\rm max}((\mathcal{P}_a-\mathcal{B}_n + \mathcal{T}),0)||_0 \nonumber \\
&= ||{\rm min}((\mathcal{P}_n-\mathcal{B}_n),0)||_0 + ||{\rm max}((\mathcal{P}_a-\mathcal{B}_a),0)||_0
\end{align}
where $\mathcal{B}_n=b_n\cdot\mathbf{1}_{N}$ and $\mathcal{B}_a=\mathcal{B}_n - \mathcal{T} =(b_n-\tau)\cdot\mathbf{1}_{M}$ are boundary parameters. We define the BG-SPP loss in an $\ell_0$ norm based formulation to encourage a sparse log-likelihood distribution in the margin region $(b_a,b_n)$, because any log-likelihood ${\rm log}p_i$ falling into the margin region $(b_a,b_n)$ will increase the value of $\mathcal{L}_{bg-spp}^0$. Since the log-likelihoods can range over $(-\infty, 0]$, the large region makes it difficult to select the margin hyperparameter $\tau$. Thus, we define a large enough normalizer $\alpha_n = -\alpha \cdot b_n$ (\emph{e.g.} $\alpha=5$) and employ it to normalize the log-likelihoods to the range $[-1,0]$. We also note that extremely small log-likelihoods can be excluded from the BG-SPP loss in Eq.(\ref{eq:bg_sppc}), as these log-likelihoods can easily be classified as anomalies. Therefore, minimizing the BG-SPP loss will encourage all log-likelihoods $\mathcal{P}$ to lie in the regions $[-1,b_a]$ or $[b_n,0]$. Even without anomalies, our model can also be optimized by the first part of the BG-SPP loss. In the second training phase, the objective function is as follows:
\begin{equation}
\label{eq:ml_bg_sppc}
\mathcal{L} = \mathcal{L}_{ml} + \lambda \mathcal{L}_{bg-spp}^0
\end{equation}
Minimizing the objective function in Eq.(\ref{eq:ml_bg_sppc}) is a classical $\ell_0$ norm optimization problem which is usually non-continuous and non-convex. For the original $\ell_0$ norm based formulation, we have that ${\rm min}((\mathcal{P}_n-\mathcal{B}_n),0) \in [-1, 0]^{N}$ and ${\rm max}((\mathcal{P}_a-\mathcal{B}_a),0) \in [0, 1]^{M}$. As the $\ell_1$ norm is a convex envelope of $\ell_0$ norm in the unit hypercube $[-1,1]^{N+M}$, we can convert the $\ell_0$ norm based form to the $\ell_1$ norm based form for easier optimization:
\begin{equation}
\mathcal{L}_{bg-spp}^1 = \sum_{i=1}^{N}|{\rm min}(({\rm log}p_i-b_n),0)| + \sum_{j=1}^{M}|{\rm max}(({\rm log}p_j - b_n + \tau),0)|
\end{equation}
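A minimal sketch of this $\ell_1$ form of the BG-SPP loss (illustrative variable names; the log-likelihoods are assumed to be already normalized as described above) is:
\begin{verbatim}
import torch

def bg_spp_loss(log_p_normal, log_p_abnormal, b_n, tau):
    # Semi-pull: penalize normal log-likelihoods that fall below b_n.
    pull = torch.clamp(b_n - log_p_normal, min=0).sum()
    # Semi-push: penalize abnormal log-likelihoods above b_a = b_n - tau.
    push = torch.clamp(log_p_abnormal - (b_n - tau), min=0).sum()
    return pull + push
\end{verbatim}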
We further provide an error bound analysis for the learning objective in Appendix \ref{sec:appendix_th1}.
\subsection{RandAugmented CutPaste}
In this subsection, we propose RandAugmented CutPaste (RACP), which can simulate anomalies by randomly creating local irregularities, to improve the quantity as well as the diversity of irregular patterns. The whole procedure is illustrated in Appendix \ref{sec:appendix_rcp} and summarized as follows (a minimal code sketch of the cut-and-paste step is given after the list):
1. Adapted from RandAugment \cite{RandAugment}, we first select $K$ available image transformations and probabilities of applying each transformation to construct an augmentation set $\mathcal{T} := \{T_1,\dots,T_K|T_k : \mathcal{I} \rightarrow \mathcal{I}\}$: \{AutoContrast, Equalize, Rotate, Posterize, Solarize, Brightness, Sharpness, Translate, Shear\}.
2. Randomly selecting an augmentation subset $T_{RS} \sim \mathcal{T}$ containing $S$ transformations to augment an abnormal sample: $I_a^{\prime} = T_{RS}(I_a), I_a \in \mathcal{I}_a$.
3. Cutting the anomalous regions of the augmented abnormal sample: $\mathcal{R}_a = \mathop{Cut}(I_a^{\prime})$.
4. Pasting the cropped anomalous regions back onto a random normal sample at a random location: $I_a^{\prime \prime} = \mathop{Paste}(I_n, \mathcal{R}_a), I_n \in \mathcal{I}_n$.
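A minimal sketch of the cut-and-paste operation in steps 3--4 is given below (the augmented abnormal image and a binary anomaly mask are assumed to be available; all names are illustrative):
\begin{verbatim}
import numpy as np

def cut_paste(normal_img, abnormal_img, mask, rng=np.random):
    # Cut the bounding box of the anomalous region (binary mask) out of the
    # augmented abnormal image and paste it onto the normal image at a
    # random location.
    ys, xs = np.where(mask > 0)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = abnormal_img[y0:y1, x0:x1]
    h, w = patch.shape[:2]
    H, W = normal_img.shape[:2]
    ty = rng.randint(0, H - h + 1)
    tx = rng.randint(0, W - w + 1)
    out = normal_img.copy()
    out[ty:ty + h, tx:tx + w] = patch
    return out
\end{verbatim}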
RACP can mitigate the rarity problem at the image level; at the loss level, we further propose Asymmetric Focal Weighting for the objective function to focus on hard normal and abnormal features. Details and the effectiveness of these methods are provided in Appendix~\ref{sec:appendix_rcp}.
\section{Experiments}
\label{sec:experiments}
\subsection{Datasets and Metrics}
\textbf{Datasets.} In this work, we focus on anomalies in real-world applications, such as industrial defect inspection and medical lesion detection. Thus, anomaly datasets with natural anomalies are evaluated in our experiments rather than the semantic anomaly datasets used in many previous studies under the one-vs-all protocols. Specifically, we evaluate six real-world anomaly detection datasets, including four industrial defect inspection datasets: \textbf{MVTecAD} \cite{MVTec}, \textbf{BTAD} \cite{BTAD}, \textbf{AITEX} \cite{AITEX} and \textbf{ELPV} \cite{ELPV}; and two medical image datasets for detecting lesions on different organs: \textbf{BrainMRI} \cite{KDAD} and \textbf{HeadCT} \cite{KDAD}.
A more detailed introduction of these datasets is provided in Appendix\ref{sec:appendix_dataset}.
\textbf{Evaluation Metrics.} The performance of BGAD-FAS and all compared methods is evaluated by the area under the curve (AUC) of the receiver operating characteristic (ROC) at the image or pixel level (AUROC). In order to weight ground-truth anomaly regions of various sizes equally, we also adopt the Per-Region-Overlap (PRO) curve metric proposed in \cite{STAD}.
The implementation details can be found in Appendix \ref{sec:appendix_imp}. The BGAD-FAS is trained with five anomaly samples per category by default if not specified.
\begin{table}
\vspace{-1.5cm}
\caption{Image-level anomaly detection and pixel-level anomaly localization results on the MVTecAD dataset. $\cdot/\cdot$ means pixel-level AUROC and PRO. The results of our model are averaged over three independent runs.}
\label{tab:main_results}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{ccccccccc|ccc}
\hline
& &\multicolumn{7}{c|}{AD w/o Abnormal Samples} & \multicolumn{3}{c}{AD w/ Abnormal Samples} \\
& Category & SPADE \cite{SPADE} & DFR \cite{DFR} & P-SVDD \cite{PatchSVDD} & CutPaste \cite{CutPaste} & PaDiM \cite{PaDiM} & MSFD \cite{MSFD} & BGAD (Ours) & FCDD \cite{FCDD} & DRA \cite{DRA} & BGAD-FAS (Ours) \\
\hline
\multirow{5}*{\rotatebox{90}{Textures}}& Carpet & 0.975/0.947 & 0.97/0.93 & 0.926/- & 0.983/- & 0.991/0.962 & 0.990/0.958 & 0.994/0.981 & 0.99/- & -/- & \textbf{0.997}$\pm$0.0002/\textbf{0.991}$\pm$0.0004\\
& Grid & 0.937/0.867 & 0.98/0.93 & 0.962/- & 0.975/- & 0.973/0.946 & 0.986/0.937 & 0.994/0.975 & 0.95/- & -/- & \textbf{0.997}$\pm$0.0002/\textbf{0.988}$\pm$0.0001\\
& Leather & 0.976/0.972 & 0.98/0.97 & 0.974/- & 0.995/- & 0.992/0.978 & 0.978/0.924 & 0.997/0.993 & 0.99/- & -/- & \textbf{0.998}$\pm$0.0001/\textbf{0.994}$\pm$0.0003\\
& Tile & 0.874/0.759 & 0.87/0.79 & 0.914/- & 0.905/- & 0.941/0.860 & 0.952/0.841 & 0.969/0.929 & 0.98/- & -/- & \textbf{0.994}$\pm$0.0077/\textbf{0.978}$\pm$0.0021\\
& Wood & 0.874/0.885 & 0.83/0.91 & 0.908/- & 0.955/- & 0.949/0.911 & 0.953/0.925 & 0.970/0.956 & 0.94/- & -/- & \textbf{0.986}$\pm$0.0053/\textbf{0.977}$\pm$0.0007\\
\hline
\multirow{10}*{\rotatebox{90}{Objects}} & Bottle & 0.984/0.955 & 0.97/0.93 & 0.981/- & 0.976/- & 0.983/0.948 & 0.985/0.940 & 0.989/0.960 & 0.96/- & -/- & \textbf{0.994}$\pm$0.0009/\textbf{0.976}$\pm$0.0011\\
& Cable & 0.972/0.909 & 0.92/0.81 & 0.968/- & 0.900/- & 0.967/0.888 & 0.972/0.922 & 0.978/0.969 & 0.93/- & -/- & \textbf{0.992}$\pm$0.0010/\textbf{0.999}$\pm$0.0030\\
& Capsule & 0.990/0.937 & 0.99/0.96 & 0.958/- & 0.974/- & 0.990/0.935 & 0.979/0.878 & 0.990/0.945 & 0.95/- & -/- & \textbf{0.992}$\pm$0.0021/\textbf{0.965}$\pm$0.0033\\
& Hazelnut & 0.991/0.954 & 0.99/0.97 & 0.975/- & 0.973/- & 0.991/0.926 & 0.982/0.968 & 0.985/0.977 & 0.97/- & -/- & \textbf{0.994}$\pm$0.0040/\textbf{0.981}$\pm$0.0028\\
& Metal nut & 0.981/0.944 & 0.93/0.90 & 0.980/- & 0.931/- & 0.981/0.856 & 0.972/0.985 & 0.974/0.950 & 0.98/- & -/- & \textbf{0.996}$\pm$0.0003/\textbf{0.972}$\pm$0.0012\\
& Pill & 0.965/0.946 & 0.97/0.96 & 0.951/- & 0.957/- & 0.914/0.927 & 0.971/0.929 & 0.982/0.979 & 0.99/- & -/- & \textbf{0.997}$\pm$0.0002/\textbf{0.986}$\pm$0.0005\\
& Screw & 0.989/0.960 & 0.99/0.96 & 0.957/- & 0.967/- & 0.989/0.944 & 0.983/0.924 & 0.990/0.952 & 0.93/- & -/- & \textbf{0.992}$\pm$0.0003/\textbf{0.966}$\pm$0.0010\\
& Toothbrush & 0.979/0.935 & 0.99/0.93 & 0.981/- & 0.981/- & 0.979/0.931 & 0.986/0.877 & 0.984/0.935 & 0.95/- & -/- & \textbf{0.992}$\pm$0.0003/\textbf{0.949}$\pm$0.0026\\
& Transistor & 0.941/0.874 & 0.8/0.79 & 0.970/- & 0.930/- & 0.941/0.845 & 0.886/0.781 & 0.936/0.845 & 0.90/- & -/- & \textbf{0.992}$\pm$0.0005/\textbf{0.982}$\pm$0.0015\\
& Zipper & 0.965/0.926 & 0.96/0.90 & 0.951/- & 0.993/- & 0.965/0.959 & 0.981/0.935 & 0.987/0.951 & 0.98/- & -/- & \textbf{0.995}$\pm$0.0003/\textbf{0.975}$\pm$0.0002\\
\hline
\hline
& Mean & 0.960/0.917 & 0.95/0.91 & 0.957/- & 0.960/- & 0.975/0.921 & 0.970/0.915 & 0.981/0.953 & 0.96/- & -/- & \textbf{0.994}$\pm$0.0007/\textbf{0.979}$\pm$0.0006\\
\hline
& Image-level Mean & 0.855 & 0.938 & 0.921 & 0.971 & 0.979 & 0.964 & 0.968 & - & 0.959 & \textbf{0.988}$\pm$0.0012\\
\hline
\end{tabular}}
\end{table}
\subsection{Comparison with the State-of-the-Art}
\textbf{MVTecAD.} We compare our BGAD-FAS with the state-of-the-art anomaly detection methods, including SPADE \cite{SPADE}, DFR \cite{DFR}, P-SVDD \cite{PatchSVDD}, PaDiM \cite{PaDiM}, CutPaste \cite{CutPaste}, and MSFD \cite{MSFD}, under the metrics of image-level AUROC, pixel-level AUROC and PRO (Table \ref{tab:main_results}). The detailed pixel-level AUROC and PRO comparison results of all categories are shown in Table \ref{tab:main_results}. As shown in Table \ref{tab:main_results}, BGAD achieves results comparable to the SOTA methods, while BGAD-FAS reaches the best performance under all three evaluation metrics. Our BGAD-FAS can further surpass unsupervised BGAD by 2.0\%, 1.3\% and 2.6\% for image-level AUROC, pixel-level AUROC, and PRO, respectively. Moreover, the largest gain in PRO demonstrates that the proposed method is effective not only in anomaly detection but also in anomaly localization, better locating the anomalous area.
\begin{wraptable}{r}{0.6\linewidth}
\caption{Pixel-level anomaly localization results measured by pixel-wise AUROC on BTAD dataset. All the other results are from \cite{BTAD}. $\cdot/\cdot$ means pixel-level AUROC and PRO. The results of our model are averaged over three independent runs.}
\label{tab:BTAD}
\resizebox{\linewidth}{!} {
\begin{tabular}{c|c|c|c|c|c}
\hline
Categories & AE MSE \cite{AutoEncoder} & AE SSIM \cite{SSIM} & VT-ADL \cite{BTAD} & BGAD (Ours) & BGAD-FAS (Ours) \\
\hline
1 & 0.490 & 0.530 & \textbf{0.990} & 0.972/0.767 & 0.980$\pm$0.0027/\textbf{0.797}$\pm$0.0318 \\
\hline
2 & 0.920 & 0.960 & 0.940 & 0.967/0.578 & \textbf{0.977}$\pm$0.0018/\textbf{0.623}$\pm$0.0173\\
\hline
3 & 0.950 & 0.890 & 0.770 & 0.996/0.988 & \textbf{0.998}$\pm$0.0003/\textbf{0.993}$\pm$0.0005\\
\hline
Mean & 0.780 & 0.790 & 0.900 & 0.978/0.778 & \textbf{0.985}$\pm$0.0015/\textbf{0.804}$\pm$0.0163\\
\hline
\end{tabular}}
\end{wraptable}
\textbf{BTAD.} We compare our BGAD-FAS with three baseline methods reported in \cite{BTAD}: AutoEncoder with MSE loss, AutoEncoder with SSIM loss, and VT-ADL. In \cite{BTAD}, only pixel-level AUROCs are reported; thus, we also only evaluate anomaly localization performance under the metrics of pixel-level AUROC and PRO. The detailed pixel-level AUROC and PRO comparison results of all categories are shown in Table \ref{tab:BTAD}. Our BGAD-FAS achieves 98.5\% mean pixel-level AUROC, which surpasses the other methods by as much as 8.5\% and surpasses unsupervised BGAD by 0.7\%. Moreover, BGAD-FAS achieves 80.4\% PRO, which surpasses unsupervised BGAD by 2.6\%.
\textbf{Other Datasets.} For the other datasets, we compare our BGAD-FAS with six recent and closely related SOTA methods reported in \cite{DRA}: unsupervised KDAD \cite{MSFD}, and open-set supervised DevNet \cite{DevNet}, FLOS \cite{FocalLoss}, SAOE \cite{SAOE}, MLEP \cite{MLEP} and DRA \cite{DRA}. In \cite{DRA}, only image-level AUROCs are reported, thus we also only evaluate anomaly detection performance under the metric of image-level AUROC. Similar to the setting in \cite{DRA}, all models are trained with one anomaly sample. The image-level anomaly detection results are shown in Table\ref{tab:other_dataset}. Our model can achieve the best AUROC performance on the two industrial defect inspection datasets (AITEX and ELPV), and comparable results with the SOTA methods on the two medical lesion detection datasets (BrainMRI and HeadCT). Specifically, our model surpasses the previous SOTA method DRA by 13.4\% and 22.8\% on the AITEX and ELPV datasets respectively. These results combined with the results of MVTecAD and BTAD datasets demonstrate that our method is more suitable for the industrial defect inspection tasks. The detection results on diverse datasets across application domains also demonstrate our method's generalization ability in different applications.
\begin{table}
\caption{Image-level anomaly detection results on the AITEX, ELPV, BrainMRI, and HeadCT datasets. All reported image-level AUROCs are averaged over three independent runs. All the other results are from \cite{DRA}.}
\label{tab:other_dataset}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{c|c|ccccccc}
\hline
\multirow{2}*{\textbf{Dataset}} & AD w/o Abnormal Samples & \multicolumn{6}{c}{AD w/ Abnormal Samples} \\
& KDAD \cite{KDAD} & DevNet \cite{DevNet} & FLOS \cite{FocalLoss} & SAOE \cite{SAOE} & MLEP \cite{MLEP} & DRA \cite{DRA} & BGAD-FAS (Ours)\\
\hline
\textbf{AITEX} & 0.576$\pm$0.002 & 0.598$\pm$0.070 & 0.538$\pm$0.073 & 0.675$\pm$0.094 & 0.564$\pm$0.055 & 0.692$\pm$0.124 & \textbf{0.826}$\pm$0.011 \\
\textbf{ELPV} & 0.744$\pm$0.001 & 0.514$\pm$0.076 & 0.457$\pm$0.056 & 0.635$\pm$0.092 & 0.578$\pm$0.062 & 0.675$\pm$0.024 & \textbf{0.903}$\pm$0.003 \\
\textbf{BrainMRI} & 0.733$\pm$0.016 & 0.694$\pm$0.004 & 0.693$\pm$0.036 & 0.531$\pm$0.060 & 0.632$\pm$0.017 & \textbf{0.744}$\pm$0.004 & 0.740$\pm$0.006 \\
\textbf{HeadCT} & 0.598$\pm$0.070 & 0.742$\pm$0.076 & 0.698$\pm$0.092 & 0.597$\pm$0.022 & 0.758$\pm$0.038 & 0.796$\pm$0.105 & \textbf{0.807}$\pm$0.004 \\
\hline
\end{tabular}}
\end{table}
\subsection{Ablation Study}
\label{sec:ablation}
\textbf{Experiments On Hard Subsets From MVTecAD.} The MVTecAD dataset contains many easy anomalies, and the detection performance on these anomalies is hard to improve further. Thus, results obtained on the full MVTecAD dataset cannot fully demonstrate the effectiveness of our model. In order to thoroughly verify the effectiveness of our method, we further construct two more difficult subsets from the MVTecAD dataset. The details of subset selection are provided in Appendix~\ref{sec:appendix_sub}. The first subset is used to verify our model's anomaly detection performance, and the second subset is used to verify our model's anomaly localization performance. The results are shown in Table \ref{tab:harder_subset}. It can be found that the detection and localization performance gain on these hard subsets is larger than that on the original dataset, by margins of 1.0\%, 1.3\%, and 6.0\%, respectively. This ablation study demonstrates that our model is more beneficial for harder anomaly categories.
\begin{wraptable}{r}{0.6\linewidth}
\caption{Anomaly detection and localization results on MVTecAD dataset according to hyperparameter $\beta$ and $\tau$. $\cdot/\cdot/\cdot$ means mean image-level AUROC, mean pixel-level AUROC and mean PRO.}
\label{tab:hyperparameter_a}
\resizebox{\linewidth}{!} {
\begin{tabular}{c|ccc}
\hline
\diagbox{$\beta$}{$\tau$} & 0.1 & 0.2 & 0.3\\
\hline
\hline
1\% & \textbf{0.9916}/0.9940/0.9779 & 0.9915/0.9940/0.9782 & 0.9906/0.9938/0.9778 \\
5\% & 0.9896/0.9942/0.9789 & 0.9898/\textbf{0.9943}/0.9793 & 0.9902/0.9941/0.9792 \\
10\% & 0.9877/0.9942/0.9789 & 0.9900/0.9942/\textbf{0.9794} & 0.9905/0.9942/0.9792\\
\hline
\end{tabular}}
\end{wraptable}
\textbf{Boundary Hyperparameters.} To verify the influence of the normal boundary (controlled by $\beta$) and the abnormal boundary (controlled by $\tau$) on the model's detection performance, we evaluate different combinations of $\beta$ (1\%, 5\%, 10\%) and $\tau$ (0.1, 0.2, 0.3). Experimental results are shown in Table \ref{tab:hyperparameter_a}. From Table \ref{tab:hyperparameter_a}, we can draw the following main conclusions: 1) $\beta$ has a more significant effect on performance compared with $\tau$, and pixel-wise AUROC is insensitive to the hyperparameters. 2) Our model is insensitive to the margin $\tau$, which means our model can achieve superior results as long as a certain margin is formed between normal and abnormal samples. 3) Better detection results can be obtained by decreasing $\beta$. Generally, a lower $\beta$ means the normal boundary is closer to the distribution edge; thus, the results show that the closer the separating boundary is to the edge of the normal log-likelihood distribution, the more conducive it is to distinguishing the anomalies. The best results among all combinations are achieved with $\beta=1\%$ and $\tau=0.1$.
\begin{wrapfigure}{r}{0.4\linewidth}
\includegraphics[width=1.0\linewidth]{learning_efficiency.jpg}
\caption{AUROC vs epoch curve of cable category on MVTecAD dataset.}
\label{fig:learning_efficiency}
\end{wrapfigure}
\textbf{Generalization Capability and Learning Efficiency.} In order to verify the generalization capability to unseen anomalies, we select only a subset of anomalies to participate in training. We use the easy subsets as the training set and validate results on the hard subsets to explore the generalization capability of the model. The easy subsets are formed by excluding the hard subsets mentioned above from the original dataset. The experimental results are shown in Table \ref{tab:harder_subset}. It can be found that even when trained only with easy anomalies, the model can generalize well to hard anomalies, with performance gains of 2.3\%, 2.2\%, and 6.7\% for image-level AUROC, pixel-level AUROC, and PRO, respectively. These results illustrate the generalization capability of our model to unseen anomalies. To illustrate the learning efficiency, we show the AUROC vs. epoch curve in Figure \ref{fig:learning_efficiency}; specifically, the pixel-level AUROC with FAS converges rapidly compared to its counterparts. The AUROC generally increases by a large margin within only one meta epoch (8 epochs) after the BG-SPP loss is added to the optimization.
\begin{table}
\centering
\vspace{-1.5cm}
\caption{Anomaly detection and localization results on subsets from the MVTecAD dataset. Image AUROC is measured on the first subset. Pixel AUROC and PRO are measured on the second subset. see details in Appendix\ref{sec:appendix_sub}.}
\label{tab:harder_subset}
\resizebox{\linewidth}{!} {
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}*{\diagbox{Metric}{Dataset}} & \multicolumn{2}{c|}{MVTecAD} & \multicolumn{2}{c|}{Hard Subsets} & \multicolumn{2}{c}{Unseen Subsets} \\
\cline{2-7}
& BGAD & BGAD-FAS & BGAD & BGAD-FAS & BGAD & BGAD-FAS\\
\hline
Image AUROC & 0.968 & 0.988(+2.0\%) & 0.948 & 0.978(\textbf{+3.0\%}) & 0.948 & 0.971(+\textbf{2.3}\%)\\
\hline
Pixel AUROC & 0.981 & 0.994(+1.3\%) & 0.960 & 0.986(\textbf{+2.6\%}) & 0.960 & 0.982(+\textbf{2.2}\%)\\
\hline
PRO & 0.953 & 0.979(+2.6\%) & 0.863 & 0.949(\textbf{+8.6\%}) & 0.863 & 0.930(+\textbf{6.7}\%)\\
\hline
\end{tabular}}
\end{table}
\subsection{Qualitative Results}
We visualize some anomaly localization results in Figure \ref{fig:qualitative_results} with the MVTecAD dataset. It can be found that our BGAD-FAS can generate more accurate anomaly localization maps with the guidance of the explicit boundary (see columns of \{1,3,4,5,6\} in Figure \ref{fig:qualitative_results}), or even generate anomaly maps better than ground truth (see columns of \{2\} in Figure \ref{fig:qualitative_results}).
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{qualitative_results.pdf}
\caption{Qualitative results. The anomaly localization results generated by MSFD \cite{MSFD}, BGAD, and BGAD-FAS are shown for comparison. In the first row, the areas enclosed by the red lines are ground-truth. In the other rows, areas marked by red are generated anomaly localization results.}
\label{fig:qualitative_results}
\end{figure}
\section{Related Work}
\label{sec:related_work}
\textbf{Unsupervised Approaches.} Most anomaly detection methods are unsupervised and learn only from normal samples, such as AutoEncoder \cite{RandNet, SSIM, AutoEncoder2, AutoEncoder3}, GAN \cite{AnoGAN, fast-AnoGAN, GANomaly, ALAD, GAN1, GAN2} and one-class-classification (OCC) \cite{OneclassSVM, SVDD, deepSVDD, PatchSVDD} based methods. Recently, most of the best-performing methods utilize pre-trained deep models, such as DeepKNN \cite{DeepKNN}, GaussianAD \cite{PaDiM1}, SPADE \cite{SPADE} and PaDiM \cite{PaDiM}. There are also some anomaly detection methods based on knowledge distillation \cite{STAD, MSFD}, feature reconstruction \cite{DFR}, and normalizing flows \cite{DifferNet, CFLOW}.
\textbf{Supervised Approaches.} Currently, a few existing works are similar to ours, \emph{i.e.} AD with outlier exposure \cite{OE, OE2} and deep semi-supervised AD \cite{SAD, HSC, FCDD}. In \cite{OE}, Hendrycks \emph{et al.} term random natural images from large-scale datasets that are likely not nominal as outlier exposure, and explore how to utilize such data to improve unsupervised AD. The method presented in \cite{OE2} utilizes thousands of OE samples to achieve state-of-the-art results on standard image AD benchmarks. DeepSAD \cite{SAD} is the first deep model utilizing a few anomalies by generalizing the unsupervised DeepSVDD \cite{deepSVDD} method to a semi-supervised AD setting. In \cite{HSC}, Ruff \emph{et al.} further modify DeepSAD with a cross-entropy classification that concentrates nominal samples; this modification significantly improves the performance of DeepSAD. FCDD proposed in \cite{FCDD} extends the pseudo-Huber loss in \cite{HSC} to construct a semi-supervised anomaly localization framework. However, these methods do not consider the model's bias to known anomalies. The recent work DRA \cite{DRA} is the most similar to ours, as it also considers the model's generalization to unseen anomalies. The DRA model can learn disentangled representations of anomalies to enable generalized detection. Different from prior works, we aim to exploit anomalies with a carefully designed explicit boundary guided semi-push-pull strategy, which can enhance discriminability while mitigating the bias problem caused by insufficient known anomalies.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a novel and more discriminative anomaly detection model termed BGAD-FAS to tackle the \emph{insufficient discriminability} issue and the \emph{bias} issue simultaneously. By exploiting a few anomalies effectively, our model can learn more discriminative features for distinguishing anomalies. With the explicit and compact separating boundary, our model can avoid the bias problem caused by having only a few known anomalies. Considering the rarity of anomalies, we further propose RandAugmented CutPaste. The experimental results show that BGAD-FAS matches or outperforms SOTA anomaly detection methods on six real-world anomaly detection datasets.
\bibliographystyle{IEEEtran}
|
2,869,038,156,799 | arxiv |
\section{Introduction}
Studying the differential production cross sections of top quark pairs (\ttbar) at high energies is a crucial ingredient in testing the standard model and searching for sources of new physics, which could alter the production rate. In particular, the differential \ttbar cross sections probe predictions of quantum chromodynamics (QCD) and facilitate the comparisons of the data with state-of-the-art calculations. In addition, some of the measured distributions, especially distributions of invariant mass and rapidity of the \ttbar system, can be used to improve our understanding of parton distribution functions (PDFs).
A measurement of the \ttbar differential and double-differential production cross sections as a function of jet multiplicity and of kinematic variables of the top quarks and the \ttbar system is presented. The measurement is based on proton-proton collision data at a center-of-mass energy of 13\TeV corresponding to an integrated luminosity of 2.3\fbinv~\cite{LUMI}. The data were recorded by the CMS experiment at the CERN LHC in 2015. This measurement makes use of the \ttbar decay into the $\ell$+jets\xspace ($\ell=\Pe,\mu$) final state, where, after the decay of each top quark into a bottom quark and a \ensuremath{\PW}\xspace boson, one of the \ensuremath{\PW}\xspace bosons decays hadronically and the other one leptonically. The $\tau$ lepton decay mode is not considered here as signal. The differential cross sections are presented as a function of the transverse momentum \pt and the absolute rapidity $\abs{y}$ of the hadronically (\ensuremath{\PQt_\mathrm{h}}\xspace) and the leptonically (\ensuremath{\PQt_\ell}\xspace) decaying top quarks; as a function of \pt, $\abs{y}$, and mass $M$ of the \ttbar system. The cross section is also measured as a function of the number of additional jets in the event. In addition, the differential cross sections as a function of $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ and $\pt(\ttbar)$ are measured in bins of jet multiplicity and double-differential cross sections for the following combinations of variables are determined: $\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$, $M(\ttbar)$ \vs $\abs{y(\ttbar)}$, and $\pt(\ttbar)$ \vs $M(\ttbar)$.
This measurement continues a series of differential \ttbar production cross section measurements in proton-proton collisions at the LHC. Previous measurements at 7\TeV~\cite{Chatrchyan:2012saa,Aad:2015eia} and 8\TeV~\cite{Khachatryan:2015oqa,Aad:2015mbv,Aad:2015hna,Khachatryan:2015fwh,Khachatryan:2149620} have been performed in various \ttbar decay channels.
The differential cross sections are presented in two different ways, at particle level and at parton level. For the particle-level measurement a proxy of the top quark is defined based on experimentally accessible quantities like jets, which consist of quasi-stable particles with a mean lifetime greater than 30\unit{ps}. These are described by theoretical calculations that, in contrast to pure matrix-element calculations, involve parton shower and hadronization models. These objects are required to match closely the experimental acceptance. A detailed definition is given in Section~\ref{PSTOP}. Such an approach has the advantage that it reduces theoretical uncertainties in the experimental results by avoiding theory-based extrapolations from the experimentally accessible portion of the phase space to the full range, and from jets to partons. However, such results cannot be compared to parton-level calculations.
For the measurement at parton level, the top quarks are defined directly before decaying into a bottom quark and a \ensuremath{\PW}\xspace boson. For this analysis the parton-level \ttbar system is calculated at next-to-leading order (NLO) and combined with a simulation of the parton shower. No restriction of the phase space is applied for parton-level top quarks.
The experimental signature is the same for both measurements and consists of two jets coming from the hadronization of $\PQb$ quarks (b jets), two jets from a hadronically decaying \ensuremath{\PW}\xspace boson, a transverse momentum imbalance associated with the neutrino, and a single isolated muon or electron.
This paper is organized as follows: In Section~\ref{SIM} we provide a description of the signal and background simulations, followed by the definition of the particle-level top quarks in Section~\ref{PSTOP}. After a short overview of the CMS detector and the particle reconstruction in Section~\ref{DET}, we describe the object and event selections in Sections~\ref{EVS} and \ref{EVTSEL}, respectively. Section~\ref{TTREC} contains a detailed description of the reconstruction of the \ttbar system. Details on the background estimation and the unfolding are presented in Sections~\ref{BKG} and \ref{UNFO}. After a discussion on systematic uncertainties in Section~\ref{UNC}, the results are finally presented in Section~\ref{RES}.
\section{Signal and background modeling}
\label{SIM}
The Monte Carlo programs \POWHEG~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Campbell:2014kua} (v2) and \MADGRAPH{}5\_a\MCATNLO~\cite{Alwall:2014hca} (v2.2.2) (\AMCATNLO) are used to simulate \ttbar events. They include NLO QCD matrix element calculations that are combined with the parton shower simulation of \PYTHIA~\cite{Sjostrand:2006za,Sjostrand:2007gs} (v8.205) (\textsc{pythia}8\xspace) using the tune CUETP8M1~\cite{Skands:2014pea}. In addition, \AMCATNLO is used to produce simulations of \ttbar events with additional partons. In one simulation all processes of up to three additional partons are calculated at leading order (LO) and combined with the \textsc{pythia}8\xspace parton shower simulation using the MLM~\cite{MLM} algorithm. In another simulation all processes of up to two additional partons are calculated at NLO and combined with the \textsc{pythia}8\xspace parton shower simulation using the FxFx~\cite{Frederix:2012ps} algorithm. The default parametrization of the PDF used in all simulations is NNPDF30\_nlo\_as\_0118~\cite{Ball:2014uwa}. A top quark mass $\ensuremath{m_\PQt}\xspace=172.5$\GeV is used. When compared to the data, simulations are normalized to an inclusive \ttbar production cross section of $832^{+40}_{-46}$\unit{pb}~\cite{Czakon:2011xx}. This value is calculated with next-to-NLO (NNLO) precision including the resummation of next-to-next-to-leading-logarithmic (NNLL) soft gluon terms. The quoted uncertainty is due to the choice of renormalization and factorization scales and the PDF.
In all simulations, event weights are calculated that represent the usage of the uncertainty eigenvector sets of the PDF. There are also event weights available that represent the changes of factorization and renormalization scales by a factor of two or one half. These additional weights allow for the calculation of systematic uncertainties due to the PDF and the scale choices. For additional uncertainty estimations we use \POWHEG{}+\textsc{pythia}8\xspace simulations with top quark masses of 171.5 and 173.5\GeV, with parton shower scales varied up and down by a factor of two, and a simulation with \POWHEG combined with \HERWIGpp~\cite{Bahr:2008pv} (v2.7.1) using the tune EE5C~\cite{Seymour:2013qka}.
The main backgrounds are produced using the same techniques. The \AMCATNLO generator is used for the simulation of \ensuremath{\PW}\xspace boson production in association with jets, $t$-channel single top quark production, and Drell--Yan (DY) production in association with jets. The \POWHEG generator is used for the simulation of single top quark associated production with a \ensuremath{\PW}\xspace boson ($\PQt\PW$) and \textsc{pythia}8\xspace is used for multijet production. In all cases the parton shower and the hadronization are described by \textsc{pythia}8\xspace. The \ensuremath{\PW}\xspace boson and DY backgrounds are normalized to their NNLO cross sections~\cite{Li:2012wna}. The single top quark processes are normalized to NLO calculations~\cite{Kant:2014oha,Kidonakis:2012rm}, and the multijet simulation is normalized to the LO calculation~\cite{Sjostrand:2007gs}.
The detector response is simulated using \GEANTfour{}~\cite{Allison:2006ve}. Afterwards, the same reconstruction algorithms that are applied to the data are used. Multiple proton-proton interactions per bunch crossing (pileup) are included in the simulation. To correct the simulation to be in agreement with the pileup conditions observed during the data taking, the average number of pileup events per bunch crossing is calculated for the measured instantaneous luminosity. The simulated events are weighted, depending on their number of pileup interactions, to reproduce the measured pileup distribution.
\section{Particle-level top quark definition}
\label{PSTOP}
The following list describes the definitions of objects constructed from quasi-stable particles, obtained from the predictions of \ttbar event generators before any detector simulation. These objects are further used to define the particle-level top quarks.
\begin{itemize}
\item Muons and electrons that do not have their origin in a decay of a hadron are selected and their momenta are corrected for the final-state radiation effects. The anti-\kt jet algorithm~\cite{Cacciari:2008gp, Cacciari:2011ma} with a distance parameter of 0.1 is used to cluster the leptons and photons not originating from hadron decays. Those photons that are clustered together with a selected lepton are assumed to have been radiated by the lepton and their momenta are added to the lepton momentum. However, the lepton is only selected if its original \pt is at least half of its corrected \pt.
\item All neutrinos that do not have their origin in a decay of a hadron are selected.
\item Jets are clustered by the anti-\kt jet algorithm with a distance parameter of 0.4. All quasi-stable particles are considered, excluding the selected neutrinos and leptons together with their radiated photons.
\item $\PQb$ jets at particle level are defined as those jets that contain a $\PQb$ hadron. As a result of the short lifetime of $\PQb$ hadrons, only their decay products should be considered for the jet clustering. However, to allow their association to a jet, the $\PQb$ hadrons are also included with their momenta scaled down to a negligible value. This preserves the information of their directions, but they have no impact on the jet clustering itself.
\end{itemize}
Based on the invariant masses $M$ of these objects, we construct a pair of particle-level top quarks in the $\ell$+jets\xspace final state. Events with exactly one muon or electron with $\pt > 30$\GeV and an absolute pseudorapidity $\abs{\eta} < 2.5$ are selected. We take the sum of the four-momenta of all selected neutrinos as the neutrino momentum $p_\nu$ from the leptonically decaying top quark and find the permutation of jets that minimizes the quantity
\ifthenelse{\boolean{cms@external}}{
\begin{multline}
K^2 = [M(p_\nu + p_{\ell} + p_{{\PQb}_\ell}) - \ensuremath{m_\PQt}\xspace]^2
+ [M(p_{\mathrm{j}_1} + p_{\mathrm{j}_2}) - \ensuremath{m_{\PW}}\xspace]^2 \\+ [M(p_{\mathrm{j}_1} + p_{\mathrm{j}_2} + p_{{\PQb}_\mathrm{h}}) - \ensuremath{m_\PQt}\xspace]^2,
\label{PSTOPE1}
\end{multline}
}{
\begin{equation}
K^2 = [M(p_\nu + p_{\ell} + p_{{\PQb}_\ell}) - \ensuremath{m_\PQt}\xspace]^2 + [M(p_{\mathrm{j}_1} + p_{\mathrm{j}_2}) - \ensuremath{m_{\PW}}\xspace]^2 + [M(p_{\mathrm{j}_1} + p_{\mathrm{j}_2} + p_{{\PQb}_\mathrm{h}}) - \ensuremath{m_\PQt}\xspace]^2,
\label{PSTOPE1}
\end{equation}
}
where $p_{\mathrm{j}_{1/2}}$ are the four-momenta of two light-flavor jet candidates, $p_{\PQb_{\ell/\mathrm{h}}}$ are the four-momenta of two \PQb-jet candidates, $p_\ell$ is the four-momentum of the lepton, and $\ensuremath{m_{\PW}}\xspace = 80.4$\GeV is the mass of the \ensuremath{\PW}\xspace boson. All jets with $\pt > 25$\GeV and $\abs{\eta} < 2.5$ are considered. At least four jets are required, of which at least two must be $\PQb$ jets. If there are more than two $\PQb$ jets, we allow $\PQb$ jets as decay products of the proxy for the hadronically decaying \ensuremath{\PW}\xspace boson. Due to a limited efficiency of the $\PQb$ jet identification at detector level this improves the agreement between the reconstructed top quarks and the particle-level top quarks. The remaining jets with the same kinematic selection are considered as additional jets at particle level.
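As an illustration of the minimization of $K^2$ in Eq.~(\ref{PSTOPE1}), the following simplified Python sketch evaluates $K^2$ for all assignments of four jets to the roles ${\PQb}_\ell$, ${\PQb}_\mathrm{h}$, $\mathrm{j}_1$, and $\mathrm{j}_2$ and keeps the permutation with the smallest value. It is not part of the analysis software; all function and variable names are illustrative, four-momenta are assumed to be given as (E, px, py, pz) tuples, and the special treatment of events with more than two $\PQb$ jets is not spelled out.

\begin{verbatim}
from itertools import permutations
import math

M_TOP, M_W = 172.5, 80.4  # GeV, the mass constraints used in K^2

def p4_sum(*vectors):
    # four-momenta are (E, px, py, pz) tuples
    return tuple(sum(c) for c in zip(*vectors))

def inv_mass(v):
    e, px, py, pz = v
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def best_permutation(p_nu, p_lep, jets, is_b):
    """jets: list of four-momenta; is_b: parallel list of booleans.
    Returns (K^2, (b_lep, b_had, j1, j2) indices) for the best assignment."""
    best = None
    for b_l, b_h, j1, j2 in permutations(range(len(jets)), 4):
        if not (is_b[b_l] and is_b[b_h]):
            continue  # the two b-jet roles must be filled by b jets
        k2 = ((inv_mass(p4_sum(p_nu, p_lep, jets[b_l])) - M_TOP)**2
              + (inv_mass(p4_sum(jets[j1], jets[j2])) - M_W)**2
              + (inv_mass(p4_sum(jets[j1], jets[j2], jets[b_h])) - M_TOP)**2)
        if best is None or k2 < best[0]:
            best = (k2, (b_l, b_h, j1, j2))
    return best
\end{verbatim}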
It should be remarked that events with a hadronic and a leptonic particle-level top quark are not required to be $\ell$+jets\xspace events at the parton level. As an example, in \FIG{PSTOPF1} the relation between the $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ values at particle and parton level is shown.
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_001-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_001-b.pdf}\\
\caption{Comparison between the $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ at particle and parton level, extracted from the \POWHEG{}+\textsc{pythia}8\xspace simulation. \cmsLLeft: fraction of parton-level top quarks in the same bin at particle level (purity), fraction of particle-level top quarks in the same bin at parton level (stability), ratio of the number of particle- to parton-level top quarks, and fraction of events with a particle-level top quark pair that are not considered as signal events at parton level. \cmsRRight: bin migrations between particle and parton level. The \pt range of the bins can be taken from the \cmsLeft panel. Each column is normalized to the number of events per column at parton level in the full phase space.}
\label{PSTOPF1}
\end{figure}
\section{The CMS detector}
\label{DET}
The central feature of the CMS detector is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the $\eta$ coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector, together with a definition of the coordinate system and relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
The particle-flow (PF) event algorithm~\cite{CMS-PAS-PFT-09-001,CMS-PAS-PFT-10-001} reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector. The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energy.
\section{Physics object reconstruction}
\label{EVS}
This analysis depends on the reconstruction and identification of muons, electrons, jets, and missing transverse momentum associated with a neutrino. Only leptons are selected that are compatible with originating from the primary vertex, defined as the vertex at the beam position with the highest sum of $\pt^2$ of the associated tracks. Leptons from \ttbar decays are typically isolated, i.e., separated in $\Delta R = \sqrt{(\Delta \phi)^2 + (\Delta \eta)^2}$ from other particles. A requirement on the lepton isolation is used to reject leptons produced in decays of hadrons.
The muon isolation variable is defined as the sum of the \pt of all tracks, except for the muon track, originating from the \ttbar interaction vertex within a cone of $\Delta R = 0.3$. It is required to be less than 5\% of the muon \pt. The muon reconstruction and selection~\cite{Chatrchyan:2012xi} efficiency is measured in the data using tag-and-probe techniques~\cite{TNPREF}. Depending on the \pt and $\eta$ of the muon it is 90--95\%.
For electrons the isolation variable is the sum of the \pt of neutral hadrons, charged hadrons, and photon PF candidates in a cone of $\Delta R = 0.3$ around the electron. Contributions of the electron to the isolation variable are suppressed excluding a small region around the electron. This isolation variable is required to be smaller than 7\% of the electron \pt. An event-by-event correction is applied that maintains a constant electron isolation efficiency with respect to the number of pileup interactions~\cite{Cacciari:2007}. The measured reconstruction and identification~\cite{Khachatryan:2015hwa} efficiency for electrons is 70--85\% with a \pt and $\eta$ dependence.
Jets are reconstructed from PF objects clustered using the anti-\kt jet algorithm with a distance parameter of 0.4 using the \textsc{FastJet} package~\cite{Cacciari:2011ma}. Charged particles originating from a vertex of a pileup interaction are excluded. The total energy of the jets is corrected for energy depositions from pileup. In addition, \pt- and $\eta$-dependent corrections are applied to correct for detector response effects~\cite{JET}. Those jets identified as isolated muons or electrons are removed from consideration.
For the identification of $\PQb$ jets the combined secondary vertex algorithm~\cite{BTV} is used. It provides a discriminant between light-flavor and $\PQb$ jets based on the combined information of secondary vertices and the impact parameter of tracks at the primary vertex. A jet is identified as $\PQb$ jet if the associated value of the discriminant exceeds a threshold criterion. Two threshold criteria are used: a tight threshold with an efficiency of about 70\% and a light-flavor jet rejection probability of 95\%, and a loose one with an efficiency of about 80\% and a rejection probability of 85\%.
The missing transverse momentum \ptvecmiss is calculated as the negative of the vectorial sum of transverse momenta of all PF candidates in the event. Jet energy corrections are also propagated to improve the measurement of \ptvecmiss.
\section{Event selection}
\label{EVTSEL}
Events are selected if they pass single-lepton triggers. These require $\pt > 22$\GeV for electrons and $\pt > 20$\GeV for muons, as well as various quality and isolation criteria.
To reduce the background contributions and to optimize the \ttbar reconstruction, additional, more stringent requirements are imposed on the events. Events with exactly one muon or electron with $\pt > 30$\GeV and $\abs{\eta} < 2.1$ are selected. No additional muons or electrons with $\pt > 15$\GeV and $\abs{\eta} < 2.4$ are allowed. In addition to the lepton, at least four jets with $\pt > 30$\GeV and $\abs{\eta} < 2.4$ are required. At least two of these jets must be tagged as $\PQb$ jets. At least one jet has to fulfill the tight \PQb-jet identification criterion while for the second $\PQb$ jet only the loose criterion is required. At least one of the two jets with the highest value of the $\PQb$ tagging discriminant and at least one of the remaining jets are required to have $\pt > 35$\GeV.
We compare several kinematic distributions of the muon and electron channels to the simulation to verify that there are no unexpected differences. The ratios of the measured to the expected event yields in the two channels agree within the uncertainty in the lepton reconstruction and selection efficiencies. In the remaining steps of the analysis the two channels are combined by adding their distributions.
\section{Reconstruction of the top quark-antiquark system}
\label{TTREC}
The goal of the \ttbar reconstruction is the correct identification of reconstructed objects as parton- or particle-level top quark decay products. To test the performance of the reconstruction algorithm an assignment between detector level and particle- (parton-) level objects is needed. For the particle-level measurement this relationship is straightforward. Reconstructed leptons and jets can be matched spatially to corresponding objects at the particle level. For the parton-level measurement we need to define how to match the four initial quarks from a \ttbar decay with reconstructed jets. This is not free of ambiguities since a quark does generally not lead to a single jet. One quark might shower into several jets or multiple quarks might be clustered into one jet if they are not well separated. We introduce an unambiguous matching criterion that matches the reconstructed jet with the highest \pt within $\Delta R = 0.4$ to a quark from the \ttbar decay. If two quarks are matched with the same jet, the event has a merged topology and is considered as ``not reconstructible'' in the context of this analysis.
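A minimal sketch of this matching criterion, assuming quarks and jets are given as objects with \pt, $\eta$, and $\phi$ attributes (the attribute names and the object representation are illustrative, not those of the analysis code), could read:

\begin{verbatim}
import math

def delta_r(a, b):
    dphi = (a.phi - b.phi + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(a.eta - b.eta, dphi)

def match_quarks_to_jets(quarks, jets, r_max=0.4):
    """For each quark, take the highest-pt jet within Delta R < r_max.
    Returns None for a merged topology (two quarks share a jet) or if a
    quark has no matched jet, i.e. the event is not reconstructible."""
    matches = []
    for q in quarks:
        candidates = [j for j in jets if delta_r(q, j) < r_max]
        if not candidates:
            return None
        matches.append(max(candidates, key=lambda j: j.pt))
    if len(set(id(j) for j in matches)) < len(quarks):
        return None
    return matches
\end{verbatim}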
The same matching criterion is also used to assign particle-level jets to the \ttbar decay products at parton level. Those particle-level jets with $\pt > 25$\GeV and $\abs{\eta} < 2.5$, which are not assigned to one of the initial quarks, are considered as additional jets at parton level.
For the reconstruction of the top quark-antiquark system all possible permutations of jets that assign reconstructed jets to the decay products of the \ttbar system are tested and a likelihood that a certain permutation is correct is evaluated. Permutations are considered only if the two jets with the highest $\PQb$ tagging probabilities are the two \PQb-jet candidates. In addition, the \pt of at least one \PQb-jet candidate and at least one jet candidate from the \ensuremath{\PW}\xspace boson decay have to be above 35\GeV. In each event the permutation with the highest probability is selected. The likelihoods are evaluated separately for the particle- and the parton-level measurements.
The first reconstruction step involves the determination of the neutrino four-momentum $p_\nu$. This is performed using the algorithm described in Ref.~\cite{Betchart:2013nba}. The idea is to find all possible solutions for the three components of the neutrino momentum using the two mass constraints $(p_\nu + p_\ell)^2 = m_{\PW}^2$ and $(p_\nu + p_\ell + p_{{\PQb}_\ell})^2 = m_\PQt^2$. Each equation describes an ellipsoid in the three-dimensional momentum space of the neutrino. The intersection of these two ellipsoids is usually an ellipse. We select $p_\nu$ as the point on the ellipse for which the distance $D_{\nu,\text{min}}$ between the ellipse projection onto the transverse plane and \ptvecmiss is minimal. This algorithm leads to a unique solution for the longitudinal neutrino momentum and an improved resolution for the transverse component. The minimum distance $D_{\nu,\mathrm{min}}$ can also be used to identify the correct ${\PQb}_\ell$. In the cases with an invariant mass of the lepton and the ${\PQb}_\ell$ candidate above $m_\PQt$ no solution can be found and we continue with the next permutation.
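The construction of the solution ellipse itself follows Ref.~\cite{Betchart:2013nba} and is not reproduced here; assuming it is available as a set of sampled candidate neutrino momenta, the final selection step can be sketched as follows (an illustrative sketch only, with all names hypothetical):

\begin{verbatim}
import numpy as np

def select_neutrino(ellipse_points, met_x, met_y):
    """ellipse_points: array of shape (N, 3) with candidate (px, py, pz)
    sampled along the ellipse defined by the two mass constraints.
    Returns the chosen momentum and the transverse distance D_nu,min."""
    d = np.hypot(ellipse_points[:, 0] - met_x, ellipse_points[:, 1] - met_y)
    i = int(np.argmin(d))
    return ellipse_points[i], float(d[i])
\end{verbatim}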
The likelihood $\lambda$ is maximized to select the best permutation of jets. It uses constraints of the top quark and \ensuremath{\PW}\xspace boson masses on the hadronic side and the $D_{\nu,\mathrm{min}}$ value from the neutrino reconstruction, and is defined through
\begin{equation}
-\log(\lambda) = -\log(P_m(m_2, m_3)) -\log(P_{\nu}(D_{\nu,\mathrm{min}})), \label{TTRECEQ1}
\end{equation}
where $P_m$ is the two-dimensional probability distribution of the invariant masses of correctly reconstructed \ensuremath{\PW}\xspace bosons and top quarks. This probability is calculated for the invariant mass of the two jets $m_2$ tested as the \ensuremath{\PW}\xspace boson decay products, and the invariant mass of the three jets $m_3$ tested as the decay products of the hadronically decaying top quark. The distributions for the correct jet assignments, taken from the \POWHEG{}+\textsc{pythia}8\xspace simulation and normalized to unity, are shown in \FIG{TTRECF2} for the particle- and parton-level measurements. Permutations with probabilities of less than 0.1\% of the highest value are rejected. This part of the likelihood is sensitive to the correct reconstruction of the hadronically decaying top quark, modulo a permutation of the two jets from the \ensuremath{\PW}\xspace boson, but none of the measured kinematic variables will be affected by this ambiguity.
The probability $P_{\nu}$ describes the distribution of \ensuremath{D_{\nu,\text{min}}}\xspace for a correctly selected ${\PQb}_\ell$. In \FIG{TTRECF2} the normalized distributions of \ensuremath{D_{\nu,\text{min}}}\xspace for ${\PQb}_\ell$ and for other jets are shown. On average, the distance \ensuremath{D_{\nu,\text{min}}}\xspace for correctly selected ${\PQb}_\ell$ is smaller and has a lower tail compared to the distance obtained for other jets. Permutations with values of $\ensuremath{D_{\nu,\text{min}}}\xspace > 150$\GeV are rejected since they are very unlikely to originate from a correct ${\PQb}_\ell$ association. This part of the likelihood is sensitive to the correct reconstruction of the leptonically decaying top quark.
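In practice, Eq.~(\ref{TTRECEQ1}) can be evaluated from the normalized histograms shown in \FIG{TTRECF2}. A schematic Python sketch of this evaluation is given below; the histogram lookup and all names are illustrative (the inputs are assumed to be numpy arrays), and the sketch is not the analysis implementation.

\begin{verbatim}
import numpy as np

def neg_log_lambda(m2, m3, d_nu_min, pm_hist, m2_edges, m3_edges,
                   pnu_hist, d_edges, eps=1e-12):
    """-log(lambda) as defined in the text: pm_hist is the normalized 2D
    P_m(m2, m3) distribution, pnu_hist the normalized 1D P_nu distribution."""
    i = int(np.clip(np.searchsorted(m2_edges, m2) - 1, 0, pm_hist.shape[0] - 1))
    j = int(np.clip(np.searchsorted(m3_edges, m3) - 1, 0, pm_hist.shape[1] - 1))
    k = int(np.clip(np.searchsorted(d_edges, d_nu_min) - 1, 0, len(pnu_hist) - 1))
    return -np.log(pm_hist[i, j] + eps) - np.log(pnu_hist[k] + eps)

# The permutation minimizing -log(lambda) is selected, e.g.
#   best = min(perms, key=lambda p: neg_log_lambda(p.m2, p.m3, p.d_nu_min, ...))
\end{verbatim}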
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_002-b.pdf}
\includegraphics[width=0.49\textwidth]{Figure_002-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_002-d.pdf}
\caption{Top: normalized two-dimensional mass distribution of the correct reconstructed hadronically decaying \ensuremath{\PW}\xspace bosons $M(\PW)$ and the correct reconstructed top quarks $M(\ensuremath{\PQt_\mathrm{h}}\xspace)$ for the parton- (left) and the particle- (right) level measurements. Bottom: normalized distributions of the distance \ensuremath{D_{\nu,\text{min}}}\xspace for correctly and wrongly selected $\PQb$ jets from the leptonically decaying top quarks. The distributions are taken from the \POWHEG{}+\textsc{pythia}8\xspace \ttbar simulation.}
\label{TTRECF2}
\end{figure*}
The likelihood $\lambda$ combines the probabilities from the reconstruction of the hadronically and leptonically decaying top quarks and provides information on reconstructing the whole \ttbar system. The performance of the reconstruction algorithm is tested using the three \ttbar simulations generated with \POWHEG combined with \textsc{pythia}8\xspace or \HERWIGpp, and \AMCATNLO{}+\textsc{pythia}8\xspace where we use the input distributions $P_m$ and $P_{\nu}$ from \POWHEG{}+\textsc{pythia}8\xspace. The efficiency of the reconstruction algorithm is defined as the probability that the most likely permutation, as identified through the maximization of the likelihood $\lambda$, is the correct one, given that all decay products from the \ttbar decay are reconstructed and selected. These efficiencies as a function of the jet multiplicity are shown in \FIG{TTRECF3}. Since the number of permutations increases drastically with the number of jets, it is more likely to select a wrong permutation if there are additional jets. The small differences observed in different simulations are taken into account for the uncertainty estimations. We observe a lower reconstruction efficiency for the particle-level measurement. This is caused by the weaker mass constraints for a particle-level top quark, where, in contrast to the parton-level top quark, exact matches to the top quark and \ensuremath{\PW}\xspace boson masses are not required. This can be seen in the mass distributions of \FIG{TTRECF2} and the likelihood distributions in \FIG{TTRECF4}. Here the signal simulation is divided into the following categories: correctly reconstructed \ttbar systems (\ttbar right reco), events where all decay products are available, but the algorithm failed to identify the correct permutation (\ttbar wrong reco), $\ell$+jets\xspace \ttbar events where at least one decay product is missing (\ttbar not reconstructible), and nonsignal \ttbar events (\ttbar background). However, the lower reconstruction efficiency of the particle-level top quark is compensated by the higher number of reconstructible events.
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_003-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_003-b.pdf}
\caption{Reconstruction efficiency of the \ttbar system as a function of the number of additional jets for the parton- (\cmsLeft) and particle- (\cmsRight) level measurements calculated based on the simulations with \POWHEG{}+\textsc{pythia}8\xspace (P8), \POWHEG{}+\HERWIGpp (H++), and \AMCATNLO+\textsc{pythia}8\xspace.}
\label{TTRECF3}
\end{figure}
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_004-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_004-b.pdf}
\caption{Distribution of the negative log-likelihood for the selected best permutation in the parton- (\cmsLeft) and the particle- (\cmsRight) level measurements in data and simulations. The simulation of \POWHEG{}+\textsc{pythia}8\xspace is used to describe the \ttbar production. Experimental (cf. Section~\ref{UNC}) and statistical uncertainties (hatched area) are shown for the total simulated yield, which is normalized to the measured integrated luminosity. The ratios of data to the sum of the expected yields are provided at the bottom of each panel.}
\label{TTRECF4}
\end{figure}
In \FIG{TTRECF4a} the distributions of \pt and $\abs{y}$ of the reconstructed hadronically decaying top quarks for the parton- and particle-level measurements are compared to the simulation. In \FIG{TTRECF4b} the distributions of $\pt(\ttbar)$, $\abs{y(\ttbar)}$, $M(\ttbar)$, and the number of additional jets are shown. In general, good agreement is observed between the data and the simulation; the overall yield in the data is slightly lower, but within the experimental uncertainties. The observed jet multiplicities are lower than predicted.
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_005-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_005-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_005-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_005-d.pdf}\\
\caption{Comparisons of the reconstructed $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ (top) and $\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ (bottom) in data and simulations for the parton (left) and the particle (right) level. The simulation of \POWHEG{}+\textsc{pythia}8\xspace is used to describe the \ttbar production. Experimental (cf. Section~\ref{UNC}) and statistical uncertainties (hatched area) are shown for the total simulated yield, which is normalized according to the measured integrated luminosity. The ratios of data to the expected yields are given at the bottom of each panel.
}
\label{TTRECF4a}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_006-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_006-b.pdf}
\includegraphics[width=0.49\textwidth]{Figure_006-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_006-d.pdf}
\includegraphics[width=0.49\textwidth]{Figure_006-e.pdf}
\includegraphics[width=0.49\textwidth]{Figure_006-f.pdf}
\caption{Comparisons of the reconstructed distributions of $\pt(\ttbar)$ (top) and $M(\ttbar)$ (middle) for the parton- (left) and the particle- (right) level measurements in data and simulations. Bottom: distributions of $\abs{y(\ttbar)}$ (left) and the number of additional jets (right). The simulation of \POWHEG{}+\textsc{pythia}8\xspace is used to describe the \ttbar production. Experimental (cf. Section~\ref{UNC}) and statistical uncertainties (hatched area) are shown for the total simulated yield, which is normalized according to the measured integrated luminosity. The ratios of data to the expected yields are given at the bottom of each panel.}
\label{TTRECF4b}
\end{figure*}
\section{Background subtraction}
\label{BKG}
After the event selection and \ttbar reconstruction about 65\,000 (53\,000) events are observed in the particle- (parton-) level measurements. A small contribution of about 9\% of single top quark, DY, \ensuremath{\PW}\xspace boson, and multijet events is expected. These have to be estimated and subtracted from the selected data.
The background from single top quark production is subtracted based on its simulation. Its overall contribution corresponds to about 4\% of the selected data. Single top quark production cross sections are calculated with precisions of a few percent~\cite{Kant:2014oha,Kidonakis:2012rm}. Since the calculations have a limited reliability after \ttbar selection we assume an overall uncertainty of 50\%. However, this conservative estimate has negligible impact on the final results and their accuracy.
The simulations of multijet, DY, and \ensuremath{\PW}\xspace boson production contain limited numbers of events after the full selection. We extract the shapes of the distributions of these backgrounds from a control region in the data, similar to the signal region, but requiring no b-tagged jet in the event. In this selection the contribution of \ttbar events is estimated to be about 15\%. The remaining fraction consists of multijet, DY, and \ensuremath{\PW}\xspace boson events. The reconstruction algorithm is exactly the same as for the signal selection. To estimate the dependence of the shapes in the control region on the selection, we vary the selection threshold of the $\PQb$ tagging discriminant. This changes the top quark contribution and the flavor composition; however, we find the observed shape variation to be negligible. For the background subtraction, the distributions extracted from the control region are normalized to the yield of multijet, DY, and \ensuremath{\PW}\xspace boson events predicted by the simulation in the signal region. In the control region the expected and measured event yields agree within their statistical uncertainties. Taking into account the statistical uncertainty of the normalization factor and the shape differences between the signal and control regions in the simulation, we estimate an overall uncertainty of 20\% in this background estimation. The overall contribution to the selected data is about 5\%.
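Schematically, the normalization of the control-region shape can be written as in the following sketch (illustrative only; the per-bin shape and the yield predicted by the simulation in the signal region are taken as given):

\begin{verbatim}
import numpy as np

def data_driven_background(control_region_shape, predicted_yield_signal_region):
    """Scale the per-bin shape measured in the zero-b-tag control region to the
    multijet+DY+W yield predicted by the simulation in the signal region."""
    shape = np.asarray(control_region_shape, dtype=float)
    return shape * (predicted_yield_signal_region / shape.sum())
\end{verbatim}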
For the parton-level measurement, special care has to be taken with the contribution of nonsignal \ttbar events, i.e., dilepton, all-jets, and $\tau$+jets events. For the particle-level measurement care is needed with all \ttbar events for which no pair of particle-level top quarks exists. The behavior of this background depends on the \ttbar cross section and a subtraction according to the expected value can result in a bias of the measurement, especially if large differences between the simulation and the data are observed. However, the shapes of the distributions show an agreement within uncertainties between data and simulation and we subtract the predicted relative fractions from the remaining event yields.
\section{Unfolding}
\label{UNFO}
For the unfolding, the iterative D'Agostini method~\cite{D'Agostini:1994zf} is used. The migration matrices and the acceptances are needed as input. The migration matrix relates the quantities at particle (parton) level and at detector level. It accounts for the effects from the parton shower and hadronization as well as the detector response, where the former has a large impact on the parton-level measurement. For the central results the migration matrices and the acceptances are taken from the \POWHEG{}+\textsc{pythia}8\xspace simulation and other simulations are used to estimate the uncertainties. The binning in the unfolding is optimized based on the resolution in the simulation. The minimal bin widths are chosen such that, according to the resolution, at least 50\% of the events are reconstructed in the correct bin.
The iterative D'Agostini method takes the number of iterations as an input parameter to control the level of regularization. A small number of iterations corresponds to a large regularization, which may bias the unfolded results. The level of regularization and hence the bias decreases with the number of iterations -- but with the drawback of increasing variances in the unfolded spectra. To optimize the number of iterations, we chose the criterion that the compatibility between a model and the unfolded data at particle (parton) level is the same as the compatibility between the folded model and the data at detector level. The compatibilities are determined by $\chi^2$ tests at both levels based on all available simulations and several modified spectra obtained by reweighting the $\pt(\PQt)$, $\abs{y(\PQt)}$, or $\pt(\ttbar)$ distributions in the \POWHEG{}+\textsc{pythia}8\xspace simulation. The reweighted spectra are chosen in such a way that they cover the observed differences between the data and the unmodified simulation.
We find the above criterion fulfilled for the number of iterations such that a second $\chi^2$ test between the detector-level spectrum with its statistical uncertainty and the refolded spectrum exceeds a probability of 99.9\%. The refolded spectrum is obtained by inverting the unfolding step. This consists of a multiplication with the response matrix and does not need any regularization.
For the two-dimensional measurements with $n$ bins in one and $m$ bins in the other quantity, the D'Agostini unfolding can be generalized using a vector of $n\cdot m$ entries of the form $\{b_{1,1},b_{2,1},\ldots,b_{n,1},\ldots,b_{1,m},b_{2,m},\ldots,b_{n,m}\}$ with a corresponding $(n\cdot m) \times (n\cdot m)$ migration matrix. The number of iterations is optimized in the same way.
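A schematic implementation of the iterative D'Agostini unfolding is given below. It is illustrative only: the actual analysis uses a dedicated implementation, and for the double-differential measurements the two-dimensional binning is first flattened into a single vector as described above. The response matrix element $R_{ij}$ is taken here as the probability for an event generated in truth bin $j$ to be reconstructed in detector-level bin $i$, with column sums equal to the selection efficiency of bin $j$.

\begin{verbatim}
import numpy as np

def dagostini_unfold(data, response, prior, n_iterations):
    """data:     measured counts per detector-level bin (length n_rec)
    response: (n_rec x n_gen) matrix, response[i, j] = P(rec bin i | gen bin j)
    prior:    starting truth-level spectrum, e.g. taken from the simulation"""
    truth = np.asarray(prior, dtype=float).copy()
    data = np.asarray(data, dtype=float)
    eff = response.sum(axis=0)                 # efficiency per truth bin
    for _ in range(n_iterations):
        folded = response @ truth              # expected detector-level yield
        folded = np.where(folded > 0, folded, 1e-12)
        # unfolding matrix from Bayes' theorem with the current prior
        m = response * truth / folded[:, None]
        truth = (m * data[:, None]).sum(axis=0) / np.where(eff > 0, eff, 1.0)
    return truth
\end{verbatim}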
\section{Systematic uncertainties}
\label{UNC}
We study several sources of experimental and theoretical uncertainty. Uncertainties in the jet and \ptvecmiss calibrations, in the pileup modeling, in the $\PQb$ tagging and lepton selection efficiencies, and in the integrated luminosity measurement fall into the first category.
Uncertainties in the jet energy calibration are estimated by shifting the energies of jets in the simulation up and down by their \pt- and $\eta$-dependent uncertainties of 3--7\%~\cite{JET}. At the same time \ptvecmiss is recalculated according to the rescaled jet energies. The recomputed backgrounds, response matrices, and acceptances are used to unfold the data. The observed differences between these and the original results are taken as an uncertainty in the unfolded event yields. The same technique is used to calculate the impact of the uncertainties in the jet energy resolution, the uncertainty in \ptvecmiss not related to the jet energy calibration, in the $\PQb$ tagging, and in the pileup modeling.
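As a sketch of this procedure, the rescaling of the jets and the corresponding recalculation of \ptvecmiss can be written as follows; the jet representation and the functional form of the uncertainty are assumptions made for illustration, not the analysis implementation.

\begin{verbatim}
def shift_jets_and_met(jets, met_x, met_y, jes_unc, direction=+1):
    """jets: list of dicts with keys 'pt', 'px', 'py', 'pz', 'e', 'eta';
    jes_unc(pt, eta): relative uncertainty of order 0.03-0.07;
    direction: +1 (-1) for the upward (downward) variation."""
    shifted = []
    for jet in jets:
        s = 1.0 + direction * jes_unc(jet['pt'], jet['eta'])
        # ptmiss is minus the vector sum of all momenta, so the change of the
        # jet momentum enters with the opposite sign
        met_x -= (s - 1.0) * jet['px']
        met_y -= (s - 1.0) * jet['py']
        shifted.append({k: (v if k == 'eta' else s * v) for k, v in jet.items()})
    return shifted, met_x, met_y
\end{verbatim}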
The $\PQb$ tagging efficiency in the simulation is corrected using scale factors determined from the data~\cite{BTV}. These have an uncertainty of about 2--5\% depending on the \pt of the $\PQb$ jet.
The effect on the measurement due to the uncertainty in the modeling of pileup in the simulation is estimated by varying the average number of pileup events per bunch crossing by 5\% and reweighting the simulated events accordingly.
The trigger, reconstruction, and identification efficiencies of leptons are evaluated with tag-and-probe techniques using \Z boson dilepton decays~\cite{TNPREF}. The uncertainties in the scale factors, which are used to correct the simulation to match the data, take into account the different lepton selection efficiencies in events with high jet multiplicities. The overall uncertainty in the lepton reconstruction and selection efficiencies is 3\%.
The relative uncertainty in the integrated luminosity measurement is 2.3\%~\cite{LUMI}.
Uncertainties in the PDFs, the choice of factorization and renormalization scales, the modeling of the parton shower and hadronization, the effect of different NLO event generation methods, and the top quark mass fall into the second category of theoretical uncertainties.
The effects of these uncertainties are estimated either by using the various event weights introduced in Section~\ref{SIM}, e.g., in the case of PDFs, factorization scale, and renormalization scale, or by using a different \ttbar signal simulation. The \POWHEG simulation combined with \HERWIGpp is used to estimate the effect of different parton shower and hadronization models. In addition, \POWHEG{}+\textsc{pythia}8\xspace samples with a parton shower scale varied by a factor of two are used to study the parton shower modeling uncertainties. The result obtained with \AMCATNLO is used to estimate the effect of different NLO event generation methods. The effect due to uncertainties in the top quark mass is estimated using simulations with altered top quark masses. We quote as uncertainty the cross section differences observed for a top quark mass variation of 1\GeV around the central value of 172.5\GeV used in the central simulation.
The background predictions, response matrices, and acceptances obtained from these simulations are used to unfold the data. The observed deviations with respect to the original result are quoted as an uncertainty in the unfolded event yield.
For the PDF uncertainty only the variation in the acceptance is taken into account while variations due to migrations between bins are neglected. It is calculated according to the uncertainties in the NNPDF30\_nlo\_as\_0118~\cite{Ball:2014uwa} parametrization. In addition, the uncertainties obtained using the PDF sets derived with varied values of the strong coupling constant $\alpha_\mathrm{s} = 0.117$ and 0.119 are considered.
An overview of the uncertainties in the differential cross sections is provided in \TAB{UNCT1}, where the typical ranges of uncertainties in the bins are shown. In the double-differential measurements the jet energy scale uncertainty is about 15\% in bins of high jet multiplicities and the dominant uncertainties due to hadronization modeling and NLO calculation reach up to 30\% for the parton-level measurements.
\begin{table}[tbhp]
\caption{Overview of the uncertainties in the differential cross section measurements at particle and at parton level. Typical ranges of uncertainties in the bins are shown.}
\centering\begin{scotch}{l|cc}
Source & Particle& Parton\\
& level\,[\%] & level\,[\%]\\\hline
Statistical uncertainty & 1--5 & 1--5 \\\hline
Jet energy scale & 5--8 & 6--8 \\
Jet energy resolution & $<$1 & $<$1 \\
\ptvecmiss (non jet) & $<$1 & $<$1 \\
b tagging & 2--3 & 2--3 \\
Pileup & $<$1 & $<$1 \\
Lepton selection & 3 & 3 \\
Luminosity & 2.3 & 2.3 \\
Background & 1--3 & 1--3 \\
PDF & $<$1 & $<$1 \\
Fact./ren. scale & $<$1 & $<$1 \\
Parton shower scale & 2--5 & 2--9\\
\POWHEG{}+\textsc{pythia}8\xspace vs. \HERWIGpp & 1--5 & 1--12\\
NLO event generation & 1--5 & 1--10\\
\ensuremath{m_\PQt}\xspace & 1--2 & 1--3\\
\end{scotch}
\label{UNCT1}
\end{table}
\section{Cross section results}
\label{RES}
The cross section $\sigma$ in each bin is calculated as the ratio of the unfolded signal yield and the integrated luminosity. These are further divided by the bin width (the product of the two bin widths) to obtain the single- (double-) differential results.
{\tolerance=1200
The measured differential cross sections are compared to the predictions of \POWHEG and \AMCATNLO, each combined with the parton shower simulations of \textsc{pythia}8\xspace and \HERWIGpp. In addition, the \ttbar multiparton simulations of \AMCATNLO at LO and NLO with a \textsc{pythia}8\xspace parton shower are shown in Fig.~\ref{XSECPA1} (\ref{XSECPS1}) as a function of the top quark \pt and $\abs{y}$ at parton (particle) level. In Figs.~\ref{XSECPA2} and \ref{XSECPS2} the cross sections as a function of kinematic variables of the \ttbar system and the number of additional jets are compared to the same theoretical predictions.
\par}
In \FIG{XSECPA1t} the parton-level results are compared to theoretical predictions of various accuracies. The first is an approximate NNLO~\cite{Guzzi:2014wia} QCD calculation using the CT14\,NNLO~\cite{Dulat:2015mca} PDF and $\ensuremath{m_\PQt}\xspace = 172.5$\GeV. The factorization and renormalization scales are fixed at \ensuremath{m_\PQt}\xspace. The second is an approximate next-to-NNLO (NNNLO)~\cite{ANNNLO, ANNNLOdiff} QCD calculation using the MSTW2008nnlo~\cite{Martin:2009iq} PDF, $\ensuremath{m_\PQt}\xspace = 172.5$\GeV and factorization and renormalization scales fixed at \ensuremath{m_\PQt}\xspace. The third combines the NLO QCD calculation with an improved NNLL QCD calculation (NLO+NNLL')~\cite{NLONNLL} using the MSTW2008nnlo PDF, $\ensuremath{m_\PQt}\xspace=173.2$\GeV, and the renormalization and factorization scales of $M_\mathrm{T} = \sqrt{\ensuremath{m_\PQt}\xspace^2 + \pt^2(\PQt)}$ for the $\pt(\PQt)$ calculation and $M(\ttbar)/2$ for the $M(\ttbar)$ calculation. The fourth is a full NNLO~\cite{NNLO} QCD calculation using the NNPDF3.0 PDF, $\ensuremath{m_\PQt}\xspace = 173.3$\GeV, and the renormalization and factorization scales of $M_\mathrm{T}/2$ for the $\pt(\PQt)$ calculation and one-fourth of the sum of the \pt of all partons for the other distributions. The displayed uncertainties come from varying the scales up and down by a factor of two. Only the uncertainties in the approximate NNLO calculation include PDF uncertainties and a \ensuremath{m_\PQt}\xspace variation of 1\GeV.
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_007-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_007-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_007-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_007-d.pdf}
\caption{Differential cross sections at parton level as a function of $\pt(\PQt)$ (top) and $\abs{y(\PQt)}$ (bottom) measured separately for the hadronically (left) and leptonically (right) decaying top quarks. The cross sections are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the various predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA1}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_008-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_008-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_008-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_008-d.pdf}
\caption{Differential cross sections at particle level as a function of $\pt(\PQt)$ (top) and $\abs{y(\PQt)}$ (bottom) measured separately for the hadronically (left) and leptonically (right) decaying particle-level top quarks. The cross sections are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the various predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPS1}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_009-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_009-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_009-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_009-d.pdf}
\caption{Differential cross sections at parton level as a function of $\pt(\ttbar)$, $\abs{y(\ttbar)}$, $M(\ttbar)$, and cross sections as a function of the number of additional jets compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the various predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA2}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_010-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_010-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_010-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_010-d.pdf}
\caption{Differential cross sections at particle level as a function of $\pt(\ttbar)$, $\abs{y(\ttbar)}$, $M(\ttbar)$, and cross sections as a function of the number of additional jets compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the various predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPS2}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_011-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-b.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_011-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-d.pdf}\\
\includegraphics[width=0.49\textwidth]{Figure_011-e.pdf}
\caption{Differential cross sections at parton level as a function of $\pt(\PQt)$, $\abs{y(\PQt)}$, $\pt(\ttbar)$, $\abs{y(\ttbar)}$, and $M(\ttbar)$ compared to the available predictions of an approximate NNLO calculation~\cite{Guzzi:2014wia}, an approximate NNNLO calculation~\cite{ANNNLO, ANNNLOdiff}, a NLO+NNLL' calculation~\cite{NLONNLL}, and a full NNLO calculation~\cite{NNLO}. For these models uncertainties due to the choices of scales are shown. To improve the visibility the theoretical predictions are horizontally shifted. The ratios of the various predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA1t}
\end{figure*}
The differential cross sections as a function of $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ and $\pt(\ttbar)$ in bins of the number of additional jets are shown in \FIG{XSECPA2D1} (\ref{XSECPS2D1}) at parton (particle) level. The double-differential cross sections as a function of $\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$, $M(\ttbar)$ \vs $\abs{y(\ttbar)}$, and $\pt(\ttbar)$ \vs $M(\ttbar)$ are shown at parton level in Figs.~\ref{XSECPA2D3}--\ref{XSECPA2D5} and at particle level in Figs.~\ref{XSECPS2D3}--\ref{XSECPS2D5}. The results are compared to the predictions of the event generators. All cross section values together with their statistical and systematic uncertainties are listed in Appendices~\ref{APP1} and \ref{APP2} for the parton- and particle-level measurements, respectively.
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_012-a.pdf}\\
\includegraphics[width=0.85\textwidth]{Figure_012-b.pdf}
\caption{Differential cross sections at parton level as a function of $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ (upper two rows) and $\pt(\ttbar)$ (lower two rows) in bins of the number of additional jets. The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA2D1}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_013-a.pdf}\\
\includegraphics[width=0.85\textwidth]{Figure_013-b.pdf}
\caption{Differential cross sections at particle level as a function of $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ (upper two rows) and $\pt(\ttbar)$ (lower two rows) in bins of the number of additional jets. The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPS2D1}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_014-a.pdf}
\includegraphics[width=0.85\textwidth]{Figure_014-b.pdf}
\caption{Double-differential cross sections at parton level as a function of $\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ (upper two rows) and $M(\ttbar)$ \vs $\abs{y(\ttbar)}$ (lower two rows). The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA2D3}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_015.pdf}
\caption{Double-differential cross section at parton level as a function of $\pt(\ttbar)$ \vs $M(\ttbar)$. The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPA2D5}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_016-a.pdf}
\includegraphics[width=0.85\textwidth]{Figure_016-b.pdf}
\caption{Double-differential cross sections at particle level as a function of $\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ (upper two rows) and $M(\ttbar)$ \vs $\abs{y(\ttbar)}$ (lower two rows). The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPS2D3}
\end{figure*}
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.85\textwidth]{Figure_017.pdf}
\caption{Double-differential cross section at particle level as a function of $\pt(\ttbar)$ \vs $M(\ttbar)$. The measurements are compared to the predictions of \POWHEG and \AMCATNLO(MG5) combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO{}+\textsc{pythia}8\xspace MLM and \AMCATNLO{}+\textsc{pythia}8\xspace FxFx. The ratios of the predictions to the measured cross sections are shown at the bottom of each panel together with the statistical and systematic uncertainties of the measurement.}
\label{XSECPS2D5}
\end{figure*}
The precision of the measurement is limited by systematic uncertainties, dominated by jet energy scale uncertainties on the experimental side and parton shower and hadronization modeling uncertainties on the theoretical side. As expected, the theoretical uncertainties are reduced in the particle-level measurements since these are less dependent on theory-based extrapolations.
We evaluate the level of agreement between the measured differential cross sections and the various theoretical predictions using $\chi^2$ tests. In these tests we take into account the full covariance matrix obtained from the unfolding procedure for the statistical uncertainty. For each of the studied systematic uncertainties we assume a full correlation among all bins. No uncertainties in the theoretical predictions are considered for this comparison. However, these uncertainties are known to be large. Typically, differences between the various models are used to assess their uncertainties. From the $\chi^2$ values and the numbers of degrees of freedom, which correspond to the numbers of bins in the distributions, the p-values are calculated. The results are shown in \TAB{REST1} for the parton-level and in \TAB{REST2} for the particle-level measurements.
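The $\chi^2$ construction can be sketched as follows (an illustrative sketch; the per-source shift vectors are assumed to be the per-bin changes of the measured cross sections under the corresponding systematic variation):

\begin{verbatim}
import numpy as np
from scipy import stats

def chi2_test(measured, predicted, stat_cov, syst_shifts):
    """stat_cov: covariance matrix from the unfolding (statistical part);
    syst_shifts: one per-bin shift vector per systematic source, each
    treated as fully correlated among all bins."""
    cov = np.array(stat_cov, dtype=float)
    for delta in syst_shifts:
        d = np.asarray(delta, dtype=float)
        cov += np.outer(d, d)          # full bin-to-bin correlation per source
    diff = np.asarray(measured, float) - np.asarray(predicted, float)
    chi2 = float(diff @ np.linalg.solve(cov, diff))
    ndof = len(diff)
    return chi2, ndof, stats.chi2.sf(chi2, ndof)
\end{verbatim}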
\begin{table*}[tbhp]
\caption{Comparison between the measured distributions at parton level and the predictions of \POWHEG and \AMCATNLO combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO MLM and \AMCATNLO FxFx, as well as the predictions of an approximate NNNLO calculation~\cite{ANNNLO, ANNNLOdiff}, a NLO+NNLL' calculation~\cite{NLONNLL}, and a full NNLO calculation~\cite{NNLO}. We list the results of the $\chi^2$ tests together with the numbers of degrees of freedom (dof) and the corresponding p-values. For the comparison no uncertainties in the theoretical predictions are taken into account.}
\centering
\cmsTable{
\renewcommand{\arraystretch}{1.2}
\begin{scotch}{l|r@{\hspace{4mm}}lr@{\hspace{4mm}}lr@{\hspace{4mm}}l}
Distribution & $\chi^2/\mathrm{dof}$ & p-value & $\chi^2/\mathrm{dof}$ & p-value & $\chi^2/\mathrm{dof}$ & p-value\\\hline
& \multicolumn{2}{c}{\POWHEG{}+P8} & \multicolumn{2}{c}{\POWHEG{}+H++} & \multicolumn{2}{c}{\AMCATNLO{}+P8 MLM}\\
& \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: LO, up to 3 add. partons}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 10.7/9&0.295 & 8.01/9&0.533 & 19.0/9&0.025\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 3.91/7&0.790 & 4.33/7&0.741 & 4.49/7&0.721\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 14.9/9&0.093 & 9.03/9&0.435 & 41.8/9&$<0.01$\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 11.4/7&0.121 & 13.1/7&0.070 & 12.0/7&0.100\\
$M(\ttbar)$ & 5.61/8&0.691 & 10.9/8&0.206 & 45.0/8&$<0.01$\\
$\pt(\ttbar)$ & 0.941/5&0.967 & 4.34/5&0.501 & 16.8/5&$<0.01$\\
$\abs{y(\ttbar)}$ & 1.95/6&0.924 & 2.04/6&0.916 & 5.55/6&0.476\\
Additional jets & 8.22/5&0.145 & 6.88/5&0.230 & 5.82/5&0.324\\
Additional jets \vs $\pt(\ttbar)$ & 85.3/20&$<0.01$ & 132/20&$<0.01$ & 135/20&$<0.01$\\
Additional jets \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 89.0/36&$<0.01$ & 43.1/36&0.193 & 71.7/36&$<0.01$\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 55.3/36&0.021 & 52.4/36&0.038 & 60.7/36&$<0.01$\\
$M(\ttbar)$ \vs $\abs{y(\ttbar)}$ & 19.3/24&0.734 & 18.3/24&0.788 & 49.4/24&$<0.01$\\
$\pt(\ttbar)$ \vs $M(\ttbar)$ & 14.5/32&0.997 & 26.2/32&0.755 & 100/32&$<0.01$\\\hline
& \multicolumn{2}{c}{\AMCATNLO{}+P8} & \multicolumn{2}{c}{\AMCATNLO{}+H++} & \multicolumn{2}{c}{\AMCATNLO{}+P8 FxFx}\\
& \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO, up to 2 add. partons}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 8.68/9&0.467 & 15.3/9&0.084 & 9.35/9&0.406\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 4.11/7&0.767 & 5.42/7&0.608 & 3.91/7&0.790\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 13.0/9&0.162 & 26.8/9&$<0.01$ & 11.7/9&0.228\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 14.3/7&0.046 & 10.7/7&0.151 & 16.4/7&0.022\\
$M(\ttbar)$ & 9.91/8&0.271 & 5.93/8&0.655 & 28.0/8&$<0.01$\\
$\pt(\ttbar)$ & 31.1/5&$<0.01$ & 24.6/5&$<0.01$ & 18.4/5&$<0.01$\\
$\abs{y(\ttbar)}$ & 1.97/6&0.923 & 2.04/6&0.916 & 2.49/6&0.870\\
Additional jets & 21.5/5&$<0.01$ & 4.21/5&0.520 & 7.98/5&0.158\\
Additional jets \vs $\pt(\ttbar)$ & 319/20&$<0.01$ & 259/20&$<0.01$ & 121/20&$<0.01$\\
Additional jets \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 90.9/36&$<0.01$ & 45.0/36&0.145 & 52.5/36&0.037\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 73.1/36&$<0.01$ & 111/36&$<0.01$ & 48.1/36&0.086\\
$M(\ttbar)$ \vs $\abs{y(\ttbar)}$ & 26.1/24&0.347 & 17.8/24&0.811 & 36.7/24&0.047\\
$\pt(\ttbar)$ \vs $M(\ttbar)$ & 229/32&$<0.01$ & 71.5/32&$<0.01$ & 97.6/32&$<0.01$\\\hline
& \multicolumn{2}{c}{appr. NNLO} & \multicolumn{2}{c}{appr. NNNLO} & \multicolumn{2}{c}{NLO+NNLL'}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 14.3/9&0.111 & 36.7/9&$<0.01$ & 6.29/9&0.710\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 5.30/7&0.623 & 2.59/7&0.920 & \NA & \NA\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 12.1/9&0.209 & 92.1/9&$<0.01$ & 3.06/9&0.962\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 3.77/7&0.805 & 4.34/7&0.739 & \NA & \NA\\
$M(\ttbar)$ & \NA & \NA & \NA & \NA & 6.70/8&0.569\\\hline
& \multicolumn{2}{c}{NNLO} & \multicolumn{4}{c}{}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 5.78/9&0.762&\multicolumn{4}{c}{}\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 2.20/7&0.948&\multicolumn{4}{c}{}\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 5.54/9&0.785&\multicolumn{4}{c}{}\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 6.48/7&0.485&\multicolumn{4}{c}{}\\
$M(\ttbar)$ & 5.88/8&0.660&\multicolumn{4}{c}{}\\
$\pt(\ttbar)$ & 3.50/5&0.623&\multicolumn{4}{c}{}\\
$\abs{y(\ttbar)}$ & 1.42/6&0.965&\multicolumn{4}{c}{}\\
\end{scotch}
}
\label{REST1}
\end{table*}
\begin{table*}[tbhp]
\caption{Comparison between the measured distributions at particle level and the predictions of \POWHEG and \AMCATNLO combined with \textsc{pythia}8\xspace(P8) or \HERWIGpp(H++) and the multiparton simulations \AMCATNLO MLM and \AMCATNLO FxFx. We list the results of the $\chi^2$ tests together with the numbers of degrees of freedom (dof) and the corresponding p-values. For the comparison no uncertainties in the theoretical predictions are taken into account.}
\centering
\cmsTable{
\renewcommand{\arraystretch}{1.2}
\begin{scotch}{l|r@{\hspace{4mm}}lr@{\hspace{4mm}}lr@{\hspace{4mm}}l}
Distribution & $\chi^2/\mathrm{dof}$ & p-value & $\chi^2/\mathrm{dof}$ & p-value & $\chi^2/\mathrm{dof}$ & p-value\\\hline
& \multicolumn{2}{c}{\POWHEG{}+P8} & \multicolumn{2}{c}{\POWHEG{}+H++} & \multicolumn{2}{c}{\AMCATNLO{}+P8 MLM}\\
& \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: LO, up to 3 add. partons}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 14.2/9&0.115 & 24.0/9&$<0.01$ & 32.8/9&$<0.01$\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 3.47/7&0.838 & 5.66/7&0.579 & 6.64/7&0.468\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 20.8/9&0.013 & 38.2/9&$<0.01$ & 49.7/9&$<0.01$\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 6.37/7&0.497 & 9.69/7&0.207 & 16.1/7&0.025\\
$M(\ttbar)$ & 9.03/8&0.340 & 148/8&$<0.01$ & 12.0/8&0.151\\
$\pt(\ttbar)$ & 2.15/5&0.829 & 29.4/5&$<0.01$ & 49.2/5&$<0.01$\\
$\abs{y(\ttbar)}$ & 0.869/6&0.990 & 2.06/6&0.914 & 13.2/6&0.040\\
Additional jets & 28.2/5&$<0.01$ & 17.2/5&$<0.01$ & 36.8/5&$<0.01$\\
Additional jets \vs $\pt(\ttbar)$ & 70.7/20&$<0.01$ & 86.1/20&$<0.01$ & 161/20&$<0.01$\\
Additional jets \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 91.6/36&$<0.01$ & 200/36&$<0.01$ & 162/36&$<0.01$\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 56.2/36&0.017 & 197/36&$<0.01$ & 114/36&$<0.01$\\
$M(\ttbar)$ \vs $\abs{y(\ttbar)}$ & 26.6/24&0.324 & 263/24&$<0.01$ & 38.1/24&0.034\\
$\pt(\ttbar)$ \vs $M(\ttbar)$ & 13.4/32&0.998 & 459/32&$<0.01$ & 89.0/32&$<0.01$\\\hline
& \multicolumn{2}{c}{\AMCATNLO{}+P8} & \multicolumn{2}{c}{\AMCATNLO{}+H++} & \multicolumn{2}{c}{\AMCATNLO{}+P8 FxFx}\\
& \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO} & \multicolumn{2}{c}{Order: NLO, up to 2 add. partons}\\
$\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 11.9/9&0.221 & 5.51/9&0.788 & 4.17/9&0.900\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ & 7.34/7&0.394 & 10.6/7&0.156 & 5.93/7&0.547\\
$\pt(\ensuremath{\PQt_\ell}\xspace)$ & 11.0/9&0.274 & 6.37/9&0.702 & 6.51/9&0.688\\
$\abs{y(\ensuremath{\PQt_\ell}\xspace)}$ & 12.3/7&0.092 & 6.04/7&0.535 & 14.3/7&0.047\\
$M(\ttbar)$ & 9.57/8&0.296 & 28.7/8&$<0.01$ & 28.5/8&$<0.01$\\
$\pt(\ttbar)$ & 37.1/5&$<0.01$ & 7.92/5&0.161 & 29.6/5&$<0.01$\\
$\abs{y(\ttbar)}$ & 1.75/6&0.942 & 1.98/6&0.922 & 2.87/6&0.825\\
Additional jets & 29.6/5&$<0.01$ & 12.2/5&0.032 & 11.6/5&0.041\\
Additional jets \vs $\pt(\ttbar)$ & 197/20&$<0.01$ & 163/20&$<0.01$ & 85.3/20&$<0.01$\\
Additional jets \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 151/36&$<0.01$ & 57.7/36&0.012 & 40.4/36&0.282\\
$\abs{y(\ensuremath{\PQt_\mathrm{h}}\xspace)}$ \vs $\pt(\ensuremath{\PQt_\mathrm{h}}\xspace)$ & 36.6/36&0.441 & 82.5/36&$<0.01$ & 42.2/36&0.222\\
$M(\ttbar)$ \vs $\abs{y(\ttbar)}$ & 21.4/24&0.612 & 47.9/24&$<0.01$ & 52.3/24&$<0.01$\\
$\pt(\ttbar)$ \vs $M(\ttbar)$ & 119/32&$<0.01$ & 164/32&$<0.01$ & 107/32&$<0.01$\\
\end{scotch}
}
\label{REST2}
\end{table*}
The observed cross sections are slightly lower than expected. However, taking into account the systematic uncertainties, which are highly correlated among the bins, there is no significant deviation. In general, the measured distributions are in agreement with the predictions of the event generators with some exceptions in the $\pt(\ttbar)$ and $M(\ttbar)$ distributions. The jet multiplicities are lower than predicted by almost all simulations. The measured \pt of the top quarks is slightly softer than predicted. Such an effect has already been observed in previous measurements~\cite{Aad:2015eia,Chatrchyan:2012saa,Aad:2015mbv,Khachatryan:2015oqa}. However, the comparison of the \HERWIGpp and \textsc{pythia}8\xspace simulations combined with the same matrix-element calculations shows the large impact of the parton shower and hadronization modeling. The parton-level results are well described by the matrix-element calculations. In particular, the softer \pt of the top quarks is predicted by the NNLO and NLO+NNLL' QCD calculations.
\section{Summary}
Measurements of the differential and double-differential cross sections for \ttbar production in proton-proton collisions at 13\TeV have been presented. The data correspond to an integrated luminosity of 2.3\,\fbinv recorded by the CMS experiment. The \ttbar production cross section is measured in the lepton+jets channel as a function of transverse momentum \pt and rapidity $\abs{y}$ of the top quarks; \pt, $\abs{y}$, and invariant mass of the \ttbar system; and the number of additional jets. The measurement at parton level is dominated by the uncertainties in the parton shower and hadronization modeling. The dependence on these theoretical models is reduced for the particle-level measurement, for which the experimental uncertainties of jet energy calibration and $\PQb$ tagging efficiency are dominant.
The results are compared to several standard model predictions that use different methods and approximations for their calculations. In general, the measured cross sections are slightly lower than predicted, but within the uncertainty compatible with the expectation. The measured distributions are in agreement with the predictions of the event generators with some exceptions in the $\pt(\ttbar)$ and $M(\ttbar)$ distributions. The number of additional jets is lower and the measured \pt of the top quarks is slightly softer than predicted by most of the event generators. A softer \pt of the top quarks has already been observed in previous measurements and is predicted by the NNLO and the NLO+NNLL' QCD calculation.
\begin{acknowledgments}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of 
Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
\end{acknowledgments}
\clearpage
A wide variety of tasks in the physical sciences requires a deconvolution of raw data before relevant information can be accessed. Examples include the reconstruction of images of starlight having passed through a turbulent atmosphere or the extraction of spectral information from numerical simulations of the strong force. Here we are interested in the general setting, where the sought after and positive definite function, called spectrum $\rho(\omega)\geq 0$, is connected to data $D(\tau)$ via an integral kernel $K(\tau,\omega)$
\begin{align}
D(\tau)=\int_{-\infty}^{\infty} K(\tau,\omega) \rho(\omega) d\omega.\label{Eq:Convol}
\end{align}
Depending on the choice of the kernel function, Eq.\eqref{Eq:Convol} can amount to a Fourier-type transformation, with e.g. $K(\tau,\omega)\propto {\rm sin}[\omega\tau]$, or to a double-sided Laplace transform with $K(\tau,\omega)\propto {\rm exp}[-\omega\tau]$. In general the inversion of the above relation is an ill-defined problem and we will set out to give meaning to it through the use of Bayesian inference.
Let us start by preparing the stage, noting that data is obtained by an experimental apparatus or a numerical simulation and thus its values are known only at $N_\tau$ discrete points $D(\tau_i)=D_i$ in the interval $\tau_i\in[0,\beta]$, up to a given uncertainty denoted by an error matrix
\begin{align}
C_{ij}=\frac{1}{N_{\rm c}(N_{\rm c}-1)}\sum_{k=1}^{N_{\rm c}} \Big(D^k_i- D_i \Big)\Big(D^k_j- D_j \Big).
\end{align}
Here $D^k_i$ represents one of the $N_{\rm c}$ individual measurements of the data-point at $\tau_i$.
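A minimal numerical sketch of this estimator (assuming the $N_{\rm c}$ individual measurements are stored row-wise in an array; all names are illustrative):
\begin{verbatim}
import numpy as np

def error_matrix(samples):
    """samples: shape (N_c, N_tau), one row per individual measurement D^k.
    Returns the mean data vector D_i and the error matrix C_ij."""
    n_c = samples.shape[0]
    mean = samples.mean(axis=0)
    dev = samples - mean
    cov = dev.T @ dev / (n_c * (n_c - 1))
    return mean, cov
\end{verbatim}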
To carry out the task of determining the spectrum from this data, we need to discretize $\rho(\omega_l)=\rho_l$ over frequencies $\omega_l$ in Eq.\eqref{Eq:Convol} using $N_\omega$ points between an upper and lower cutoff $\omega_{\rm max}$ and $\omega_{\rm min}$. This leads to a spacing of $\Delta\omega =\frac{\omega_{\rm max}-\omega_{\rm min}}{N_\omega}$.
Note that this step already requires us to supply additional knowledge about the measured system, since $\omega_{\rm max}$ and $\omega_{\rm min}$ need to be chosen such that all relevant frequencies encoded in the data can be accounted for. Prior information of this kind can often be derived from sampling theorems in the case of an experimental apparatus or the finite size of the underlying numerical simulation that produces the data-points.
Thus the fully discretized equation we are supposed to invert reads
\begin{align}
D_i=\Delta\omega \sum_{l=1}^{N_\omega} \;K_{il}\; \rho_l \label{Eq:ConvDiscr}
\end{align}
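For concreteness, the discretized forward problem can be sketched as follows for the Laplace kernel $K(\tau,\omega)={\rm exp}[-\omega\tau]$ used later in the text; the grids are placeholders chosen to match the numbers quoted in the mock data study below.
\begin{verbatim}
import numpy as np

n_tau, n_omega = 12, 1500
taus = np.linspace(0.0, 6.1, n_tau)
omegas = np.linspace(-10.0, 20.0, n_omega)
d_omega = omegas[1] - omegas[0]

# kernel matrix K_il = exp(-omega_l * tau_i)
K = np.exp(-np.outer(taus, omegas))

def forward(rho):
    """Discretized convolution D_i = d_omega * sum_l K_il rho_l."""
    return d_omega * (K @ rho)
\end{verbatim}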
The task posed by inverting Eq.\eqref{Eq:ConvDiscr} is ill defined due to the presence of both noise in the measured data and the finite number of datapoints $N_\tau$, which is often significantly smaller than the number of points $N_\omega$ one wishes to reconstruct in the spectrum.
Imagine performing a simple $\chi^2$ fitting, i.e. finding a set of points $\rho_l$ that reproduces the data $D_i$ within the errors $\sigma_i=\sqrt{C_{ii}}$. In such a case many degenerate solutions exist, none of which is superior to any other. The reason for this is that the finite number of data-points can only constrain parts of the spectrum. Unfortunately at this stage we are not able to decide which of the reconstructed features in $\rho$ these correspond to. Note that in addition, the problem at hand is not linear as might be assumed from Eq.\eqref{Eq:ConvDiscr}, since we require the values of $\rho_l$ to be positive definite. This in turn corresponds to an additional constraint to be met, which prohibits a naive matrix inversion in \eqref{Eq:ConvDiscr} even in the case of perfect data.
A possible way to give meaning to such a problem is provided by Bayesian inference. This well-established branch of statistics tells us through Bayes' theorem that prior information is a key ingredient to the question of which spectrum correctly describes the physical system under investigation. More precisely one asks: what is the probability that a test function $\rho_l$ is the correct spectral function, given measured data $D_i$ and prior information $I$
\begin{align}
P[\rho|D,I]=\frac{P[D|\rho]P[\rho|I]}{P[D|I]}.
\end{align}
The first term $P[D|\rho]$ appearing on the RHS is called the likelihood probability and denotes the probability of the data, given a test spectral function. This contribution is nothing but the usual $\chi^2$ fitting term and amounts to a Gaussian in the distance between measured data $D_i$ and the corresponding data $D^\rho_i$ obtained from inserting the test spectrum $\rho_l$ into Eq.\eqref{Eq:ConvDiscr}
\begin{align}
P[D|\rho]\propto{\rm exp}[-{\cal L}]={\rm exp}\Big[ -\frac{1}{2} \sum_{i,j=1}^{N_\tau} (D_i-D^\rho_i)C_{ij}^{-1} (D_j-D^\rho_j) \Big].\label{Eq:LikelihodProb}
\end{align}
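In code, the likelihood exponent ${\cal L}$ is a generalized $\chi^2$ distance; a sketch (the forward map and the inverse of the error matrix are assumed to be available, e.g. from the snippets above):
\begin{verbatim}
import numpy as np

def likelihood_L(rho, data, cov_inv, forward):
    """L = 0.5 * (D - D^rho)^T C^{-1} (D - D^rho)."""
    resid = data - forward(rho)
    return 0.5 * resid @ cov_inv @ resid
\end{verbatim}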
The second term on the RHS, the prior probability $P[\rho|I]$, is crucial in going beyond the naive $\chi^2$ fitting, as it incorporates our prior knowledge. We require the spectrum to be positive definite, hence this distribution may not permit negative values and we deploy the particular choice of the so called Shannon-Jaynes entropy ${\cal S}$ in the following
\begin{align}
P_{MEM}[\rho|I(m)]\propto{\rm exp}[\alpha {\cal S}] = {\rm exp}\Big[ \alpha \sum_{l=1}^{N_\omega} \Big( \rho_l-m_l-\rho_l{\rm log}[\frac{\rho_l}{m_l}]\Big)\Big]. \label{Eq:PriorProb}
\end{align}
Here prior knowledge $I=I[(m)]$ is supplied through a function $m(\omega)$, which by definition denotes the correct spectrum in the absence of measured data. This function can e.g. contain the results of a previous investigation or an approximate solution obtained from theoretical considerations. Note that one has introduced a hyperparameter $\alpha$ in Eq.\eqref{Eq:PriorProb}, which is used to self consistently determine how strongly the entropy has to be weighted compared to the likelihood \cite{Jaynes1984_1,springerlink:10.1007/BF02427376,Jarrell1996133,Asakawa:2000tr}.
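A corresponding sketch of the discretized Shannon-Jaynes entropy (both the test spectrum and the prior are assumed to be strictly positive on the frequency grid):
\begin{verbatim}
import numpy as np

def entropy_S(rho, m):
    """S = sum_l ( rho_l - m_l - rho_l * log(rho_l / m_l) )."""
    return np.sum(rho - m - rho * np.log(rho / m))
\end{verbatim}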
If we neglect the denominator $P[D|I]$, as it does not depend on the spectral function itself, the question of finding the most probable spectral function, given data and prior knowledge is now expressed as the following stationarity condition
\begin{align}
\left. \frac{\delta}{\delta \rho_l} P[\rho|D,I(m)] \right|_{\rho=\rho_{\rm MEM}}\propto \left. \frac{\delta}{\delta \rho_l}\Big( P[D|\rho]P_{MEM}[\rho,I]\Big) \right|_{\rho=\rho_{\rm MEM}}= 0.\label{MEM:Optimize}
\end{align}
Since the real exponential function is monotonic and we wish to avoid dealing with numbers over many orders of magnitude numerically, we focus in practice on the equivalent problem of minimizing the functional
\begin{align}
{\cal Q}(\rho,D,m) = {\cal L}(D,\rho)-\alpha {\cal S}(m,\rho).\label{Eq:Q}
\end{align}
To understand how the ill defined problem is given meaning, note that there are two contributions in Eq.\eqref{Eq:Q} that compete for the selection of the global minimum. Whereas ${\cal L}$ favors a spectrum that exactly reproduces the available datapoints, it is ${\cal S}$ that guides the spectrum toward the prior function.
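Using the helper functions sketched above, the functional to be minimized reads, schematically (here $\alpha$ is treated as a fixed number purely for illustration; in the actual method it is determined self-consistently as described above):
\begin{verbatim}
def Q_functional(rho, data, cov_inv, m, alpha, forward):
    """Q = L - alpha * S, to be minimized over positive spectra rho_l."""
    return (likelihood_L(rho, data, cov_inv, forward)
            - alpha * entropy_S(rho, m))
\end{verbatim}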
The most important fact to note is that there exists a proof \cite{Asakawa:2000tr}, which tells us the following. Since we supply in addition to our measured $N_\tau$ data-points $N_\omega$ points of prior information by introducing the function $m_l$, the functional ${\cal Q}(\rho,D,m)$ possesses a unique minimum in the $N_\omega$ dimensional space of functions $\rho_l$, if such an extremum exists\footnote{At this true global extremum, we expect the likelihood ${\cal L}$ to be of comparable size to the entropy term $\alpha{\cal S}$, all of them being of order ${\cal O}(1-10)$. If in the numerical implementation the most probable spectrum still remains at values of ${\cal L}$ larger than $\sim 100$ the discretization in frequency space is chosen too coarse or too narrow.}. This is not surprising, since with the inclusion of prior knowledge, we have at our disposal more points of data than free parameters entering the problem. Even the most extreme case, where no data is supplied, is well defined, as the
prior function will then constitute the correct solution.
Intuitively the MEM result depends on a combination of three ingredients, the number of datapoints, the quality of the supplied data, as well as the prior information. The problem of inverting the underlying equation Eq.\eqref{Eq:ConvDiscr} is still ill-defined, but there exists a crucial difference to the naive $\chi^2$ fitting approach. Due to the presence of a prior function, Eq.\eqref{MEM:Optimize} selects a single solution from the degenerate set of functions that all equally well minimize $P[\rho|D]$. Part of this spectrum is fixed by the data points, part of it is selected through the function $m(\omega)$. I.e. changing the functional form of the prior will select a different spectrum, which however still reproduces the data within its errorbars. We can conclude that those parts of the spectrum that stay invariant under a change of prior must hence be fixed by the datapoints, while the rest of the spectrum follows from the choice of $m(\omega)$.
How the recovered spectrum improves with increasing the number of datapoints or lowering the measurement errors depends in part on the form of the kernel function. In a Fourier-type setting, it is known that sampling the same interval $\tau\in[0,\beta]$ with an increasing number of points will allow us to reconstruct spectral features at higher and higher frequencies. Smaller errors, on the other hand, will allow us to improve the localization of peaks, i.e. the resolution of any individual peak will increase. In the case of the Laplace transform, the number of sampled points is not connected to a maximum frequency but instead is reflected in how reliably the width of a spectral peak can be recovered.
\section{Towards an Improvement of the MEM Implementation}
In practice Eq.\eqref{MEM:Optimize} constitutes a high dimensional optimization problem, often of order $N_\omega\sim O(1000)$ and above. Since reliable second order methods, such as the Levenberg-Marquardt algorithm, require an inversion of the Hessian matrix of size $N_\omega\times N_\omega$, this direct approach quickly becomes too costly when $N_\omega$ increases. One strategy, which was introduced in \cite{springerlink:10.1007/BF02427376}, is to limit the dimensionality of the solution space a priori by choosing a set of basis functions derived from an SVD of the discretized integral kernel $K^t_{il}$. The apparent reduction of computational cost in this approach is significant: one has to deal with only $N_\tau$ degrees of freedom instead of the original $N_\omega$.
We will show in the following that the solution from within the SVD search space does not in general correspond to the global minimum sought after in Eq.\eqref{MEM:Optimize}. Our argument is based, on the one hand, on the functional form of the basis functions following from the SVD of the kernel and, on the other hand, on a direct counterexample from a mock data analysis, which shows how Bryan's method fails to obtain the correct Bayesian solution.
Before elaborating on a possible improvement let us briefly recollect how the standard implementation is justified.
\subsection{Bryan's Search Space}
Inserting the definitions of Eq.\eqref{Eq:LikelihodProb} and Eq.\eqref{Eq:PriorProb} into the stationarity condition for the functional ${\cal Q}$
\begin{align}
\frac{\delta {\cal Q}(\rho,D,m)}{\delta \rho}=0,
\end{align}
we find the following implicit expression for the spectrum
\begin{align}
-\alpha {\rm log}[\frac{\rho_l}{m_l}]=\sum_{i=1}^{N_\tau} K_{il} \frac{d{\cal L}}{dD^\rho_i(\rho)},
\end{align}
the LHS of which originates from the entropy term. The fraction in the logarithm invites us to make the positive definiteness of the spectrum and the prior function explicit by using the general parametrization $\rho_l=m_l\, {\rm exp}[a_l]$, which, if written in vector notation, leads to
\begin{align}
-\alpha \vec{a} = K^t \vec{\frac{d{\cal L}}{dD^\rho(a)}}.
\end{align}
Note that $\vec{a}$ essentially characterizes the deviation of the spectrum from the prior function. Bryan's strategy amounts to applying the SVD to the transposed kernel $K^t=U\Sigma V^t$, such that
\begin{align}
-\alpha \vec{a} = U \Sigma V^t \vec{\frac{d{\cal L}}{dD^\rho(a)}}. \label{Eq:DefSVDsp}
\end{align}
Note that by definition of the SVD, the matrix $U$ contains a full orthonormal basis of the $\mathbb{R}^{N_\omega}$. $\Sigma$ on the other hand is a diagonal matrix, which contains only $N_\tau$ entries different from zero, since there were only $N_\tau$ columns in $K^t_{il}$. The above implicit expression leads Bryan to the incorrect (as will be shown in the next section) assumption that the vector $\vec{a}$, characterizing the global extremum, always has to lie in the subspace spanned by the first $N_\tau$ columns of the matrix $U$. He thus decides to parametrize the spectral function using the $N_\tau$ values $b_j$
\begin{align}
\rho_l=m_l \,{\rm exp}[\sum_{j=1}^{N_\tau} U_{lj} b_j].\label{Eq:BryanParam}
\end{align}
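In terms of standard linear-algebra routines, Bryan's construction can be sketched as follows (the kernel matrix and the prior are assumed to be those defined earlier; with the reduced SVD, $U$ has exactly $N_\tau$ columns):
\begin{verbatim}
import numpy as np

# SVD of the transposed kernel, K^t = U Sigma V^t
U, sigma, Vt = np.linalg.svd(K.T, full_matrices=False)  # U: (n_omega, n_tau)

def rho_bryan(b, m, U):
    """Bryan's restricted parametrization rho_l = m_l * exp(sum_j U_lj b_j),
    with b of length n_tau."""
    return m * np.exp(U @ b)
\end{verbatim}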
\subsection{Inadequacy of the search space}
The first sign of an inadequacy of the search space introduced through the parametrization in Eq.\eqref{Eq:BryanParam} can be found in the functional form of the basis functions $U_{lj}$.
\subsubsection{SVD Basis functions}
In Fig.\ref{Fig:SVDBasisShift} we plot the first twelve basis functions for the case of the Laplace transform with $K(\tau,\omega)=e^{-\omega\tau}$. The frequencies are discretized with a $\Delta\omega=0.02$ in three different intervals, ranging from a common upper cutoff $\omega_{\rm max}=20$ to $\omega_{\rm min}=-10,-15,-20$. What we find is that all functions $U_j(\omega)$ share the same qualitative behavior. Starting from $\omega_{\rm min}$ they oscillate up to a certain $\omega_{\rm osc}$, beyond which a rapid damping toward zero sets in. If we choose $\omega_{\rm min}=-10$ (Fig.\ref{Fig:SVDBasisShift}, right), while keeping the number of basis functions fixed at $N_\tau=12$, the oscillatory part extends only up to $\omega<\omega_{\rm osc}\simeq0$. Obviously we will not be able to reconstruct sharp peak structures in the region $\omega>\omega_{\rm osc}$.
This constitutes a conceptual problem in the approach of Bryan, since the derivation of Eq.\eqref{Eq:BryanParam} did not refer to a particular choice of $\omega_{\rm min}$ and thus allows us to set its value arbitrarily. As seen from the center and left panels in Fig.\ref{Fig:SVDBasisShift}, changing $\omega_{\rm min}$ while keeping $\Delta\omega$ fixed, does not influence the length of the oscillatory regime but only shifts the whole function to lower frequencies. It is thus possible to always make the MEM fail within the singular search space, since $\omega_{\rm min}$ can be large and negative, such that no peak structures remain available for a reconstruction of the spectrum.
Note that the proof of existence and uniqueness for the solution laid out in \cite{Asakawa:2000tr} does not rely on any parametrization or restriction of the underlying functional space. The fact that by choosing $\omega_{\rm min}$, Bryan's MEM can always be made to fail, indicates that the $N_\tau$ dimensional subspace artificially restricts the solution of Eq.\eqref{MEM:Optimize}.
\begin{figure*}[!t]
\hspace{-1.3cm}
\includegraphics[scale=0.2, angle=-90] {JCP_Comapre_Shifted_Basis.pdf}
\caption{ Comparison of the first twelve basis functions $U_j(\omega)$ from an SVD of the kernel $K(\tau,\omega)={\rm exp}[-\omega\tau]$. For the discretization we choose $N_\tau=12$ with $\tau\in[0,6.1]$, while the frequency interval with upper cutoff $\omega_{\rm max}=20$ uses a spacing of $\Delta\omega=0.02$. From the left to the right panel we change the lower cutoff of the $\omega$ range $\omega_{\rm min}=-20,-15,-10$ and observe that the functional form of the $U_j(\omega)$'s does not change, while they are shifted as a whole along the frequency axis. Note that already for the choice $\omega_{\rm min}=-10$ the oscillatory regime ends slightly above $\omega_{\rm osc}\simeq0$.}
\label{Fig:SVDBasisShift}
\end{figure*}
The effects of Bryan's search space on the quality of a reconstruction of actual spectra can be investigated by using mock data, as we will proceed to do in the next section.
\subsubsection{Numerical Evidence from a Mock Data Analysis}
Working with data from numerical simulations of the strong force \cite{Rothkopf:2011db}, it became apparent that the MEM based on Bryan's prescription was not able to adequately reconstruct the encoded spectrum in many cases. Here we demonstrate this effect by feeding to the algorithm a set of prepared datapoints, which encode a known spectral function, whose form closely resembles those encountered in our numerical investigation.
The spectrum used in the following is a particular choice; it however contains several elements that are characteristic of those cases where Bryan's approach warrants an improvement. If, e.g., we had only a single peak encoded in the spectrum, we might be able to improve the situation somewhat by moving $\omega_{\rm min}$ close to the expected position of that spectral feature. In nature however we often encounter the case that several peaks of different width and wildly different amplitude are distributed over a broad frequency range, hence the adjustment of $\omega_{\rm min}$ is not an adequate remedy. Therefore we choose as mock spectrum a sum of four Gaussian peaks with parameters as shown in Tab.\ref{Tab:GPeaks}.
\begin{table}[!t]
\begin{center}
\begin{tabular}{ l || c| c | c | r }
&1st peak & 2nd peak & 3rd peak & 4th peak\\
\hline \hline
amplitude:& $3e^{-8}$ & 0.6 & 0.25 & 0.2 \\
position:& -2.3 & 0.52 & 2.6 & 7.5 \\
width: & 0.1 & 0.1 & 0.4 & 1.4 \\
\end{tabular}
\caption{Parameters of the Gaussian peaks used in the mock function $\rho_{\rm mock}$, inspired by Lattice QCD data obtained in \cite{Rothkopf:2011db} }
\label{Tab:GPeaks}
\end{center}
\end{table}
The frequency range of $\omega^{\rm mock}\in[-5,20]$ is discretized with $N^{\rm mock}_\omega=5000$ points used to sample the mock spectrum and to generate ideal data $D^{\rm ideal}$ through insertion into Eq.\eqref{Eq:ConvDiscr}. The influence of errors is taken into account by adding Gaussian noise at each individual $\tau_k$ with variance $\delta D^{\rm mock}_k$. The strength of the disturbance is controlled by the parameter $\eta$, i.e.
\begin{align}
\delta D^{\rm mock}_k=k\eta D^{\rm ideal}_k, \quad k\in[1,\cdots,N_\tau].
\end{align}
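A sketch of how such noisy mock data can be generated (here $\delta D^{\rm mock}_k$ is interpreted as the standard deviation of the added Gaussian noise, and for simplicity the mock spectrum is assumed to be sampled on the same frequency grid as the forward map sketched earlier; both points are simplifications of this sketch):
\begin{verbatim}
import numpy as np

eta = 1.0e-4
rng = np.random.default_rng(0)

D_ideal = forward(rho_mock)              # rho_mock: the four-peak mock spectrum
k = np.arange(1, n_tau + 1)
delta_D = k * eta * D_ideal
D_mock = D_ideal + rng.normal(0.0, np.abs(delta_D))
\end{verbatim}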
As we wish to separate the question of how well the reconstruction succeeds from the quality of data and focus on the choice of search space, a small noise $\eta=0.0001$ is used to only slightly distort the ideal mock data.
We choose as prior the function
\begin{align}
m(\omega)=\frac{1}{\omega+\omega_0},\label{Eq:mfunc}
\end{align}
with $\omega_0$ selected such that its integral coincides with the area under the mock spectrum. Any particular choice of the prior will influence the outcome of the reconstruction, since the parametrization of Eq.\eqref{Eq:BryanParam} includes $m(\omega)$ as a prefactor\footnote{As we argued in the introduction, the result of the MEM will be a spectrum, parts of which are constrained by the data, parts of which are constrained by our choice of $m(\omega)$. If our goal is to reliably determine, which part of $\rho(\omega)$ is actually a result of the supplied measurements, we will have to redo the MEM with several different priors to identify, what spectral feature remains unchanged.}. In practice we often only have partial prior information available, usually far from the region where the spectral features of interest are located. Hence our goal here is to use a prior that resembles this fact, by approaching zero for large frequencies, while being incorrect but still a smooth function at small frequencies.
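The normalization constant $\omega_0$ can be fixed numerically, for instance by a simple root search on the area mismatch; a sketch (the mock spectrum sampled on the reconstruction grid is assumed to be available, and the bracketing interval is an ad hoc choice that keeps $\omega+\omega_0$ positive on a grid starting at $\omega_{\rm min}=-10$ and is assumed to contain a sign change):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

target_area = d_omega * np.sum(rho_mock)         # area under the mock spectrum

def area_mismatch(omega0):
    return d_omega * np.sum(1.0 / (omegas + omega0)) - target_area

omega0 = brentq(area_mismatch, 10.5, 1.0e6)      # bracket is an assumption
m_prior = 1.0 / (omegas + omega0)
\end{verbatim}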
To reconstruct the supplied mock spectrum, we choose for the MEM the frequency range $\omega\in[-10,20]$ divided into $N_\omega=1500$ points, whereas $\tau\in[0,6.1]$ with $N_\tau=12$. The inclusion of negative frequencies leads to a large dynamic range of the kernel, hence the internal arithmetic is set to use 384 bits of precision.
\begin{figure*}[t!]
\hspace{-1.3cm}
\includegraphics[scale=0.22,angle=-90] {JCP_Bryan_Mock_Results.pdf}
\caption{(left) Comparison of the $N_\tau=12$ mock data points (circles) and the data (line) obtained from inserting the MEM reconstructed spectrum into Eq.\eqref{Eq:ConvDiscr}. Note that with Bryan's prescription used here, the solution does not reproduce the datapoints around $\tau\simeq5$ within their errorbars. (center and right) Comparison of the mock spectrum and the reconstructed function $\rho(\omega)$ according to Bryan's prescription. Note that neither the peak at negative frequencies nor the third peak at positive $\omega$ is captured.}
\label{Fig:BryanMockReconstr}
\end{figure*}
In Fig.\ref{Fig:BryanMockReconstr} we present the results of the reconstruction according to Bryan's prescription. The first indication that the MEM has not been successful in this approach is the large value of the residual ${\cal Q}\simeq 10000$, which is dominated by a large value of ${\cal L}$ of the same order of magnitude. Indeed the idea of the MEM is to regularize an otherwise underdetermined $\chi^2$ fitting, by selecting from a large number of degenerate solutions the one with maximum entropy. This however entails that the chosen solution still reproduces all data within their errors, which is only possible if ${\cal L}\sim {\cal O}(1)$.
Looking at the reconstructed spectrum itself in the center and right plots of Fig.\ref{Fig:BryanMockReconstr} we find that the negative frequency peak as well as the third peak at large $\omega$ are not captured at all, while the first two peaks at $\omega>0$ are washed out and shifted. This is not surprising if we remember the set of basis functions available to the MEM in this case, as shown in the right panel of Fig.\ref{Fig:SVDBasisShift}. Within Bryan's approach their number is fixed by the quantity of available data-points. In addition, our choice of $\omega_{\rm min}=-10$ is valid, as we expect from the upward trend in the mock data that negative frequencies need to be taken into account. Since the oscillating range of the functions $U_j(\omega)$ ends shortly above $\omega=0$ it is however very difficult to reproduce the correct spectral features.
We conclude that the search space provided by the first $N_\tau$ columns of the SVD of the transposed kernel $K^t_{il}$ does not allow us to reconstruct reliably the spectrum encoded in the mock data $D^{\rm mock}$. Thus we set out to improve the implementation of the maximum entropy method by extending the search space systematically as laid out in the following section.
\subsection{Extension of the search space}
The reason for the popularity of Bryan's approach is that it apparently offers a dramatic decrease in computational cost from $N_\omega$ to $N_\tau$ degrees of freedom. However the proof on the existence and uniqueness of an MEM solution in \cite{Asakawa:2000tr} applies only to the full $\mathbb{R}^{N_\omega}$ search space. In addition we have seen that the reconstruction in the SVD subspace can always be made to fail by choosing $\omega_{\rm min}$ large and negative.
Therefore we propose to systematically enlarge the search space starting from Bryan's SVD subspace with the prospect of locating the correct global extremum of the functional ${\cal Q}(\rho,D,m)$ already with a number $N_{\rm ext}<N_\omega$ of basis functions. To this end we decide to extend the search space by including more and more of the columns of the matrix $U$ in the parametrization of the spectrum, so that now
\begin{align}
\rho_l=m_l \,{\rm exp}[\sum_{j=1}^{N_{\rm ext}} U_{lj} b_j ]\label{Eq:MeParam}
\end{align}
with $N_\tau<N_{\rm ext}<N_\omega$.
The number of basis vectors required to adequately determine the global extremum can then be determined by increasing the number $N_{\rm ext}$ until the minimal value of ${\cal Q}(\rho,D,m)$ does not decrease when adding an additional basis function. In the worst case this process has to be continued until $N_{\rm ext}=N_\omega$ since only the full set of columns of $U$ encodes a complete set of basis vectors for the $\mathbb{R}^{N_\omega}$.
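A sketch of this procedure, using a generic quasi-Newton minimizer as a stand-in for the actual optimization strategy (the helper functions, kernel, mock data and prior are the ones sketched earlier, while the inverse error matrix and a fixed $\alpha$ are assumed to be given; the full set of SVD directions is needed here, double precision is used purely for illustration whereas the text quotes 384-bit arithmetic, and the stopping criterion is an arbitrary choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# full set of left-singular vectors, so that more than n_tau columns exist
U_full, _, _ = np.linalg.svd(K.T, full_matrices=True)  # U_full: (n_omega, n_omega)

def best_Q(n_ext, data, cov_inv, m, alpha):
    """Minimize Q over the first n_ext SVD directions; return the minimal value."""
    def q_of_b(b):
        rho = m * np.exp(U_full[:, :n_ext] @ b)
        return Q_functional(rho, data, cov_inv, m, alpha, forward)
    res = minimize(q_of_b, np.zeros(n_ext), method="BFGS")
    return res.fun

# enlarge the search space until Q stops decreasing appreciably
previous = np.inf
for n_ext in range(n_tau, 121, 4):
    q_min = best_Q(n_ext, D_mock, cov_inv, m_prior, alpha)
    if previous - q_min < 1.0e-2 * abs(previous):
        break
    previous = q_min
\end{verbatim}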
\begin{figure*}[!t]
\hspace{-1.5cm}
\includegraphics[scale=0.22,angle=-90] {JCP_Extended_Search_Data_Reconstr.pdf}
\caption{(left) The values of ${\cal Q}$ associated with the final MEM reconstruction for different numbers of basis vectors used in the parametrization Eq.\eqref{Eq:MeParam}. Note that all runs use the same $N_\tau=12$ mock dataset so that the difference in the value of ${\cal Q}$ solely originates in the available search space. This result is a direct counterexample to the claim that the correct MEM solution, i.e. the global extremum of Eq.\eqref{MEM:Optimize} always lies in Bryan's SVD search space. (right) Comparison of the mock data (circles) and the values (line) obtained from inserting the MEM reconstructed spectrum for $N_{\rm ext}=28$ and $50$ into Eq.\eqref{Eq:ConvDiscr}. The large discrepancy at $\tau\simeq5$ that existed in the case $N_{\rm ext}=N_{\tau}$ is significantly reduced here. }
\label{Fig:QvalExtended}
\end{figure*}
The central result, concerning the increase in the number of basis vectors, can be found in the left panel of Fig.\ref{Fig:QvalExtended}. There we plot the dependence on $N_{\rm ext}$ of the value of ${\cal Q}$, associated with the final solution of the MEM reconstruction. Contrary to the claim of Bryan, the global minimum sought after in Eq.\eqref{MEM:Optimize} is found outside of the SVD search space. Instead, after a rapid decrease of the residual ${\cal Q}$ for $12<N_{\rm ext}<20$ the reconstruction further improves at a slower rate and we are able to reach the region of ${\cal Q}\sim O(1-10)$ in which the correct solution is supposed to be located. The decrease in ${\cal Q}$ is also directly related to the success in reconstructing the mock data shown on the right of Fig.\ref{Fig:QvalExtended} for the values $N_{\rm ext}=28$ and $50$. While in the case of Bryan's search space with $N_{\rm ext}=N_\tau$, shown in the left panel of Fig.\ref{Fig:BryanMockReconstr}, the data at $\tau\simeq5$ was not
reproduced within its errorbars, the discrepancy is significantly reduced here.
Alternatively we can also observe an improvement in the recovery of the mock spectrum parameters. As an example we fit the lowest lying positive peak of the MEM result and compare the extracted values to the mock parameters of Tab.\ref{Tab:GPeaks}. Fig.\ref{Fig:ExtendedreconstrParameters}, which shows the relative deviation of the extracted parameters, tells us that both the reconstruction of the peak position and width improves as we increase the value of $N_{\rm ext}>N_{\tau}$. For small values of $N_{\rm ext}$ the MEM tends to overestimate the position of the peak, since it tries to incorporate the higher lying spectral features into the insufficient number of degrees of freedom available to it. The width is also initially estimated with a too large value, since the oscillatory behavior of the basis functions is not fast enough to reproduce a narrow structure as small as the first peak\footnote{Note that for larger values of $N_{\rm ext}>60$ both the width and position in Fig.\ref{Fig:ExtendedreconstrParameters} are being underestimated, as the basis functions are able to produce structures with a width smaller than the lowest lying peak. This issue can be remedied if a larger number of data-points is supplied.}.
\begin{figure*}[!t]
\hspace{-1.5cm}
\centering
\includegraphics[scale=0.27,angle=-90] {JCP_Extended_Search_Reconstruction.pdf}
\caption{Visualization of the improvement in reconstructing the spectrum through an increase in the number of basis functions $N_{\rm ext}$. We plot the relative deviation of the reconstructed peak position $\omega_1/\omega_1^{\rm mock}$ against the number of supplied basis functions on the left. The right panel on the other hand shows the relative deviation of the reconstructed peak width $\Gamma_1/\Gamma_1^{\rm mock}$. }
\label{Fig:ExtendedreconstrParameters}
\end{figure*}
\begin{figure*}[!t]
\hspace{-1.2cm}
\includegraphics[scale=0.20,angle=-90] {JCP_Extended_Search_Compare_Results.pdf}
\caption{Comparison of the reconstructed spectra along the positive frequencies (top row) and negative frequencies (middle row). As the lowest lying positive peak is better and better reconstructed when going from $N_{\rm ext}=28$ (left column) via $N_{\rm ext}=50$ (center column) to $N_{\rm ext}=100$ (right column) it is clearly visible that at higher frequencies lots of wiggly structures arise. As argued in the text, the data-points are only able to constrain parts of the spectrum, the rest being determined by our choice of $m(\omega)$. To identify which of the wiggly features are actually constrained by the supplied measurements, we need to redo the MEM with a different functional form of the prior and observe their variation. (bottom row) The set of basis functions used in the determination of the MEM spectrum. }
\label{Fig:ExamplesExtendedReconstr}
\end{figure*}
In order to inspect the overall changes in the reconstruction of the mock spectrum brought about by an extension of the search space, we provide Fig.\ref{Fig:ExamplesExtendedReconstr}. There we plot the full spectrum at positive (top row) and negative frequencies (middle row) as well as the available basis functions (bottom row) for three different values of $N_{\rm ext}=28,50$ and $100$ (left, center and right column). While we find that in accordance with Fig.\ref{Fig:ExtendedreconstrParameters} the lowest lying positive frequency peak is increasingly well captured, the higher omega region shows a marked increase in variation. To understand which of these spectral features are actually important to us, we need to remember the role of the prior function. The result of the MEM reconstruction depends both on the supplied data and the choice of $m(\omega)$. As part of the spectrum is fixed by the former, part of it by the latter, we need to redo the MEM with different functional forms for the prior and observe which region stays invariant, subsequently being identified as constrained by the data.
\begin{figure*}[!t]
\hspace{-1.5cm}
\centering
\includegraphics[scale=0.25,angle=-90] {JCP_Extended_Search_Time.pdf}
\caption{Comparison of running time for the evaluation of the functional ${\cal Q}$ (circle) and the overall running time of the program (triangle) relative to the values at $N_{ext}=N_{\tau}=12$. As expected the individual function evaluation time grows linearly with $N_{\rm ext}$ as only a linear increase of additions contributes to Eq.\eqref{Eq:MeParam}. The overall running time also shows a slowing down for larger values of $N_{\rm ext}$, however the behavior for small numbers of basis functions does not exhibit a clear trend. A possible explanation is that for small $N_{\rm ext}$ the search space is too limited to approach the vicinity of the correct extremum, hence the minimizer will use a lot of time along the boundary of the restricted search space before settling into a local minimum.}
\label{Fig:SearchTime}
\end{figure*}
We have seen that by increasing the number of basis functions the quality of the reconstructed MEM spectrum can be significantly improved. The price to pay is an associated increase in computational cost. The most direct consequence of a larger number of basis vectors is that the evaluation time of the function ${\cal Q}$ increases linearly with $N_{\rm ext}$ as expected from Eq.\eqref{Eq:MeParam} and confirmed by explicit timing in Fig.\ref{Fig:SearchTime} (circles). The overall running time of the program increases monotonously (triangles), once $N_{\rm ext}>40$ but for smaller values the required time varies strongly. The reason is that the minimizer in the case of a severely restricted set of basis functions will only be able to move into the direction of the global minimum until it reaches the boundary of the search space, where it remains for a long time before settling into a local minimum.
\section{Conclusion}
The Maximum Entropy method offers a solution to the question of how to bring meaning to the ill-defined problem of inverting Eq.\eqref{Eq:ConvDiscr}, i.e. to infer the $N_\omega$ values $\rho_l$ from a noisy and finite data-set $D_i$ of size $N_\tau$. Instead of maximizing only the likelihood probability with a test spectral function $\rho_l$, one regularizes the process by including as prior probability the Shannon-Jaynes entropy. The function $\rho^{\rm MEM}_l$ that represents the extremum of Eq.\eqref{MEM:Optimize} is hence the most probable answer in the Bayesian sense.
Since in Bryan's approach the selection of the SVD basis functions does not depend on the choice of $\omega_{\rm min}$ and their number is fixed by the supplied number of data-points, we argue that his search space does not in general contain the correct global extremum of the functional ${\cal Q}(\rho,D,m)$. Numerical evidence was presented to support this conclusion. We thus propose to systematically expand the search space to $N_\tau<N_{\rm ext} <N_\omega$ dimensions until the correct global extremum of the functional has been found.
Introducing a large number of basis functions inevitably leads to the appearance of ``wiggly'' structures in the reconstructed spectral function $\rho^{\rm MEM}(\omega)$. If they are not constrained by the data, such artifacts can be identified through a variation of the prior function. In turn, the features of $\rho^{\rm MEM}(\omega)$ that are reliably encoded in the data do not suffer from the changes in $m_l$.
\subsection*{Acknowledgements}
The author would like to thank T. Hatsuda, O.Kaczmarek, J.-I. Skullerud and S. Sasaki for the many valuable discussions and comments. A.R. acknowledges support from the BMBF project \textit{Heavy Quarks as a Bridge between
Heavy Ion Collisions and QCD}, funding from the Sofja Kovalevskaja program of the Alexander von Humboldt foundation and the EU Integrated Infrastructure Initiative \textit{Hadron Physics 2} as well as partial support by the Swiss National Science Foundation
(SNF) under grant 200021-140234.
If $H$ is a subgroup of a group $G$, $\pi_1$ an irreducible representation
of $G$, one is often interested in decomposing the representation $\pi_1$ when restricted to $H$,
called the branching laws. In this paper, we will be dealing mostly with infinite dimensional representations of a group $G$
which when restricted to $H$ are usually not completely reducible and there is often no obvious meaning to
``decomposing the representation restricted to $H$'', or a meaning has to be assigned in some precise way,
such as the Plancherel decomposition for unitary representations of $G$ restricted to $H$.
Unless otherwise mentioned, we will say that a representation $\pi_2$ of $H$ appears in a representation $\pi_1$ of $G$ if
\[{\rm Hom}_H[\pi_1,\pi_2] \not = 0.\]
The local GGP conjectures (which are all theorems now!) are about such branching laws for certain pairs of classical groups
$(G,H)$, which in this paper we will often take to be $({\rm GL}_{n+1}(F), {\rm GL}_n(F))$, or $({\rm SO}_{n+1}(F), {\rm SO}_n(F))$, where $F$
is a local field which will be non-archimedean unless otherwise mentioned.
For an irreducible admissible representation $\pi_1$ of ${\rm SO}_{n+1}(F)$, and $\pi_2$ of ${\rm SO}_n(F)$, the question of interest for GGP is the understanding of the Hom spaces,
\begin{eqnarray*}
{\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2] & \cong & {\rm Hom}_{{\rm SO}_n(F)}[\pi_1 \otimes \pi_2^\vee, \mathbb{C} ] \\
&\cong& {\rm Hom}_{{\rm SO}_{n+1}(F) \times {\rm SO}_n(F)}[\mathcal{S}(X), \pi_1^\vee \otimes \pi_2],\end{eqnarray*}
where
$X= {\rm SO}_n(F)\backslash [{\rm SO}_n(F) \times {\rm SO}_{n+1}(F)],$ and $\mathcal{S}(X)$ denotes the space of compactly supported
smooth functions on $X$.
The first important result about branching laws considered by GGP
is the multiplicity one property:
\[m(\pi_1,\pi_2):=\dim {\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2] \leq 1.\]
This is due to A. Aizenbud, D. Gourevitch, S. Rallis and G. Schiffmann in \cite{AGRS} in the non-archimedean case,
and B. Sun and C. Zhu in \cite{Sun-Zhu} in the archimedean case.
It may be mentioned that before
the full multiplicity one theorem was proved, even finite dimensionality of the multiplicity spaces was not
known, a question which was later answered in greater generality in the work of Y. Sakellaridis and A. Venkatesh
in \cite{Sak-Ven}. For infinite dimensional representations, which is what we are mostly dealing with,
there is also the possibility that $m(\pi_1,\pi_2)$ could be identically 0 for a particular $\pi_1$!
With the multiplicity one theorems proved,
one then goes on to prove a more precise
description of the set of irreducible admissible representations $\pi_1$ of ${\rm SO}_{n+1}(F)$ and $\pi_2$ of ${\rm SO}_n(F)$
with
\[{\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2] \not = 0.\]
Precise theorems about ${\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2]$ have become available in a series of papers due to
Waldspurger and Moeglin-Waldspurger, cf. \cite{Wa}, \cite{Wa1}, \cite{Wa2}, \cite{Mo-Wa} for orthogonal groups. These
were followed by a series of papers by
Beuzart-Plessis for unitary groups, cf. \cite{Ra1}, \cite{Ra2}, \cite{Ra3}.
Given the interest in the space
\[{\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2] \cong {\rm Hom}_{{\rm SO}_{n+1}(F) \times {\rm SO}_n(F)}[\mathcal{S}(X), \pi_1^\vee\otimes \pi_2],\]
it is natural
to consider the related spaces
\[{\rm Ext}^i_{{\rm SO}_n(F)}[\pi_1,\pi_2] \cong {\rm Ext}^i_{{\rm SO}_{n+1}(F) \times {\rm SO}_n(F)}[\mathcal{S}(X), \pi_1^\vee \otimes \pi_2],\] and in fact
homological algebra methods suggest that the simplest answers are not for
these individual spaces, but for the
alternating sum of their dimensions:
$${\rm EP}[\pi_1,\pi_2] = \sum_{i=0}^{\infty}(-1)^i\dim {\rm Ext}^i_{{\rm SO}_n(F)}[\pi_1,\pi_2];$$
these hopefully more manageable
objects - certainly more flexible - when coupled with vanishing of higher ${\rm Ext}$'s (when available)
may give theorems about
$${\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2].$$
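To spell out the simplest instance of this mechanism: if the higher extension groups vanish, i.e. ${\rm Ext}^i_{{\rm SO}_n(F)}[\pi_1,\pi_2]=0$ for all $i \geq 1$, then directly from the definition,
$${\rm EP}[\pi_1,\pi_2]=\dim {\rm Hom}_{{\rm SO}_n(F)}[\pi_1,\pi_2],$$
so a closed formula for the Euler-Poincar\'e characteristic immediately yields the dimension of the ${\rm Hom}$ space in such situations.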
We hasten to add that before we can define ${\rm EP}[\pi_1,\pi_2]$,
${\rm Ext}^i_{{\rm SO}_n(F)}[\pi_1,\pi_2]$
needs to be proved to be finite dimensional
for $\pi_1$ and $\pi_2$ finite length admissible representations
of ${\rm SO}_{n+1}(F)$ and ${\rm SO}_n(F)$ respectively, and also
proved to be 0 for $i$ large.
Vanishing of
\[{\rm Ext}^i_{{\rm SO}_n(F)}[\pi_1,\pi_2]\]
for large $i$ is a well-known generality:
for reductive $p$-adic groups $G$ considered here, it is
known that
\[{\rm Ext}^i_G[\pi,\pi'] = 0 \]
for any two smooth representations $\pi$ and $\pi'$ of $G$ when $i$ is greater than the $F$-split rank of $G$.
This is a standard application of the projective resolution of the trivial representation
$\mathbb{C}$ of $G$ provided by the (Bruhat-Tits) building associated to $G$.
For the proof of the finite dimensionality
of ${\rm Ext}^i_G[\pi_1,\pi_2]$
we note
that unlike the Hom spaces, ${\rm Hom}_{G}[\pi_1,\pi_2]$, where we will have no idea how to
prove finite dimensionality of ${\rm Hom}_{G}[\pi_1,\pi_2]$ if both $\pi_1$ and $\pi_2$ are cuspidal,
for ${\rm Ext}^i_G[\pi_1,\pi_2]$ exactly
this case can be handled a priori, for $i> 0$, as almost by the very definition of cuspidal
representations, they are both projective and injective objects in the category of smooth representations
(and projective objects remain projective on restriction to a closed subgroup).
The finite dimensionality of ${\rm Ext}^i_{{\rm SO}(n)}[\pi_1,\pi_2]$ when one of the representations
$\pi_1,\pi_2$ is a full principal series representation,
is achieved by an inductive argument
both on $n$ and on the split rank of the Levi from which the principal
series arises. The resulting analysis needs the notion of {\it Bessel models}, which is also
a restriction problem involving a subgroup which has both reductive and unipotent parts.
Recently, there is a very general
finiteness theorem for ${\rm Ext}_G^i[\pi_1,\pi_2]$ (for spherical varieties) due to
A. Aizenbud and E. Sayag in \cite{AS}. However, the approach via Bessel models which intervene when
analyzing principal series representations of ${\rm SO}_{n+1}(F)$ when
restricted to ${\rm SO}_n(F)$ has, as a bonus, explicit answers about
Euler-Poincar\'e characteristics (at least in some cases).
The definition and the theorem below are due to Aizenbud and Sayag.
\begin{definition}
(Locally finitely generated representations) Suppose $G$ is a $p$-adic group and $\pi$ is a smooth representation of $G$. Then
$\pi$ is said to be a {\rm locally finitely generated representation} of $G$ (or, also, just locally finite representation) if it satisfies one of the following equivalent conditions.
\begin{enumerate}
\item For each compact open subgroup $K$ of $G$, $\pi^K$ is a finitely generated module over the Hecke-algebra ${\mathcal H}(K\backslash G /K)$.
\item For each cuspidal datum $(M,\rho)$, i.e., $M$ a Levi subgroup of $G$, and $\rho$ a cuspidal representation of $M$, $\pi[M,\rho]$, the corresponding
component of $\pi$ in the Bernstein decomposition of the category of smooth representations of $G$, is a finitely generated $G$-module.
\end{enumerate}
\end{definition}
\begin{thm} \label{AS} (Aizenbud-Sayag)
For $\pi$ an irreducible admissible representation of ${\rm GL}_{n+1}(F)$, the restriction
of $\pi$ to ${\rm GL}_n(F)$ is locally finite (and true more generally for {\it spherical pairs} where
finite multiplicity is known).
\end{thm}
As a consequence of this theorem due to Aizenbud and Sayag, note that the restriction of an irreducible
representation $\pi$
of ${\rm GL}_{n+1}(F)$ to ${\rm GL}_n(F)$
is finitely generated in any Bernstein component of ${\rm GL}_n(F)$, hence $\pi|_{{\rm GL}_n(F)}$
has nonzero irreducible quotients by generalities (a statement which we will not know how to prove
for a general restriction problem as we said earlier).
The following corollary is an easy consequence of standard homological algebra where we also use the fact that
if a module is finitely generated over a noetherian ring $R$ (which need not be commutative but contains 1),
then it has a resolution by finitely generated projective
$R$-modules.
\begin{cor}
For $\pi_1$ an irreducible representation of ${\rm GL}_{n+1}(F)$, and
$\pi_2$ of $H={\rm GL}_n(F)$ (and true more generally for {\it spherical pairs} where
finite multiplicity is known),
\[ {\rm Ext}^i_H [\pi_1,\pi_2]\]
are finite dimensional, and zero beyond the split rank of $H$.
\end{cor}
We end the introduction by suggesting that although in this work we discuss exclusively
the restriction problems arising
in the GGP context, the notion of a locally finitely generated representation,
and its becoming a projective module on restriction to suitably chosen subgroups -- which is one of the properties
emphasized in this work -- should work
well in many other situations involving finite multiplicities, such as the Weil representation and its restriction to dual reductive pairs which we
briefly mention now. A criterion for local finite generation, and for projectivity, would be very welcome
in the geometric context, say when a ($p$-adic) group $G$ acts on a ($p$-adic) space $X$ with an equivariant sheaf
$\psi$, where one would like to understand these questions for the action of $G$ on the Schwartz space $\mathcal{S}(X,\psi)$.
In the context of the Howe correspondence for a dual reductive pair $(G_1,G_2)$ with $G_1$ ``smaller than or equal to'' $G_2$, and with $K_1$, $K_2$ compact open subgroups in $G_1$ and $G_2$,
it appears that for the Weil representation $\omega$ of the ambient group, $\omega^{K_1 \times K_2}$ is a finitely generated module over both ${\mathcal H}(K_1\backslash G_1/K_1)$ and ${\mathcal H}(K_2\backslash G_2/K_2)$,
and is a projective module over ${\mathcal H}(K_1\backslash G_1/K_1)$, and that
one can use $\omega^{K_1 \times K_2}$ as a bimodule to construct
an embedding of the category of smooth representations of the smaller pair among the dual reductive pair to
the bigger pair. Investigations of this ``functorial approach'' to the Howe correspondence seem not to have been undertaken so far.
\vspace{5mm}
\section{Branching laws from ${\rm GL}_{n+1}(F)$ to ${\rm GL}_n(F)$}
Recall
the following basic result which is proved as a consequence of the Rankin-Selberg theory, cf. \cite{Pr2}.
\begin{thm} \label{duke93}Given
an irreducible generic representation $\pi_1$ of ${\rm GL}_{n+1}(F)$, and an
irreducible generic representation $\pi_2$ of ${\rm GL}_{n}(F)$,
\[{\rm Hom}_{{\rm GL}_n(F)}[\pi_1,\pi_2] \cong \mathbb{C}.\]
\end{thm}
The following theorem can be considered as the Euler-Poincar\'e version of the above theorem; it is much more flexible than the previous theorem, and proved more easily!
\begin{thm}\label{whittaker} Let
$\pi_1$ be an admissible representation of ${\rm GL}_{n+1}(F)$ of finite length, and
$\pi_2$ an admissible representation of ${\rm GL}_{n}(F)$ of finite length.
Then, ${\rm Ext}^i_{{\rm GL}_n(F)}[\pi_1,\pi_2]$ are finite dimensional vector spaces over $\mathbb{C}$, and
$${\rm EP}_{{\rm GL}_n(F)}[\pi_1,\pi_2]
= \dim {\rm Wh}(\pi_1) \cdot \dim {\rm Wh}(\pi_2),$$
where ${\rm Wh}(\pi_1)$, resp. ${\rm Wh}(\pi_2)$, denotes the space of Whittaker models for $\pi_1$, resp. $\pi_2$,
with respect to fixed non-degenerate characters on a maximal unipotent subgroup
in ${\rm GL}_{n+1}(F)$ and ${\rm GL}_n(F)$ respectively.
\end{thm}
Here is a curious corollary!
\begin{cor}
{ If
$\pi_1$ is an irreducible admissible representation of ${\rm GL}_{n+1}(F)$, and
$\pi_2$ an irreducible admissible representation of ${\rm GL}_{n}(F)$,
then the only values taken by ${\rm EP}_{{\rm GL}_n(F)}[\pi_1,\pi_2]$ are 0 and 1,
in particular it is $\geq 0$.}
\end{cor}
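Note that the corollary follows from Theorem \ref{whittaker} together with the uniqueness of Whittaker models: for an irreducible representation of ${\rm GL}_m(F)$, the space of Whittaker models has dimension at most 1, and has dimension exactly 1 precisely when the representation is generic. Hence
\[{\rm EP}_{{\rm GL}_n(F)}[\pi_1,\pi_2] = \dim {\rm Wh}(\pi_1)\cdot \dim {\rm Wh}(\pi_2) \in \{0,1\},\]
the value being 1 if and only if both $\pi_1$ and $\pi_2$ are generic.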
{\bf Proof of Theorem \ref{whittaker}}: The proof of the Theorem \ref{whittaker} is accomplished using some results of Bernstein and Zelevinsky, cf.
\S3.5 of \cite{BZ1}, regarding the
structure of representations of ${\rm GL}_{n+1}(F)$ restricted to the mirabolic subgroup.
Recall that $E_{n}$, the mirabolic subgroup of ${\rm GL}_{n+1}(F)$,
consists of matrices
in ${\rm GL}_{n+1}(F)$
whose last row is equal to $(0, 0,\cdots, 0, 1)$.
For a representation $\pi$ of ${\rm GL}_{n+1}(F)$, Bernstein-Zelevinsky define
\[ \pi^i = \text{ the {\it i}-th derivative of $\pi$}, \]
which is a representation of ${\rm GL}_{n+1- i}( F)$.
Of crucial importance is the fact that if $\pi$ is of finite length for ${\rm GL}_{n+1}(F)$,
then $\pi^i$ are representations of finite length of ${\rm GL}_{n+1-i}(F)$.
Bernstein-Zelevinsky prove that the restriction of an admissible representation $\pi$ of ${\rm GL}_{n+1}(F)$ to the mirabolic $E_n$ has a finite filtration whose successive quotients are described by the derivatives $\pi^i$ of $\pi$.
Using the Bernstein-Zelevinsky filtration, and a form of Frobenius reciprocity
for Ext groups, Theorem \ref{whittaker} eventually follows from the following easy lemma. We refer to \cite{Pr3} for more details.
\begin{lemma} \label{EPvanishing} Let $V$ and $W$ be any two finite length representations of ${\rm GL}_d(F)$.
If $d>0$, then $${\rm EP}[V,W] = 0.$$
If $d=0$, then of course
\[{\rm EP}[V,W] = \dim V \cdot \dim W.\]
\end{lemma}
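As a minimal illustration of the lemma in the smallest case $d=1$: for characters $\chi,\mu$ of ${\rm GL}_1(F)=F^\times$, one has ${\rm Hom}_{F^\times}[\chi,\mu]={\rm Ext}^1_{F^\times}[\chi,\mu]=0$ if $\chi \neq \mu$, while
\[{\rm Hom}_{F^\times}[\chi,\chi] \cong {\rm Ext}^1_{F^\times}[\chi,\chi] \cong \mathbb{C},\]
the ${\rm Ext}^1$ coming from the unramified direction $F^\times/O_F^\times \cong \mathbb{Z}$, and all higher ${\rm Ext}$'s vanish; in either case ${\rm EP}[\chi,\mu]=0$.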
The following result conjectured by the author some years ago, cf. \cite{Pr3},
and recently proved by Chan and Savin in \cite{CS2},
is at the root of why the simple and general result in Theorem \ref{whittaker} above
translates into a simple result about Hom spaces for generic representations
in Theorem \ref{duke93}.
\begin{thm} \label{vanishing} Let $\pi_1$ be an irreducible generic representation of ${\rm GL}_{n+1}(F)$, and $\pi_2$ an
irreducible generic representation of ${\rm GL}_{n}(F)$. Then,
\[{\rm Ext}^i_{{\rm GL}_n(F)}[\pi_1,\pi_2] = 0, \]
for all $i > 0$.
\end{thm}
On the other hand, Theorem \ref{whittaker} also has implications for non-vanishing of (higher) Ext groups in certain cases that we discuss now in the following remark.
\begin{remark} \label{rem1}
One knows, cf. \cite{Pr2}, that there are irreducible generic representations of ${\rm GL}_{3}(F)$ which have the trivial
representation of ${\rm GL}_2(F)$ as a quotient; similarly, there are
irreducible non-generic representations of ${\rm GL}_{3}(F)$ with irreducible generic
representations of ${\rm GL}_2(F)$ as a quotient. For such pairs $(\pi_1,\pi_2)$ of representations, it
follows from Theorem \ref{whittaker} on Euler-Poincar\'e characteristic that
\[{\rm EP}_{{\rm GL}_2(F)}[\pi_1, \pi_2]=0,\]
whereas
\[{\rm Hom}_{{\rm GL}_2(F)}[\pi_1, \pi_2] \not = 0.\]
Therefore, for such pairs $(\pi_1,\pi_2)$
of irreducible representations, we must have
\[{\rm Ext}^i_{{\rm GL}_2(F)}[\pi_1, \pi_2] \not =0,\]
for some $i>0$. The paper \cite{GGP2} studies more generally the branching problem ${\rm Hom}_{{\rm GL}_n(F)}[\pi_1,\pi_2]$
when one of the irreducible representations, $\pi_1$ of ${\rm GL}_{n+1}(F)$ or $\pi_2$ of ${\rm GL}_n(F)$, is not generic, and both are Speh modules on discrete series representations, i.e., belong to A-packets, thus leading to
non-vanishing of higher Ext groups.
\end{remark}
\section{Bessel subgroup} \label{bessel}
We will use Bessel subgroups and Bessel models without defining them, referring the reader to \cite{GGP},
except to recall that
these are defined for the classical groups ${\rm GL}(V), {\rm SO}(V), {\rm U}(V)$, through a subspace $W\subset V$,
with $V/W$ odd dimensional which in the case of ${\rm SO}(V)$ will be a
split quadratic space. In this paper we will use these subgroups only for ${\rm SO}(V)$.
The Bessel subgroup ${\rm Bes}(V,W)$ (shortened to ${\rm Bes}(W)$ if $V$ is understood)
is a subgroup of ${\rm SO}(V)$ of the form
${\rm SO}(W) \cdot U$ where $U$ is a unipotent subgroup of ${\rm SO}(V)$ which comes
with a character $\psi: U \rightarrow \mathbb{C}^\times$ normalized by ${\rm SO}(W)$.
The Bessel subgroup ${\rm Bes}(V,W) = {\rm SO}(W)$ if $ \dim (V/W)=1$. For a representation $\rho$ of ${\rm SO}(W)$,
we denote by $\rho \otimes \psi$ the corresponding representation of ${\rm Bes}(W) = {\rm SO}(W) \cdot U $.
The representation ${\rm ind}_{{\rm Bes}(W)}^{{\rm SO}(V)} (\rho \otimes \psi)$ of ${\rm SO}(V)$
will be called a Gelfand-Graev-Bessel representation, and plays a prominent role in analysing the
restriction problem from ${\rm SO}(V^+)$ to ${\rm SO}(V)$ for $V^+$ a quadratic space containing $V$ as a subspace
of codimension 1 such that $V^+/W$ is a split quadratic space of even dimension.
\begin{prop} \label{proj} If $\rho$ is a finite length representation of ${\rm SO}(W)$,
then the Gelfand-Graev-Bessel representation, \[ {\rm ind}_{{\rm Bes}(W)}^{{\rm SO}(V)} (\rho \otimes \psi),\]
is a
locally finitely generated representation of ${\rm SO}(V)$ which is
projective if, further, $\rho$ is cuspidal.
\end{prop}
\begin{proof}
Projectivity of the Gelfand-Graev representation for any quasi-split group is due to Chan and Savin in the appendix to the
paper \cite{CS3}. Let us remind ourselves of a slightly delicate point.
By exactness of $U$-coinvariants, what is obvious is that ${\rm Ind}_{U}^{G} ( \psi)$ is an injective module for
$U$ any unipotent subgroup of a reductive group $G$.
That the dual of a projective module is an injective module is a generality, but this does not prove that
${\rm ind}_{U}^{G} ( \psi)$ is projective!
Instead of directly proving that ${\rm ind}_{U}^{G} ( \psi)$ is projective, Chan and Savin
prove that ${\rm Ext}_G^i[{\rm ind}_{U}^{G} ( \psi), \sigma] = 0$ for all $\sigma$ and all $i > 0$. By generalities, for algebras ${\mathcal H}$ containing a finitely generated $\mathbb{C}$-algebra $Z$ in its center over which
${\mathcal H}$ is finitely generated as a $Z$-module,
${\rm Ext}_{\mathcal H}^i[M,N] = 0$ for $i>0$ and for all $N$ if and only if this is true for
all finitely generated $N$, which in turn holds if and only if ${\rm Ext}_{\mathcal H}^i[M,N] = 0$ for $i>0$ and for all $N$ of finite length. (Clearly, it then suffices to consider irreducible $N$.)
Going from finitely generated to finite length is a generality that Chan and Savin discuss, and is also in Proposition 5.2 of
\cite{NP} according to which
\[{\rm Ext}_{\mathcal H}^i[M,N]
\otimes_Z \widehat{Z} \cong {\rm Ext}_{\mathcal H}^i[M,\widehat{N}]
\cong \lim_{\leftarrow}{\rm Ext}_{\mathcal H}^i[M,N/{\mathfrak m} ^nN],\]
where $\widehat{N} = \displaystyle{ \lim_{\leftarrow}}(N/{\mathfrak m}^nN)$. For all this, finite generation of
$M$ is essential for which Chan and Savin quote the paper \cite{Bu-He} which proves that the Gelfand-Graev
representations are locally finitely generated.
In our case, we can appeal to Theorem \ref{AS} of Aizenbud-Sayag to prove that
the Gelfand-Graev-Bessel representation $ {\rm ind}_{{\rm Bes}(W)}^{{\rm SO}(V)} (\rho \otimes \psi)$ is locally
finitely generated, which we now elaborate upon; the rest of the argument of Chan-Savin in \cite{CS3} goes through verbatim.
Let $V^+= V + L$ where $L$ is a one dimensional quadratic space such that
$V^+ = X + W + Y$ for $X,Y$ isotropic, perpendicular to $W$. Consider the representation $\tau \times \rho$ of
${\rm SO}(V^+)$, a
parabolically induced representation of ${\rm SO}(V^+)$ from the parabolic with Levi subgroup ${\rm GL}(X) \times {\rm SO}(W)$ of the representation
$\tau \boxtimes \rho$ where $\tau$ is any cuspidal representation of ${\rm GL}(X)$. Then it follows from the analogue of
Bernstein-Zelevinsky filtration for the restriction of the representation $\tau \times \rho$ of
${\rm SO}(V^+)$ to ${\rm SO}(V)$
due to Moeglin-Waldspurger, cf. \cite{Mo-Wa},
that $ {\rm ind}_{{\rm Bes}(W)}^{{\rm SO}(V)} (\rho \otimes \psi)$
is a submodule of the representation $\tau \times \rho$ of
${\rm SO}(V^+)$ restricted to ${\rm SO}(V)$. Since the rings which govern a Bernstein block are Noetherian rings, submodules of
locally finitely generated representations are locally finitely generated, proving the proposition. \end{proof}
Note a particular case of this proposition.
\begin{cor}
If $W\subset V$ is a codimension one subspace of $V$, a quadratic space, and $\rho$
a finite length representation of ${\rm SO}(W)$, then ${\rm ind}_{{\rm SO}(W)}^{{\rm SO}(V)} (\rho)$ is a locally finitely
generated representation of ${\rm SO}(V)$
which is projective if $\rho$ is cuspidal
(and if $\dim(W)=2$, $W$ is not split). Similar assertions
hold for ${\rm GL}_n(F)$ and ${\rm U}_n(F)$.
\end{cor}
\section{What does the restriction really look like!}
So far, we have been discussing the question: which representations of ${\rm GL}_n(F)$ appear as a quotient of an irreducible representation
of ${\rm GL}_{n+1}(F)$. It is possible to have a more complete understanding of what a representation of ${\rm GL}_{n+1}(F)$ restricted to ${\rm GL}_n(F)$ looks like.
Vanishing of Ext groups in many but not in all cases suggests that
the restriction to ${\rm GL}_n(F)$ of
an irreducible admissible
(generic) representation $\pi$ of ${\rm GL}_{n+1}(F)$ is close to being a projective module without
being one in all the cases.
Since the category of smooth representations of ${\rm GL}_n(F)$ is decomposed
into blocks parametrized by the inertial equivalence classes of cuspidal datum $(M,\rho)$ in ${\rm GL}_n(F)$, one can
ask if the projection of $\pi$ to the particular block, call it $\pi[M,\rho]$, is a projective module in that block.
This appears
to be an important question to understand: given an irreducible representation $\pi$ of ${\rm GL}_{n+1}(F)$, for which
blocks $(M,\rho)$ in ${\rm GL}_n(F)$ is $\pi[M,\rho]$ a projective module?
The following proposition is a direct consequence of the Bernstein-Zelevinsky filtration which describes
the restriction of a representation $\pi$ of ${\rm GL}_{n+1}(F)$ to the mirabolic subgroup of ${\rm GL}_{n+1}(F)$
in terms of the derivatives $\pi^i$ of $\pi$ which are finite length smooth representations of ${\rm GL}_{n+1-i}(F)$.
Recall that the derivatives satisfy the Leibniz rule (in the Grothendieck group of representations
of ${\rm GL}_{n+1}(F)$):
\[ (\pi_1 \times \pi_2)^d = \sum_{i=0}^{d} \pi_1^{d-i} \times \pi_2 ^i,\]
and that for an irreducible cuspidal representation $\pi$ of ${\rm GL}_d(F)$, the only nonzero derivatives are $\pi^0=\pi$, and
$\pi^d = \mathbb{C}$.
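As a small illustration in the case of ${\rm GL}_2(F)$: a character $\chi$ of ${\rm GL}_1(F)$ has $\chi^0=\chi$ and $\chi^1=\mathbb{C}$, so by the Leibniz rule a principal series $\chi_1 \times \chi_2$ of ${\rm GL}_2(F)$ has
\[(\chi_1\times\chi_2)^0 = \chi_1\times\chi_2, \quad (\chi_1\times\chi_2)^1 = \chi_1 + \chi_2, \quad (\chi_1\times\chi_2)^2 = \mathbb{C},\]
the middle equality being in the Grothendieck group of representations of ${\rm GL}_1(F)$, whereas an irreducible cuspidal representation $\pi$ of ${\rm GL}_2(F)$ has $\pi^0=\pi$, $\pi^1=0$ and $\pi^2=\mathbb{C}$; in both cases the top derivative is one dimensional, reflecting $\dim {\rm Wh} = 1$ for these representations.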
\begin{prop}\label{gln}
Let $\pi$ be a generic representation of ${\rm GL}_{n+1}(F)$. Let $(M,\rho)$ be a cuspidal datum in ${\rm GL}_n(F)$, thus
$M= {\rm GL}_{n_1}(F) \times \cdots \times {\rm GL}_{n_k}(F)$ with $n = n_1+\cdots + n_k$,
is a Levi subgroup inside ${\rm GL}_n(F)$, and $\rho = \rho_1 \boxtimes \cdots \boxtimes \rho_k$ is a tensor product of irreducible
cuspidal representations of ${\rm GL}_{n_i}(F)$. Assume that none of the cuspidal representations $\rho_i$ of ${\rm GL}_{n_i}(F)$
appear in the cuspidal support of $\pi$ even after an unramified twist.
Then $\pi|_{{\rm GL}_n(F)} [M,\rho]$ is a projective representation and is the
$[M,\rho]$ component of the Gelfand-Graev representation ${\rm ind}_N^{{\rm GL}_n(F)} \psi$.
\end{prop}
Here is the corresponding result for classical groups, asserted for simplicity of notation
only for ${\rm SO}(W) \subset {\rm SO}(V)$ where
$W \subset V$ is a codimension 1 nondegenerate subspace of a quadratic space $V$ with $\dim(V)=n+1$. This
result like Proposition \ref{gln} is also a
consequence of a Bernstein-Zelevinsky like filtration (due to Moeglin and Waldspurger in \cite{Mo-Wa})
on the restriction of a representation of ${\rm SO}(V)$ to ${\rm SO}(W)$ when the representation of ${\rm SO}(V)$ is
induced from a maximal parabolic
with Levi of the form ${\rm GL}_m(F) \times {\rm SO}(W')$ of a representation of the form $\mu_1 \boxtimes \mu_2$, and using the
Bernstein-Zelevinsky filtration for $\mu_1$ restricted to a mirabolic in ${\rm GL}_m(F)$. The proposition below uses
the representation ${\rm ind}_{{\rm Bes}(W_0)}^{{\rm SO}(W)} (\rho_0 \otimes \psi)$, for
$\rho_0$ a cuspidal representation of ${\rm SO}(W_0)$, which we called a Gelfand-Graev-Bessel representation
in section \ref{bessel}, and which is a projective representation by Proposition \ref{proj}.
Here, ${\rm Bes}(W_0)$ is the Bessel subgroup inside ${\rm SO}(W)$, introduced in section \ref{bessel},
where $W_0\subset W$ is a
nondegenerate subspace of the quadratic space $W$ with $W_0^\perp$ an odd dimensional split quadratic space.
\begin{prop} \label{son}
Let $\pi$ be an admissible representation of ${\rm SO}(V)$ which is the full induction of a cuspidal representation
of a Levi subgroup of ${\rm SO}(V)$. Let $(M,\rho)$ be a cuspidal datum in ${\rm SO}(W)$, thus,
$M= {\rm GL}_{n_1}(F) \times \cdots \times {\rm GL}_{n_k}(F) \times {\rm SO}(W_0)$ with $n = 2n_1+\cdots + 2n_k+ \dim(W_0)$,
is a Levi subgroup inside ${\rm SO}(W)$, and $\rho = \rho_1 \boxtimes \cdots \boxtimes \rho_k \boxtimes \rho_0$
is a tensor product of irreducible cuspidal
representations of ${\rm GL}_{n_i}(F)$, and $\rho_0$ is an irreducible cuspidal representation of ${\rm SO}(W_0)$.
Assume that none of the cuspidal representations $\rho_i$ of ${\rm GL}_{n_i}(F)$
appear in the cuspidal support of $\pi$ even after an unramified twist (no condition on $\rho_0$).
Then $\pi|_{{\rm SO}(W)} [M,\rho]$ is a projective representation and is the
$[M,\rho]$ component of the Gelfand-Graev-Bessel representation ${\rm ind}_{{\rm Bes}(W_0)}^{{\rm SO}(W)} (\rho_0 \otimes \psi)$.
\end{prop}
\begin{remark}
We assumed $\pi$ in Proposition \ref{gln} to be generic as otherwise the assertion in Proposition \ref{gln} would become
empty, i.e., $\pi|_{{\rm GL}_n(F)} [M,\rho]$ would be zero if $\pi$ were nongeneric. However, in Proposition \ref{son}
we do not assume that $\pi$ is generic. Neither of the two propositions requires $\pi$ to be irreducible, and
in Proposition \ref{son} we do not require the inducing data for $\pi$ to be irreducible.
\end{remark}
The following theorem is due to Chan and Savin, cf. \cite{CS1}, \cite{CS2}, especially section 5 of
\cite{CS2}.
\begin{thm} \label{CS}
\begin{enumerate}
\item Restriction of an irreducible admissible representation
$\pi$ of ${\rm GL}_{n+1}(F)$ to ${\rm GL}_n(F)$ is projective in a
particular Bernstein block of smooth representations of ${\rm GL}_n(F)$
if and only if $\pi$ itself is generic and
all irreducible ${\rm GL}_n(F)$-quotients of $\pi$,
in that particular Bernstein block of smooth representations of ${\rm GL}_n(F)$,
are generic.
\item If $\pi_1,\pi_2$ are any two irreducible representations
of ${\rm GL}_{n+1}(F)$ whose restrictions to ${\rm GL}_n(F)$ are projective in a
particular Bernstein block of smooth representations of ${\rm GL}_n(F)$, then
$\pi_1$ and $\pi_2$ are isomorphic in that particular Bernstein block of smooth representations of ${\rm GL}_n(F)$.
\item For $\pi$ an irreducible generic representation of ${\rm GL}_{n+1}(F)$ whose restriction to ${\rm GL}_n(F)$
is projective in the Iwahori block,
\[ \pi|_{{\rm GL}_n(F)}[I,1] \cong {\rm ind}_{G(O_F)}^{G(F)} ({\rm St}).\]
\item More generally, by theorems of Bushnell and Kutzko, cf. \cite{B-K}, \cite{B-K2},
a general block for ${\rm GL}_n(F)$ arising out of a cuspidal datum
$(M,\rho)$ is equivalent to the Iwahori block of a product of general linear groups. Therefore,
there is an analogue of the representation ${\rm ind}_{G(O_F)}^{G(F)} ({\rm St})$ for each block in
${\rm GL}_n(F)$, which the restriction problem from ${\rm GL}_{n+1}(F)$ to ${\rm GL}_n(F)$ picks up when the
restriction is projective in that block.
\end{enumerate}
\end{thm}
\begin{remark}
After this theorem of Chan and Savin, the unfinished tasks are:
\begin{enumerate}
\item Given an irreducible generic representation $\pi$ of ${\rm GL}_{n+1}(F)$, can we classify exactly the
Bernstein blocks of ${\rm GL}_n(F)$ in which $\pi|_{{\rm GL}_n(F)}$ is not projective?
\item More generally, if $\pi$ is an irreducible representation of ${\rm GL}_{n+1}(F)$ which may or may not be generic,
can one understand projective dimension (i.e., the minimal length of a projective resolution) of
$\pi|_{{\rm GL}_n(F)}$ in a particular Bernstein block?
\end{enumerate}
As is often the case in representation theory of $p$-adic groups,
dealing with discrete series which are non-cuspidal is often the most difficult part. In Proposition
\ref{restriction} in the next section,
we prove that for $\pi$ a generic representation of ${\rm GL}_{n+1}(F)$, $\pi|_{{\rm GL}_n(F)}$ is a projective
representation in those Bernstein blocks of ${\rm GL}_n(F)$ which contain no non-cuspidal discrete series representations.
Both Proposition
\ref{restriction} and Proposition
\ref{gln} can be considered as dealing with the simplest blocks where there is a nice answer.
\end{remark}
The following theorem of Chan, cf. \cite{Chan}, gives a complete classification of the irreducible
representations of ${\rm GL}_{n+1}(F)$ which when restricted to ${\rm GL}_n(F)$ are projective modules, thus remain projective
in {\it all} blocks.
\begin{thm} \label{thmchan}
Let $\pi$ be an irreducible representation of ${\rm GL}_{n+1}(F)$. Then $\pi$ restricted to ${\rm GL}_n(F)$ is a projective representation
if and only if
\begin{enumerate}
\item Either $\pi$ is essentially square integrable, or,
\item $(n+1) =2d$, and $\pi = \pi_1 \times \pi_2$ where the $\pi_i$ are cuspidal representations of ${\rm GL}_{d}(F)$.
\end{enumerate}
\end{thm}
\begin{remark} \label{rem3}
The non-tempered GGP conjecture, formulated in \cite{GGP2} and proved for ${\rm GL}_n(F)$ in \cite{Chan2}, \cite{Gur}, describes irreducible representations $\pi$ of ${\rm GL}_{n+1}(F)$ and $\pi'$ of ${\rm GL}_n(F)$ (which are Speh representations on discrete series representations, i.e., have $A$-parameters) with
\[{\rm Hom}_{{\rm GL}_n(F)}[\pi,\pi'] \not = 0.\]
Thus from the list in Theorem \ref{thmchan}, we see that for the tempered representation $\pi= {\rm St}_d \times \chi {\rm St}_d$
of ${\rm GL}_{2d}(F)$ where $\chi$ is a unitary character of $F^\times$, and ${\rm St}_d$ is the Steinberg representation
of ${\rm GL}_d(F)$, although $\pi$ has no non-generic quotient with an $A$-parameter, it does have other
non-generic quotients.
\end{remark}
\begin{remark}
By theorems of Chan, just like cuspidal representations, discrete series representations of ${\rm GL}_{n+1}(F)$
are always projective when restricted to ${\rm GL}_n(F)$. This seems to be a general feature of all
the GGP pairs, for which there is no proof yet.
\end{remark}
\section{A theorem of Roche and some consequences}
In the last section we discussed some situations where restriction of irreducible admissible
representations of ${\rm GL}_{n+1}(F)$ (resp., other classical groups) to ${\rm GL}_n(F)$ (resp., subgroups of other
classical groups) give rise to projective modules and which are for ${\rm GL}_{n+1}(F)$, by theorems of Chan and Savin,
very explicit compactly induced representations. In this section, we use a theorem of Alan Roche to exhibit one more such situation for both
${\rm GL}_{n+1}(F)$ as well as for classical groups where the restriction gives rise to projective modules. In this case,
however, the projective modules are {\it universal principal series} representations.
\begin{thm} \label{Roche} (Alan Roche) Let $G$ be a reductive $p$-adic group, $(M,\rho)$, a cuspidal datum. Let $M^0$ be the subgroup of $M$ generated by
compact elements in $M$. Assume
that no nontrivial element of $N_G(M)/M$ preserves $\rho$ up to an unramified twist. Then
the induced representation,
\[ {\rm Ind}_P^G(\rho),\]
is irreducible. Furthermore, the parabolic induction from $P$ (with Levi $M$) to $G$ gives an equivalence of categories
\[ { \mathcal{R}}
[M]{[\rho]} \rightarrow {\mathcal{R}}[G]{[M, \rho]}.\]
In particular, since the category of representations ${\mathcal{R}}[M]{[\rho]}$ in the Bernstein component of $M$ corresponding to the cuspidal representation $\rho$ of $M$ is the same as the category of modules over an Azumaya algebra with center
the ring of functions on the complex torus consisting of the unramified twists of $\rho$, the same is true
of the Bernstein component ${\mathcal{R}}[G]{[M, \rho]}$ of $G$.
\end{thm}
\begin{remark}
By the Geometric Lemma (which calculates Jacquet modules of full principal series representations), the assertion
that no nontrivial element of $N_G(M)/M$ preserves $\rho$ up to an unramified twist is equivalent to saying that
the Jacquet module with respect to the parabolic $P$ of the principal series representation
${\rm Ind}_P^G(\rho)$ contains $\rho$ with multiplicity 1, and
no unramified twist of it distinct from itself.
\end{remark}
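As a minimal illustration of this hypothesis: take $G={\rm GL}_2(F)$, $M=T$ the diagonal torus, and $\rho = \chi_1 \boxtimes \chi_2$ a character of $T$. Here $N_G(M)/M \cong \mathbb{Z}/2$ acts by interchanging $\chi_1$ and $\chi_2$, so the hypothesis of Theorem \ref{Roche} holds precisely when $\chi_1/\chi_2$ is ramified; in that case every unramified twist $\chi_1\nu_1 \times \chi_2\nu_2$ of the principal series is irreducible (since $(\chi_1\nu_1)/(\chi_2\nu_2)$ can never equal $|\cdot|_F^{\pm 1}$), and by the theorem the Bernstein component ${\mathcal{R}}[{\rm GL}_2(F)]{[T, \rho]}$ is equivalent to the category of modules over $\mathbb{C}[X_1, X_1^{-1}, X_2, X_2^{-1}]$.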
\begin{prop} \label{restriction} Let ${\rm G}_{n+1}$ be any of the classical groups ${\rm GL}_{n+1}(F),$ ${\rm SO}_{n+1}(F),$
${\rm U}_{n+1}(F)$. Let $\pi_1$ be an irreducible representation of ${\rm G}_{n+1}(F)$ belonging to a generic $L$-packet of ${\rm G}_{n+1}(F)$,
and let $(M,\rho)$ be a cuspidal
datum for ${\rm G}_n(F)$. Assume that no nontrivial element of $N_{{\rm G}_n(F)}(M)/M$ preserves $\rho$ up to an unramified twist.
Let $\rho^0$ be an irreducible representation of $M^0$, the subgroup of $M$ generated by compact
elements of $M$, with $\rho^0 \subset \rho|_{M^0}$.
Then,
the $(M,\rho)$ Bernstein component of $\pi_1$ restricted to ${\rm G}_n(F)$ is the
``universal principal series''
representation, i.e.,
\[ \pi_1|_{{\rm G}_n(F)} [M,\rho] \cong {\rm ind}_{P^0}^{{\rm G}_n}(\rho^0) \cong {\rm Ind}_{P}^{{\rm G}_n}{\rm ind}_{M^0}^M(\rho^0) ,\]
where $P^0=M^0N$. In particular, the $(M,\rho)$ Bernstein component of $\pi_1|_{{\rm G}_n(F)}$ is a projective representation which
is independent of $\pi_1$.
\end{prop}
\begin{proof}
Since $M$ is a Levi subgroup of ${\rm G}_n(F)$, $M$ is a product of the groups ${\rm GL}_{n_i}(F)$ with
${\rm G}_m(F)$ for some $m \geq 0$ (which are semisimple if $m>2$),
it is easy to see that any irreducible representation of $M$, when restricted to $M^0$,
is a finite direct sum of irreducible representations of $M^0$ with multiplicity 1. By the second adjointness combined with a form of Frobenius reciprocity for open subgroups,
for $\pi$ any irreducible representation of ${\rm G}_n(F)$,
\[ {\rm Hom}_{{\rm G}_n(F)}[ {\rm Ind}_{P}^{{\rm G}_n(F)} {\rm ind}_{M^0}^M(\rho^0), \pi] \cong {\rm Hom}_M [{\rm ind}_{M^0}^M(\rho^0), \pi_{\bar{N}} ] \cong
{\rm Hom}_{M^0} [\rho^0, \pi_{\bar{N}}].\]
Thus any irreducible representation of ${\rm G}_n(F)$ which appears as a quotient of ${\rm Ind}_{P^0}^{{\rm G}_n(F)}(\rho^0)$ appears with multiplicity at most one, and appears with multiplicity one if and only if it belongs to the Bernstein block $[M,\rho]$, and is
a full principal series. Further, because of this multiplicity 1, the Azumaya algebra appearing in Theorem \ref{Roche}, is the ring $R$ of Laurent polynomials
$R=\mathbb{C}[X_1,X_1^{-1}, \cdots , X_d, X_d^{-1}]$
which is the ring of regular functions on a $d$-dimensional complex torus.
Now we analyse $\pi_1|_{{\rm G}_n(F)}[M,\rho]$ considered as a module, call it ${\mathcal M}$, over the ring
$R=\mathbb{C}[X_1,X_1^{-1}, \cdots , X_d, X_d^{-1}]$. In the category of
$R$-modules, the irreducible (i.e., simple) modules are of the form $R/{\mathfrak m}$ where ${\mathfrak m}$ are maximal
ideals in $R$, and therefore irreducible quotients of $\pi_1|_{{\rm G}_n(F)}[M,\rho]$ correspond to homomorphisms of $R$-modules
${\mathcal M} \rightarrow R/{\mathfrak m} =\mathbb{C}$, equivalently, to homomorphisms of $R$-modules
${\mathcal M}/{\mathfrak m}{\mathcal M} \rightarrow \mathbb{C}$.
As every irreducible representation in this Bernstein block is a full principal series,
in particular generic, it follows from Theorem \ref{duke93} in the case of ${\rm GL}_{n+1}(F)$, and from the GGP conjectures (now theorems!) in the other cases, that each of
these irreducible principal series representations arises as a quotient of $\pi_1$ with multiplicity 1 (multiplicity identically zero is a possibility too for ${\rm SO}_{n+1}(F),{\rm U}_{n+1}(F)$; the important thing is that the multiplicity
is constant among all irreducible representations in this block).
This analysis can then be summarized to say that for the module ${\mathcal M}$ over $R=\mathbb{C}[X_1,X_1^{-1}, \cdots , X_d, X_d^{-1}]$ corresponding to $\pi_1|_{{\rm G}_n(F)}[M,\rho]$, ${\mathcal M}/{\mathfrak m}{\mathcal M} \cong \mathbb{C}$ for all maximal ideals ${\mathfrak m}$ in $R$.
By
Theorem \ref{AS} of Aizenbud-Sayag we know the finite generation of ${\mathcal M}$ over $R=\mathbb{C}[X_1,X_1^{-1}, \cdots , X_d, X_d^{-1}]$. Thus all the assumptions in the Lemma \ref{commutative} below are satisfied, and hence
${\mathcal M}$ is a projective module of rank 1 over
the ring $R$ of Laurent polynomials
$R=\mathbb{C}[X_1,X_1^{-1}, \cdots , X_d, X_d^{-1}]$. This is also the case for the universal principal series
representation ${\rm ind}_{P^0}^{{\rm G}_n}(\rho^0)$. Since
any rank 1 projective module over a Laurent polynomial ring is free,
this concludes the proof of the proposition.
\end{proof}
\begin{lemma} \label{commutative} Let $R$ be a finitely generated $k$-algebra where $k$ is a field. Suppose that $R$ has no nilpotent
elements. Let ${\mathcal M}$ be a finitely generated module over $R$ such that for each maximal ideal
${\mathfrak m}$ of $R$, ${\mathcal M}/{\mathfrak m}{\mathcal M}$ is free of rank 1 over $R/{\mathfrak m}$, then ${\mathcal M}$ is a
projective module of rank 1
over $R$. \end{lemma}
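A sketch of why the lemma holds: since $R$ is a finitely generated $k$-algebra, it is a Jacobson ring, so the closed points are dense in every closed subset of ${\rm Spec}(R)$; by upper semicontinuity of the fiber rank of the finitely generated module ${\mathcal M}$, the hypothesis at the maximal ideals forces the fiber rank to be identically 1 on ${\rm Spec}(R)$. A finitely generated module of constant fiber rank over a reduced Noetherian ring is locally free, hence projective, of that rank.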
\begin{remark}
Proposition \ref{restriction} applies to all Bernstein blocks of ${\rm GL}_n(F)$ which do not contain a
non-cuspidal discrete series representation of ${\rm GL}_n(F)$, in particular it applies to
Bernstein blocks of ${\rm GL}_n(F)$ which contain a
cuspidal representation of ${\rm GL}_n(F)$!
\end{remark}
\section{Euler-Poincar\'e characteristic for classical groups}
In the next few sections we will discuss Euler-Poincar\'e characteristic for branching laws for
classical groups, restricting ourselves to the case of $G={\rm SO}_{n+1}(F)$ and $H={\rm SO}_n(F)$.
In the following theorem, so as to simplify notation, if $\lambda_1$ is a representation of ${\rm SO}(V_1)$
and $\lambda_2$ is a representation of ${\rm SO}(V_2)$, then ${\rm EP}_{{\rm Bes}}[\lambda_1, \lambda_2]$ will have its
usual meaning if $V_2 \subset V_1$ with $V_1/V_2$ an odd dimensional split quadratic space, whereas
${\rm EP}_{{\rm Bes}}[\lambda_1, \lambda_2]$
will stand for
${\rm EP}_{{\rm Bes}}[\lambda_2, \lambda_1]$ if $V_1 \subset V_2$ with $V_2/V_1$ odd dimensional split quadratic space.
The notation ${\rm EP}_{{\rm Bes}}[\lambda_1, \lambda_2]$ will presume that we are in one of the two cases. This notation
has the utility of being able to add hyperbolic spaces of arbitrary dimension to $V_1$ or $V_2$, and by Theorem
15.1 of \cite{GGP},
\[ {\rm EP}_{{\rm Bes}}[\lambda_1, \lambda_2] = {\rm EP}_{{\rm Bes}}[\tau_1 \times \lambda_1, \tau_2 \times \lambda_2],\]
where $\tau_1,\tau_2$ are any cuspidal representations on general linear groups of arbitrary dimensions.
The following theorem is the analogue of Theorem \ref{whittaker}
which was for representations of ${\rm GL}_{n+1}(F)$ and ${\rm GL}_n(F)$, now for classical groups, but as in the rest of the paper, we assert it only for special orthogonal groups.
\begin{thm} \label{EP} Let $V$ be a quadratic space over $F$, $V'$ a nondegenerate subspace of
codimension 1 inside $V$.
Let $\sigma = \pi_0 \times \sigma_0$
be a representation of ${\rm SO}(V)$
where $\pi_0$ is a finite length representation of ${\rm GL}_{n_0}(F)$, and
$\sigma_0$ is a finite length representation of ${\rm SO}(V_0)$ where $V_0\subset V$
is a quadratic subspace of $V$ such that the quadratic space $V/V_0$ is a
hyperbolic space of dimension $2n_0$. Similarly, let
$\sigma' = \pi'_0 \times \sigma'_0$
be a representation of ${\rm SO}(V')$. Then:
\[{\rm EP}_{{\rm SO}(V')}[\sigma,\sigma'] = \dim {\rm Wh}(\pi_0) \cdot \dim {\rm Wh}(\pi_0') \cdot {\rm EP}_{{\rm Bes}}[\sigma_0, \sigma'_0].\]
\end{thm}
\begin{proof}
The proof of this theorem is very analogous to the proof of Theorem \ref{whittaker} for representations $\pi_1,\pi_2$ of ${\rm GL}_{n+1}(F),{\rm GL}_n(F)$,
replacing the Bernstein-Zelevinsky exact sequence describing the restriction of the representation $\pi_1$
to the mirabolic subgroup in ${\rm GL}_{n+1}(F)$, by a similar exact sequence describing the restriction of the representation $\sigma$
to the subgroup ${\rm SO}(V')$ due to Moeglin-Waldspurger in \cite{Mo-Wa}. The essential part of the proof of Theorem \ref{whittaker} was Lemma \ref{EPvanishing}
about vanishing of ${\rm EP}[V,W]$ when $V,W$ are finite length representations of ${\rm GL}_m(F)$, $m \geq 1$. This continues
to be the case here. We give some details of the proof here.
According to \cite{Mo-Wa}, the restriction of $\sigma = \pi_0 \times \sigma_0$ to ${\rm SO}(V')$ has a filtration
with one sub-quotient equal to
\[ {\rm Ind}_{P'}^{{\rm SO}(V')} ( \pi_0 \times \sigma_0|_{{\rm SO}(V'_{0})}), \tag{1} \]
where $V'_{0} = V_0 \cap V'$ is a codimension one subspace of $V_0$, and $P'$ the parabolic in ${\rm SO}(V')$
with Levi ${\rm GL}_{n_0}(F) \times {\rm SO}(V'_{0}) $. (If there is no such parabolic in ${\rm SO}(V')$, then this term will not be there.)
The other subquotients of $\sigma|_{{\rm SO}(V')}$ are the principal series representations in ${\rm SO}(V')$ induced from
maximal parabolics with Levi whose ${\rm GL}$ part is of dimension $n_0-i$ (and we do not describe
the ${\rm SO}$ part of the Levi just calling it $V_i'$)
\[ \pi_0^i \times {\rm ind}_{{\rm Bes}(V_0)}^{{\rm SO}(V_i')} (\sigma_0 \otimes \psi) , \,\,\,\,\,\, n_0\geq i \geq 1 \tag{2} \]
Given the filtration on $\sigma|_{{\rm SO}(V')}$ with successive quotients as in (1) and (2),
and as ${\rm EP}[\sigma, \sigma']$ is additive in exact sequences,
one applies the second adjointness theorem of Bernstein together with Lemma \ref{EPvanishing}
about vanishing of ${\rm EP}[V,W]$ when $V,W$ are finite length representations of ${\rm GL}_m(F)$, $m \geq 1$, and the K\"unneth theorem, cf. \cite{Pr3}.
This implies that the only non-vanishing contribution to ${\rm EP}[\sigma, \sigma']$ will come from the
term in (2) corresponding to the highest derivative of $\pi_0$ giving rise to the representation
of ${\rm GL}(0) = \{1\}$ of dimension $\dim {\rm Wh}(\pi_0)$. This needs to be multiplied by
\[{\rm EP} [{\rm ind}_{{\rm Bes}(V_0)}^{{\rm SO}(V_{n_0}')} (\sigma_0 \otimes \psi), \sigma'] = {\rm EP}_{{\rm Bes}(V_0)}[\sigma', \sigma_0].\]
Thus, \[{\rm EP}_{{\rm SO}(V')}[\sigma,\sigma'] = \dim {\rm Wh}(\pi_0) \cdot {\rm EP}_{{\rm Bes}(V_0)}[\sigma',\sigma_0]. \tag{3}\]
Doing this once more, using now the representation $\sigma' = \pi'_0 \times \sigma'_0$, we get:
\[ {\rm EP}_{{\rm Bes}(V_0)}[\sigma',\sigma_0] = \dim {\rm Wh}(\pi'_0) \cdot {\rm EP}_{{\rm Bes}(V'_0)}[\sigma_0,\sigma'_0]. \tag{4}\]
By (3) and (4), we find:
\[{\rm EP}_{{\rm SO}(V')}[\sigma,\sigma'] = \dim {\rm Wh}(\pi_0) \cdot \dim {\rm Wh}(\pi'_0) \cdot
{\rm EP}_{{\rm Bes}}[\sigma_0,\sigma'_0], \tag{5}\]
completing the proof of Theorem \ref{EP}.
\end{proof}
\section{Euler-Poincar\'e characteristic for the group case: Kazhdan orthogonality}
The branching laws considered in this paper are for $H \hookrightarrow G \times H$,
where $H \subset G$, eventually interpreted as the $(G \times H)$ spherical variety
$\Delta(H) \backslash (G \times H)$,
in which we try to understand ${\rm Ext}^i_H(\pi_1,\pi_2)$
for an irreducible representation
$\pi_1 \boxtimes \pi_2$ of $G \times H$. A special case of this branching problem is for the ``group case''
where $H=G$, i.e., the case of
$G \hookrightarrow G \times G$, where we will be considering ${\rm Ext}^i_G(\pi_1,\pi_2)$ for $\pi_1, \pi_2$
are representations of the same group $G$. This could be considered as a precursor of the more general branching
for $H \hookrightarrow G \times H$, and has played an important role in the subject.
Explicit calculation of ${\rm Ext}_G^i(\pi_1,\pi_2)$ has been carried out in several cases for $G$, a reductive $p$-adic group, and $\pi_1,\pi_2$
irreducible representations of $G$. In particular, if both $\pi_1$ and $\pi_2$ are tempered representations of $G$,
there are general results in \cite{Op-Sol} using the formulation of $R$-groups, and there are some independent
specific calculations in \cite{Ad-Dp}.
Ext groups for certain non-tempered representations are considered in \cite{Or} as well as in \cite{Dat}.
In the archimedean case, $H^i({\mathfrak g}, K, \pi) = {\rm Ext}^i(\mathbb{C}, \pi)$ has been much studied, but not ${\rm Ext}^i(\pi_1,\pi_2)$ as far as I know.
The following theorem was conjectured by Kazhdan and was proved by Schneider-Stuhler, cf. \cite{Sch-Stu}, and by Bezrukavnikov. It is known only in characteristic zero.
\begin{thm}
Let $\pi$ and $\pi'$ be finite-length, smooth
representations of a
reductive
$p$-adic
group $G$.
Then \[{\rm EP}_G[\pi,\pi'] =
\int_{C_{ellip}} \Theta(c)\bar{\Theta}'(c)\, dc,\]
where $\Theta$ and $\Theta'$ are the characters of $\pi$ and $\pi'$,
and $dc$ is a natural measure on the set
$C_{ellip}$ of regular elliptic conjugacy classes in $G$, and is given by
\[ dc = |{W(G(F), T(F))}|^{-1} \cdot \| \det (1- Ad(\gamma))_{{\mathfrak g}/{\mathfrak g}_\gamma} \| \, dt,\]
where $dt$ is the normalized Haar measure on the elliptic torus $T= G_\gamma$ giving it measure 1.
\end{thm}
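A minimal worked example: take $G={\rm PGL}_2(F)$, $\pi = {\rm St}_2$ the Steinberg representation, and $\pi'=\mathbb{C}$ the trivial representation. Then ${\rm Hom}_G[{\rm St}_2,\mathbb{C}]=0$ whereas ${\rm Ext}^1_G[{\rm St}_2,\mathbb{C}]\cong \mathbb{C}$ (as follows, for instance, from the duality theorem of Schneider-Stuhler recalled below as Theorem \ref{SS}), and all higher ${\rm Ext}$'s vanish since the split rank of $G$ is 1; hence
\[{\rm EP}_G[{\rm St}_2,\mathbb{C}] = -1.\]
On the other side, $\Theta_{{\rm St}_2} + \Theta_{\mathbb{C}}$ is the character of a full principal series, which vanishes on regular elliptic elements, so $\Theta_{{\rm St}_2} = -1$ there, and the theorem in this case amounts to the assertion that $C_{ellip}$ has total measure 1 under $dc$.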
\section{An integral formula of Waldspurger}
In this section we review an integral formula of Waldspurger, cf. \cite{Wa}, \cite{Wa1},
which we then propose to be the
integral formula for the Euler-Poincar\'e pairing for
\[{\rm EP}_{{\rm Bes}(V,W)}[\sigma,\sigma' ]\]
for $\sigma$
any finite length
representation of $ {\rm SO}(V)$, and $\sigma'$ any finite length representation of ${\rm SO}(W)$,
where $V$ and $W$ are quadratic spaces over $F$ with
\[V = X + D + W + Y\]
with $W$ a quadratic subspace of $V$ of codimension $2k+1$,
$X$ and $Y$ totally isotropic subspaces of $V$ of dimension $k$, in duality with each other under the underlying bilinear form,
and $D$ an anisotropic line in $V$. Let $Z=X+Y$.
Let $\underline{\mathcal T}$ denote the set of elliptic tori $T$ in ${\rm SO}(W)$ such that there exist quadratic subspaces $W_T,W'_T$ of $W$ such that:
\begin{enumerate}
\item $W= W_T \oplus W'_T$, and $V=W_T \oplus W'_T \oplus D \oplus Z$.
\item $\dim (W_T)$ is even, and ${\rm SO}(W'_T)$ and
${\rm SO}(W'_T \oplus D \oplus Z)$ are quasi-split.
\item $T$ is a maximal (elliptic) torus in ${\rm SO}(W_T)$.
\end{enumerate}
Let
${\mathcal T}$ denote a set of representatives for the orbits of the action of ${\rm SO}(W)$ on $\underline{\mathcal T}$.
For our purposes we note the
most important elliptic torus $T= \langle e \rangle$
corresponding to $W_T= 0$.
For $\sigma$ an admissible representation of ${\rm SO}(V)$ of finite length, define a function
$c_\sigma(t)$ for regular elements of a torus $T$ belonging to $\underline{\mathcal T}$ by the germ
expansion of the character $\theta_\sigma(t)$ of $\sigma$ on the centralizer of $t$ in the Lie algebra
of ${\rm SO}(V)$, and picking out `the' leading term.
Similarly, for $\sigma'$ an admissible representation of ${\rm SO}(W)$ of finite length,
one defines a function
$c_{\sigma'}(t)$ for regular elements of a torus $T$ belonging to $\underline{\mathcal T}$ by the germ
expansion of the character $\theta_{\sigma'}(t)$ of $\sigma'$.
Define a function $\Delta_T$ on an elliptic torus $T$ belonging to
$\underline{\mathcal T}$ with $W= W_T \oplus W'_T$, by
\[\Delta(t) = |\det\big((1-t)|_{W_T}\big)|,\] and let $D^H$ denote the function
on $H(F) = {\rm SO}(W)$ defined by:
\[ D^H(t) = |\det(Ad(t) -1)_{h(F)/h_t(F)}|_F,\]
where $h(F)$ is the Lie algebra of $H$ and $h_t(F)$ is the Lie algebra of the centralizer of $t$ in $H$.
For a torus $T$ in $H$, define the Weyl group $W(H,T)$ by the usual normalizer divided by the centralizer:
\[W(H,T) = N_{H(F)}(T)/Z_{H(F)}(T).\]
The following theorem of Waldspurger could be considered as the analogue
of Kazhdan orthogonality for the group case encountered earlier.
\begin{thm}
Let $V = X + D + W + Y$ be a quadratic space over the non-archimedean local field $F$ with $W$ a quadratic subspace of codimension $2k+1$ as above.
Then for any irreducible admissible representation $\sigma$ of ${\rm SO}(V)$
and irreducible admissible representation $\sigma'$ of ${\rm SO}(W)$,
\[c(\sigma,\sigma'): =\sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt ,\]
is a finite sum of absolutely convergent integrals. (The Haar measure on $T(F)$ is normalized to have volume 1.) If either $\sigma$ is a supercuspidal representation of
${\rm SO}(V)$,
and $\sigma'$ is an arbitrary irreducible admissible representation of ${\rm SO}(W)$,
or both $\sigma$ and $\sigma'$ are tempered representations, then
$$c(\sigma,\sigma') = \dim {\rm Hom}_{{\rm Bes}(V,W)}[\sigma,\sigma' ].$$
\end{thm}
\section{Conjectured EP formula}
Given the theorem of Waldspurger, it is most natural to propose the following conjecture on
Euler-Poincar\'e pairing following the earlier notation of
$V = X + D + W + Y$, a quadratic space over the non-archimedean local field $F$ with $W$ a quadratic subspace of
$V$ of codimension $2k+1$.
\begin{conj} \label{integral}
\begin{enumerate}
\item If $\sigma$ and $\sigma'$ are irreducible tempered representations of ${\rm SO}(V), {\rm SO}(W)$ respectively with
$W\subset V$, a nondegenerate subspace with $V/W$ a split quadratic space of odd dimension, then
\[{\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] = 0\] for $i > 0$.
\item For finite length representations $\sigma$ of ${\rm SO}(V)$
and $\sigma'$ of ${\rm SO}(W)$, we have:
\begin{eqnarray*}
{\rm EP}_{{\rm Bes}(V,W)}[\sigma, \sigma' ] & : =&
\sum_i (-1)^i \dim {\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'], \\
& = & \sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt.
\end{eqnarray*}
\end{enumerate}
\end{conj}
\begin{remark}
\begin{itemize}
\item Waldspurger's theorem
is equivalent to the
conjectural statement on Euler-Poincar\'e characteristic if $\sigma$ or $\sigma'$ is supercuspidal (except that it is not proved if $\sigma'$ is supercuspidal, but $\sigma$ is arbitrary).
\item Waldspurger's integral formula is also available, in the work of R. Beuzart-Plessis, for unitary groups.
\item A general integral formula for spherical varieties has been formulated by Chen Wan in \cite{Wan}.
\end{itemize}
\end{remark}
The following theorem asserts that once ${\rm Ext}_{{\rm Bes}}^i[\pi_1,\pi_2]$ are known to be zero for $i \geq 1$ for tempered
representations, Conjecture \ref{integral} on the Waldspurger integral formula giving the
Euler-Poincar\'e characteristic holds for all finite length representations. There is the further assertion
that, under the same hypothesis on tempered representations, ${\rm Ext}_{{\rm Bes}}^i[\pi_1,\pi_2]$ vanish for $i \geq 1$ when $\pi_1,\pi_2$ are standard modules.
\begin{thm}
Assume that for all irreducible tempered representations
$\sigma$ of ${\rm SO}(V)$
and $\sigma'$ of ${\rm SO}(W)$,
\[{\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] = 0 {\rm ~~for~~} i > 0.\]
Then
\[{\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] = 0 {\rm ~~for~~} i > 0,\]
for all standard modules $\sigma$ of ${\rm SO}(V)$
and $\sigma'$ of ${\rm SO}(W)$. In particular, as irreducible representations of an orthogonal group
which belong to a generic $L$-packet are standard modules,
\[{\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] = 0 {\rm ~~for~~} i > 0,\]
if $\sigma$ is an irreducible representation belonging to a generic $L$-packet of ${\rm SO}(V)$ and $\sigma'$ of ${\rm SO}(W)$.
Further (assuming vanishing of higher Ext groups for tempered representations), we have the
Euler-Poincar\'e formula
for any finite length representation $\sigma$ of ${\rm SO}(V)$
and any finite length representation $\sigma'$ of ${\rm SO}(W)$:
\begin{eqnarray*} {\rm EP}_{{\rm Bes}(V,W)}[\sigma, \sigma' ] & : = &
\sum_i (-1)^i \dim {\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] \\ &=&
\sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt.
\end{eqnarray*}
\end{thm}
\begin{proof}
Since both sides of the proposed equality:
\begin{eqnarray*} {\rm EP}_{{\rm Bes}(V,W)}[\sigma, \sigma' ] & : = &
\sum_i (-1)^i \dim {\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] \\ &=&
\sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt,
\end{eqnarray*}
are bilinear forms, it suffices to prove it for
$\sigma$ belonging to a set of generators for the Grothendieck group of
finite length representations of ${\rm SO}(V)$, and $\sigma'$ belonging to a set of generators
for the Grothendieck group of
finite length representations of ${\rm SO}(W)$. It is well known that
standard modules form a generating set, in fact a basis, of the Grothendieck group of finite length representations
of any reductive $p$-adic group. Therefore if we can prove:
\begin{eqnarray*} {\rm EP}_{{\rm Bes}(V,W)}[\sigma, \sigma' ] & : = &
\sum_i (-1)^i \dim {\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] \\ &=&
\sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt,
\end{eqnarray*}
for $\sigma, \sigma'$ standard modules, we would know it for all finite length modules.
Let,
\[\sigma = \pi_1|\cdot|_F^{b_1}\times \cdots \times \pi_t|\cdot|_F^{b_t} \times \sigma_0,\]
be a standard module for ${\rm SO}(V)$, thus, we have:
\begin{enumerate}
\item For $i=1,\cdots, t$, $\pi_i$ is an irreducible, admissible, tempered representation of ${\rm GL}_{n_i}(F)$.
\item $\sigma_0$ is an irreducible, admissible, tempered representation of ${\rm SO}(V_0)$ where $V_0\subset V$
is a quadratic subspace of $V$ such that the quadratic space $V/V_0$ is an orthogonal direct sum
of hyperbolic spaces of dimensions $2n_i$.
\item The $b_i$ are real with $b_1\geq b_2\geq \cdots \geq b_t \geq 0$.
\end{enumerate}
Similarly, let
\[\sigma' = \pi'_1|\cdot|_F^{b'_1}\times \cdots \times \pi'_{t'}|\cdot|_F^{b'_{t'}} \times \sigma'_0,\]
be a standard module for ${\rm SO}(W)$.
We recall that by Proposition 1.1 of \cite{Mo-Wa},
\[ \dim {\rm Hom}[\sigma, \sigma'] = \dim {\rm Hom}[\sigma_0, \sigma_0'] .\]
Since the representations $\sigma_0$ of ${\rm SO}(V_0)$ and $\sigma'_0$ of ${\rm SO}(V'_0)$ are
irreducible tempered representations, by our assumption,
\[ {\rm Ext}^i[\sigma_0, \sigma_0'] = 0 {\rm ~~for~~} i>0.\]
Therefore, the Euler-Poincar\'e formula
\begin{eqnarray*} {\rm EP}_{{\rm Bes}(V,W)}[\sigma, \sigma' ] & : = &
\sum_i (-1)^i \dim {\rm Ext}^i_{{\rm Bes}(V,W)}[\sigma,\sigma'] \\ &=&
\sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt,
\end{eqnarray*}
is valid for $\sigma_0$ and $\sigma'_0$ by the work of Waldspurger from \cite{Wa2}.
Now Proposition 1.1 of \cite{Mo-Wa} relating
$\dim {\rm Hom}[\sigma, \sigma']$ and $ \dim {\rm Hom}[\sigma_0, \sigma_0']$
is proved in two steps, proving $\dim {\rm Hom}[\sigma, \sigma'] \leq \dim {\rm Hom}[\sigma_0, \sigma_0']$ and then proving
$\dim {\rm Hom}[\sigma, \sigma'] \geq \dim {\rm Hom}[\sigma_0, \sigma_0']$. The first step uses relationships
of certain central exponents which works equally well to allow one to conclude
that $\dim {\rm Ext}^i[\sigma, \sigma']
\leq \dim {\rm Ext}^i[\sigma_0, \sigma_0']$ for all $i \geq 0$, and therefore as
we assume that ${\rm Ext}^i[\sigma_0, \sigma_0'] =0$ for all $i \geq 1$ (for tempered representations),
the same holds for ${\rm Ext}^i[\sigma, \sigma']$ for all $i \geq 1$.
We do not give more details here.
Once $ {\rm Ext}^i[\sigma, \sigma']$ are proved to be zero for $i\geq 1$, it then suffices to prove that the sum:
\[ \sum _{T \in {\mathcal T}} |W(H,T)|^{-1} \int_{T(F)} c_\sigma(t) c_{\sigma'}(t) D^H(t) \Delta^k(t) dt,\]
is the same for $\sigma_0$ and $\sigma'_0$ as it is for $\sigma$ and $\sigma'$. This is a consequence of the
van Dijk formula for principal series representations, see Lemma 2.3 of \cite{Wa1}. \end{proof}
\section{The Schneider-Stuhler duality theorem}
The following theorem is a mild generalization of a
duality theorem of Schneider and Stuhler in \cite{Sch-Stu}, see \cite{NP}; it turns
questions on ${\rm Ext}^i[\pi_1,\pi_2]$ into questions on ${\rm Ext}^j[\pi_2,\pi_1]$, and is of considerable
importance to our theme.
\begin{thm} \label{SS}
Let $G$ be a reductive $p$-adic group, and $\pi$ an irreducible admissible representation of $G$.
Let $d(\pi)$ be the split rank of the center of the Levi subgroup $M$ of $G$ which carries the cuspidal support of $\pi$,
and let $D(\pi)$ be the Aubert-Zelevinsky involution of $\pi$. Then,
\begin{enumerate}
\item ${\rm Ext}_G^{d(\pi)}[\pi, D(\pi)] \cong \mathbb{C}$, and
\item For any smooth representation $\pi'$ of $G$, and any $i,j \geq 0$ with $i+j=d(\pi)$, the bilinear pairing
\[(*) \,\,\,\,\, {\rm Ext}^{i}_G[\pi, \pi'] \times {\rm Ext}^{j}_G[\pi', D(\pi)] \rightarrow
{\rm Ext}^{d(\pi)}_G[\pi, D(\pi)] \cong \mathbb{C}, \]
is non-degenerate.
\end{enumerate}
\end{thm}
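For example, take $G={\rm PGL}_2(F)$ and $\pi = {\rm St}_2$ the Steinberg representation, so that $d({\rm St}_2)=1$ (its cuspidal support is carried by the diagonal torus) and $D({\rm St}_2)$ is the trivial representation. Part (1) then gives
\[{\rm Ext}^1_{{\rm PGL}_2(F)}[{\rm St}_2, \mathbb{C}] \cong \mathbb{C},\]
while part (2) with $\pi'={\rm St}_2$ pairs ${\rm Ext}^1_{{\rm PGL}_2(F)}[{\rm St}_2,{\rm St}_2]$ non-degenerately with ${\rm Hom}_{{\rm PGL}_2(F)}[{\rm St}_2,\mathbb{C}]=0$, recovering the vanishing of self-extensions of the Steinberg representation.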
\section{An example: triple products for ${\rm GL}_2(F)$}
As suggested earlier, we expect that for all the GGP pairs $(G,H)$, when the irreducible representation
$\pi_1$ of $G$ and $\pi_2$ of $H$ are tempered,
${\rm Ext}^i_H[\pi_1,\pi_2]$ is non-zero only for $i=0$.
On the other hand, by the duality theorem discussed in the last section,
we expect that ${\rm Ext}^i_H[\pi_2,\pi_1]$ is typically zero for $i=0$, i.e., ${\rm Hom}_H[\pi_2,\pi_1]=0$ (so no wonder branching is usually not considered as a subrepresentation!), and shows up only for $i$ equal to the split rank of the center of the Levi
from which $\pi_2$ arises through parabolic induction of a
supercuspidal representation. This is not completely correct as we will see.
The purpose of this section is to do an explicit restriction problem as
an example of what happens for classical groups
in one specific instance: the restriction problem from split ${\rm SO}(4)$ to split ${\rm SO}(3)$.
Thus we calculate ${\rm Ext}^i_{{\rm SO}_3(F)}[V,V']$, $i\geq 0$, and ${\rm EP}[V,V']$,
for $V$ a representation of ${\rm SO}_4(F)$ of finite length,
and $V'$ a representation of ${\rm SO}_3(F)$ of finite length, and then investigate when the restriction of $V$ to
${\rm SO}(3)$ is a projective module. As a consequence of what we do here, we will have constructed a projective module
in the Iwahori block of ${\rm SO}(3)={\rm PGL}_2(F)$ which is different from the one we encountered earlier in the restriction
problem from ${\rm GL}_3(F)$ to ${\rm GL}_2(F)$, which had only generic representations as quotients; but here there will
be another possibility. (In fact Lemma 2.4 of \cite{CS3} has two options for projective modules which have multiplicity 1, and this other possibility which we will see here is the second option for projective modules in Lemma 2.4 of \cite{CS3}.)
Since
${\rm SO}_4(F)$ and ${\rm SO}_3(F)$ are closely related to ${\rm GL}_2(F) \times {\rm GL}_2(F)$
and ${\rm GL}_2(F)$ respectively, we equivalently consider $V \cong \pi_1 \otimes \pi_2$
for admissible representations $\pi_1, \pi_2$ of ${\rm GL}_2(F)$, and
$V' = \pi_3$ of ${\rm GL}_2(F)$. Our aim then is to calculate
$${\rm Ext}^i_{{\rm GL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3],$$
or since we will prefer not to bother with central characters, we assume
that $\pi_1 \otimes \pi_2$ and $\pi_3$ have trivial central characters, and we will
then calculate,
$${\rm Ext}^i_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3].$$
The following proposition follows from more general earlier results on Euler-Poincar\'e characteristic for principal
series representations, or can be deduced directly from Mackey theory. If at least one of the $\pi_i$ is cuspidal, then it is easy to see
that ${\rm Ext}^1_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3] =0$, and the proposition is equivalent to the by-now well-known
results, cf. \cite{Pr1}, about ${\rm Hom}_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3]$. The proposition in the case where one of the $\pi_i$'s is a twist of the Steinberg representation of ${\rm GL}_2(F)$ follows by embedding the Steinberg representation
in the corresponding principal series, and using additivity of EP in exact sequences.
\begin{prop} \label{trilinear} Let
$\pi_1, \pi_2$ and $\pi_3$
be either irreducible, infinite dimensional representations of ${\rm GL}_2(F)$, or (reducible)
principal series representations
of ${\rm GL}_2(F)$ induced from one dimensional representations. Assume that the product of the central characters of $\pi_1$ and $\pi_2$ is trivial, and $\pi_3$ is
of trivial central character. Then,
\[ {\rm EP}_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3] =1,\]
except when all the representations $\pi_i$ are irreducible discrete series representations, and there is
a $D^\times$ invariant linear form on $\pi'_1 \otimes \pi'_2 \otimes \pi'_3$ where $\pi_i'$ denotes the
representation of $D^\times$ associated to $\pi_i$ by Jacquet-Langlands; in that case ${\rm EP}_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2, \pi_3] = 0$.
\end{prop}
Since for ${\rm PGL}_2(F)$ the only nonzero ${\rm Ext}^i$ can occur for $i=0,1$, the Euler-Poincar\'e characteristic together with knowledge of the ${\rm Hom}$ spaces implies
the following corollary.
\begin{cor} \label{ext-vanish} If $\pi_1, \pi_2$ and $\pi_3$ are any three irreducible, infinite dimensional representations
of ${\rm GL}_2(F)$,
with the product of the central characters of $\pi_1$ and $\pi_2$ trivial, and $\pi_3$
of trivial central character, then ${\rm Ext}^i_{{\rm PGL}_2(F)}[\pi_1 \otimes \pi_2,\pi_3] = 0$ for $i>0$.
\end{cor}
\begin{remark}
The authors Cai and Fan in \cite{CF} prove more generally that
\[ {\rm Ext}^i_{{\rm GL}_2(F)}[\Pi, \mathbb{C}] = 0 ~~~{\rm~~ for ~~~} i \geq 1,\]
where $\Pi$ is an irreducible generic representation of ${\rm GL}_2(E)$
whose central character restricted to $F^\times$ is trivial for $E$ a cubic \'etale extension of $F$.
\end{remark}
We now use Proposition \ref{trilinear} and Corollary \ref{ext-vanish}
to study the restriction problem from $[{\rm GL}_2(F) \times {\rm GL}_2(F)]/\Delta (F^\times)$ to ${\rm PGL}_2(F)$, and to understand when $\pi= \pi_1 \otimes \pi_2$ where $\pi_1, \pi_2$ are any two irreducible, infinite dimensional representations
of ${\rm GL}_2(F)$,
with the product of the central characters of $\pi_1$ and $\pi_2$ trivial, is a projective representation of ${\rm PGL}_2(F)$.
As discussed
earlier, if a smooth representation $\pi$ of ${\rm PGL}_2(F)$ is locally finitely generated, then it is a projective module in the category
of smooth representations of ${\rm PGL}_2(F)$ if and only if
\[ {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi, \pi'] = 0, \]
for all smooth finitely generated representations $\pi'$ of ${\rm PGL}_2(F)$ which is the case
if and only if
\[ {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi, \pi'] = 0, \]
for all finite length representations $\pi'$ of ${\rm PGL}_2(F)$, which is the case
if and only if
\[ {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi, \pi'] = 0, \]
for all irreducible representations $\pi'$ of ${\rm PGL}_2(F)$.
In our case, $\pi= \pi_1 \otimes \pi_2$ where $\pi_1, \pi_2$ are any two irreducible, infinite dimensional representations
of ${\rm GL}_2(F)$,
with the product of the central characters of $\pi_1$ and $\pi_2$ trivial. Therefore if $\pi'$ is any
infinite dimensional irreducible representation
of ${\rm PGL}_2(F)$, the desired vanishing of ${\rm Ext}^1[\pi,\pi']$ is a consequence of Corollary \ref{ext-vanish}. Therefore to
prove projectivity of $\pi= \pi_1 \otimes \pi_2$ as a representation of ${\rm PGL}_2(F)$, it suffices to check that,
\[ {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi_1\otimes \pi_2, \chi] = 0, \]
for $\chi: F^\times /F^{\times 2} \rightarrow \mathbb{C}^{\times}$, treated as a character of ${\rm PGL}_2(F)$. Now,
\[ {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi_1\otimes \pi_2, \chi] = {\rm Ext}^1_{{\rm PGL}_2(F)}[\pi_1, \chi \pi_2^\vee].\]
It is easy to see that if $\pi_1,\pi_2$ are irreducible and infinite dimensional representations of ${\rm GL}_2(F)$, then
${\rm Ext}^1_{{\rm PGL}_2(F)}[\pi_1, \chi \pi_2^\vee]$ is not zero if and only if $\pi_1,\pi_2^\vee$ are irreducible principal series representations
of ${\rm GL}_2(F)$ such that $\pi_1 \cong \chi \pi_2^\vee$ with $\chi$ a quadratic character. (Vanishing of ${\rm Ext}^1_{{\rm PGL}_2(F)}[{\rm St}_2, \chi {\rm St}_2]$ is a well-known generality about discrete series representations; for a proof in this case, see Lemma 7 of \cite{Pr2}.)
We summarize this analysis in the following proposition.
\begin{prop} \label{gl2} Let
$\pi_1, \pi_2$
be irreducible, infinite dimensional representations of ${\rm GL}_2(F)$ such that the product of the central
characters of $\pi_1$ and $\pi_2$ is trivial. Then the representation $\pi_1\otimes \pi_2$ of ${\rm PGL}_2(F)$ is a
projective module unless $\pi_1,\pi_2$ are irreducible principal series representations
of ${\rm GL}_2(F)$ such that $\pi_1 \cong \chi \pi_2^\vee$ with $\chi$ a quadratic character, in which case it is
not a projective module exactly in the block of ${\rm PGL}_2(F)$ containing the character $\chi$. In particular,
if at least one of $\pi_1$ or $\pi_2$ is a twist of the Steinberg representation, $\pi_1\otimes \pi_2$ is a projective
module in the category of smooth representations of ${\rm PGL}_2(F)$.
\end{prop}
\begin{cor}
Let ${\rm St}_2$ be the Steinberg representation of ${\rm PGL}_2(F)$, and $T$ the diagonal split torus of ${\rm PGL}_2(F)$.
Then ${\rm St}_2 \otimes {\rm St}_2$ is a projective representation
of ${\rm PGL}_2(F)$. The module ${{\rm ind}}_T^{{\rm PGL}_2(F)}(\mathbb{C})$ is not projective, but it contains the Steinberg
representation by Lemma 5.4 of \cite{Pr1}, and (using Lemma 5.4 of
\cite{Pr1} for the isomorphism) the quotient
\[ {{\rm ind}}_T^{{\rm PGL}_2(F)}(\mathbb{C})/{\rm St}_2 \cong {\rm St}_2 \otimes {\rm St}_2\]
is a projective representation of
${\rm PGL}_2(F)$.
\end{cor}
In earlier sections we saw the construction of projective modules in the Iwahori block given by
${\rm ind}_{G(O_F)}^{G(F)} {\rm St}$.
Proposition \ref{gl2} allows one to construct another projective module
in the Iwahori block of ${\rm PGL}_2(F)$, which is the same outside the reducible principal series, and which also arises
from the restriction problem (from ${\rm GL}_2(F) \times {\rm GL}_2(F)$ to the diagonal ${\rm GL}_2(F)$); it is of course
the projective representation ${\rm ind}_{G(O_F)}^{G(F)} \mathbb{C}$ given by Lemma 2.4 of \cite{CS3}.
For the Steinberg representation ${\rm St}_2$ of ${\rm PGL}_2(F)$, ${\rm St}_2 \otimes {\rm St}_2$ is a projective module,
does not have ${\rm St}_2$ as a quotient, but has the trivial representation as a quotient. Further, ${\rm St}_2 \otimes {\rm St}_2$ has all other
irreducible principal series as a unique quotient. On the other hand, as ${\rm St}_2 \otimes {\rm St}_2$
is a projective module,
and the principal series ${\rm Ps}(\nu^{1/2}, \nu^{-1/2})$ has the trivial representation of ${\rm PGL}_2(F)$ as a quotient, there is a surjective map from ${\rm St}_2 \otimes {\rm St}_2$ to ${\rm Ps}(\nu^{1/2}, \nu^{-1/2})$.
Thus ${\rm St}_2 \otimes {\rm St}_2$ as a module $M$ over the Iwahori Hecke algebra,
hence over its center $Z$, is a 2-dimensional free module. At the maximal ideals $\mathfrak{m}$ of $Z$ corresponding to points away from
the character $(\nu^{1/2}, \nu^{-1/2})$, the quotient $M/\mathfrak{m}M$ is a two dimensional complex vector space corresponding to the Iwahori
fixed vectors in an irreducible unramified principal series representation of ${\rm PGL}_2(F)$, whereas at the maximal ideal corresponding to
the character $(\nu^{1/2}, \nu^{-1/2})$, $M/\mathfrak{m}M$ is a two dimensional complex vector space corresponding to the Iwahori
fixed vectors in the reducible principal series representation ${\rm Ps}(\nu^{1/2}, \nu^{-1/2})$ of ${\rm PGL}_2(F)$.
On the other hand, for distinct irreducible cuspidal representations $\pi_1,\pi_2$ of ${\rm PGL}_2(F)$, $\pi_1 \otimes \pi_2$ is a projective module, which in the Iwahori block
of ${\rm PGL}_2(F)$, has ${\rm St}_2$ as a quotient, but not the trivial representation as a quotient, and has all
irreducible principal series as a unique quotient. As $\pi_1 \otimes \pi_2$
is a projective module,
and the principal series ${\rm Ps}(\nu^{-1/2}, \nu^{1/2})$ has the Steinberg representation of ${\rm PGL}_2(F)$ as a quotient, there is a surjective map from $\pi_1 \otimes \pi_2$ to ${\rm Ps}(\nu^{-1/2}, \nu^{1/2})$.
Summarizing, the restriction problem from ${\rm GL}_2(F) \times {\rm GL}_2(F)$ to the diagonal ${\rm GL}_2(F)$, when it is projective in the Iwahori block,
gives rise to the two projective modules ${\rm ind}_{{\rm PGL}_2(O_F)}^{{\rm PGL}_2(F)} {\rm St}$ and ${\rm ind}_{{\rm PGL}_2(O_F)}^{{\rm PGL}_2(F)} \mathbb{C}$,
and also gives rise to a module which is not projective in a very rare case as described
in Proposition \ref{gl2}, and when non-projective, it contains a submodule (a twist of the Steinberg) as we will presently see. All these three options are explicitly described in Proposition \ref{gl2}, or are explicitly describable!
It may be hoped that this kind of complete explicit description can be made for the branching problems around GGP.
\vspace{2mm}
Here is an application of the calculation on Ext groups which when combined with the
duality theorem leads to existence of submodules.
The
following proposition gives a complete classification
of irreducible submodules $\pi$ of the tensor product $\pi_1 \otimes \pi_2$ of two (irreducible, infinite dimensional)
representations $\pi_1,\pi_2$ of ${\rm GL}_2(F)$ with the product of their central characters trivial. A more general result is available in \cite{CF}.
\begin{prop}
Let $\pi_1, \pi_2$ be two irreducible admissible infinite dimensional
representations of ${\rm GL}_2(F)$ with product of their
central characters trivial. Then the following is a complete list of irreducible sub-representations
$\pi$ of $\pi_1 \otimes \pi_2$ as ${\rm PGL}_2(F)$-modules.
\begin{enumerate}
\item $\pi$ is a supercuspidal representation of ${\rm PGL}_2(F)$, and appears as a quotient of $\pi_1 \otimes \pi_2$.
\item $\pi$ is a twist of the Steinberg representation, which we assume by absorbing the twist in
$\pi_1$ or $\pi_2$ to be the Steinberg representation ${\rm St}$ of ${\rm PGL}_2(F)$. Then ${\rm St}$ is a
submodule of $\pi_1 \otimes \pi_2$ if and only if
$\pi_1, \pi_2$ are both irreducible
principal series representations, and $\pi_1 \cong \pi_2^\vee$.
\end{enumerate}
\end{prop}
\begin{remark}
Unlike the case of triple products above, Chan in \cite{Chan} has proved that in the case of the pair $({\rm GL}_{n+1}(F),{\rm GL}_n(F))$,
if ${\rm Hom}_{{\rm GL}_n(F)}[\pi_2,\pi_1] \not = 0$, for $\pi_1$ an irreducible representation of ${\rm GL}_{n+1}(F)$ and $\pi_2$ of ${\rm GL}_n(F)$, then both $\pi_1,\pi_2$ must be one dimensional. Thus in this case, even supercuspidals of ${\rm GL}_n(F)$ do not arise as submodules, which is related to the non-compact center of the subgroup ${\rm GL}_n(F)$
(and which is not contained in the center of the ambient ${\rm GL}_{n+1}(F)$).
\end{remark}
\section{Template from algebraic geometry}
We enumerate some of the basic theorems in algebraic geometry which seem to have
closely related analogues in our context, although for no obvious reason! For the analogy, we consider
$H^0(X,{\mathfrak F})$,
for $X$ a
smooth projective variety (or sometimes a more general variety) equipped
with a coherent sheaf ${\mathfrak F}$ versus ${\rm Hom}[\pi_1,\pi_2]$, and corresponding $H^i$
and ${\rm Ext}^i$.
\begin{enumerate}
\item Finite dimensionality of $H^i(X,{\mathfrak F})$
and vanishing for $i> \dim X$.
\item Semi-continuity theorems available both in algebraic geometry for
$H^i(X,{\mathfrak F}_\lambda)$, and ${\rm Ext}^i[\pi_{1, \lambda}, \pi_{2,\mu}]$ for families of sheaves or
of representations.
\item Riemann-Roch theorem expressing ${\rm EP}(X,{\mathfrak F})$ in terms of simple
invariants associated to $X$ and the sheaf ${\mathfrak F}$. In our case, these are the integral formulae
which go into the Kazhdan conjecture and in the work of Waldspurger, involving invariants of the
space $X$, certain elliptic tori, and invariants associated to sheaves (= representations) through character
theory.
\item Kodaira vanishing for $H^i(X,{\mathfrak F})$, $i> 0$ for an ample sheaf ${\mathfrak F}$.
\item Serre duality
\[{\rm Ext}^i({\mathcal O}_X,{\mathfrak F}) \times {\rm Ext}^{d-i}({\mathfrak F}, \omega_X)
\rightarrow {\rm Ext}^d({\mathcal O}_X, \omega_X) = F.\]
\item Special role played by $X = {\mathbb P}^d(F)$ in Algebraic geometry, and here,
we have our own, {\it
her all-embracing majesty}, ${\rm GL}_n(F)$.
\end{enumerate}
\vspace{1cm}
{\bf Acknowledgement:}
This paper is an expanded and written version of my lecture
in the IHES Summer School on the Langlands Program in July 2022. The author would like to thank the organizers for putting together a wonderful program. The author especially thanks R. Beuzart-Plessis and Kei Yuen Chan for all their
helpful remarks.
\bibliographystyle{amsalpha}
|
2,869,038,156,802 | arxiv | \section{Introduction}\label{sec_intro}
Strongly interacting systems attract considerable attention because of
fruitful phenomena in the new field of {\it cross-correlation} physics~\cite{Tokutra}
and their potential applicability
to the developing field of spintronics.
Theoretical study of strongly correlated systems,
e.g. many-electron systems and interacting spin systems,
becomes more time-consuming and more difficult
when one starts numerical investigations of larger systems.
One reason for this difficulty is, of course, the large dimension of
the Hilbert space or the Hamiltonian matrix of many-electron systems.
The dimension of the Hilbert space grows exponentially
with increasing number of atoms in a many-electron system,
whereas in a one-electron problem (or in the
density functional theory, DFT) the size of the Hamiltonian matrix
grows only linearly, in proportion to the number of atoms.
The second reason is
the fact that rigorousness or accuracy control becomes seriously difficult
in a problem with a large Hamiltonian matrix.
Because the width of the spectra is in proportion to the number of atoms in many cases,
the energy interval between adjacent eigenenergies becomes small quite rapidly
with increasing number of atoms.
The short interval between adjacent eigenenergies causes difficulty in
separating the respective eigenvectors.
For example, it is very important to obtain the precise ground state,
from which all the physical quantities are derived
in the (zero-temperature) many-electron theory.
Thus, one needs higher energy resolution with increasing number of atoms,
but, sometimes, we do not have a fast, reliable and stable
calculation algorithm for large Hamiltonian matrices.
Our main target is the calculation of the Green's function matrix $G(\omega)$
in many-electron problems;
\begin{eqnarray}
G_{ij}(\omega) = [(\omega + {\rm i}\eta -H)^{-1}]_{ij} ,
\label{eq:gf}
\end{eqnarray}
where $H$ and $\omega$ are a real Hamiltonian matrix and an energy parameter, respectively.
The suffixes $i$ and $j$ denote arbitrary states such as
$\hat{c}_i \left|\right\rangle$ or $\hat{c}_i^\dagger \left|\right\rangle$,
where $\hat{c}$ is an annihilation operator and $\left|\right\rangle$ is the ground state.
Here, we should use a positive finite parameter $\eta$
in a numerical calculation of a finite system,
instead of an infinitesimally small positive number.
The spectral function
is an important physical quantity derived from the Green's function Eq.~(\ref{eq:gf}).
There are two possibilities for calculating Eq.~(\ref{eq:gf}).
One is to solve the eigenvalue problem with an eigenvalue $\omega$,
e.g. by the Lanczos method.
The other is to solve the following linear equation and to take the inner product between
the solution and the vector $\left|i\right\rangle$,
\begin{eqnarray}
&& A \stackrel{\rm def}{=}\omega+{\rm i}\eta -H,
\label{eq:A} \\
&& A \left|x_j\right\rangle = \left|j \right\rangle ,
\label{eq:lin_eq}\\
&& G_{ij}(\omega)=\left\langle i\right.\left|x_j\right\rangle ,
\label{eq:gf2}
\end{eqnarray}
with an arbitrary energy parameter $\omega$,
e.g. the shifted COCG (conjugate-orthogonal-conjugate-gradient) method,
a family of the CG (conjugate-gradient) method.
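For concreteness, the linear-equation route of Eqs.~(\ref{eq:A})--(\ref{eq:gf2}) can be summarized by the following small Python/NumPy sketch; this is purely illustrative and not part of the original formulation, a small random symmetric matrix stands in for $H$, and the dense direct solver is used only as a reference, since it is obviously not applicable to the huge matrices considered later.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # illustrative size only
H = rng.standard_normal((N, N))
H = 0.5 * (H + H.T)                       # real symmetric Hamiltonian matrix

omega, eta = 0.3, 0.05                    # energy parameter and smearing factor
i, j = 2, 5                               # indices of the Green's function element

A = (omega + 1j * eta) * np.eye(N) - H    # A = omega + i*eta - H
e_j = np.zeros(N); e_j[j] = 1.0           # right-hand side |j>
x_j = np.linalg.solve(A, e_j)             # A |x_j> = |j>
G_ij = x_j[i]                             # G_ij = <i|x_j>
print(G_ij)
\end{verbatim}
The shifted COCG method described below replaces this dense solve by a Krylov-subspace iteration whose cost is dominated by matrix-vector products, and reuses a single such iteration for all values of $\omega$.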
In both cases, we first restrict the space dimension of states to be finite.
In other words, we assume the size of the Hamiltonian matrix to be finite.
Then we construct the Krylov subspace defined as
\begin{eqnarray}
{\cal K}_{n}(A, |j\rangle)={\rm span} \{|j\rangle,A|j\rangle,A^2|j\rangle,\ldots, A^n|j\rangle\}.
\label{eq:Krylov}
\end{eqnarray}
In the Lanczos method, orthogonalized base vectors (Lanczos vectors) are
successively generated in ${\cal K}_{n}(A,|j\rangle)$, and
at the same time, the Hamiltonian matrix is tridiagonalized.
In a large scale calculation,
one can only use a small Krylov subspace,
because of the heavy computational load
and the corruption of the orthogonality of the generated basis vectors.
It is well known that the rounding error rapidly breaks down the
orthogonality of the generated base vectors,
when the dimension of the Krylov subspace exceeds several tens.
The corruption of the orthogonality causes spurious eigenvalues and,
more seriously, incorrect eigenvectors.
Therefore, the size of the Krylov subspace
should usually be limited to some tens or a hundred.
We developed the shifted COCG method,
where Eq.~(\ref{eq:gf2}) is solved within the Krylov subspace,
and applied it to the one-electron tight-binding Hamiltonian
in a system with a large number of atoms.~\cite{Takayama}
A set of orthogonal base vectors is created by the iterative
process of the shifted COCG method, as in the Lanczos process,
but the calculation is stable for
a large dimension of the Krylov subspace,
in contrast to the Lanczos method.
We must solve Eq.~(\ref{eq:gf2})
for every scalar shift $\sigma$ of $A$ corresponding to a
respective energy mesh point.
The number of the $\sigma$'s is generally as large as
O($10^2$)$\sim$O($10^4$);
however, the most time-consuming matrix-vector operations are
needed only at a single reference energy ($\sigma=0$).
Then the order of the total amount of calculation is just the same
as in the Lanczos method.
The reduction of the matrix-vector operations at non-zero $\sigma$
is based on the fact that a power of $(A+\sigma)$
is decomposed into a linear combination of powers of $A$.
Thus, the Krylov subspace is invariant against $\sigma$:
${\cal K}_{n}(A,\left|j\right\rangle)
={\cal K}_{n}(A+\sigma,\left|j\right\rangle)$.
In the application of this method to the many-electron theory,
because the dimension of the vectors is huge,
we must take care of the total amount of
base vector storage for ${\cal K}_{n}(A,\left|j\right\rangle)$,
in order to satisfy the memory constraint of modern computers.
We explain that the innermost loop index should be the iteration step $n$
for an extremely large size of the Hamiltonian matrix.
This structure also gives us the following two additional merits.
One is that a part of the program code can be used
in the inverse iteration process to improve the ground state.
The other is that calculations with different $\eta$ can be done
without time-consuming matrix-vector operations.
The structure of the paper is as follows.
In Sec.~\ref{Sec:sCOCG}, the basics of the shifted COCG method
are explained briefly.
Section~\ref{Sec:error_and_seed_switching}
is devoted to an explanation of how to obtain global convergence.
Then we apply the shifted COCG method to an
extended Hubbard Hamiltonian with orbital degeneracy
and intra- and inter-site Coulomb interactions
in Sec.~\ref{Sec:Application}, where
the size of the Hamiltonian matrix is equal to 64,128,064.
We calculate one-electron excitation spectra and evaluate the insulating gap.
In Sec.~\ref{Sec:Discussion},
we conclude that
the essential requirements in the numerical investigation
of many-electron problems, namely accuracy control (or monitoring)
and robustness, are achieved by the present method
within a moderate amount of memory space.
We explain the two points needed to understand
the mathematics in the background of the shifted COCG method
in Appendix~\ref{app:math}.
The practical design of storing the huge Hamiltonian matrix
is discussed in Appendix~\ref{app:storage}.
\section{Shifted COCG method}
\label{Sec:sCOCG}
Assuming that the Hamiltonian is represented by using an $N$-dimensional real matrix $H$
and $A$ is a complex symmetric matrix $\omega_{\rm ref}+{\rm i}\eta_{\rm ref}-H$,
we should solve the linear simultaneous equation of
\begin{equation}
A \bm{x}=\bm{b},
\label{Eq:ref}
\end{equation}
and its shifted equation
\begin{equation}
(A+\sigma) \bm{x}^{\sigma}=\bm{b},
\label{Eq:shifted}
\end{equation}
where $\sigma=(\omega+{\rm i}\eta)-(\omega_{\rm ref}+{\rm i}\eta_{\rm ref})$.
We represent quantities $q$ in the shifted system as $q^\sigma$.
The right hand side $\bm{b}$ represents $\left|j\right\rangle$
in Eq.~(\ref{eq:lin_eq}).
We assume that the vector $\bm{b}$ is real and normalized.
In the family of CG methods, here the shifted COCG method,
it is important that the approximate solution of
Eq.~(\ref{Eq:ref}) is searched for
within the Krylov subspace ${\cal K}_n (A,\bm{b})$.
The subspace ${\cal K}_n (A,\bm{b})$
becomes the whole space at $n=N-1$,
and the solution becomes exact.
The accuracy of the approximate
solution at the $n$-th iteration, $\bm{x}_{n}$, is evaluated
by using the residual vector,
\begin{equation}
\bm{r}_n = \bm{b}-A\bm{x}_n \label{Eq:residual},
\end{equation}
and the iteration is stopped
as soon as the norm of the residual vector, $||\bm{r}_n||$,
satisfies the criterion for the convergence.
The residual vectors are ``orthogonalized'' with respect to
the non-standard ``inner-product'' $(\bm{u},\bm{v}) = \bm{u}^T \bm{v}$.
When $\eta=0$, all the relevant vectors are real and the
``inner-product'' and ``orthogonality'' reduce to standard ones, respectively.
Because the $\bm{r}_n$'s are ``orthogonalized'', it is convenient to use
them as base vectors of ${\cal K}_n (A,\bm{b})$.
In addition to that, owing to the ``orthogonality'', we obtain the important
theorem of ``{\it collinear residual}'' (see Appendix {\ref{app:math}}).
\subsection{COCG method}
The shifted COCG method starts from the COCG method~\cite{Vorst}
solving Eq.~(\ref{Eq:ref}).
We define $\bm{x}_n$, $\bm{p}_n$ and $\bm{r}_n$
as the approximate solution at the $n$-th iteration,
the search direction toward the approximate solution at the next
iteration, and the residual vector, respectively.
At a reference energy, we must solve the following equations
under the initial conditions, $\bm{x}_0=\bm{p}_{-1}=\bm{0}$, $\bm{r}_0=\bm{b}$,
$\alpha_{-1}=1$, and $\beta_{-1}=0$:
\begin{eqnarray}
\bm{x}_n &=& \bm{x}_{n-1} + \alpha_{n-1} \bm{p}_{n-1} \label{Eq:CG:x} , \\
\bm{r}_n &=& \bm{r}_{n-1} - \alpha_{n-1} A \bm{p}_{n-1} \label{Eq:CG:r} , \\
\bm{p}_n &=& \bm{r}_{n} + \beta_{n-1} \bm{p}_{n-1} \label{Eq:CG:p} , \\
\alpha_{n-1} &=& \frac{(\bm{r}_{n-1},\bm{r}_{n-1})}{(\bm{p}_{n-1}, A \bm{p}_{n-1})} \label{Eq:CG:alpha} , \\
\beta_{n-1} &=& \frac{(\bm{r}_{n},\bm{r}_{n})}{(\bm{r}_{n-1},\bm{r}_{n-1})} \label{Eq:CG:beta}.
\end{eqnarray}
Here, we must note the fact that, during the iteration,
$(\bm{v},\bm{v})=0$ can happen even though $\bm{v}\ne\bm{0}$.~\cite{inner_product}
This cannot happen in the CG method ($\eta_{\rm ref}=0$,
the matrix $A$ is positive definite), and
all other parts are perfectly identical to the CG method.
The set of residual vectors $\bm{r}_n$ forms the ``orthogonalized'' base.
This ``orthogonality'' is very important for us to understand
the theorem of collinear residual.
We explain it in detail in Appendix~\ref{app:math}.
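As an illustration, the reference-system iteration of Eqs.~(\ref{Eq:CG:x})$\sim$(\ref{Eq:CG:beta}) may be sketched in Python as follows; this is not the production code of the present work, but it shows the non-conjugated ``inner product'' and the quantities ($\alpha_n$, $\beta_n$, $(\bm{b},\bm{r}_n)$ and $||\bm{r}_n||$) that are stored for later reuse. The Hamiltonian is passed as a matrix here for brevity, while in practice $A\bm{p}$ is evaluated matrix-free.
\begin{verbatim}
import numpy as np

def cocg_reference(H, omega_ref, eta_ref, b, n_iter=300, tol=1e-10):
    """COCG run for A x = b with A = (omega_ref + i*eta_ref) - H (sketch)."""
    z = omega_ref + 1j * eta_ref
    bdot = lambda u, v: np.dot(u, v)          # bilinear "inner product", no conjugation
    x = np.zeros(len(b), dtype=complex)
    r = np.asarray(b, dtype=complex).copy()   # r_0 = b   (x_0 = 0)
    p = r.copy()                              # p_0 = r_0
    rr = bdot(r, r)
    alphas, betas = [], []
    br = [bdot(b, r)]                         # (b, r_0)
    rnorm = [np.linalg.norm(r)]
    for n in range(n_iter):
        Ap = z * p - H @ p                    # the only matrix-vector product
        alpha = rr / bdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = bdot(r, r)
        beta = rr_new / rr
        p = r + beta * p
        rr = rr_new
        alphas.append(alpha); betas.append(beta)
        br.append(bdot(b, r)); rnorm.append(np.linalg.norm(r))
        if rnorm[-1] < tol:
            break
    return x, np.array(alphas), np.array(betas), np.array(br), np.array(rnorm)
\end{verbatim}
The scalars $\alpha_n$, $\beta_n$ and $(\bm{b},\bm{r}_n)$ collected here are the data reused by the shifted recurrences below, and $||\bm{r}_n||$ serves as the convergence monitor.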
We can choose an alternative set of the recurrence equations, as follows.
Eliminating $\bm{p}$'s from Eqs.~(\ref{Eq:CG:r}) and (\ref{Eq:CG:p}),
we obtain the recurrence equation of $\bm{r}_n$,
\begin{equation}
\bm{r}_{n+1} =
\left(1+\frac{\beta_{n-1}\alpha_{n}}{\alpha_{n-1}} -\alpha_n A\right) \bm{r}_n
- \frac{\beta_{n-1}\alpha_{n}}{\alpha_{n-1}} \bm{r}_{n-1}. \label{Eq:shift:r}
\end{equation}
Taking the ``inner product'' between $\bm{r}_n$ and Eq.~(\ref{Eq:shift:r}),
we obtain
\begin{equation}
\alpha_n = \frac{(\bm{r}_n,\bm{r}_n)}
{(\bm{r}_n, A \bm{r}_n)-\frac{\beta_{n-1}}{\alpha_{n-1}}(\bm{r}_n,\bm{r}_n)}.
\label{eq:recurr}
\end{equation}
Then the Eqs.~(\ref{Eq:CG:beta}), (\ref{eq:recurr}) and (\ref{Eq:shift:r})
can produce all the base vectors, $\bm{r}_k$'s ($k > n$),
when $\alpha_{n-1}$, $\bm{r}_{n-1}$ and $\bm{r}_{n}$ are supplied.
\subsection{Shifted equations}
\label{sub:shift}
The key to the reduction of the matrix-vector operations
in solving the shifted system Eq.~(\ref{Eq:shifted}),
is the theorem of collinear residual:
\begin{equation}
\bm{r}^\sigma_{n}=\frac{1}{\pi^{\sigma}_{n}}\bm{r}_{n}, \label{Eq:collinear}
\end{equation}
where $\pi^\sigma_n$ is a scalar function (actually a polynomial) of $\sigma$.
Then, once $\{\bm{r}_n\}$ are given,
the base set $\{\bm{r}_n^\sigma\}_n$ for the arbitrarily shifted system
can be obtained by scalar multiplication.
We obtain the recurrence equations that determine
$\pi_n^\sigma, \alpha_n^\sigma,
\beta_n^\sigma, \bm{x}_n^\sigma$, and $\bm{p}_n^\sigma$
from Eqs.~(\ref{Eq:CG:x})$\sim$(\ref{Eq:CG:beta}),
by replacing $A$ with $A+\sigma$, with the same
initial conditions:
\begin{eqnarray}
\pi^\sigma_{n+1} &=&
\left(1+\frac{\beta_{n-1}\alpha_{n}}{\alpha_{n-1}} + \alpha_n \sigma\right) \pi^\sigma_n
- \frac{\beta_{n-1}\alpha_{n}}{\alpha_{n-1}} \pi^\sigma_{n-1}, \nonumber \\
&& \label{Eq:shift:pi} \\
\alpha^\sigma_n &=&\frac{\pi^\sigma_n}{\pi^\sigma_{n+1}}\alpha_n \label{Eq:shift:alpha},\\
\beta^\sigma_{n}&=&\left(\frac{\pi^\sigma_n}{\pi^\sigma_{n+1}}\right)^2 \beta_n ,
\label{Eq:shift:beta} \\
\bm{x}^\sigma_{n} &=& \bm{x}^\sigma_{n-1} + \alpha^\sigma_{n-1} \bm{p}^\sigma_{n-1} \label{Eq:shift:x} , \\
\bm{p}^\sigma_{n} &=& \frac{1}{\pi^\sigma_n}\bm{r}_{n} + \beta^\sigma_{n-1} \bm{p}^\sigma_{n-1} \label{Eq:shift:p}.
\end{eqnarray}
These recurrence equations can be {\bf solved without time-consuming matrix-vector operations}.
In addition to that, each component of the vector
Eqs.~(\ref{Eq:shift:pi})$\sim$(\ref{Eq:shift:p})
can be solved separately,
due to the absence of matrix operations.
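A literal transcription of these recurrences is sketched below (again purely illustrative): given the scalars $\alpha_n$, $\beta_n$ and the stored residual vectors $\bm{r}_n$ of the reference run, the solution $\bm{x}^\sigma$ is rebuilt for any shift $\sigma$ by scalar and vector additions only, without a single matrix-vector product. Storing all $\bm{r}_n$ is affordable only for moderate dimensions; the memory-saving variant is given in the next subsection.
\begin{verbatim}
import numpy as np

def shifted_solution(alphas, betas, rs, sigma):
    """x^sigma for (A + sigma) x = b; rs[n] = r_n, len(rs) = len(alphas) + 1."""
    x = np.zeros(rs[0].size, dtype=complex)       # x^sigma_0
    p_prev = np.zeros(rs[0].size, dtype=complex)  # p^sigma_{-1}
    pi_prev, pi_cur = 1.0 + 0j, 1.0 + 0j          # pi^sigma_{-1}, pi^sigma_0
    a_prev, b_prev = 1.0, 0.0                     # alpha_{-1}, beta_{-1}
    beta_s_prev = 0.0 + 0j                        # beta^sigma_{-1}
    for n in range(len(alphas)):
        p_cur = rs[n] / pi_cur + beta_s_prev * p_prev      # p^sigma_n
        c = b_prev * alphas[n] / a_prev
        pi_next = (1.0 + c + alphas[n] * sigma) * pi_cur - c * pi_prev
        alpha_s = (pi_cur / pi_next) * alphas[n]           # alpha^sigma_n
        beta_s = (pi_cur / pi_next) ** 2 * betas[n]        # beta^sigma_n
        x = x + alpha_s * p_cur                            # x^sigma_{n+1}
        p_prev, beta_s_prev = p_cur, beta_s
        pi_prev, pi_cur = pi_cur, pi_next
        a_prev, b_prev = alphas[n], betas[n]
    return x
\end{verbatim}
For $\sigma = 0$ the polynomials $\pi^\sigma_n$ stay equal to one and the recurrence reproduces the reference iteration, which is a convenient consistency check.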
\subsection{Crucial remarks for extremely large matrices
to save the required memory space}
\label{sub:reconstruction}
For the solution of a relatively small matrix (${\rm Dim} \lesssim 10^{4}$),
any loop structure of the shifted COCG method is applicable.
However, in the many-electron theory,
the dimension of the intermediate vectors is huge
and therefore the number of intermediate vectors is restricted
to some tens or hundreds.
In the standard loop structure which Frommer showed,~\cite{Frommer}
the outermost loop index is the iteration step $n$, and
all the vectors $\bm{p}_n^\sigma$, $\bm{r}_n^\sigma$ and $\bm{x}_n^\sigma$
for every energy mesh point $\sigma$ are required
in order to start the calculation at the iteration step $n+1$.
Then all the energy mesh points must be fixed before the calculation starts.
Because the calculations at the respective energy mesh points are independent
of each other, the loop structure can be transformed such that
the reference system is solved with the COCG method, storing
$\{\alpha_n\}_n$, $\{\beta_n\}_n$ and $\{\bm{r}_n\}_n$,
and then the shifted systems are solved with the stored information about the reference system
for each energy mesh point.
Here the innermost loop index is the iteration step $n$.
Since the number of energy mesh points is generally larger than the number of iterations,
the latter transformed loop structure requires less memory than the original one.
In addition to that, we need not prepare the
energy mesh points $\{\sigma\}$ in advance, because all the required information
related to the reference system is stored in the COCG process,
the preceding part of the algorithm.
Then, for example, we can change the smearing factor $\eta$ freely without repeating
the COCG process that includes matrix-vector operations.
Further reduction of the required memory is possible,
with a further modification of the recurrence equations for the shifted system.
Assuming that a real constant vector $\bm{c}$ is an adjoint vector
and taking the inner product between $\bm{c}$ and
Eqs.~(\ref{Eq:shift:x}) and (\ref{Eq:shift:p}), we obtain
a set of self-contained equations for determining $(\bm{c},\bm{x}_n)$,
the $n$-th approximate solution for the element of the Green's function,
owing to the absence of matrix-vector operations.
In the applications in Sec.~\ref{Sec:Application},
we are interested in the case where
$\bm{c}=\bm{b}$ in order to calculate the trace of the Green's function.
Therefore, we need to store $(\bm{b}, {\bm{r}_n})$, $\alpha_n$, $\beta_n$ and $||\bm{r}_n||$
in the COCG part, and later solve only the $\bm{b}$-component of
Eqs.~(\ref{Eq:shift:x}) and (\ref{Eq:shift:p}).
Here, the norm of the residual vector
$||\bm{r}^\sigma_n||=\frac{1}{|\pi^\sigma_n|}||\bm{r}_n||$
is not necessary to solve the recurrence equations but
is used to monitor the convergence of the approximate solution.
Additionally, we store the full components of
the last two $\bm{r}_n$'s in the COCG part,
in order to extend the iteration number in the seed switching part
(subsection~\ref{sub:seed_switch}).
Even if the full components of the Green's function are needed,
we need to store just a few components of ${\bm{r}_n}$,
because the suffix of the Green's function denotes the one-electron orbitals,
the number of which is very small compared to the dimension of the many-electron Hamiltonian $H$.
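As a sketch of this memory-saving variant (illustrative only), the diagonal element $G_{jj}(\omega)$ can be assembled on an arbitrary energy mesh, and for any smearing factor $\eta$, from nothing but the stored scalars $\alpha_n$, $\beta_n$ and $(\bm{b},\bm{r}_n)$ of the reference run with $\bm{b}=\left|j\right\rangle$:
\begin{verbatim}
import numpy as np

def green_jj_on_mesh(alphas, betas, br, omega_ref, eta_ref, omegas, eta):
    """G_jj(omega + i*eta) from reference scalars only; br[n] = (b, r_n)."""
    out = np.empty(len(omegas), dtype=complex)
    for k, omega in enumerate(omegas):
        sigma = (omega + 1j * eta) - (omega_ref + 1j * eta_ref)
        g = 0.0 + 0j                              # (b, x^sigma_0)
        bp_prev = 0.0 + 0j                        # (b, p^sigma_{-1})
        pi_prev, pi_cur = 1.0 + 0j, 1.0 + 0j      # pi^sigma_{-1}, pi^sigma_0
        a_prev, b_prev = 1.0, 0.0                 # alpha_{-1}, beta_{-1}
        beta_s_prev = 0.0 + 0j                    # beta^sigma_{-1}
        for n in range(len(alphas)):
            bp = br[n] / pi_cur + beta_s_prev * bp_prev   # (b, p^sigma_n)
            c = b_prev * alphas[n] / a_prev
            pi_next = (1.0 + c + alphas[n] * sigma) * pi_cur - c * pi_prev
            alpha_s = (pi_cur / pi_next) * alphas[n]
            beta_s = (pi_cur / pi_next) ** 2 * betas[n]
            g = g + alpha_s * bp                          # (b, x^sigma_{n+1})
            bp_prev, beta_s_prev = bp, beta_s
            pi_prev, pi_cur = pi_cur, pi_next
            a_prev, b_prev = alphas[n], betas[n]
        out[k] = g
    return out
\end{verbatim}
Because no vector of the many-electron dimension appears in this part, changing the energy mesh or the smearing factor amounts to re-running these scalar loops only.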
\subsection{Preparation of ground state wavefunction}
\label{sub:eigen}
The transformation of the loop structure
in subsection~\ref{sub:reconstruction} increases
the re-usability of the program code.
The COCG part of the code can also be used in
the process of improving the ground state wavefunction as follows.
First we use the Lanczos method in order to tridiagonalize the Hamiltonian within the Krylov subspace,
and then obtain the ground state by diagonalizing it.
The calculated lowest eigenenergy converges rapidly
with increasing dimension of the subspace,
but the wavefunction does not,
because the orthogonality is unstable against the inevitable rounding error.
Next we improve the approximate eigenenergy and the wavefunction
with the inverse iteration method.
Because the COCG process with real arithmetic
is the same as the CG process,
here we can use the COCG part of the shifted COCG algorithm
whose loop structure is changed as
in subsection~\ref{sub:reconstruction}.~\cite{COCG_real}
Since the inverse iteration method works only when an approximate
eigenvalue and eigenvector are given,
the first Lanczos process cannot be omitted.
If the accuracy of the calculated wavefunction is not sufficient,
the processes are repeated, replacing the initial Lanczos
vector with the latest approximate wavefunction.
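The workflow of this subsection may be summarized by the following self-contained sketch; a dense solver and a small random matrix are used here purely for illustration, whereas in the actual calculation the linear solve is performed matrix-free by the real-arithmetic (CO)CG kernel.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 400
H = rng.standard_normal((N, N)); H = 0.5 * (H + H.T)   # stand-in Hamiltonian

# (i) short Lanczos run: rough estimate of the lowest eigenpair
m = 30
V = np.zeros((N, m)); a = np.zeros(m); b = np.zeros(m)
v = rng.standard_normal(N); v /= np.linalg.norm(v)
V[:, 0] = v
beta_prev, v_prev = 0.0, np.zeros(N)
for k in range(m):
    w = H @ V[:, k] - beta_prev * v_prev
    a[k] = V[:, k] @ w
    w -= a[k] * V[:, k]
    if k + 1 < m:
        b[k] = np.linalg.norm(w)
        v_prev, beta_prev = V[:, k], b[k]
        V[:, k + 1] = w / b[k]
T = np.diag(a) + np.diag(b[:m - 1], 1) + np.diag(b[:m - 1], -1)
theta, s = np.linalg.eigh(T)
E, psi = theta[0], V @ s[:, 0]                          # Ritz value and vector

# (ii) inverse iteration improves the eigenpair
for _ in range(3):
    y = np.linalg.solve(H - (E - 1.0e-6) * np.eye(N), psi)
    psi = y / np.linalg.norm(y)
    E = psi @ H @ psi                                   # Rayleigh quotient
print(E, np.linalg.norm(H @ psi - E * psi))             # residual monitors convergence
\end{verbatim}
If the residual is still above the criterion, the whole cycle is repeated with the refined vector as the new starting vector, as described above.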
\section{Accuracy and seed switching}
\label{Sec:error_and_seed_switching}
\subsection{Estimating accuracy of Green's function}
\label{sec:numerical_error}
In this subsection, we explain the accuracy of the Green's function calculated
by the shifted COCG method and give its estimation.
Here $G_{\rm exact}$ is the ``exact'' solution
of Eqs.~(\ref{eq:lin_eq}) and (\ref{eq:gf2}) for a given finite value of $\eta$.
Then we say that ``the calculated Green's function is accurate'',
when the $|\frac{[G_{\rm sCOCG}-G_{\rm exact}]_{ij}}{[G_{\rm exact}]_{ij}}|$
(hereafter, ``accuracy'') is small.
The ``accuracy'' and $||\bm{r}_n||$ are essentially the ``truncation errors''
of $G$ and $\bm{x}_n$, respectively.
Because the shifted system is equivalent to the
reference system, we can estimate the accuracy for
the shifted system
by replacing $A$ with $A+\sigma$ and any other quantities $\{q\}$
by $\{q^\sigma\}$.
We can derive following equation from
Eqs.~(\ref{eq:lin_eq}),(\ref{eq:gf2}) and (\ref{Eq:residual}),
\begin{equation}
\left|\frac{[G_{\rm sCOCG}-G_{\rm exact}]_{jj}}{[G_{\rm exact}]_{jj}}\right|
= \left|\frac{(\bm{b}, A^{-1} \bm{r}_n)}{(\bm{b}, A^{-1} \bm{b})}\right| .
\label{Eq:Truncation}
\end{equation}
If the matrix $A$ were a positive definite real symmetric matrix and
the vectors $\bm{b}$ and $\bm{r}_n$ were real vectors,
the upper bound of the right hand side of Eq.~(\ref{Eq:Truncation})
would be equal to $\frac{||\bm{r}_n||}{||\bm{b}||}=||\bm{r}_n||$.
When the matrix $A$ can be fully diagonalized numerically,
we can estimate $G_{\rm exact}$
within rounding errors, and then
obtain the ``accuracy'' of the approximate Green's function calculated
by the shifted COCG method.
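The error expression above can also be checked numerically with a few lines (a small random complex-symmetric $A$ is used as a stand-in): for any approximate solution $\bm{x}_n$ with residual $\bm{r}_n = \bm{b}-A\bm{x}_n$, the relative deviation of $(\bm{b},\bm{x}_n)$ from the ``exact'' $(\bm{b},A^{-1}\bm{b})$ coincides with the right hand side of Eq.~(\ref{Eq:Truncation}), while $||\bm{r}_n||$ is the quantity actually monitored.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N = 120
H = rng.standard_normal((N, N)); H = 0.5 * (H + H.T)
A = (0.4 + 0.05j) * np.eye(N) - H                # omega + i*eta - H
b = np.zeros(N); b[3] = 1.0

x_exact = np.linalg.solve(A, b)
x_n = x_exact + 1.0e-3 * rng.standard_normal(N)  # some imperfect approximation
r_n = b - A @ x_n

lhs = abs((b @ x_n - b @ x_exact) / (b @ x_exact))
rhs = abs((b @ np.linalg.solve(A, r_n)) / (b @ x_exact))
print(lhs, rhs)                                  # identical up to rounding
print(np.linalg.norm(r_n))                       # the monitored residual norm
\end{verbatim}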
\begin{figure}
\begin{center}
\caption{
An example of the ``accuracy'' of $[G_{\rm sCOCG}]_{jj}$,
$||\bm{r}_n||$ and the imaginary part of the ``exact''
solution ${\rm Im} [G_{\rm exact}]_{jj}$ (See text).
The dimension of the matrix $A$ here is equal to 8,960,
in the same model in Sec.~\ref{Sec:Application}.
The reference energy and smearing factor are equal to $\omega_{\rm ref}=6.04$eV
and $\eta=0.05$eV. The iteration number is equal to 800.
}
\label{Fig:error}
\end{center}
\end{figure}
The dotted and solid lines in Fig.~\ref{Fig:error} show
the ``accuracy'' of $[G_{\rm sCOCG}]_{jj}$ and the norm of the
residual vector $||\bm{r}_n||$, respectively.
The dashed line shows the excitation spectra.
The figure shows that
the ``accuracy'' is bounded by the norm of the residual vector.
Therefore, we can estimate the ``accuracy'' of the calculated
Green's function by using $||\bm{r}_n||$, without knowledge of
the ``exact'' solution $G_{\rm exact}$.
We can also see from the figure that
the Green's function calculated by the shifted COCG method
is more accurate near the bounds of the spectra.
\subsection{Seed switching}
\label{sub:seed_switch}
Assuming that the approximate solution of the reference system,
Eqs.~(\ref{Eq:CG:x})$\sim$(\ref{Eq:CG:beta}),
converges at the $M$-th iteration,
we can solve the shifted system,
Eqs.~(\ref{Eq:shift:pi})$\sim$(\ref{Eq:shift:p}),
up to the same $M$-th iteration.
When the approximate solution of the shifted system
has not converged at some $\omega+{\rm i}\eta=\omega_{\rm ref}+{\rm i}\eta_{\rm ref}+\sigma$,
we should extend the iteration of the reference system.
However, at $\omega_{\rm ref}+{\rm i}\eta_{\rm ref}$,
the extension does not improve the approximate solution,
since the norm of the residual vector is already considerably small.
In that case,
we should change the seed ($\omega_{\rm ref}+{\rm i}\eta_{\rm ref}$) to a new one,
$\omega_{\rm ref}^{\rm new}+{\rm i}\eta_{\rm ref}^{\rm new}$,
at which the norm of the residual vector is large and the approximate
solution has not converged.
Because the shifted system is equivalent to the reference system,
we can change the seed as follows,
without discarding the previous calculation at the old $\omega_{\rm ref}$.~\cite{Sogabe}
We define $\sigma_{\rm max}$ so that $||\bm{r}_M^{\sigma_{\rm max}}||={\rm Max}_{\sigma} \{||\bm{r}_M^\sigma||\}$,
where ${\rm Max}_{\sigma}$ means the maximum value on the $\sigma$-mesh
(energy mesh) points.
Because $\omega_{\rm ref}+{\rm i}\eta_{\rm ref}+\sigma_{\rm max}$ is
the prime candidate for the energy of the slowest convergence,
we choose it as $\omega_{\rm ref}^{\rm new}+{\rm i}\eta_{\rm ref}^{\rm new}$.
Then, $\alpha_{k}^{\sigma_{\rm max}}, \beta_{k}^{\sigma_{\rm max}}$, and
$\bm{r}_{k}^{\sigma_{\rm max}}$ ($k=0,1,\cdots,M$) are calculated and
replace the old values at the old reference energy, respectively.
Finally, the recurrence Eqs.~(\ref{Eq:CG:x})$\sim$(\ref{Eq:CG:beta})
of the COCG method at $\omega_{\rm ref}^{\rm new}+{\rm i}\eta_{\rm ref}^{\rm new}$
are calculated until the solution converges at $M^{\rm new}$-th iteration.
\begin{figure}
\begin{center}
\caption{The seed
($\omega_{\rm ref}+{\rm i}\eta_{\rm ref}$)
switching and the norm of residual vector
$||\bm{r}_n||$ at respective seed (the black solid line).
Here the $\eta_{\rm ref}$ is a constant and equal to $0.05$eV.
The dashed vertical line indicates the iteration number where the seed is switched.
The gray lines show the $||\bm{r}_n^{\sigma_{\rm max}}||$'s (see text).
The calculated values of $||\bm{r}_n||$ at $\omega_{\rm ref}=8.95$eV
are multiplied by a factor of $100$ in order to avoid overlap.
}
\label{Fig:seed_switch}
\end{center}
\end{figure}
The important point of the seed switching is that we can recalculate the new
$\alpha_n$'s, $\beta_n$'s, and $||\bm{r}_n||$'s ($0 < n \leq M$)
without any matrix-vector operation,
though matrix-vector operations are required to calculate
the new ones at further iterations $n'$ ($M < n' \leq M^{\rm new}$).
The same remark as in subsection \ref{sub:reconstruction} is applicable
to the implementation of the seed switching.
The shifted COCG algorithm with seed switching of any loop structure
is applicable to relatively small matrices, but
we must change its loop structure
from the previous one~\cite{Sogabe},
since the size of the intermediate vectors is huge in the many-electron
theory.
We must even change the recurrence equation of $\bm{r}_n$ to
Eq.~(\ref{Eq:shift:r}),
so that $\bm{r}_{n+1}$ is calculated only with
$\bm{r}_{n-1}$ and $\bm{r}_n$,
instead of all $\{\bm{r}_k\}_{k=1,2,\ldots,n}$,
for extremely huge matrices.
When we store $\{\alpha_n,\beta_n\}_{n=0,1,\ldots,M-1}$,
$\bm{r}_{M-1}$ and $\bm{r}_{M}$ in the COCG process,
we can calculate $\alpha_{M-1}^{\sigma_{\rm max}}$,
$\bm{r}_{M-1}^{\sigma_{\rm max}}$ and $\bm{r}_{M}^{\sigma_{\rm max}}$,
which are required for the following calculations.
Then we calculate the new reference system
up to the $M^{\rm new}$-th iteration step,
by using Eqs.~(\ref{Eq:CG:beta}), (\ref{eq:recurr}) and (\ref{Eq:shift:r}).
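The switching step itself involves only scalar work on the stored data; a minimal sketch (illustrative, with hypothetical variable names) is the following.
\begin{verbatim}
import numpy as np

def switch_seed(alphas, betas, r_Mm1, r_M, sigma_max):
    """Convert stored reference data to the new seed omega_ref + sigma_max (sketch)."""
    M = len(alphas)
    pis = [1.0 + 0j]                               # pi^sigma_0 = 1
    pi_prev, pi_cur = 1.0 + 0j, 1.0 + 0j           # pi^sigma_{-1}, pi^sigma_0
    a_prev, b_prev = 1.0, 0.0                      # alpha_{-1}, beta_{-1}
    for n in range(M):
        c = b_prev * alphas[n] / a_prev
        pi_next = (1.0 + c + alphas[n] * sigma_max) * pi_cur - c * pi_prev
        pis.append(pi_next)
        pi_prev, pi_cur = pi_cur, pi_next
        a_prev, b_prev = alphas[n], betas[n]
    alphas_new = np.array([(pis[n] / pis[n + 1]) * alphas[n] for n in range(M)])
    betas_new = np.array([(pis[n] / pis[n + 1]) ** 2 * betas[n] for n in range(M)])
    r_Mm1_new = r_Mm1 / pis[M - 1]     # collinearity: r_n^sigma = r_n / pi_n^sigma
    r_M_new = r_M / pis[M]
    # The COCG iteration is then continued from these two residual vectors
    # (with the last alpha_new and beta_new) via the three-term residual
    # recurrence, which again requires matrix-vector products.
    return alphas_new, betas_new, r_Mm1_new, r_M_new
\end{verbatim}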
Figure~\ref{Fig:seed_switch} shows an example of the seed switching.
The system is the same one as in Fig.~\ref{Fig:error} except for $\omega_{\rm ref}$.
Here $\eta=\eta_{\rm ref}=0.05$eV.
At $\omega_{\rm ref}=-1.40$eV, $||\bm{r}_n||$ decreases exponentially
and satisfies the criterion $||\bm{r}_n||< 10^{-10}$ at iteration step $n=330$.
However, there are many energies where the convergence of the residual
vector norm $||\bm{r}^\sigma_n||$ is slower than that at $\omega_{\rm ref}$.
Then $\sigma_{\rm max}=7.43$eV is found and
$\omega_{\rm ref}^{\rm new}$ is shifted to $6.03$eV.
We need seed switching twice more, at the $1509$-th and $3297$-th iterations,
in order to obtain the global convergence.
The largest value of the ``accuracy'' is $1.5 \times 10^{-10}$
at the last iteration step $n=3455$.
\subsection{Robustness of shifted COCG method}
In this subsection, we explain the
robustness of the shifted COCG method,
which is very important for obtaining the converged approximate
solution, especially in the case of long iterations.
We say that the calculation is robust
when the calculation is stable against perturbations.
For example, the orthogonality of $\{\bm{r}_n\}$ is
not a robust property, because
the inevitable rounding error perturbs
the calculation and the orthogonality is quickly broken down.
The robustness of the shifted COCG method consists of
two parts. One is the robustness of the COCG method
at the reference energy $\omega_{\rm ref}$.
The other is the robustness of the iterative solution of
the shifted equations.
Figure~\ref{Fig:seed_switch} shows the robustness of the COCG method
at $\omega_{\rm ref}$, because the norm of the residual vector
$||\bm{r}_n||$ goes to 0 in spite of the long iteration of 3,540 steps.
The global convergence of the ``accuracy'' that is mentioned at the end of
subsection~\ref{sub:seed_switch} shows
the robustness of the iterative solution of the shifted equations.
In the shifted COCG method, the ``orthogonality'' of the base vectors $\{\bm{r}_n\}$
is not necessary for reducing $||\bm{r}_n||$,
in contrast to the fact that
the subspace diagonalization methods require the unitarity of
the base vectors.
\section{Application of the shifted COCG method to the many-electron problem}
\label{Sec:Application}
Here we apply the shifted COCG method to the
double orbital extended Hubbard Hamiltonian
and calculate the excitation spectra.~\cite{Yamamoto}
\subsection{Hamiltonian of La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$}
The experimental results show that the
layered perovskite La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$
is an insulator with charge and spin stripe order,
as depicted in Fig.~\ref{Fig:ChargeSpinOrder}.
The charge and spin structures of the single layer of La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$
(a pseudo two-dimensional system), choosing the Ni 3d $e_g$ orbitals as the relevant ones,
were studied with the extended Hubbard model recently.~\cite{Yamamoto}
Here we use the same Hamiltonian.
\begin{eqnarray}
\hat{H} &=& \sum_{\scriptstyle i,j, \alpha,\beta, \sigma}
t_{i \alpha j \beta} \hat{c}^{\dagger}_{i \alpha \sigma}
\hat{c}_{j \beta \sigma}
+ \sum_{i, \alpha, \sigma}
\varepsilon_{i \alpha} \hat{n}_{i \alpha \sigma} \nonumber \\
&+& U \sum_{i,\alpha} \hat{n}_{i \alpha \uparrow}\hat{n}_{i \alpha \downarrow}
+ (U-2J) \sum_{i,\sigma,\sigma'}\hat{n}_{i ,3z^2-1, \sigma}\hat{n}_{i ,x^2-y^2, \sigma'} \nonumber \\
&+& \frac{J}{2} \sum_{\STACK{i,\alpha\ne\beta,}{\sigma,\sigma'}}
\left(
\hat{c}^{\dagger}_{i \alpha \sigma} \hat{c}^{\dagger}_{i \beta \sigma'}
\hat{c}_{i \alpha \sigma'} \hat{c}_{i \beta \sigma}
+
\hat{c}^{\dagger}_{i \alpha \sigma} \hat{c}^{\dagger}_{i \alpha \sigma'}
\hat{c}_{i \beta \sigma'} \hat{c}_{i \beta \sigma} \right)
\nonumber \\
&+& V \sum_{\STACK{\left<i,j\right>,\alpha,}{\beta, \sigma, \sigma'}}
\hat{n}_{i \alpha \sigma} \hat{n}_{j \beta \sigma'} , \label{eq_reduced_hamiltonian}
\end{eqnarray}
where the suffixes $\{i,j\}$ denote the sites,
$\{\alpha,\beta\}$ denote the orbital, $3z^2-1$ or $x^2-y^2$,
and $\{\sigma,\sigma'\}$ denote the spin coordinate.
The annihilation and number operators are $\hat{c}$ and $\hat{n}$, respectively.
The symbols $t$, $\varepsilon$, $U$, $J$, $V$ denote
the Slater-Koster type hopping parameter, the single electron energy, the on-site Coulomb
interaction, the on-site exchange interaction, and the intersite Coulomb interaction, respectively.
Hopping parameters are finite for nearest neighbor (n.n.) and second n.n. pairs of sites.
The brackets $\langle\cdots\rangle$ mean that the two sites enclosed by them
are n.n. sites.
Though the anisotropy of the hopping parameters for the second n.n. pair stabilizes
the spin structure,~\cite{Yamamoto} we choose the isotropic (tetragonal) parameter set
shown in Table~\ref{Tab}.
The role of $U$ and $J$ in the present situation is the stabilization
of the integral valency of the Ni ions (Ni$^{3+}$ and Ni$^{2+}$) and of the spin polarization.
\begin{table}[h]
\begin{tabular}{cccccccc}
$t_{dd\sigma}$ & $t_{dd\delta}$ & $\frac{1}{4}t'_{dd\sigma} + \frac{3}{4}t'_{dd\delta}$ &
$t'_{dd\pi}$ & $\Delta$ & $U$ & $J$ & $V$ \\
\hline
-0.543 & 0.058 & -0.018 & -0.023 & 0.97 & 7.5 & 0.88 & 0.5
\end{tabular}
\caption{The values of the parameters in the Hamiltonian in units of eV.~\cite{Yamamoto}
The prime symbol on ``$t$'' denotes the second n.n. hopping.
}
\label{Tab}
\end{table}
\begin{figure}[hbt]
\begin{center}
\caption{Experimentally observed charge and spin order in the La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_4$.
\cite{Yoshizawa_2000}}
\label{Fig:ChargeSpinOrder}
\end{center}
\end{figure}
\subsection{Ground state of La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$}
Here we summarize the properties of the calculated ground state
of La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$.~\cite{Yamamoto}
The calculated ground state shows the charge and spin stripe order
consistent with the experimental observation, and the system is an insulator.
Diagonal hole stripes are separately localized on the Ni$^{3+}$ sites in order to
reduce the hole-hole interaction energy induced by the inter-site Coulomb interaction $V$.
The charge order and the inter-site Coulomb interaction $V$ are directly related to
the energy gap in the excitation spectra of the system.
The spin stripe occurs only in the presence of multiple orbitals
and the charge order, with the help of anisotropy.
The spin stripe is determined by the electronic structure on a smaller energy scale
than that of the charge stripe.
\subsection{Computational details}
Here we explain miscellaneous computational details.
The calculated system is a two-dimensional square lattice.
There are 12 electrons on the periodic $\sqrt{8}\times\sqrt{8}$ sites.
Because the $\mbox{total }S_z$ of the system is preserved,
we can reduce the number of relevant many-electron states to 64,128,064
by using the condition $S_z =0$.
The smearing factor $\eta$ is also an arbitrary parameter in the present paper.
Here we explain how we chose the value of $\eta$.
The energy scale of the low energy excitations is
$t \sim V \sim {\rm O}(10^{-1}~{\rm eV})$, because the value of the
on-site Coulomb interaction $U$ is much larger.
Therefore we must set $\eta$ lower than $10^{-1}$eV,
so that $\eta$ does not smear out spectral structures finer than itself.
There is another restriction, that the interval of the energy mesh
must be sufficiently smaller than $\eta$, in order to see
the fine peak structure of the spectra.
Then, because the calculation time increases with decreasing $\eta$,
the value of $\eta$ is roughly determined as O(10$^{-3}$eV)$\sim$O(10$^{-2}$eV).
The next point is the criterion for the convergence of the ground state vector.
Our calculations are in double precision and the rounding error is inevitable
($\sim 10^{-16}$) in each component of the eigenvector.
Assuming the accumulated error is of O($\sqrt{N}$) ($N=$64,128,064),
the accuracy is expected to be $10^{-16}\times\sqrt{N}\sim 10^{-12}$.
We set the allowance for this estimate to a factor of $10^2$,
and the criterion for the accuracy of the calculated ground state energy $E_{\rm gs}$ and
eigenvector $\left| \right\rangle$ is
$\sqrt{\left\langle \right| (\hat{H}-E_{\rm gs})^2 \left| \right\rangle}< 10^{-10}$.
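In practice this criterion is just the norm of the residual vector of the eigenvalue problem; a short sketch (with a hypothetical matrix-free routine \texttt{H\_apply}) is the following.
\begin{verbatim}
import numpy as np

def ground_state_residual(H_apply, psi, E_gs):
    """sqrt(<psi|(H-E_gs)^2|psi>) for a normalized psi; H_apply(v) returns H v."""
    res = H_apply(psi) - E_gs * psi
    return np.linalg.norm(res)    # compare against the criterion, e.g. 1e-10
\end{verbatim}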
\begin{figure}
\begin{center}
\caption{The spectral functions of the present
Hamiltonian of double orbital extended Hubbard model with
12 electrons on the periodic $\sqrt{8}\times\sqrt{8}$ sites (gray line)
and their ``accuracy'' (black line, see text).
The spectra of affinity and ionization levels are calculated separately
by using the shifted COCG method, for the given smearing factor $\eta=0.10$eV
(upper panel) and $\eta=0.01$eV (lower panel).
The highest occupied level is at $9.4$eV and the lowest unoccupied one at $10.3$eV.
The intersite Coulomb interaction is $V=0.5$eV.
The energy zero is set at the ground state energy $36.755$eV of the 12-electron system.}
\label{Fig:eta}
\end{center}
\end{figure}
\subsection{Spectral function}
We examined the spectral function
\begin{equation}
{\cal A}(\omega)=-\frac{1}{\pi}{\rm Tr}\left[ {\rm Im} G(\omega) \right].
\end{equation}
This can be easily evaluated by the shifted COCG method.
Figure~\ref{Fig:eta} shows the spectral functions of the state D at $V=0.5$eV.
The upper and the lower panels show the cases of $\eta=0.1$eV and $\eta=0.01$eV, respectively.
Both of them are calculated from the same COCG calculations,
and the only difference between them is the imaginary part of the energy shift $\sigma$.
The spectra of the ionization and affinity levels are calculated separately, and
the $\omega_{\rm ref}$'s for the respective spectra are chosen to be $(9.0+{\rm i} 0.01)$eV
and $(10.4+{\rm i} 0.01)$eV.
The highest occupied level is at $9.4$eV and the lowest unoccupied one at $10.3$eV.
The number of iterations equals 800 for each spectrum.
If one attempts to obtain the profile of the spectra with smoothly connected curves,
as in the bulk limit,
one should set the value of $\eta$ sufficiently larger than
$\frac{(\mbox{width of spectra})}{(\mbox{iteration number})}$,
in order to smear out the excessive peaks caused by the finite system.
Because the iteration number equals 800 in the present calculations,
this criterion becomes $\eta \gg 0.03$eV,
and the gray curves in the upper panel of Fig.~\ref{Fig:eta} show
the smooth profile of the spectral function.
If one attempts to see whether the energy gap opens at the boundary between
the affinity and ionization levels, one must choose $\eta$ sufficiently smaller than
the width of the energy gap.
In the present calculation, this criterion becomes $\eta \ll 0.9$eV,
and the gray curves in the lower panel of Fig.~\ref{Fig:eta} show
the energy gap around $\omega=9.8$eV.
We can choose $\eta$ independently of the reference energy,
so the energy resolution of the spectral function can be changed
after all the time-consuming matrix-vector operations have been finished.
The black curves in the upper and lower panels of Fig.~\ref{Fig:eta}
show the ``accuracy'' of the respective spectral functions, and
the spectra are extremely accurate near the boundary of the spectra
($\omega=9.8$eV), where the energy gap is open.
Therefore, we conclude from the lower panel of Fig.~\ref{Fig:eta}
that the ground state of the present Hamiltonian is an insulator.
Changing the value of $V$ continuously from $0.5$eV to $0.0$eV,
we find that the system becomes a metal.~\cite{Yamamoto}
Therefore, the intersite Coulomb interaction makes the present system an insulator,
unlike the usual transition metal oxides,
where the large on-site Coulomb interaction makes the system an insulator.
\section{Discussion and summary}
\label{Sec:Discussion}
Once the COCG method is applied to the reference system
Eq.~(\ref{Eq:ref}), the shifted system Eq.~(\ref{Eq:shifted}) is
solved without time-consuming matrix-vector operations
by the shifted COCG method.
This notable property is due to the mathematical structure of the COCG method,
in which the residual vectors form the ``orthogonal''
base set of vectors, whose directions do not change against $\sigma$.
This reduction of the matrix-vector operations greatly
accelerates the calculation of the Green's function
$G(\omega)$, keeping the robustness of the COCG method.
Simultaneously, the accuracy of the approximate $G(\omega)$ is easily
estimated as the norm of the newly generated base vector (residual vector)
at the latest iteration.
The total accuracy of the shifted COCG method varies
depending on $\omega$, $\sigma$ and
$\omega_{\rm ref}$,~\cite{note_for_omega_ref} and is
generally very small near the bounds of the respective spectra.
In the many-electron Green's function,
we are usually interested in the low energy excitations,
in other words,
the spectra near the boundary between the affinity and ionization levels.
Therefore, we can calculate the Green's function
accurately and quickly by the shifted COCG method in the energy range of interest.
Another problem in the application of the COCG method to the
many-electron theory is the memory constraint due to the extremely large size
of the vectors and matrices.
We resolved this problem by separating
the COCG part for the reference system
and the part for the shifted equations, changing the loop structure.
This change gives us the following two merits.
One is the usage of the former part for improving the ground state,
as is mentioned in subsection~\ref{sub:eigen}.
The other is the fast calculation when changing the smearing factor $\eta$,
which is just an imaginary shift.
When we do not know {\it a priori} the proper energy scale
(the width of the energy gap in the present paper)
or the proper value of $\eta$, this merit is very important.
The seed switching is a very important idea
for the shifted COCG method to give global convergence,
which means that the calculated solution converges
everywhere in the energy region of interest.
Because it takes many iteration steps for the solution to converge,
especially in the middle of the spectra,
we sometimes must stop the iteration before obtaining global convergence.
In that case, we must examine the accuracy of the result and check whether
the solutions in the required energy range satisfy the criterion.
The applicability of the above reconstruction
and of the seed switching is not specific to
the many-electron problem.
We can apply them to the general calculation of
Green's functions of extremely large dimension.
We applied the shifted COCG method to the charge and spin order
in La$_\frac{3}{2}$Sr$_\frac{1}{2}$NiO$_{4}$,
where the intersite Coulomb interaction, relatively small compared with the on-site one,
plays an important role.
We conclude that a relatively small energy gap opens at the Fermi energy,
and the system becomes an insulator, due to the intersite Coulomb interaction.
\begin{acknowledgments}
Calculations were done at the Supercomputer Center, Institute for Solid State
Physics, The University of Tokyo.
This work was partially supported by a Grant-in-Aid for Scientific Research in Priority
Areas ``Development of New Quantum Simulators and Quantum Design'' (No.170640004) of
The Ministry of Education, Culture, Sports, Science, and Technology, Japan.
\end{acknowledgments}
|
2,869,038,156,803 | arxiv | \section{Strahler-Optimal Attractor Decompositions}
In this section we prove that every parity game whose Lehtinen number
is~$k$ has an attractor decomposition of Strahler number at most~$k$.
In other words, we establish the Lehtinen number upper bound on the
Strahler number, which together with
Lemma~\ref{lem:Strahler-bounds-Lehtinen} provides a positive answer to
Question~\ref{question:Strahler-eq-register}.
\begin{theorem}
\label{thm:Lehtinen-bounds-Strahler}
The Strahler number of a parity game is no larger than its Lehtinen
number.
\end{theorem}
When talking about strategies in parity games in
Section~\ref{section:tuning}, we only considered
positional strategies, for which it was sufficient to verify the
parity criterion on (simple) cycles.
Instead,
we explicitly consider the parity criterion on infinite paths here,
which we find more convenient for establishing properties of Audrey
strategies in the proof of
Theorem~\ref{thm:Lehtinen-bounds-Strahler}.
First, we introduce the concepts of \emph{tight} and
\emph{offensively optimal} attractor decompositions.
\begin{definition}
\label{def:Tight}
A Steven $d$-attractor decomposition $\mathcal{H}$ of $\mathcal{G}$ is \emph{tight}
if Audrey has a winning strategy from at least one state in
$\Def{\Strah{\mathcal{H}}-1}{\mathcal{G}}$ in which the value of register
$\Strah{\mathcal{H}}-1$ is~$d$.
\end{definition}
By definition, the existence of a tight Steven $d$-attractor
decomposition on a parity game implies that the Lehtinen number of the
game is at least its Strahler number, from which
Theorem~\ref{thm:Lehtinen-bounds-Strahler} follows.
Offensive optimality of an attractor decomposition, the concept we
define next, may seem less natural and more technical than tightness,
but it facilitates our proof that every game has a tight attractor
decomposition.
\begin{definition}
\label{def:Opt}
Let
$\mathcal{H} = \seq{A,(S_1, \mathcal{H}_1, A_1), \ldots,(S_\ell, \mathcal{H}_\ell, A_\ell)}$
be a Steven $d$-attractor decomposition, let games $\mathcal{G}_i$ for
$i = 1, 2, \dots, \ell$ be as in the definition of an attractor
decomposition, let $A_i'$ be the Audrey attractor of the set of
vertices of priority $d-1$ in $\mathcal{G}_i$, and let
$\mathcal{G}_i' = \mathcal{G}_i\setminus A_i'$.
We say that $\mathcal{H}$ is \emph{offensively optimal} if for every
$i = 1, 2, \ldots, \ell$, we have:
\begin{itemize}
\item
Audrey has a dominion strategy on $\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}'_i}$;
\item
Audrey has a dominion strategy on
$\Def{\Strah{\mathcal{H}_i}}{\mathcal{G}_i'\setminus S_i}$.
\end{itemize}
\end{definition}
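(For readers who prefer a computational view, the attractors used in this definition are the standard ones; the following is a purely illustrative Python sketch of the usual backward fixpoint computation, with hypothetical data-structure names, and plays no role in the proofs below.)
\begin{verbatim}
from collections import deque

def attractor(succ, owner, targets, player):
    """succ[v]: successor list; owner[v]: 0 or 1; targets: iterable of vertices."""
    pred = {v: [] for v in succ}
    for v, ws in succ.items():
        for w in ws:
            pred[w].append(v)
    out_count = {v: len(succ[v]) for v in succ}   # opponent's remaining escapes
    attr = set(targets)
    queue = deque(attr)
    while queue:
        w = queue.popleft()
        for v in pred[w]:
            if v in attr:
                continue
            if owner[v] == player:
                attr.add(v); queue.append(v)      # player can move into attr
            else:
                out_count[v] -= 1
                if out_count[v] == 0:             # opponent cannot avoid attr
                    attr.add(v); queue.append(v)
    return attr
\end{verbatim}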
Proving that every offensively optimal Steven attractor
decomposition is tight (Lemma~\ref{lemma:AudreyStrategyTight}), and
that every Steven dominion in a parity game has an offensively optimal
Steven attractor decomposition (Lemma~\ref{lemma:TightStrahler}), will
complete the proof of Theorem~\ref{thm:Lehtinen-bounds-Strahler}. We first give two propositions that will be useful in the proofs.
\begin{proposition}
\label{prop:offensivetodominion}
For every parity game $\mathcal{G}$ and non-negative integer $k$, if Audrey has a dominion strategy from every state of $\Def{k}{\mathcal{G}}$, then Audrey has a dominion strategy on $\Reg{k}{\mathcal{G}}$.
\end{proposition}
\begin{proof}
For every state $s$ of $\Def{k}{\mathcal{G}}$, Audrey has a winning strategy $\tau_s$ on
$\Def{k}{\mathcal{G}}$ starting in~$s$.
We construct a dominion strategy for her on
$\Reg{k}{\mathcal{G}}$:
after every visit to a state of rank $2k+1$, Audrey
follows $\tau_s$, where $s$ is the first state that follows on the
path and whose rank is smaller than $2k+1$.
This defines a dominion strategy on
$\Reg{k}{\mathcal{G}}$.
\end{proof}
\begin{proposition}
\label{prop:dominioni}
If
$\mathcal{H} = \seq{A, (S_1, \mathcal{H}_1, A_1), \dots,
(S_{\ell}, \mathcal{H}_{\ell}, A_{\ell})}$
is an offensively optimal Steven $d$-attractor decomposition, then for
every $i = 1, 2, \dots, \ell$, we have that Audrey has a dominion
strategy on $\Reg{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$ (and also a dominion
strategy on $\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$).
\end{proposition}
\begin{proof}
Let $i$ be in $\{1, 2, \dots, \ell\}$.
Consider the following strategy in $\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$:
\begin{itemize}
\item
On the set of states
whose vertex components are in~$A'_i$,
Audrey follows a strategy induced by the
reachability strategy in $A'_i$ to a vertex of priority~$d-1$
(picking any move if $v$ is of priority~$d-1$);
\item
In states whose vertex component is in~$\mathcal{G}'_i$,
Audrey plays a $(k-1)$-register dominion strategy
on~$\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}'_i}$.
Such a strategy exists by the definition of offensive optimality.
\end{itemize}
This strategy is indeed an Audrey dominion strategy
on~$\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$, because any play either visits a state whose
first component is a vertex in $A_i'$ infinitely often, or it
eventually remains in $\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}'_i}$.
In the former case, the play visits a state
whose first component is a vertex of priority $d-1$ infinitely often.
In the latter case, the strategy is a dominion strategy
on~$\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}'_i}$.
Finally, we use Proposition~\ref{prop:offensivetodominion} to turn this Audrey dominion strategy on~$\Def{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$ into an Audrey dominion strategy on~$\Reg{\Strah{\mathcal{H}_i}-1}{\mathcal{G}_i}$.
\end{proof}
\begin{lemma}
\label{lemma:AudreyStrategyTight}
Every offensively optimal Steven attractor decomposition is tight.
\end{lemma}
\begin{proof}
Let
$\mathcal{H} = \seq{A, (S_1, \mathcal{H}_1, A_1), \ldots,
(S_\ell, \mathcal{H}_\ell, A_\ell)}$
be an offensively optimal $d$-attractor decomposition of a parity
game and let $k=\Strah{\mathcal{H}}$.
We construct a strategy for Audrey in~$\Def{k-1}{\mathcal{G}}$ that is
winning for her from at least one state in which the value of
register $k-1$ is $d$.
We define $\mathcal{G}_i'$ and $A_i'$ as in Definition~\ref{def:Opt}.
\textit{Case 1:} $\Strah{\mathcal{H}_i}=k$ for some unique $i$ in
$\{1,\ldots, \ell\}$. In this case, we show that Audrey has a dominion
strategy on $\Def{k-1}{\mathcal{G}_i}$.
Since $\mathcal{G}_i$ is a trap for Steven in~$\mathcal{G}$, this gives the desired
result. This directly follows from Proposition~\ref{prop:dominioni}.
\textit{Case 2:} There are $1 \leq i<j \leq \ell$ such that
$\Strah{\mathcal{H}_i} = \Strah{\mathcal{H}_j} = k-1$.
We construct a strategy for Audrey in~$\Def{k-1}{\mathcal{G}}$ that is winning
for her from all states in~$\mathcal{G}_j$ whose register~$k-1$ has
value~$d$.
Firstly, since $\mathcal{H}$ is offensively optimal, Audrey has a dominion
strategy on $\Def{k-1}{\mathcal{G}_i'\setminus S_i}$, denoted by~$\tau_i$, and
a dominion strategy on~$\Reg{k-2}{\mathcal{G}'_i}$, denoted by~$\tau'_i$.
Moreover, by Proposition~\ref{prop:dominioni}, we have that Audrey has a
dominion strategy, denoted by $\tau_j$, on $\Reg{k-2}{\mathcal{G}_j}$
(note that $\mathcal{G}_j$ is a trap for Steven in $\mathcal{G}$).
Consider the following strategy for Audrey in $\Def{k-1}{\mathcal{G}}$,
starting from a state whose vertex component is in~$\mathcal{G}_j$ and
register~$k-1$ has value~$d$:
\begin{itemize}
\item
As long as the value of register~$k-1$ is larger than~$d-1$, Audrey
follows the strategy induced by~$\tau_j$, while ignoring the value
of register~$k-1$.
\item
If the value in register $k-1$ is at most $d-1$:
\begin{itemize}
\item
In states whose vertex component is in $A'_i$, Audrey follows a
strategy induced by the reachability strategy from~$A'_i$ to a
vertex of priority~$d-1$
(picking any move if the vertex has priority $d-1$);
\item
In states whose vertex component is in $\mathcal{G}'_i \setminus S_i$ and
whose register~$k-2$ has value at most~$d-2$, Audrey
follows~$\tau_i$;
\item
In states whose vertex component is in~$\mathcal{G}'_i$ and whose register
$k-1$ has value~$d-1$, Audrey follows the strategy induced
by~$\tau'_i$, while ignoring the value of register~$k-1$.
\end{itemize}
\end{itemize}
Audrey plays any move if none of the above applies.
We argue that this strategy is winning for Audrey
in~$\Def{k-1}{\mathcal{G}}$ from states whose vertex component is
in~$\mathcal{G}_j$ and register~$k-1$ has value~$d$.
Consider an infinite path that starts in such a state.
As long as register $k-1$ has value~$d$, Audrey follows~$\tau_j$.
If Steven never resets register $k-1$ then Audrey wins.
Otherwise, once register $k-1$ has been reset, its value is at
most~$d-1$.
Note that $\mathcal{G}_j$ is included in $A'_i \cup (\mathcal{G}'_i \setminus S_i)$.
If register $k-1$ has a value smaller than $d-1$, and the play never
visits a state whose vertex component is in~$A'_i$, then
Audrey has followed $\tau_i$ along the play
(she has never left $\mathcal{G}'_i \setminus S_i$, as the only way for Steven
to leave $\mathcal{G}'_i \setminus S_i$ is to go to $A'_i$) and wins.
Otherwise, the play visits a state whose vertex component is
in~$A'_i$, and so it visits a state whose vertex component has
priority~$d-1$, leading to a state in which register $k-1$ has
value~$d-1$.
Finally, if a state whose vertex component is in $A'_i$ is visited
infinitely many times then Audrey wins.
Otherwise, Audrey eventually plays according to~$\tau'_i$.
If Steven never resets register $k-1$ then Audrey wins.
Otherwise, if Steven resets register~$k-1$, which at this point has
value~$d-1$, a state of rank $2k-1$ is visited and Audrey wins.
\end{proof}
\begin{lemma}
\label{lemma:TightStrahler}
Every Steven dominion in a parity game has an offensively optimal
Steven attractor decomposition.
\end{lemma}
\begin{proof}
Consider a parity game $\mathcal{G}$ which is a Steven dominion. Let $k$ be
the Lehtinen number of $\mathcal{G}$ and let $d$ be the largest even value
such that $\pi^{-1}(\{d,d-1\})\neq \emptyset$.
We construct an offensively optimal Steven attractor decomposition
by induction.
If $d=0$, it is enough to consider $\seq{A, \emptyset}$, where $A$ is
the set of all vertices in~$\mathcal{G}$.
If $d>1$, let $A$ be the Steven attractor of the set of vertices of
priority $d$ in $\mathcal{G}$. Let $\mathcal{G}_0 = \mathcal{G} \setminus A$. If $\mathcal{G}_0 =
\emptyset$ then $\seq{A,\emptyset}$ is an offensively optimal Steven
attractor decomposition for~$\mathcal{G}$. Otherwise, $\mathcal{G}_0$ is a non-empty
trap for Steven in~$\mathcal{G}$ and therefore $\mathcal{G}_0$ has a Lehtinen number
at most $k$.
Let $A'$ be the Audrey attractor of all the vertices of priority
$d-1$ in the sub-game $\mathcal{G}_0$ and let $\mathcal{G}'_0 = \mathcal{G}_0\setminus A'$.
Given a positive integer $b$, let $L^{b}$ be the largest dominion in
$\mathcal{G}'_0$ on which Steven has a dominion strategy in
$\Def{b}{\mathcal{G}'_0}$. We define $m$ to be the smallest number such that
$L^{m}\neq \emptyset$ and let $S_0 = L^{m}$.
We show that $m \leq k$.
To prove this, we construct an Audrey dominion strategy on
$\Def{b}{\mathcal{G}_0}$ for all $b$ such that $L^b = \emptyset$.
Since the Lehtinen number of $\mathcal{G}_0$ is at most $k$, this implies
that $m \leq k$.
The Audrey dominion strategy on $\Def{b}{\mathcal{G}_0}$, assuming
$L^{b} = \emptyset$, is as follows:
\begin{itemize}
\item
If the vertex component of a state is in~$A'$ then Audrey uses the
strategy in~$A'$ induced by the reachability strategy to vertices
of priority~$d-1$;
\item
If the vertex component of a state is in~$\mathcal{G}'_0$ then Audrey uses
her dominion strategy on~$\Def{b}{\mathcal{G}'_0}$, which exists because the
Steven dominion $L^{b}$ in $\Def{b}{\mathcal{G}'_0}$ is empty.
\end{itemize}
Any play following the above strategy and visiting infinitely often
a state of $\Def{b}{\mathcal{G}_0}$ whose vertex component is in~$A'$ is winning for Audrey.
A play following the above strategy and remaining eventually in
$\Def{b}{\mathcal{G}'_0}$ is also winning for Audrey.
Let $\mathcal{H}_0$ be the $(d-2)$-attractor decomposition of $S_0$ obtained
by induction.
In particular, $\mathcal{H}_0$ is offensively optimal.
Let $A_0$ be the Steven attractor to $S_0$ in $\mathcal{G}_0$ and
let~$\mathcal{G}_1 = \mathcal{G}_0 \setminus A_0$.
Subgame $\mathcal{G}_1$ is a trap for Steven and therefore it is a Steven
dominion.
Let
$\mathcal{H}' = \seq{\emptyset, (S_1, \mathcal{H}_1, A_1), \ldots,
(S_\ell, \mathcal{H}_\ell, A_\ell)}$
be an offensively optimal Steven $d$-attractor decomposition of
$\mathcal{G}_1$ obtained by induction.
We claim that
$\mathcal{H} = \seq{A, (S_0, \mathcal{H}_0, A_0), (S_1, \mathcal{H}_1, A_1), \ldots,
(S_\ell, \mathcal{H}_\ell, A_\ell)}$
is an offensively optimal Steven $d$-attractor decomposition
of~$\mathcal{G}$.
Since $\mathcal{H}'$ is offensively optimal, it is enough to show that:
\begin{itemize}
\item
Audrey has a dominion strategy on
$\Def{\Strah{\mathcal{H}_0}-1}{\mathcal{G}'_0}$,
\item
Audrey has a dominion strategy on
$\Def{\Strah{\mathcal{H}_0}}{\mathcal{G}'_0 \setminus S_0}$.
\end{itemize}
Since $\mathcal{H}_0$ is offensively optimal, Audrey has a winning
strategy from at least one state in $\Def{\Strah{\mathcal{H}_0}-1}{S_0}$, by
Lemma~$\ref{lemma:AudreyStrategyTight}$, and hence
$m \geq \Strah{\mathcal{H}_0}$.
So, by choice of $m$, Steven does not have a defensive dominion strategy
on $\Def{\Strah{\mathcal{H}_0}-1}{\mathcal{G}'_0}$ from any state. This means that Audrey has a dominion strategy on $\Def{\Strah{\mathcal{H}_0}-1}{\mathcal{G}'_0}$.
Moreover, by construction of $S_0$, Audrey has a dominion strategy
on $\Def{m}{\mathcal{G}'_0 \setminus S_0}$.
This implies that Audrey has a dominion strategy on
$\Def{\Strah{\mathcal{H}_0}}{\mathcal{G}'_0 \setminus S_0}$.
\end{proof}
\section{Strahler-Universal Progress Measure Lifting Algorithm}
\label{sec:coda}
Jurdzi\'nski and Lazi\'c~\cite[Section~IV]{JL17} have implicitly
suggested that the progress-measure lifting algorithm~\cite{Jur00} can
be run on any ordered tree and they have established the correctness
of such an algorithm if their \emph{succinct multi-counters trees} were
used.
This has been further clarified by Czerwi\'nski et
al.~\cite[Section~2.3]{CDFJLP19}, who have explicitly argued that any
$(n, d/2)$-universal ordered tree is sufficient to solve an
$(n, d)$-small parity game in this way.
We make explicit a more detailed observation that follows using the
same standard arguments
(see, for example, Jurdzi\'nski and Lazi\'c~\cite[Theorem~5]{JL17}).
\begin{proposition}
\label{prop:output-of-lifting-algo}
Suppose the progress-measure lifting algorithm is run on a parity
game~$\mathcal{G}$ and on an ordered tree~$T$.
Let $D$ be the largest Steven dominion in~$\mathcal{G}$
on which there is a Steven progress measure whose tree can be
embedded in~$T$.
Then the algorithm returns a Steven dominion strategy on~$D$.
\end{proposition}
An elementary corollary of this observation is that if the
progress-measure lifting algorithm is run on the tree of a progress
measure on some Steven dominion in a parity game, then the algorithm
produces a Steven dominion strategy on a superset of that dominion.
Note that this is achieved in polynomial time because the tree of a
progress measure on an $(n, d)$-small parity game is
$(n, d/2)$-small and the running time of the algorithm is dominated by
the size of the tree~\cite[Section~IV.B]{JL17}.
\begin{theorem}
\label{thm:Strahler-pm-run-time}
There is an algorithm for solving $(n, d)$-small parity games of
Strahler number~$k$ in quasi-linear space and time
$n^{O(1)} \cdot (d/2k)^k = n^{{k \lg({d}/{k})}/{\lg n} + O(1)}$,
which is polynomial in~$n$ if $k \cdot \lg(d/k) = O(\log n)$.
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:Strahler-small}, we may assume that
$k \leq 1 + \lg n$.
In order to solve an $(n, d)$-small parity game of Steven Strahler
number~$k$, run the progress-measure lifting algorithm for Steven on
tree~$\mathcal{B}_{\floor{\lg n}, {d}/{2} + 1}^{k+1}$, which is $(k+1)$-Strahler
$(n, {d}/{2} + 1)$-universal by
Lemma~\ref{lem:U-n-h-k-Strahler-universal} and
Corollary~\ref{cor:Bc-eq-U}.
By Theorem~\ref{thm:ad-Strahler-eq-pm-Strahler} and by
Proposition~\ref{prop:output-of-lifting-algo}, the algorithm will
then return a Steven dominion strategy on the largest Steven
dominion.
The running time and space upper bounds follow from
Theorem~\ref{thm:size-of-U-n-h-k}, by the standard analysis of
progress-measure lifting as in~\cite[Theorem~7]{JL17}, and by
Lemma~\ref{lemma:leaf-successor-poly-log}.
\end{proof}
\begin{remark}
\label{remark:2-sqrt-lg}
We highlight the $k \cdot \lg(d/k) = O(\log n)$ criterion from
Theorem~\ref{thm:Strahler-pm-run-time}
as offering a novel trade-off between two natural structural
complexity parameters of parity games
(number of priorities~$d$ and the Strahler/Lehtinen number~$k$)
that enables solving them in time that is polynomial in the number
of vertices~$n$.
It includes as special cases both the $d < \lg n$ criterion of
Calude et al.~\cite[Theorem~2.8]{CJKLS17} and the $d = O(\log n)$
criterion of Jurdzi\'nski and Lazi\'c~\cite[Theorem~7]{JL17}
(set $k = \floor{\lg n} + 1$ and use
Propositions~\ref{prop:tree-of-decomposition-is-small}
and~\ref{prop:Strahler-small} to justify it),
and the $k = O(1)$ criterion of Lehtinen~\cite[Theorem~3.6]{Leh18}
(by Theorem~\ref{thm:Lehtinen-bounds-Strahler}).
We argue that the new
$k \cdot \lg(d/k) = O(\log n)$
criterion
(Theorem~\ref{thm:Strahler-pm-run-time})
enabled by our results
(coincidence of the Strahler and the Lehtinen
numbers: Theorem~\ref{thm:Lehtinen-bounds-Strahler})
and techniques
(small and efficiently navigable Strahler-universal
trees:
Theorem~\ref{thm:size-of-U-n-h-k},
Corollary~\ref{cor:Bc-eq-U},
and Lemma~\ref{lemma:leaf-successor-poly-log})
considerably expands the asymptotic ranges of the natural structural
complexity parameters in which parity games can be solved in
polynomial time.
We illustrate it by considering the scenario in which the rates of
growth of both $k$ and $\lg d$ as functions of~$n$ are
$O\!\left(\sqrt{\log n}\right)$, i.e.,
$d$ is $2^{O\left(\sqrt{\log n}\right)}$.
Note that the number of priorities~$d$ in this scenario is allowed
to grow as fast as $2^{b \cdot \sqrt{\lg n}}$ for an arbitrary positive
constant~$b$, which is
significantly larger than
what is allowed by the $d = O(\log n)$ criterion of Jurdzi\'nski and
Lazi\'c~\cite[Theorem~7]{JL17}.
Indeed, its rate of growth is much larger than any
poly-logarithmic function of~$n$, because for every positive
constant~$c$, we have $(\lg n)^c = 2^{c \cdot {\lg {\lg n}}}$, and
$c \cdot {\lg {\lg n}}$ is exponentially smaller
than~$b \cdot \sqrt{\lg n}$.
At the same time, the $O\!\left(\sqrt{\log n}\right)$ rate of
growth
allowed in this scenario for the Strahler number~$k$
substantially exceeds $k = O(1)$ required by
Lehtinen~\cite[Theorem~3.6]{Leh18}.
\end{remark}
\section{Context}
\subparagraph*{Parity Games.}
Parity games are a fundamental model in automata theory and
logic~\cite{EJ91,Zie98,GTW01,BW18}, and their applications to
verification, program analysis, and synthesis.
In particular, they are intimately linked to the problems of emptiness
and complementation of non-deterministic automata on
trees~\cite{EJ91,Zie98}, model checking and satisfiability of fixpoint
logics~\cite{EJS93,BW18}, and evaluation of nested fixpoint
expressions~\cite{BKMP19,HS19}.
It is a long-standing open problem whether parity games can be solved
in polynomial time~\cite{EJS93}.
The impact of parity games goes well beyond their home turf of
automata theory, logic, and formal methods.
For example, an answer~\cite{Fri09} to a question posed originally for
parity games~\cite{VJ00} has strongly inspired major breakthroughs on
the computational complexity of fundamental algorithms in stochastic
planning~\cite{Fea10} and linear optimization~\cite{Fri11,FHZ11}.
\subparagraph*{Strahler Number.}
The Strahler number has been proposed by Horton (1945) and made
rigorous by Strahler~(1952), in their morphological study of river
networks in hydrogeology.
It has been also studied in other sciences, such as botany, anatomy,
neurophysiology, physics, and molecular biology, where branching
patterns appear.
The Strahler number has been identified in computer science by
Ershov~\cite{Ers58} as the smallest number of registers needed to
evaluate an arithmetic expression.
It has since been rediscovered many times in various areas of computer
science;
see the surveys of Knuth~\cite{Knu73}, Viennot~\cite{Vie90}, and
Esparza, Luttenberger, and Schlund~\cite{ELS16}.
\subparagraph*{Related Work.}
A major breakthrough in the quest for a polynomial-time algorithm for
parity games was achieved by Calude, Jain, Khoussainov, Li, and
Stephan~\cite{CJKLS17}, who have given the first quasi-polynomial
algorithm.
Other quasi-polynomial algorithms have been developed soon after by
Jurdzi\'nski and Lazi\'c~\cite{JL17}, and Lehtinen~\cite{Leh18}.
Czerwi\'nski, Daviaud, Fijalkow, Jurdzi\'nski, Lazi\'c, and
Parys~\cite{CDFJLP19} have introduced the concepts of
\emph{universal trees} and \emph{separating automata}, and argued
that all the aforementioned quasi-polynomial algorithms were intimately
linked to them.
By establishing a quasi-polynomial lower bound on the size of
universal trees, Czerwi\'nski et al.\ have highlighted the fundamental
limitations of the above approaches, motivating further the study of
the attractor decomposition algorithm due to McNaughton~\cite{McN93}
and Zielonka~\cite{Zie98}.
Parys~\cite{Par19} has proposed an ingenious quasi-polynomial version
of McNaughton-Zielonka algorithm, but Lehtinen, Schewe, and
Wojtczak~\cite{LSW19}, and Jurdzi\'nski and Morvan~\cite{JM20} have
again strongly linked all quasi-polynomial variants of the
attractor decomposition algorithm to universal trees.
Among several prominent quasi-polynomial algorithms for parity games,
Lehtinen's approach~\cite{Leh18} has the least attractive
worst-case running-time bounds.
Parys~\cite{Par20} has offered some running-time improvements to
Lehtinen's algorithm, but it remains significantly worse than
state-of-the-art bounds of Jurdzi\'nski and Lazi\'c~\cite{JL17}, and
Fearnley, Jain, de Keijzer, Schewe, Stephan, and
Wojtczak~\cite{FJKSSW19}, in particular because it always requires at
least quasi-polynomial working space.
\subparagraph*{Our Contributions.}
We propose the Strahler number as a parameter that measures the
structural complexity of dominia in a parity game and that governs the
computational complexity of the most efficient algorithms currently
known for solving parity games.
We establish that the Strahler number is a robust, and hence natural,
parameter by proving that it coincides with its version based on trees
of progress measures and with the register number defined by
Lehtinen~\cite{Leh18}.
We give a construction of small Strahler-universal trees that, when
used with the progress measure lifting algorithm~\cite{Jur00,JL17} or
with the universal attractor decomposition algorithm~\cite{JM20},
yield algorithms that work in quasi-linear space and
quasi-polynomial time.
Moreover, usage of our small Strahler-universal trees allows to solve
parity games in polynomial time for a wider range of asymptotic
settings of the two natural structural complexity parameters
(number of priorities~$d$ and the Strahler/register number~$k$)
than previously known, and that covers as special cases the $k = O(1)$
criterion of Lehtinen~\cite{Leh18} and the $d < \lg n$ and $d =
O(\log n)$ criteria of Calude et al.~\cite{CJKLS17}, and of
Jurdzi\'nski and Lazi\'c~\cite{JL17}, respectively.
\section{Strahler-Universal Trees}
Our attention now shifts to tackling
Question~\ref{question:Strahler-algorithmic}.
The approach is to develop constructions of small ordered trees into
which trees of attractor decompositions or of progress measures can
be embedded.
Such trees can be seen as natural search spaces for dominion
strategies, and existing meta-algorithms such as the universal
attractor decomposition algorithm~\cite{JM20} and progress measure
lifting algorithm~\cite{Jur00,JL17} can use them to guide their
search, performed in time proportional to the size of the trees in the
worst case.
An ordered tree is \emph{universal} for a class of trees if all trees
from the class can be embedded into it.
The innovation offered in this work is to develop optimized
constructions of trees that are universal for classes of trees whose
complex structural parameter, such as the Strahler number, is
bounded.
This is in contrast to less restrictive universal trees introduced by
Czerwi\'nski et al.~\cite{CDFJLP19} and implicitly constructed by
Jurdzi\'nski and Lazi\'c~\cite{JL17}, whose sizes therefore grow
faster with size parameters, leading to slower algorithms.
Firstly, we give an inductive construction of Strahler-universal trees
and an upper bound on their numbers of leaves.
Then we introduce labelled ordered trees, provide a succinct
bit-string labelling of the Strahler-universal trees, and give an
alternative and more explicit characterization of the
succinctly-labelled Strahler-universal trees.
Finally, we argue how the succinct bit-string labelling of
Strahler-universal trees facilitates efficient computation of the
so-called ``level-$p$ successors'' in them, which is the key
computational primitive that allows using ordered trees to solve
parity games.
The constructions and techniques we develop here are inspired by and
significantly refine those introduced by Jurdzi\'nski and
Lazi\'c~\cite{JL17}.
\subparagraph*{Strahler-Universal Trees and Their Sizes.}
\label{subsec:Strahler-universal}
Intuitively, an ordered tree \emph{can be embedded in} another if the
former can be obtained from the latter by pruning some subtrees.
More formally, the trivial tree~$\seq{}$ can be embedded in every
ordered tree, and $\seq{T_1, T_2, \dots, T_k}$ can be embedded
in $\seq{T'_1, T'_2, \dots, T'_{\ell}}$ if there are indices
$i_1, i_2, \dots, i_k$ such that
$1 \leq i_1 < i_2 < \cdots < i_k \leq \ell$
and for every $j = 1, 2, \dots, k$, we have that $T_j$ can be
embedded in~$T'_{i_j}$.
An ordered tree is \emph{$(n, h)$-universal}~\cite{CDFJLP19}
if every $(n, h)$-small ordered tree can be embedded in it.
We define an ordered tree to be
\emph{$k$-Strahler $(n, h)$-universal} if every $(n, h)$-small ordered
tree whose Strahler number is at most~$k$ can be embedded in it, and
we give a construction of small Strahler-universal trees.
\begin{definition}[Trees $U_{t, h}^k$ and~$V_{t, h}^k$]
\label{def:U-and-V}
For all $t \geq 0$, we define trees
$U_{t, h}^k$ (for all $h$ and~$k$ such that $h \geq k \geq 1$)
and
$V_{t, h}^k$ (for all $h$ and~$k$ such that $h \geq k \geq 2$)
by mutual induction:
\begin{enumerate}
\item
if $h = k = 1$ then $U_{t, h}^k = \seq{}$;
\item
if $h>1$ and $k=1$ then
$U_{t, h}^k = \seq{U_{t, h-1}^k}$;
\item
\label{item:U-and-V--n-eq-1}
if $h \geq k \geq 2$ and $t=0$ then
$U_{t, h}^k = V_{t, h}^k = \seq{U_{t, h-1}^{k-1}}$;
\item
if $h \geq k \geq 2$ and $t \geq 1$ then
$V_{t, h}^k =
V_{t-1, h}^k \cdot \seq{U_{t, h-1}^{k-1}} \cdot V_{t-1, h}^k$;
\item
\label{item:U-and-V--h-eq-k}
if $h = k \geq 2$ and $t \geq 1$ then $U_{t, h}^k = V_{t, h}^k$;
\item
\label{item:U-and-V--h-g-k}
if $h > k \geq 2$ and $t \geq 1$ then
$U_{t, h}^k = V_{t, h}^k \cdot \seq{U_{t, h-1}^k} \cdot V_{t, h}^k$.
\end{enumerate}
\end{definition}
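For illustration only, the following Python sketch builds the trees
$U_{t, h}^k$ and~$V_{t, h}^k$ directly from the recurrences above,
representing an ordered tree as the tuple of its subtrees, and checks
the resulting leaf counts against the bound established in
Lemma~\ref{lemma:size-of-U} below on a few small instances;
the representation and the test instances are our own choices and
play no role in the formal development.
\begin{verbatim}
from math import comb
from functools import lru_cache

# An ordered tree is represented as the tuple of its subtrees;
# the trivial tree <> is the empty tuple.

@lru_cache(maxsize=None)
def U(t, h, k):                       # tree U_{t,h}^k, for h >= k >= 1
    if h == k == 1:
        return ()
    if k == 1:                        # h > 1 and k = 1
        return (U(t, h - 1, k),)
    if t == 0:                        # h >= k >= 2 and t = 0
        return (U(t, h - 1, k - 1),)
    if h == k:                        # h = k >= 2 and t >= 1
        return V(t, h, k)
    return V(t, h, k) + (U(t, h - 1, k),) + V(t, h, k)

@lru_cache(maxsize=None)
def V(t, h, k):                       # tree V_{t,h}^k, for h >= k >= 2
    if t == 0:
        return (U(t, h - 1, k - 1),)
    return V(t - 1, h, k) + (U(t, h - 1, k - 1),) + V(t - 1, h, k)

def leaves(tree):
    return 1 if tree == () else sum(leaves(s) for s in tree)

# Check the bound 2^(t+k) * C(t+k-2, k-2) * C(h-1, k-1) of the lemma below.
for (t, h, k) in [(1, 3, 2), (2, 4, 2), (2, 5, 3), (3, 6, 3)]:
    bound = 2 ** (t + k) * comb(t + k - 2, k - 2) * comb(h - 1, k - 1)
    assert leaves(U(t, h, k)) <= bound
\end{verbatim}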
For $g \geq 0$, let $I_g$ be the trivial tree, that is the tree with
exactly one leaf, of height~$g$.
For example, $I_1 = \seq{}$ and
$I_3 = \seq{\seq{\seq{}}} = \seq{\seq{\circ}}$.
It is routine to verify that if $h \geq k=1$ or $t=0$ then
$U_{t, h}^k = I_h$,
and if $h \geq k \geq 2$ and $t=0$ then $V_{t, h}^k = I_h$.
\begin{lemma}
\label{lem:U-n-h-k-Strahler-universal}
For all $n \geq 1$ and $h \geq k \geq 1$, the ordered tree
$U_{\floor{\lg n}, h}^k$ is $k$-Strahler $(n, h)$-universal.
\end{lemma}
\begin{proof}
We say that a tree has \emph{weak Strahler number} at most $k$ if
every subtree rooted in a child of the root has Strahler number at
most~$k-1$.
A tree is then \emph{weakly $k$-Strahler $(n, h)$-universal}
if every $(n, h)$-small ordered tree whose weak Strahler number is
at most~$k$ can be embedded in it.
We proceed by induction on the number of leaves in an ordered tree
and its height,
using the following strengthened inductive hypothesis:
\begin{itemize}
\item
for all $n \geq 1$ and $h \geq k \geq 1$,
ordered tree $U_{\floor{\lg n}, h}^k$ is $k$-Strahler
$(n, h)$-universal;
\item
for all $n \geq 1$ and $h \geq k \geq 2$,
ordered tree $V_{\floor{\lg n}, h}^k$ is weakly $k$-Strahler
$(n, h)$-universal.
\end{itemize}
Let $T$ be an $(n, h)$-small ordered tree of Strahler number at
most~$k$.
If $n=1$, $h=1$, or $k=1$,
then $T$ is the trivial tree (with just one leaf) of height at
most~$h$, and hence it can be embedded
in~$U_{\floor{\lg n}, h}^k = I_h$, the trivial tree of height~$h$.
Likewise, if $h \geq k \geq 2$ and $n=1$,
then $T$ is the trivial tree of height at most~$h$, and hence it can
be embedded in~$V_{\floor{\lg n}, h}^k = I_h$, the trivial tree of
height~$h$.
Otherwise, we have that $T = \seq{T_1, \dots, T_j}$ for
some~$j \geq 1$.
We consider two cases:
either $\Strah{T_i} \leq k-1$ for all $i=1, \dots, j$, or there
is~$q$ such that $\Strah{T_q} = k$.
Note that by Proposition~\ref{prop:Strahler-small}, the latter case
can only occur if $h > k$.
If $\Strah{T_i} \leq k-1$ for all $i = 1, \dots, j$,
then we argue that $T$ can be embedded in~$V_{\floor{\lg n}, h}^k$,
and hence also in~$U_{\floor{\lg n}, h}^k$, because
$V_{\floor{\lg n}, h}^k$ can be embedded
in $U_{\floor{\lg n}, h}^k$ by definition
(see items~\ref{item:U-and-V--n-eq-1}., \ref{item:U-and-V--h-eq-k}.,
and~\ref{item:U-and-V--h-g-k}.\ of Definition~\ref{def:U-and-V}).
Let~$p$ (a pivot) be an integer such that
both trees
$T' = \seq{T_1, \dots, T_{p-1}}$ and
$T'' = \seq{T_{p+1}, \dots, T_j}$
are $(\floor{n/2}, h)$-small;
such a~$p$ exists because $T$ has at most~$n$ leaves, so it suffices
to take the smallest~$p$ for which $T_1, \dots, T_p$ together have
more than~$\floor{n/2}$ leaves.
Then by the strengthened inductive hypothesis,
each of the two trees $T'$ and~$T''$ can be embedded in
tree~$V_{\floor{\lg \floor{n/2}}, h}^k = V_{\floor{\lg n}-1, h}^k$
and tree $T_p$ can be embedded in~$U_{\floor{\lg n}, h-1}^{k-1}$.
It then follows that tree
$T = T' \cdot \seq{T_p} \cdot T''$ can be embedded in
$V_{\floor{\lg n}, h}^k = V_{\floor{\lg n}-1, h}^k \cdot
\seq{U_{\floor{\lg n}, h-1}^{k-1}} \cdot
V_{\floor{\lg n}-1, h}^k$.
If $\Strah{T_q} = k$ for some~$q$ (the pivot), then we argue that
$T$ can be embedded in~$U_{\floor{\lg n}, h}^k$.
Note that each of the two trees
$T' = \seq{T_1, \dots, T_{q-1}}$
and $T'' = \seq{T_{q+1}, \dots, T_j}$
is $(n, h)$-small and all trees $T_1, \dots, T_{q-1}$ and
$T_{q+1}, \dots, T_j$
have Strahler numbers at most~$k-1$.
By the previous paragraph,
it follows that each of the two trees~$T'$ and~$T''$ can be embedded
in~$V_{\floor{\lg n}, h}^k$.
Moreover, tree $T_q$ is $(n, h-1)$-small and hence, by the
inductive hypothesis, it can be embedded
in~$U_{\floor{\lg n}, h-1}^k$.
It follows that tree $T = T' \cdot \seq{T_q} \cdot T''$ can be
embedded in
$U_{\floor{\lg n}, h}^k =
V_{\floor{\lg n}, h}^k \cdot \seq{U_{\floor{\lg n}, h-1}^k} \cdot
V_{\floor{\lg n}, h}^k$.
\end{proof}
\begin{lemma}
\label{lemma:size-of-U}
For all $t \geq 0$, we have:
\begin{itemize}
\item
if $h \geq k = 1$ then $\leaves{U_{t, h}^k} = 1$;
\item
if $h \geq k \geq 2$ then
$\leaves{U_{t, h}^k}
\: \leq \:
2^{t + k}
{{t + k - 2} \choose {k-2}}
{{h-1} \choose {k-1}}$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is by structural induction,
where the inductive hypothesis contains both the statement that for
all $t \geq 0$ and $h \geq k \geq 2$, we have:
\begin{equation}
\label{eq:size-of-U}
\leaves{U_{t, h}^k}
\: \leq \:
2^{t + k}
{{t + k - 2} \choose {k-2}}
{{h-1} \choose {k-1}}\,,
\end{equation}
and that for all $t \geq 0$ and $h \geq k \geq 2$, we have the
following analogous bound on the number of leaves of
trees~$V_{t, h}^k$:
\begin{equation}
\label{eq:size-of-V}
\leaves{V_{t, h}^k}
\: \leq \:
2^{t + k-1}
{{t + k - 2} \choose {k-2}}
{{h-2} \choose {k-2}}\,.
\end{equation}
The following cases correspond to the six items in
Definition~\ref{def:U-and-V}.
\begin{enumerate}
\item
\label{item:h-eq-0}
If $h = k = 1$ then $\leaves{U_{t, h}^k} = \leaves{\seq{}} = 1$.
\item
\label{item:h-g-1--k-eq-1}
If $h > 1$ and $k = 1$ then a straightforward induction on~$h$ can
be used to show that $\leaves{U_{t, h}^k} = 1$.
\item
\label{item:h-geq-k-geq-2--t-e-0}
If $h \geq k \geq 2$ and $t = 0$ then, again, a straightforward
induction on~$h$ yields that
$\leaves{V_{t, h}^k} = 1 <
2^{t+k-1} {{t + k - 2} \choose {k-2}} {{h-2} \choose {k-2}}$
and
$\leaves{U_{t, h}^k} = 1 <
2^{t+k} {{t + k - 2} \choose {k-2}} {{h-1} \choose {k-1}}$.
\item
Suppose that $h \geq k \geq 2$ and $t \geq 1$.
Firstly, for $h \geq k = 2$ and $t \geq 0$, we slightly strengthen
the inductive hypothesis~(\ref{eq:size-of-V}) to:
\begin{equation}
\label{eq:size-of-V-n-h-1}
\leaves{V_{t, h}^2} \: \leq \: 2^{t + 1} - 1\,,
\end{equation}
which we prove by induction on~$t$.
Indeed, for $t = 0$ it follows from
item~\ref{item:h-geq-k-geq-2--t-e-0}.\ above, and for $t \geq 1$,
we have:
\begin{multline*}
\leaves{V_{t, h}^2}
\: = \:
\leaves{U_{t, h-1}^1} + 2 \cdot \leaves{V_{t-1, h}^2}
\\
\: \leq \:
1 + 2\left(2^{(t-1)+1}-1\right)
\: = \:
2^{t+1}-1
\: < \:
2^{t+1} {{t} \choose {0}} {{h-2} \choose {0}}\,,
\end{multline*}
where the first inequality follows from
items~\ref{item:h-eq-0}.\ or~\ref{item:h-g-1--k-eq-1}.\ above, and
from the strengthened inductive
hypothesis~(\ref{eq:size-of-V-n-h-1}).
Secondly, for $h \geq k \geq 3$ and $t \geq 1$ we have:
\begin{multline*}
\leaves{V_{t, h}^k}
\: = \:
\leaves{U_{t, h-1}^{k-1}} + 2 \cdot \leaves{V_{t-1, h}^k}
\\
\: \leq \:
2^{t+k-1}
{{t + k - 3} \choose {k-3}}
{{h-2} \choose {k-2}}
+ 2 \cdot
2^{t+k-2}
{{t + k - 3} \choose {k-2}}
{{h-2} \choose {k-2}}
\\
\: = \:
2^{t+k-1}
\left[
{{t + k - 3} \choose {k-3}}
+
{{t + k - 3} \choose {k-2}}
\right]
{{h-2} \choose {k-2}}
\: = \:
2^{t+k-1}
{{t + k - 2} \choose {k-2}}
{{h-2} \choose {k-2}}\,,
\end{multline*}
where the first inequality follows from the inductive hypothesis and
the last equality follows from Pascal's identity.
\item
Suppose that $h = k \geq 2$ and $t \geq 1$.
Then we have:
\begin{multline*}
\leaves{U_{t, h}^k}
\: = \:
\leaves{V_{t, h}^k}
\: \leq \:
2^{t+k-1} {{t+k-2} \choose {k-2}} {{h-2} \choose {k-2}}
\\
\: < \:
2^{t+k} {{t+k-2} \choose {k-2}} {{h-1} \choose {k-1}}\,,
\end{multline*}
where the first inequality follows by the inductive hypothesis
and the other one from~$h = k$.
\item
Suppose $h > k \geq 2$ and $t \geq 1$.
Then we have:
\begin{multline*}
\leaves{U_{t, h}^k}
\: = \:
\leaves{U_{t, h-1}^k} + 2 \cdot \leaves{V_{t, h}^k}
\\
\: \leq \:
2^{t+k}
{{t + k - 2} \choose {k-2}}
{{h-2} \choose {k-1}}
+ 2 \cdot
2^{t+k-1}
{{t + k - 2} \choose {k-2}}
{{h-2} \choose {k-2}}
\\
\: = \:
2^{t+k}
{{t + k - 2} \choose {k-2}}
\left[
{{h-2} \choose {k-1}}
+
{{h-2} \choose {k-2}}
\right]
\: = \:
2^{t+k}
{{t + k - 2} \choose {k-2}}
{{h-1} \choose {k-1}}\,,
\end{multline*}
where the first inequality follows from the inductive hypothesis and
the last equality follows from Pascal's identity.
\qedhere
\end{enumerate}
\end{proof}
\begin{theorem}
\label{thm:size-of-U-n-h-k}
For $k \leq \lg n$, the number of leaves of the
$k$-Strahler $(n, h)$-universal ordered tree
$U_{\floor{\lg n}, h}^k$ is
$n^{O(1)} \cdot \left({h}/{k}\right)^k
= n^{{k \lg (h/k)}/{\lg n} + O(1)}$,
which is polynomial in~$n$ if
$k \cdot \lg\left({h}/{k}\right) \: = \: O(\log n)$.
In more detail, the number is at most
$n^{c(n)} \cdot (h/k)^k$, where
$c(n) = 5.45$ if $k \leq \lg n$,
$c(n) = 3 + o(1)$ if $k = o(\log n)$,
and $c(n) = 1 + o(1)$ if $k = O(1)$.
\end{theorem}
\begin{remark}
By Proposition~\ref{prop:Strahler-small} and
Lemma~\ref{lem:U-n-h-k-Strahler-universal}, for all positive
integers $n$ and~$h$, the tree $U_{\floor{\lg n}, h}^{\floor{\lg n}+1}$ is
$(n, h)$-universal.
Theorem~\ref{thm:size-of-U-n-h-k} implies that the number of leaves
of $U_{\floor{\lg n}, h}^{\floor{\lg n}+1}$ is
$n^{\lg(h/{\lg n})+O(1)}$,
which matches the asymptotic number of leaves of $(n, h)$-universal
trees
of Jurdzi\'nski and Lazi\'c~\cite[Lemma~6]{JL17}.
In particular, if $h = O(\log n)$ then
$\lg({h}/{\lg n}) = O(1)$, and hence the number of leaves of
$U_{\floor{\lg n}, h}^{\floor{\lg n}+1}$ is polynomial in~$n$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:size-of-U-n-h-k}]
By Lemma~\ref{lem:U-n-h-k-Strahler-universal}, ordered
tree~$U_{\floor{\lg n}, h}^k$ is $k$-Strahler $(n, h)$-universal.
By Lemma~\ref{lemma:size-of-U}, its number of leaves is at most
$2^{\floor{\lg n}+k} {{\floor{\lg n}+k-2} \choose {k-2}}
{{h-1} \choose {k-1}}$.
We analyze in turn the three terms $2^{\floor{\lg n}+k}$,
${\floor{\lg n}+k-2} \choose {k-2}$, and ${h-1} \choose {k-1}$.
Firstly, we note that
$2^{\floor{\lg n}+k}$ is $O\left(n^{p_1(n, k)}\right)$, where
$p_1(n, k) = 1+k/{\lg n}$, because $2^k = n^{k/{\lg n}}$.
Secondly, $k \leq \lg n$ implies that
$\floor{\lg n} + k - 2 < 2 \lg n$,
therefore we have
${{\floor{\lg n}+k-2} \choose {k-2}} <
2^{2 \lg n} \: = \: n^2$,
and hence
${{\floor{\lg n}+k-2} \choose {k-2}}$ is $O(n^{p_2(n, k)})$,
where $p_2(n, k) \leq 2$.
Thirdly, applying the inequality
${i \choose j} \leq \left(ei/j\right)^j$
to the binomial coefficient
${{h} \choose {k}}$, we obtain
${{h-1} \choose {k-1}}
\: < \:
{{h} \choose {k}}
\: \leq \:
\left({eh}/{k}\right)^k
\: = \:
2^{k \lg(eh/k)}$,
and hence
${{h-1} \choose {k-1}}$ is $O(n^{p_3(n, h, k)})$,
where
$p_3(n, h, k) \: = \: {k \lg(eh/k)}/{\lg n}
\: = \: {k \lg(h/k)}/{\lg n} + {k \lg e}/{\lg n}$.
Note that if we let
$p(n, h, k) = p_1(n, k) + p_2(n, k) + p_3(n, h, k)$ then the number of
leaves in trees
$U_{\floor{\lg n}, h}^k$ is $O\!\left(n^{p(n, h, k)}\right)$.
Since $k \leq \lg n$ implies ${k}/{\lg n} \leq 1$
and ${k \lg e}/{\lg n} \leq \lg e$, we obtain
$p(n, h, k)
\: \leq \: {k \lg(h/k)}/{\lg n} + 4 + \lg e
\: < \: {k \lg(h/k)}/{\lg n} + 5.45$,
and hence the number of leaves in trees $U_{\floor{\lg n}, h}^k$ is
$n^{{k \lg(h/k)}/{\lg n} + O(1)}$.
If we further assume that $k = o(\log n)$ then the constant~$5.45$
can be straightforwardly reduced to~$3+o(1)$ because then
${k}/{\lg n}$ and ${k \lg e}/{\lg n}$ are~$o(1)$.
Moreover, the estimate
${{\floor{\lg n}+k-2} \choose {k-2}} = O(n^2)$
can be improved with further assumptions about~$k$ as a function
of~$n$;
for example, if $k = O(1)$ then
${{\floor{\lg n}+k-2} \choose {k-2}}$
is only polylogarithmic in~$n$ and hence
${{\floor{\lg n}+k-2} \choose {k-2}}$ is $n^{o(1)}$,
bringing $3+o(1)$ down to~$1+o(1)$.
\end{proof}
\subparagraph*{Labelled Strahler-Universal Trees.}
\emph{Labelled ordered trees} are similar to ordered trees:
the trivial tree~$\seq{}$ is an \emph{$A$-labelled ordered tree} and
so is a sequence
$\seq{(a_1, \mathcal{L}_1), (a_2, \mathcal{L}_2), \dots, (a_k, \mathcal{L}_k)}$, where
$\mathcal{L}_1$, $\mathcal{L}_2$, \dots, $\mathcal{L}_k$ are $A$-labelled ordered trees, and
$a_1$, $a_2$, \dots, $a_k$ are distinct elements of a linearly ordered
set~$(A, \leq)$ and $a_1 < a_2 < \cdots < a_k$ in that linear
order.
We define the \emph{unlabelling} of a labelled ordered tree
$\seq{(a_1, \mathcal{L}_1), (a_2, \mathcal{L}_2), \dots, (a_k, \mathcal{L}_k)}$,
by straightforward induction, to be the ordered tree
$\seq{T_1, T_2, \dots, T_k}$, where $T_i$ is the unlabelling
of~$\mathcal{L}_i$ for every $i = 1, 2, \dots, k$.
An \emph{$A$-labelling} of an ordered tree~$T$ is an $A$-labelled
tree~$\mathcal{L}$ whose unlabelling is~$T$.
We define the \emph{natural labelling} of an ordered
tree~$T = \seq{T_1, \dots, T_k}$, again by a straightforward
induction, to be the $\mathbb{N}$-labelled tree
$\seq{(1, \mathcal{L}_1), \dots, (k, \mathcal{L}_k)}$, where $\mathcal{L}_1$, \dots, $\mathcal{L}_k$
are the natural labellings of trees $T_1$, \dots, $T_k$.
For an $A$-labelled tree $\seq{(a_1, \mathcal{L}_1), \dots, (a_k, \mathcal{L}_k)}$,
its set of \emph{nodes} is defined inductively to consist of the
root~$\seq{}$ and all the sequences in~$A^*$ of the form $\seq{a_i}
\cdot v$, where $v \in A^*$ is a node in $\mathcal{L}_i$ for some
$i = 1, \dots, k$, and where the symbol $\cdot$ denotes concatenation
of sequences.
For example, the natural labelling of tree
$\seq{\seq{\circ^3}, \circ^4, \seq{\seq{\circ}}^2}$ has the set of
nodes that consists of the leaves
$\seq{1, 1}$, $\seq{1, 2}$, $\seq{1, 3}$, $\seq{2}$, $\seq{3}$,
$\seq{4}$, $\seq{5}$, $\seq{6, 1, 1}$, $\seq{7, 1, 1}$, and of all of
their prefixes.
Indeed, the set of nodes of a labelled ordered tree is always
prefix-closed.
Moreover, if $L \subseteq A^*$ then its closure under prefixes
uniquely identifies a labelled ordered tree that we call the
labelled ordered tree \emph{generated} by~$L$, and its unlabelling is
the ordered tree generated by~$L$.
For example, the set
$\eset{\seq{1}, \seq{3, 1}, \seq{3, 4, 1}, \seq{6, 1}}$
generates ordered tree
$\seq{\circ, \seq{\circ, \seq{\circ}}, \seq{\circ}}$.
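As a purely illustrative aside, the following Python sketch computes
the ordered tree generated by a set of label sequences, again
representing an ordered tree as the tuple of its subtrees; the
function name and the encoding are our own choices.
\begin{verbatim}
def tree_from(sequences):
    """Ordered tree generated by a set of label sequences: group the
    sequences by their first label, in increasing order, and recurse."""
    children = {}
    for seq in sequences:
        if seq:            # the empty sequence contributes only the root
            children.setdefault(seq[0], []).append(seq[1:])
    return tuple(tree_from(rest) for _, rest in sorted(children.items()))

# The example from the text: {<1>, <3,1>, <3,4,1>, <6,1>} generates
# the ordered tree <o, <o, <o>>, <o>>, encoded here as nested tuples.
assert tree_from([(1,), (3, 1), (3, 4, 1), (6, 1)]) == ((), ((), ((),)), ((),))
\end{verbatim}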
Consider the following linear order on the set $\eset{0, 1}^*$ of
bit strings:
for each bit $b \in \eset{0, 1}$, and for all bit strings
$\beta, \beta' \in \eset{0, 1}^*$, if $\varepsilon$ is the empty
string, then we have $0\beta < \varepsilon$, $\varepsilon < 1\beta$,
and $b\beta < b\beta'$ iff $\beta < \beta'$.
For a bit string $\beta \in \eset{0, 1}^*$, we write
$\left|\beta\right|$ for the number of bits used in the string.
For example, we have $\left|\varepsilon\right| = 0$ and
$\left|010\right| = 3$, and $\left|11\right| = 2$.
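For an operational view of this order, the following Python sketch
(illustrative only) compares bit strings represented as character
strings over the alphabet $\eset{0, 1}$; the helper name is ours.
\begin{verbatim}
def cmp_bitstrings(x, y):
    """Compare bit strings in the order where 0b < empty < 1b and
    b a < b a' iff a < a'.  Returns -1, 0, or 1."""
    i = 0
    while i < len(x) and i < len(y):
        if x[i] != y[i]:
            return -1 if x[i] == '0' else 1
        i += 1
    if len(x) == len(y):
        return 0
    if i == len(x):                      # x is a proper prefix of y
        return -1 if y[i] == '1' else 1  # x < y iff y continues with a 1
    return 1 if x[i] == '1' else -1      # y is a proper prefix of x

assert cmp_bitstrings('0', '') == -1     # 0b < empty string
assert cmp_bitstrings('', '1') == -1     # empty string < 1b
assert cmp_bitstrings('00', '0') == -1   # '0' vs empty after common prefix
assert cmp_bitstrings('010', '011') == -1
assert cmp_bitstrings('11', '11') == 0
\end{verbatim}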
Suppose that $\seq{\beta_i, \beta_{i-1}, \dots, \beta_1}$ is a
node in a $\eset{0, 1}^*$-labelled ordered tree.
If $\beta_j = b \beta$ for some $j = 1, 2, \dots, i$,
$b \in \eset{0, 1}$, and $\beta \in \eset{0, 1}^*$, then we refer to
the first bit~$b$ as the \emph{leading bit} in~$\beta_j$, and to all
the remaining bits (those of~$\beta$) as \emph{non-leading bits}
in~$\beta_j$.
For example, node
$\seq{\varepsilon, 010, \varepsilon, \varepsilon, 11}$
has two non-empty strings and hence two leading bits, and it uses
three non-leading bits overall, because
$\left|010\right| + \left|11\right| - 2 = 3$.
For a bit $b \in \eset{0, 1}$ and a
$\eset{0, 1}^*$-labelled ordered tree
$\mathcal{L} = \seq{\left(\beta_1, \mathcal{L}_1\right), \dots,
\left(\beta_\ell, \mathcal{L}_\ell\right)}$,
we define the
$\eset{0, 1}^*$-labelled
ordered tree $\left[\mathcal{L}\right]^b$ to be equal to
$\seq{\left(b \beta_1, \mathcal{L}_1\right), \dots,
\left(b \beta_\ell, \mathcal{L}_\ell\right)}$.
In other words, $\left[\mathcal{L}\right]^b$ is the labelled ordered tree that
is obtained from~$\mathcal{L}$ by adding an extra copy of bit~$b$ as the
leading bit in the labels of all children of the root of~$\mathcal{L}$.
The inductive structure of the next definition is identical to that of
Definition~\ref{def:U-and-V}, and hence labelled ordered trees
$\mathcal{U}_{t, h}^k$ and~$\mathcal{V}_{t, h}^k$ defined here are labellings of the
ordered trees $U_{t, h}^k$ and~$V_{t, h}^k$, respectively.
\begin{definition}[Trees $\mathcal{U}_{t, h}^k$ and $\mathcal{V}_{t, h}^k$]
\label{def:Uc-and-Vc}
For all $t \geq 0$, we define $\eset{0, 1}^*$-labelled ordered
trees~$\mathcal{U}_{t, h}^k$
(for all $h$ and~$k$ such that $h \geq k \geq 1$)
and $\mathcal{V}_{t, h}^k$ (for all $h$ and~$k$ such that $h \geq k \geq 2$)
by mutual induction:
\begin{enumerate}
\item
if $h = k = 1$ then
$\mathcal{U}_{t, h}^k = \seq{}$;
\item
\label{item:h0-k0}
if $h>1$ and $k=1$ then
$\mathcal{U}_{t, h}^k =
\seq{\tpl{\varepsilon, \mathcal{U}_{t, h-1}^k}}$;
\item
\label{item:t0}
if $h \geq k \geq 2$ and $t=0$ then
$\mathcal{V}_{t, h}^k = \seq{\tpl{\varepsilon, \mathcal{U}_{t, h-1}^{k-1}}}$
and $\mathcal{U}_{t, h}^k = \left[\mathcal{V}_{t, h}^k\right]^0 =
\seq{\tpl{0, \mathcal{U}_{t, h-1}^{k-1}}}$;
\item
if $h \geq k \geq 2$ and $t \geq 1$ then
$\mathcal{V}_{t, h}^k =
\left[\mathcal{V}_{t-1, h}^k\right]^0 \cdot
\seq{\tpl{\varepsilon, \mathcal{U}_{t, h-1}^{k-1}}} \cdot
\left[\mathcal{V}_{t-1, h}^k\right]^1$;
\item
\label{item:h-eq-k}
if $h = k \geq 2$ and $t \geq 1$ then
$\mathcal{U}_{t, h}^k = \left[\mathcal{V}_{t, h}^k\right]^0$;
\item
\label{item:h-g-k}
if $h > k \geq 2$ and $t \geq 1$ then
$\mathcal{U}_{t, h}^k =
\left[\mathcal{V}_{t, h}^k\right]^0 \cdot
\seq{\left(\varepsilon, \mathcal{U}_{t, h-1}^k\right)} \cdot
\left[\mathcal{V}_{t, h}^k\right]^1$.
\end{enumerate}
\end{definition}
The inductive definition of labelled ordered trees $\mathcal{U}_{t, h}^k$
and~$\mathcal{V}_{t, h}^k$ makes it straightforward to argue that their
unlabellings are equal to trees $U_{t, h}^k$ and~$V_{t, h}^k$,
respectively, and hence to transfer to them Strahler-universality
established in Lemma~\ref{lem:U-n-h-k-Strahler-universal} and upper
bounds on the numbers of leaves established in
Lemma~\ref{lemma:size-of-U} and Theorem~\ref{thm:size-of-U-n-h-k}.
We now give an alternative and more explicit characterization of those
trees, which will be more suitable for algorithmic purposes.
To that end, we define $\eset{0, 1}^*$-labelled trees $\mathcal{B}_{t, h}^k$
and~$\mathcal{C}_{t, h}^k$ and then we argue that they are equal to
trees~$\mathcal{U}_{t, h}^k$ and~$\mathcal{V}_{t, h}^k$, respectively, by showing that
they satisfy all the recurrences in Definition~\ref{def:Uc-and-Vc}.
\begin{definition}[Trees $\mathcal{B}_{t, h}^k$ and $\mathcal{C}_{t, h}^k$]
For all $t \geq 0$ and $h \geq k \geq 1$, we define
$\eset{0, 1}^*$-labelled ordered trees $\mathcal{B}_{t, h}^k$ as the tree
generated by sequences $\seq{\beta_{h-1}, \dots, \beta_1}$ such
that:
\begin{enumerate}
\item
the number of non-empty bit strings among $\beta_{h-1}$, \dots,
$\beta_1$ is $k-1$;
\item
the number of bits used in bit strings $\beta_{h-1}$,
\dots, $\beta_1$ overall is at most $(k-1)+t$;
\end{enumerate}
and for every $i = 1, \dots, h-1$, we have the following:
\begin{enumerate}
\setcounter{enumi}{2}
\item
if there are less than $k-1$ non-empty bit strings among
$\beta_{h-1}$, \dots, $\beta_{i+1}$, but there are $t$ non-leading
bits used in them, then $\beta_i = 0$;
\item
if all bit strings $\beta_i$, \dots, $\beta_1$ are non-empty, then
each of them has $0$ as its leading bit.
\end{enumerate}
For all $t \geq 0$ and $h \geq k \geq 2$, we define
$\eset{0, 1}^*$-labelled ordered trees $\mathcal{C}_{t, h}^k$ as the tree
generated by sequences $\seq{\beta_{h-1}, \dots, \beta_1}$ such
that:
\begin{enumerate}
\item
the number of non-empty bit strings among $\beta_{h-2}$, \dots,
$\beta_1$ is $k-2$;
\item
the number of bits used in bit strings $\beta_{h-1}$,
\dots, $\beta_1$ overall is at most $(k-2)+t$;
\end{enumerate}
and for every $i = 1, \dots, h-1$, we have the following:
\begin{enumerate}
\setcounter{enumi}{2}
\item
if there are less than $k-2$ non-empty bit strings among
$\beta_{h-2}$, \dots, $\beta_{i+1}$, but there are
$t-\left|\beta_{h-1}\right|$ non-leading bits used in them, then
$\beta_i = 0$;
\item
if all bit strings $\beta_i$, \dots, $\beta_1$ are non-empty, then
each of them has $0$ as its leading bit.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lemma:Uc-eq-Bc}
For all $t \geq 0$ and $h \geq k \geq 1$, we have
$\mathcal{U}_{t, h}^k = \mathcal{B}_{t, h}^k$.
\end{lemma}
The following corollary follows from Lemma~\ref{lemma:Uc-eq-Bc}, and
from the identical inductive structures of
Definitions~\ref{def:U-and-V} and~\ref{def:Uc-and-Vc}.
\begin{corollary}
\label{cor:Bc-eq-U}
For all $t \geq 0$ and $h \geq k \geq 1$,
the unlabelling of
$\mathcal{B}_{t, h}^k$ is equal to
$U_{t, h}^k$.
\end{corollary}
The next proposition formalizes the following non-rigorous
interpretation of the difference between trees $\mathcal{B}_{t, h}^k$
and~$\mathcal{C}_{t, h}^k$:
\begin{itemize}
\item
If a sequence $\seq{\beta_{h-1}, \dots, \beta_1}$ is a node
in~$\mathcal{B}_{t, h}^k$ then the bit string $\beta_{h-1}$ can be either
empty or non-empty, and if it is non-empty then its first bit is the
leading bit.
\item
On the other hand, if a sequence $\seq{\beta_{h-1}, \dots, \beta_1}$
is a node in~$\mathcal{C}_{t, h}^k$ then the bit string $\beta_{h-1}$ is
always to be understood as non-empty.
It can be thought of as obtained by removal of its ``original''
leading bit in the corresponding leaf in tree~$\mathcal{B}_{t, h}^k$, and
hence it consists only of (possibly zero) non-leading bits.
\end{itemize}
\begin{proposition}
\label{prop:Bc-vs-Cc}
For all $t \geq 1$ and $h \geq k \geq 2$, we have:
\begin{enumerate}
\item
\label{item:Bc-vs-Cc--h-eq-k}
if $h = k$ then
$\seq{\beta_{h-1}, \dots, \beta_1}$ is a leaf in~$\mathcal{C}_{t, h}^k$
if and only if
$\seq{0 \beta_{h-1}, \beta_{h-2}, \dots, \beta_1}$ is a leaf
in~$\mathcal{B}_{t, h}^k$;
\item
\label{item:Bc-vs-Cc--h-g-k}
if $h > k$ then for both $b \in \eset{0, 1}$, we have that
$\seq{\beta_{h-1}, \dots, \beta_1}$ is a leaf in~$\mathcal{C}_{t, h}^k$
if and only if
$\seq{b \beta_{h-1}, \beta_{h-2}, \dots, \beta_1}$ is a leaf
in~$\mathcal{B}_{t, h}^k$;
\item
\label{item:Bc-vs-Cc--shorten}
$\seq{\varepsilon, \beta_{h-2}, \dots, \beta_1}$ is a
leaf in $\mathcal{C}_{t, h}^k$
if and only if
$\seq{\beta_{h-2}, \dots, \beta_1}$ is a leaf
in~$\mathcal{B}_{t, h-1}^{k-1}$.
\end{enumerate}
\end{proposition}
\begin{proof}[{Proof of Lemma~\ref{lemma:Uc-eq-Bc}}]
We argue that trees~$\mathcal{B}_{t, h}^k$ and~$\mathcal{C}_{t, h}^k$ satisfy all
the recurrences in Definition~\ref{def:Uc-and-Vc} that involve trees
$\mathcal{U}_{t, h}^k$ and~$\mathcal{V}_{t, h}^k$, respectively.
\begin{enumerate}
\item
If $h = k = 1$ then
tree $\mathcal{B}_{t, h}^k$ is the trivial tree~$\seq{}$.
\item
If $h > k = 1$ then
$\mathcal{B}_{t, h}^k$ has only one leaf
$\seq{\varepsilon^{h-1}}$,
and hence we have
$\mathcal{B}_{t, h}^k = \seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^k}}$.
\item
Suppose that $h \geq k \geq 2$ and $t=0$.
Then $\mathcal{B}_{t, h}^k$ has exactly one leaf, which is of
the form $\seq{0^{k-1}, \varepsilon^{h-k}}$,
and $\mathcal{C}_{t, h}^k$ has exactly one leaf, which is of the form
$\seq{\varepsilon, 0^{k-2}, \varepsilon^{h-k}}$.
It follows that
$\mathcal{C}_{t, h}^k = \seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^{k-1}}}$
and
$\mathcal{B}_{t, h}^k = \left[\mathcal{C}_{t, h}^k\right]^0 =
\seq{\tpl{0, \mathcal{B}_{t, h-1}^{k-1}}}$.
\item
Suppose that $h \geq k \geq 2$ and $t \geq 1$.
We argue that the following recurrence holds:
\[
\mathcal{C}_{t, h}^k
\: = \:
\left[\mathcal{C}_{t-1, h}^k\right]^0 \cdot
\seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^{k-1}}} \cdot
\left[\mathcal{C}_{t-1, h}^k\right]^1\,.
\]
First, we show that every leaf in~$\mathcal{C}_{t, h}^k$ is
also a leaf in tree
$\seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^{k-1}}}$
or in tree~$\left[\mathcal{C}_{t-1, h}^k\right]^b$ for
some~$b \in \eset{0, 1}$.
Suppose that
$\ell = \seq{\beta_{h-1}, \dots, \beta_1}$
is a leaf in~$\mathcal{C}_{t, h}^k$.
\begin{itemize}
\item
If $\beta_{h-1} = \varepsilon$ then
$\seq{\beta_{h-2}, \dots, \beta_1}$ is a leaf
in~$\mathcal{B}_{t, h-1}^{k-1}$, and hence
$\ell = \seq{\varepsilon, \beta_{h-2}, \dots, \beta_1}$
is a leaf in tree~$\seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^{k-1}}}$.
\item
If $\beta_{h-1} = b \beta$ for some $b \in \eset{0, 1}$ then
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$ is a leaf
in~$\mathcal{C}_{t-1, h}^k$, and hence
$\ell = \seq{b \beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\left[\mathcal{C}_{t-1, h}^k\right]^b$.
\end{itemize}
Conversely, we now argue that if
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$
is a leaf in labelled ordered
tree $\seq{\tpl{\varepsilon, \mathcal{B}_{t, h-1}^{k-1}}}$,
then it is also a leaf in~$\mathcal{C}_{t, h}^k$.
Note that the premise implies that $\beta_{h-1} = \varepsilon$ and
$\seq{\beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{B}_{t, h-1}^{k-1}$, and hence, by
item~\ref{item:Bc-vs-Cc--shorten}.\ in
Proposition~\ref{prop:Bc-vs-Cc}, we have that
$\ell = \seq{\varepsilon, \beta_{h-2}, \dots, \beta_1}$
is indeed a leaf in~$\mathcal{C}_{t, h}^k$.
Finally, we argue that if
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$
is a leaf in
a tree $\left[\mathcal{C}_{t-1, h}^k\right]^b$ for
$b \in \eset{0, 1}$,
then it is also a leaf in~$\mathcal{C}_{t, h}^k$.
Indeed, the premise implies that $\beta_{h-1} = b \beta$ and
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{C}_{t-1, h}^k$,
and hence
$\ell = \seq{b \beta, \beta_{h-2}, \dots, \beta_1}$ is indeed a
leaf in~$\mathcal{C}_{t, h}^k$.
\item
Suppose that $h = k \geq 2$ and $t \geq 1$.
We argue that then we have
$\mathcal{B}_{t, h}^k = \left[\mathcal{C}_{t, h}^k\right]^0$.
First, let
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$ be a leaf in
tree~$\mathcal{B}_{t, h}^k$.
Since $h = k$, all bit strings $\beta_{h-1}$, \dots, $\beta_1$ are
non-empty, and hence $\beta_{h-1} = 0 \beta$ for some
$\beta \in \eset{0, 1}^*$.
By item~\ref{item:Bc-vs-Cc--h-eq-k}.\ of
Proposition~\ref{prop:Bc-vs-Cc}, it follows that the sequence
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{C}_{t, h}^k$, and hence
$\ell = \seq{0 \beta, \beta_{h-2}, \dots, \beta_1}$
is indeed a leaf in~$\left[\mathcal{C}_{t, h}^k\right]^0$.
Conversely, let
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$
be a leaf in tree~$\left[\mathcal{C}_{t, h}^k\right]^0$.
Then $\beta_{h-1} = 0 \beta$ for some $\beta \in \eset{0, 1}^*$
and sequence
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{C}_{t, h}^k$.
By item~\ref{item:Bc-vs-Cc--h-eq-k}.\ of
Proposition~\ref{prop:Bc-vs-Cc}, it follows that
$\ell = \seq{0 \beta, \beta_{h-2}, \dots, \beta_1}$ is indeed a
leaf in~$\mathcal{B}_{t, h}^k$.
\item
Suppose that $h > k \geq 2$ and $t \geq 1$.
We argue that then the following recurrence holds:
\[
\mathcal{B}_{t, h}^k \: = \:
\left[\mathcal{C}_{t, h}^k\right]^0 \cdot
\seq{\left(\varepsilon, \mathcal{B}_{t, h-1}^k\right)} \cdot
\left[\mathcal{C}_{t, h}^k\right]^1\,.
\]
First, we show that every leaf in $\mathcal{B}_{t, h}^k$ is also a leaf in
tree
$\seq{\left(\varepsilon, \mathcal{B}_{t, h-1}^k\right)}$
or in tree $\left[\mathcal{C}_{t, h}^k\right]^b$ for some
$b \in \eset{0, 1}$.
Suppose that
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$ is a leaf in~$\mathcal{B}_{t, h}^k$.
\begin{itemize}
\item
If $\beta_{h-1} = \varepsilon$ then
$\seq{\beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{B}_{t, h-1}^k$, and hence
$\ell = \seq{\varepsilon, \beta_{h-2}, \dots, \beta_1}$ is a
leaf in~$\seq{\left(\varepsilon, \mathcal{B}_{t, h-1}^k\right)}$.
\item
If $\beta_{h-1} = b \beta$ for some $b \in \eset{0, 1}$
then, by item~\ref{item:Bc-vs-Cc--h-g-k}.\ of
Proposition~\ref{prop:Bc-vs-Cc},
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{C}_{t, h}^k$, and hence
$\ell = \seq{b \beta, \beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\left[\mathcal{C}_{t, h}^k\right]^b$.
\end{itemize}
Conversely, we now argue that if
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$
is a leaf in labelled ordered
tree~$\seq{\left(\varepsilon, \mathcal{B}_{t, h-1}^k\right)}$,
then it is also a leaf in~$\mathcal{B}_{t, h}^k$.
Note that the premise implies that $\beta_{h-1} = \varepsilon$ and
$\seq{\beta_{h-2}, \dots, \beta_1}$
is a leaf in~$\mathcal{B}_{t, h-1}^k$.
It follows that
$\ell = \seq{\varepsilon, \beta_{h-2}, \dots, \beta_1}$
is indeed a leaf in~$\mathcal{B}_{t, h}^k$.
Finally, we argue that if
$\ell =
\seq{\beta_{h-1}, \dots, \beta_1}$
is a leaf
in~$\left[\mathcal{C}_{t, h}^k\right]^b$ for some $b \in \eset{0, 1}$,
then it is also a leaf in~$\mathcal{B}_{t, h}^k$.
The premise implies that $\beta_{h-1} = b \beta$ for some
$\beta \in \eset{0, 1}^*$ and that
$\seq{\beta, \beta_{h-2}, \dots, \beta_1}$ is a leaf in~$\mathcal{C}_{t, h}^k$.
By item~\ref{item:Bc-vs-Cc--h-g-k}.\ of
Proposition~\ref{prop:Bc-vs-Cc},
it follows that
$\ell = \seq{b \beta, \beta_{h-2}, \dots, \beta_1}$
is indeed a leaf in~$\mathcal{B}_{t, h}^k$.
\end{enumerate}
Straightforward structural induction
(on the structure of labelled ordered trees~$\mathcal{U}_{t, h}^k$
and~$\mathcal{V}_{t, h}^k$)
yields that $\mathcal{B}_{t, h}^k = \mathcal{U}_{t, h}^k$
and~$\mathcal{C}_{t, h}^k = \mathcal{V}_{t, h}^k$.
\end{proof}
\subparagraph*{Efficiently Navigating Labelled Strahler-Universal Trees.}
The computation of the \emph{level-$p$ successor} of a leaf in a
labelled ordered tree of height~$h$ is the following problem:
given a leaf
$\seq{\beta_h, \beta_{h-1}, \dots, \beta_1}$ in the tree
and given a number~$p$,
such that $1 \leq p \leq h$, compute the
$<_{\mathrm{lex}}$-smallest leaf
$\seq{\beta'_h, \beta'_{h-1}, \dots, \beta'_1}$ in
the tree, such that
$\seq{\beta_h, \dots, \beta_p}
<_{\mathrm{lex}} \seq{\beta'_h, \dots, \beta'_p}$.
As (implicitly) explained by Jurdzi\'nski and
Lazi\'c~\cite[Proof of Theorem~7]{JL17}, the level-$p$ successor
computation is the key primitive used extensively in an implementation
of a progress measure lifting algorithm.
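Before describing the efficient computation on the trees
$\mathcal{B}_{t, h}^k$, it may help to see the specification itself in
executable form.
The following Python sketch (illustrative only) computes the
level-$p$ successor by brute force over an explicitly listed set of
leaves, using the linear order on bit strings defined earlier; the
tiny leaf set in the demonstration is an arbitrary example of our own
and is not one of the trees constructed above.
\begin{verbatim}
from functools import cmp_to_key

def cmp_bitstrings(x, y):
    # the linear order on {0,1}*: 0b < empty < 1b, and b a < b a' iff a < a'
    i = 0
    while i < len(x) and i < len(y):
        if x[i] != y[i]:
            return -1 if x[i] == '0' else 1
        i += 1
    if len(x) == len(y):
        return 0
    if i == len(x):
        return -1 if y[i] == '1' else 1
    return 1 if x[i] == '1' else -1

def cmp_leaves(a, b):
    # lexicographic order on equal-length tuples, most significant first
    for x, y in zip(a, b):
        c = cmp_bitstrings(x, y)
        if c != 0:
            return c
    return 0

def level_p_successor(all_leaves, leaf, p):
    """Smallest leaf whose components at levels h, ..., p are
    lexicographically greater than those of `leaf`, or None."""
    h = len(leaf)
    cut = h - p + 1                      # components at levels h, ..., p
    bigger = [m for m in all_leaves if cmp_leaves(m[:cut], leaf[:cut]) > 0]
    return min(bigger, key=cmp_to_key(cmp_leaves)) if bigger else None

# A small hand-made set of leaves, each a pair <beta_2, beta_1>.
leaves = [('0', '0'), ('0', ''), ('', '0'), ('', ''), ('1', '0')]
assert level_p_successor(leaves, ('0', ''), p=1) == ('', '0')
assert level_p_successor(leaves, ('0', ''), p=2) == ('', '0')
assert level_p_successor(leaves, ('1', '0'), p=2) is None
\end{verbatim}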
\begin{lemma}
\label{lemma:leaf-successor-poly-log}
Every leaf in tree $\mathcal{B}_{t, h}^k$ can be
represented using $O\left((k+t) \log h\right)$ bits and for every
$p = 1, 2, \dots, h$, the level-$p$ successor of a leaf in
tree~$\mathcal{B}_{t, h}^k$ can be computed in time
$O\left((k+t) \log h\right)$.
\end{lemma}
\begin{proof}
Consider the following representation of a leaf
$\seq{\beta_{h-1}, \dots, \beta_1}$ in~$\mathcal{B}_{t, h}^k$:
for each of the at most $k+t$ bits used in the bit
strings $\beta_{h-1}, \dots, \beta_1$ overall, store the value of
the bit itself and the number, written in binary, of the component
in the $h$-tuple that this bit belongs to.
Altogether, the number of bits needed is
$O((k+t) \cdot (1 + \lg h)) \, = \, O((k+t) \log h)$.
We now consider computing the level-$p$ successor of a leaf
$\ell = \seq{\beta_{h-1}, \dots, \beta_1}$ in tree~$\mathcal{B}_{t, h}^k$.
We split the task of computing the level-$p$ successor
$\ell'$ of leaf $\ell$ into the following two steps:
\begin{itemize}
\item
find the lowest ancestor $\seq{\beta_{h-1}, \dots, \beta_q}$ of
$\seq{\beta_{h-1}, \dots, \beta_p}$
(that is, smallest $q$ satisfying $q \geq p$)
that has the next sibling
$\seq{\beta_{h-1}, \dots, \beta_{q+1}, \beta'_q}$ in~$\mathcal{B}_{t, h}^k$;
\item
find the smallest leaf
$\ell' = \seq{\beta_{h-1}, \dots, \beta_{q+1}, \beta'_q, \beta'_{q-1},
\dots, \beta'_1}$
that is a descendant of node
$\seq{\beta_{h-1}, \dots, \beta_{q+1}, \beta'_q}$ in~$\mathcal{B}_{t, h}^k$.
\end{itemize}
For node $\ell_r = \seq{\beta_{h-1}, \dots, \beta_r}$, where
$q \leq r \leq h-1$, we can determine whether it has the next sibling
$\ell'_r = \seq{\beta_{h-1}, \dots, \beta_{r+1}, \beta'_r}$
in~$\mathcal{B}_{t, h}^k$ and find it, by considering the following cases.
Firstly, we identify the cases in which $\ell_r$ does not have the
next sibling:
\begin{itemize}
\item
the number of non-empty strings among $\beta_{h-1}$, \dots,
$\beta_{r+1}$ is $k-1$;
\item
the number of non-leading bits used in strings $\beta_{h-1}$,
\dots, $\beta_{r+1}$ is $t$;
\item
$\beta_r = 0 1^j$ for some $j \geq 0$, the number of non-leading
bits used in strings $\beta_{h-1}$, \dots, $\beta_r$ is~$t$, and
all bit strings $\beta_r$, \dots, $\beta_1$ are non-empty;
\item
$\beta_r = 1^j$ for some $j \geq 1$, and the number of non-leading
bits used in strings $\beta_{h-1}$, \dots, $\beta_r$ is~$t$.
\end{itemize}
Define $k_{r+1}$ to be equal to $k$ minus the number of non-empty
bit strings among $\beta_{h-1}$, \dots, $\beta_{r+1}$, and define
$t_{r+1}$ to be equal to $t$ minus the number of non-leading bits
used in strings $\beta_{h-1}$, \dots, $\beta_{r+1}$.
We note that the subtree of~$\mathcal{B}_{t, h}^k$ that is rooted at node
$\ell_{r+1}$ is a copy of tree~$\mathcal{B}_{t_{r+1}, r+1}^{k_{r+1}}$.
Recall that trees $\mathcal{B}_{t, h}^k$ satisfy the same recurrences as
trees~$\mathcal{U}_{t, h}^k$.
Observe that the four cases above capture the situations in which
$\ell_r$ is the largest child of the root of the copy
of~$\mathcal{B}_{t_{r+1}, r+1}^{k_{r+1}}$ rooted in node~$\ell_{r+1}$
in~$\mathcal{B}_{t, h}^k$; they correspond to
items~\ref{item:h0-k0}., \ref{item:t0}., \ref{item:h-eq-k}.,
and~\ref{item:h-g-k}.\ of Definition~\ref{def:Uc-and-Vc},
respectively.
Secondly, we consider the remaining two cases in which $\ell_r$ does
have the next sibling and we show how to find it by setting the
value of $\beta_r'$ accordingly.
\begin{itemize}
\item
If less than $t$ non-leading bits are used in strings
$\beta_{h-1}$, \dots, $\beta_r$ then set
$\beta'_r = \beta_r 1 0^j$ for some $j \geq 0$, so that exactly
$t$ non-leading bits are used in strings $\beta_{h-1}$, \dots,
$\beta_{r+1}$, $\beta'_r$.
\item
If exactly $t$ non-leading bits are used in strings $\beta_{h-1}$,
\dots, $\beta_r$, and $\beta_r = \beta 0 1^j$ for some
$\beta \in \eset{0, 1}^*$ and $j \geq 0$, then set
$\beta'_r = \beta$.
\end{itemize}
Finally, we set
$\ell' \: = \:
\seq{\beta_{h-1}, \dots, \beta_{q+1}, \beta'_q, 00^i, 0, \dots,
0, \varepsilon, \dots, \varepsilon}$
for some suitable $i \geq 0$, so as to make the number of non-empty
bit strings in~$\ell'$ equal to~$k-1$, and the number of bits used
in all the bit strings in~$\ell'$ equal to $(k-1)+t$.
Arguing that the above case analysis can be implemented to run in
time $O((k+t)\log h)$ using the succinct representation described
above is tedious, and hence we omit the details.
\end{proof}
\section{Strahler Strategies in Register Games}
This section establishes a connection between the register number of a
parity game defined by Lehtinen~\cite{Leh18} and the Strahler number.
More specifically, we argue that from every Steven attractor
decomposition of Strahler number~$k$, we can derive a dominion
strategy for Steven in the $k$-register game.
Once we establish the Strahler number upper bound on the register
number, we are faced with the following two natural questions:
\begin{question}
\label{question:Strahler-eq-register}
Do the Strahler and the register numbers coincide?
\end{question}
\begin{question}
\label{question:Strahler-algorithmic}
Can the relationship between Strahler and register numbers be
exploited algorithmically, in particular, to improve the running
time and space complexity of solving register games studied by
Lehtinen~\cite{Leh18} and Parys~\cite{Par20}?
\end{question}
This work has been motivated by those two questions and it answers
them both positively
(Lemma~\ref{lem:Strahler-bounds-Lehtinen} and
Theorem~\ref{thm:Lehtinen-bounds-Strahler},
and Theorem~\ref{thm:Strahler-pm-run-time}, respectively).
For every positive integer~$k$, a Steven \emph{$k$-register game}
on a parity game~$\mathcal{G}$ is another parity game $\Reg{k}{\mathcal{G}}$ whose
vertices, edges, and priorities will be referred to as \emph{states},
\emph{moves}, and \emph{ranks}, respectively, for disambiguation.
The states of the Steven $k$-register game on~$\mathcal{G}$ are either
pairs $\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}\right)$ or
triples $\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}, p\right)$,
where $v$ is a vertex in~$\mathcal{G}$,
$d \geq r_{k} \geq r_{k-1} \geq \cdots \geq r_1 \geq 0$, and
$1 \leq p \leq 2k+1$.
The former states have rank~$1$ and the latter have rank~$p$.
Each number $r_i$, for $i = k, k-1, \dots, 1$, is referred to as
the value of the $i$-th register in the state.
Steven owns all states
$\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}\right)$ and the owner
of vertex~$v$ in~$\mathcal{G}$ is the owner of states
$\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}, p\right)$ for
every~$p$.
How the game is played by Steven and Audrey is determined by the
available moves:
\begin{itemize}
\item
at every state
$\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}\right)$,
Steven picks $i$, such that $0 \leq i \leq k$, and \emph{resets}
registers $i, i-1, i-2, \dots, 1$, leading to state
$\left(v,
\seq{r'_{k}, \dots, r'_{i+1}, r'_i, 0, \dots, 0}, p\right)$
of rank~$p$ and with updated register values, where:
\[
p \: = \:
\begin{cases}
2i &
\text{if $i \geq 1$ and $\max\left(r_i, \pi(v)\right)$ is even},
\\
2i+1 &
\text{if $i = 0$, or if $i \geq 1$ and
$\max\left(r_i, \pi(v)\right)$ is odd};
\end{cases}
\]
$r'_j = \max\!\left(r_j, \pi(v)\right)$ for $j \geq i+1$,
and $r'_i = \pi(v)$;
\item
at every state
$\left(v, \seq{r_{k}, r_{k-1}, \dots, r_1}, p\right)$,
the owner of vertex~$v$ in~$\mathcal{G}$ picks an edge $(v, u)$ in~$\mathcal{G}$,
leading to
state~$\left(u, \seq{r_{k}, r_{k-1}, \dots, r_1}\right)$ of
rank~$1$ and with unchanged register values.
\end{itemize}
For example, at state
$\left(v, \seq{9, 6, 4, 4, 3}\right)$ of rank~$1$, if the priority
$\pi(v)$ of vertex~$v$ is~$5$ and
Steven picks~$i = 3$, this leads to state
$\left(v, \seq{9, 6, 5, 0, 0}, 7\right)$ of rank~$2i+1 = 7$ because
$\max\!\left(r_3, \pi(v)\right) = \max(4, 5) = 5$ is odd,
$r'_4 = \max(r_4, \pi(v)) = \max(6, 5) = 6$, and
$r'_3 = \pi(v) = 5$.
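For illustration only, the following Python sketch (not part of the formal
development; the function name and the list-based encoding of register
contents are our own choices) implements Steven's reset move and the rank of
the resulting state:
\begin{verbatim}
def steven_reset(registers, priority, i):
    # One Steven move at a state (v, registers) with pi(v) = priority.
    # registers = [r_k, ..., r_1], highest register first; Steven resets
    # registers i, i-1, ..., 1.  Returns the updated register values and
    # the rank p of the resulting state.
    k = len(registers)
    new = list(registers)
    for j in range(i + 1, k + 1):          # r'_j = max(r_j, pi(v)) for j >= i+1
        new[k - j] = max(new[k - j], priority)
    if i >= 1:
        m = max(registers[k - i], priority)
        p = 2 * i if m % 2 == 0 else 2 * i + 1
        new[k - i] = priority              # r'_i = pi(v)
        for j in range(1, i):              # registers i-1, ..., 1 become 0
            new[k - j] = 0
    else:
        p = 1                              # i = 0 always yields rank 2*0 + 1
    return new, p

# The worked example from the text:
assert steven_reset([9, 6, 4, 4, 3], 5, 3) == ([9, 6, 5, 0, 0], 7)
\end{verbatim}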
Observe that the first components of states on every cycle in
game~$\Reg{k}{\mathcal{G}}$ form a (not necessarily simple) cycle in parity
game~$\mathcal{G}$;
we call it the cycle in~$\mathcal{G}$ \emph{induced} by the cycle
in~$\Reg{k}{\mathcal{G}}$.
If a cycle in~$\Reg{k}{\mathcal{G}}$ is even
(that is, the highest state rank on it is even)
then the induced cycle in~$\mathcal{G}$ is also even.
Lehtinen~\cite[Lemmas~3.3 and~3.4]{Leh18} has shown that a vertex~$v$
is in the largest Steven dominion in~$\mathcal{G}$ if and only if there is a
positive integer~$k$ such that a state $\tpl{v, \overline{r}}$, for
some register values~$\overline{r}$ is in the largest Steven dominion
in~$\Reg{k}{\mathcal{G}}$.
Lehtinen and Boker~\cite[a comment after Definition~3.1]{LB20} have
further clarified that for every~$k$, if a player has a dominion
strategy in~$\Reg{k}{\mathcal{G}}$ from a state whose first component is a
vertex~$v$ in~$\mathcal{G}$, then they also have a dominion strategy
in~$\Reg{k}{\mathcal{G}}$ from every state whose first component is~$v$.
This allows us to say without loss of rigour that a vertex~$v$
in~$\mathcal{G}$ is in a dominion in~$\Reg{k}{\mathcal{G}}$.
By defining the
\emph{(Steven) register number}~\cite[Definition~3.5]{Leh18}
of a parity game~$\mathcal{G}$ to be the smallest number~$k$ such that
all vertices $v$ in the largest Steven dominion in~$\mathcal{G}$ are in a
Steven dominion in~$\Reg{k}{\mathcal{G}}$,
and by proving the $1 + \lg n$ upper bound on the register number of
every $(n, d)$-small parity game~\cite[Theorem~4.7]{Leh18}, Lehtinen
has contributed a novel quasi-polynomial algorithm for solving parity
games, adding to those by Calude et al.~\cite{CJKLS17} and
Jurdzi\'nski and Lazi\'c~\cite{JL17}.
Lehtinen~\cite[Definition~4.8]{Leh18} has also considered the concept
of a Steven \emph{defensive dominion strategy} in a
$k$-register game (for brevity, we call it a $k$-defensive strategy):
it is a Steven dominion strategy on a set of states
in~$\Reg{k}{\mathcal{G}}$ in which there is no state of rank~$2k+1$.
Alternatively, the same concept can be formalized by defining the
\emph{defensive $k$-register game} $\Def{k}{\mathcal{G}}$, which is played
exactly like the $k$-register game~$\Reg{k}{\mathcal{G}}$, but in which Audrey
can also win just by reaching a state of rank~$2k+1$.
Note that the game $\Def{k}{\mathcal{G}}$ can be thought of as having the
winning criterion for Steven as a conjunction of a parity and a safety
criterion, and the winning criterion for Audrey as a disjunction of a
parity and a reachability criterion.
Routine arguments allow us to extend positional determinacy from parity
games to such games with combinations of parity, safety, or
reachability winning criteria.
We follow Lehtinen~\cite[Definition~4.9]{Leh18} by defining the
\emph{(Steven) defensive register number} of a Steven dominion~$D$
in~$\mathcal{G}$ as the smallest number~$k$ such that Steven has a defensive
dominion strategy in~$\Reg{k}{\mathcal{G}}$ on a set of states that
includes all $\tpl{v, \seq{r_{k}, \dots, r_1}}$ for $v \in D$,
and such that $r_{k}$ is an even number at least as large as every
vertex priority in~$D$.
We propose to call it the \emph{Lehtinen number} of a Steven dominion
in~$\mathcal{G}$ to honour Lehtinen's insight that led to this---as we argue
in this work---fundamental concept.
We also define the Lehtinen number of a vertex in~$\mathcal{G}$ to be the
smallest Lehtinen number of a Steven dominion in~$\mathcal{G}$ that includes
the vertex, and the Lehtinen number of a parity game to be the
Lehtinen number of its largest Steven dominion.
We also note that the register and the Lehtinen numbers of a parity
game nearly coincide (they differ by at most one), and hence the
conclusions of our analysis of the latter also apply to the former.
\begin{lemma}
\label{lem:Strahler-bounds-Lehtinen}
The Lehtinen number of a parity game is
no larger than
its Strahler number.
\end{lemma}
The arguments used in our proof of this lemma are similar to those
used in the proof of the main result of
Lehtinen~\cite[Theorem~4.7]{Leh18}.
Our contribution here is to pinpoint the Strahler number of an
attractor decomposition as the structural parameter of a dominion that
naturally bounds the number of registers used in Lehtinen's
construction of a defensive dominion strategy.
\begin{proof}[Proof of Lemma~\ref{lem:Strahler-bounds-Lehtinen}]
Consider a parity game $\mathcal{G}$ and let $d$ be the least even integer
no smaller than any of the priorities in~$\mathcal{G}$. Consider a Steven
$d$-attractor decomposition $\mathcal{H}$ of $\mathcal{G}$ of Strahler number $k$. We
construct a defensive $k$-register strategy for Steven
on~$\Reg{k}{\mathcal{G}}$. The strategy is defined inductively on the height
of $\mathcal{T}_\mathcal{H}$, and has the additional property of being
\textit{$\mathcal{G}$-positional} in the following sense: if $\left(\left(v,
\seq{r_{k}, \dots, r_1}\right), \left(v, \seq{r'_{k}, \dots, r'_1},p
\right) \right)$ is a move then the register reset by Steven only
depends on $v$, not on the values in the registers. Similarly, if
$\left(\left(v, \seq{r_{k}, \dots, r_1}, p\right), \left(u,
\seq{r_{k}, \dots, r_1} \right) \right)$ is a move and $v$ is owned
by Steven, $u$ only depends on $v$ and not on the values of the
registers or $p$.
\subparagraph{Strategy for Steven.}
If $\mathcal{H} = \seq{A,\emptyset}$, then $\mathcal{G}$ consists of the set of
vertices of priority $d$ and of its Steven attractor. In this case,
Steven follows the strategy induced by the reachability strategy in
$A$ to the set of vertices of priority $d$, only resetting register
$r_1$ immediately after visiting a state with first component a
vertex of priority $d$ in~$\mathcal{G}$.
More precisely, the Steven defensive strategy is defined with the
following moves:
\begin{itemize}
\item
$\left( \left(v, \seq{r_1}\right), \left(v, \seq{r_1}, 1\right)
\right)$ if $v$ is not a vertex of priority $d$ in~$\mathcal{G}$;
\item
$\left( \left(v, \seq{r_1}\right), \left(v, \seq{r'_1}, 2\right)
\right)$ if $v$ is a vertex of priority $d$ in~$\mathcal{G}$ and $r'_1 =
\max(r_1,d)$ is even;
\item
$\left( \left(v, \seq{r_1}\right), \left(v, \seq{r'_1}, 3\right)
\right)$ if $v$ is a vertex of priority $d$ in~$\mathcal{G}$ and $r'_1 =
\max(r_1,d)$ is odd (we state this case for completeness but this
will never occur);
\item
$\left(\left(v, \seq{r_1}, p\right), \left(u, \seq{r_1}\right)
\right)$ where $(v,u)$ belongs to the Steven reachability strategy
from $A$ to the set of vertices of priority $d$ in~$\mathcal{G}$.
\end{itemize}
Note that this strategy is $\mathcal{G}$-positional.
Suppose now that
$\mathcal{H} \: = \: \seq{A, (S_1, \mathcal{H}_1, A_1), \dots,
(S_\ell, \mathcal{H}_\ell, A_\ell)}$
and that it has Strahler number~$k$.
For all $i = 1, 2, \dots, \ell$, let $k_i$ be the Strahler number
of~$\mathcal{H}_i$.
By induction, for all $i$, we have a Steven defensive $k_i$-register
strategy $\sigma_i$, which is $(\mathcal{G} \cap S_i)$-positional, on a set
of states $\Omega_i$ in~$\Reg{k_i}{\mathcal{G} \cap S_i}$ including all the
states $\left(v, \seq{r_{k_i}, \dots, r_1}\right)$ for $v \in S_i$ and
$r_{k_i}$ an even number at least as large as every vertex
priority in~$S_i$.
Let $\Gamma_i$ be the set of states in~$\Reg{k}{\mathcal{G} \cap S_i}$ defined as all the states $\left(v, \seq{d, r_{k-1}, \dots, r_1}\right)$ for $v \in S_i$ if $k_i \neq k$ and as the union of the states $\left(v, \seq{d, r_{k-1}, \dots, r_1}\right)$ for $v \in S_i$ and $\Omega_i$, otherwise.
The strategy $\sigma_i$ induces a strategy
on $\Gamma_i$ in~$\Reg{k}{\mathcal{G} \cap S_i}$ by simply ignoring registers $r_{k_i+1}, \ldots, r_{k}$, and using $(\mathcal{G} \cap S_i)$-positionality to define moves from the states not in $\Omega_i$. More precisely, in a state $\left(v, \seq{r_{k}, \ldots, r_1}\right)$, Steven resets register $j$ if and only if register $j$ is reset in a state $\left(v, \seq{r'_{k_i}, \dots, r'_1}\right)$ of $\Omega_i$ according to $\sigma_i$. This is well defined by $(\mathcal{G} \cap S_i)$-positionality. Similarly, we add moves $\left(\left(v, \seq{r_{k}, \dots, r_1}, p\right), \left(u, \seq{r_{k}, \dots, r_1} \right) \right)$ to the strategy if and only if there is a move $\left(\left(v, \seq{r'_{k_i}, \dots, r'_1},p'\right), \left(u, \seq{r'_{k_i}, \dots, r'_1}\right) \right)$ in $\sigma_i$. This is again well-defined by $(\mathcal{G} \cap S_i)$-positionality.
This strategy is denoted by $\tau_i$. Note that $\tau_i$ is a defensive $k$-register strategy on $\Gamma_i$, which is $\mathcal{G}$-positional.
The Steven defensive strategy in~$\Reg{k}{\mathcal{G}}$ is defined by the following moves, where $S$ denotes the set of vertices of priority $d$ in $\mathcal{G}$:
\begin{itemize}
\item On the set of states with first component a vertex of $S_i$, the moves are given by $\tau_i$.
\item On the set of states with first component a vertex of $A_i\setminus S_i$, Steven uses the strategy induced by the reachability strategy from $A_i$ to $S_i$, without resetting any registers.
\item On $\Reg{k}{\mathcal{G} \cap (A\setminus S)}$, Steven uses the strategy induced by the reachability strategy from $A$ to $S$, without resetting any registers.
\item On the set of states with first component a vertex of $S$,
\begin{itemize}
\item $\left( \left(v, \seq{r_k, \ldots, r_1}\right), \left(v, \seq{d, 0, \ldots, 0}, p\right) \right)$ where $v$ is a vertex in $S$ and $p=2k$ if $\max(r_k,d)$ is even and $p=2k+1$ otherwise.
\item $\left( \left(v, \seq{r_k, \ldots, r_1},p\right), \left(u, \seq{r_k, \ldots, r_1}\right) \right)$ for some uniquely chosen $u$ such that $(v,u)$ in $E$ if $v$ is owned by Steven and for all $u$ such that $(v,u)$ in $E$ if $v$ is owned by Audrey.
\end{itemize}
\end{itemize}
Observe that this strategy is $\mathcal{G}$-positional.
\subparagraph{Correctness of the Strategy.}
We prove now that the strategy defined above is indeed a defensive
$k$-register strategy.
We proceed by induction on the height of $\mathcal{T}_\mathcal{H}$ and define a set of
states $\Gamma$, including all the states
$\left(v, \seq{d,r_{k-1}, \dots, r_1}\right)$ such that $v$ is a
vertex of $\mathcal{G}$.
\textit{Base Case:}
If the height of $\mathcal{T}_\mathcal{H}$ is $0$ and $\mathcal{H} = \seq{A,\emptyset}$, let $\Gamma$ be the set of states $\left(v, \seq{r_1}\right)$ and $\left(v, \seq{r_1} , p\right)$ with $v$ a vertex of $\mathcal{G}$, $1\leq r_1 \leq d$ and $p$ being either $1$ or $2$. It is easy to see that the strategy defined above is a defensive dominion strategy on this set.
\textit{Inductive step:} If $\mathcal{H} \: = \: \seq{A, (S_1, \mathcal{H}_1, A_1), \dots, (S_\ell, \mathcal{H}_\ell, A_\ell)}$ with Strahler number $k$ and $k_i$ being the Strahler number of $\mathcal{H}_i$ for all $i$ (note that $k_i \leq k$ for all $i$, and by definition of Strahler number, there is at most one $m$ such that $k_m=k$), we define $\Gamma$ to be the set comprising the union of the $\Gamma_i$ and all the states of the form $\left(v, \seq{r_{k}, \ldots, r_1}\right)$ and $\left(v, \seq{r_{k}, \ldots , r_1} , p\right)$ with $v$ a vertex of $(A_i\setminus S_i) \cup A$ and $1\leq p \leq 2k$.
\textit{Case 1: } For each $i$, $k_i<k$.
We first show that $\Gamma$ is a trap for Audrey under the strategy defined above, and that rank $2k+1$ can never be reached (implying that the strategy is defensive). This comes from the fact that register $k$ is only reset in a state $\left(v, \seq{r_k, \ldots, r_1}\right)$ with $v$ in $S$. Since $\max(r_k, d)= d$ is even, this leads to a state $\left(v, \seq{d,0, \ldots, 0}, 2k\right)$. Otherwise, register $k$ is never reset, so a state with rank $2k+1$ cannot be reached.
Consider now any cycle in $\Reg{k}{\mathcal{G}}$ with moves restricted to the strategy constructed above. If this cycle contains a state whose first component is a vertex of $S$, then as explained above, the highest rank in the cycle is $2k$. Otherwise, the cycle is necessarily in $\Reg{k}{\mathcal{G}\cap S_i}$ for some $i$. By induction, $\tau_i$ is winning and so the cycle is even.
\textit{Case 2: } There is a unique $m$ such that $k_m = k$.
We first show that a state of rank $2k+1$ is never reached. Observe that register $k$ is reset in two places: (1) immediately after a state with first component a vertex of $S$ is visited, (2) if register $k$ is reset by $\tau_m$. In the first case, as shown above, a state of rank $2k$ is reached. In the second case, register $k$ is either reset in a state $\left(v, \seq{d, r_{k-1}, \ldots, r_1}\right)$, and similarly as above, a state of rank $2k$ is reached, or in a state of $\Omega_m$. In this case, as $\tau_m$ is defensive on $\Omega_m$ by induction, a state of rank $2k+1$ cannot be reached, and the highest rank that can be reached is $2k$.
Proving that every cycle is even is similar to the previous case.
\end{proof}
\section{Progress-Measure Strahler Numbers}
Consider a parity game~$\mathcal{G}$ in which all vertex priorities are at
most an even number~$d$.
If $(A, \leq)$ is a well-founded linear order then we write sequences
in $A^{d/2}$
in the following form
$\seq{m_{d-1}, m_{d-3}, \dots, m_1}$,
and for every priority~$p \in \eset{0, 1, \dots, d}$, we define the
\emph{$p$-truncation} of $\seq{m_{d-1}, m_{d-3}, \dots, m_1}$, denoted
by ${\seq{m_{d-1}, m_{d-3}, \dots, m_1}}|_p$,
to be the
sequence $\seq{m_{d-1}, \dots, m_{p+2}, m_p}$ if $p$ is odd and
$\seq{m_{d-1}, \dots, m_{p+3}, m_{p+1}}$ if $p$ is even.
We use the lexicographic order~$\leq_{\mathrm{lex}}$ to linearly order
the set~$A^* = \bigcup_{i=0}^\infty A^i$.
A \emph{Steven progress measure}~\cite{EJ91,Jur00,JL17} on a parity
game~$\mathcal{G}$ is a map $\mu : V \to A^{d/2}$ such that for every
vertex~$v \in V$:
\begin{itemize}
\item
if $v \in V_{\mathrm{Even}}$ then there is a $\mu$-progressive edge
$(v, u) \in E$;
\item
if $v \in V_{\mathrm{Odd}}$ then every edge $(v, u) \in E$ is
$\mu$-progressive;
\end{itemize}
where we say that an edge $(v, u) \in E$ is \emph{$\mu$-progressive}
if:
\begin{itemize}
\item
if $\pi(v)$ is even then
${\mu(v)}|_{\pi(v)} \geq_{\mathrm{lex}} {\mu(u)}|_{\pi(v)}$;
\item
if $\pi(v)$ is odd then
${\mu(v)}|_{\pi(v)} >_{\mathrm{lex}} {\mu(u)}|_{\pi(v)}$.
\end{itemize}
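For illustration only, the $p$-truncation and the $\mu$-progressivity test
above can be written as the following Python sketch, under the assumption
that register values are natural numbers and that $\mu(v)$ is stored as the
tuple $(m_{d-1}, m_{d-3}, \dots, m_1)$; the function names are our own:
\begin{verbatim}
def truncate(m, p, d):
    # p-truncation of m = (m_{d-1}, m_{d-3}, ..., m_1): keep the components
    # of odd priorities >= p (including m_p itself when p is odd).
    q = p if p % 2 == 1 else p + 1      # smallest odd priority that is kept
    return m[:(d - q) // 2 + 1]

def progressive(mu, pi, v, u, d):
    # Is the edge (v, u) mu-progressive?  Tuples of equal length compare
    # lexicographically in Python, which matches the order used here.
    p = pi[v]
    a, b = truncate(mu[v], p, d), truncate(mu[u], p, d)
    return a >= b if p % 2 == 0 else a > b
\end{verbatim}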
We define \emph{the tree of a progress measure~$\mu$} to be the
ordered tree generated by the image of~$V$ under~$\mu$.
\begin{theorem}[\cite{EJ91,Jur00,JL17}]
\label{thm:pm-win-str}
There is a Steven progress measure on a parity game~$\mathcal{G}$ if and
only if every vertex in~$\mathcal{G}$ is in its largest Steven dominion.
If game $\mathcal{G}$ is $(n, d)$-small then the tree of a progress measure
on~$\mathcal{G}$ is $(n, {d}/{2} + 1)$-small.
\end{theorem}
We define the \emph{Steven progress-measure Strahler number} of a
parity game~$\mathcal{G}$ to be the smallest Strahler number of a tree of a
progress measure on~$\mathcal{G}$.
The following theorem refines and strengthens
Theorems~\ref{thm:attractor-decompositions-of-largest-dominia}
and~\ref{thm:pm-win-str} by establishing that the
Steven Strahler number and the Steven progress-measure Strahler number
of a parity game nearly coincide.
\begin{theorem}
\label{thm:ad-Strahler-eq-pm-Strahler}
The Steven
Strahler number and the Steven progress-measure Strahler number of
a parity game differ by at most~$1$.
\end{theorem}
The translations between progress measures and attractor
decompositions are as given by Daviaud, Jurdzi\'nski, and
Lazi\'c~\cite{DJL18};
here we point out that they do not increase the Strahler number of the
underlying trees by more than~$1$.
This coincidence of the two complexity measures, one based on
attractor decompositions and the other based on progress measures,
allows us in Section~\ref{sec:coda} to use a progress measure
lifting algorithm to solve games with bounded Strahler number.
\begin{proof}[Proof of Theorem~\ref{thm:ad-Strahler-eq-pm-Strahler}]
Let $\mathcal{G}$ be a $(n,d)$-small parity game. To prove
Theorem~\ref{thm:ad-Strahler-eq-pm-Strahler} we will prove the
following two lemmas.
\begin{lemma}
\label{lemma:dir1}
If $\mathcal{G}$ is a parity game where all the vertices belong to Audrey and $\mathcal{G}$ has a Steven attractor decomposition of Strahler number $k$, then it has a Steven progress measure of Strahler number at most $k +1$.
\end{lemma}
\begin{proof}
Let $\mathcal{G}$ be a parity game where all the vertices belong to Audrey.
The proof is by induction on the height of the tree of a Steven attractor decomposition of $\mathcal{G}$.
\subparagraph*{Induction hypothesis:} Given a $d$-attractor decomposition $\mathcal{H}$ of $\mathcal{G}$ and its tree $\mathcal{T}_\mathcal{H}$ of height $h$, there is a progress measure tree $\mathcal{T}$ of height $h$ and an embedding $f$ from $\mathcal{T}_\mathcal{H}$ to $\mathcal{T}$ such that all the nodes of $\mathcal{T}$ which are not in the image of $f$ are leaves.
\subparagraph*{Base case:} If the height of $\mathcal{T}_\mathcal{H}$ is at most $0$, then the $d$-attractor decomposition is $\seq{A,\emptyset}$. Let $C$ be the set of vertices which do not have priority $d$. Consider the topological order: $u<v$ if there is a path from $v$ to $u$ in $A$. We consider the tree $\seq{\circ^{|C|}}$ and the mapping $\mu$ which maps the vertices of priority $d$ to its root and the vertices in $C$ to leaves, respecting the topological order, i.e., if $u<v$ then $u$ is mapped to a node to the right of the node that $v$ is mapped to. This defines a progress measure of Strahler number at most $2$.
\subparagraph*{Induction Step.}
Consider a Steven-$d$-attractor decomposition:
$$\mathcal{H} \: = \: \seq{A, (S_1, \mathcal{H}_1, A_1), \dots, (S_j, \mathcal{H}_j, A_j)}$$
and let $\mathcal{T}_{\mathcal{H}_i}$ be the tree of $\mathcal{H}_i$ and $\mathcal{G}_i$ as defined in the definition of an attractor decomposition.
Inductively, for all $i$, there is a progress measure tree $\mathcal{T}_i$ (and an associated progress measure mapping $\mu_i$) of the same height as $\mathcal{T}_{\mathcal{H}_i}$ and an embedding $f_i$ from $\mathcal{T}_{\mathcal{H}_i}$ to $\mathcal{T}_i$ such that all the nodes of $\mathcal{T}_i$ which are not in the image of $f_i$ are leaves.
Let us construct a progress measure tree for $\mathcal{G}$ as follows. Let $C_i = A_i\setminus S_i$ for each $i$ and $C$ be the set of nodes in $A$ that have priority at most $d-1$. Set:
$$\mathcal{T} \: = \: \seq{ \circ^{|C|} , \mathcal{T}_1, \circ^{|C_1|},\ldots, \mathcal{T}_j, \circ^{|C_j|}}$$
Set $\mu$ to be a mapping from the set of vertices of $\mathcal{G}$ to the nodes of $\mathcal{T}$ which extends $\mu_i$ on the vertices in $S_i$, maps the vertices of priority $d$ to the root of the tree, the vertices in $C$ to the first $|C|$ children of the root, and the vertices in $C_i$ to the corresponding $|C_i|$ children of the root, in a way that respects the topological ordering of $\mathcal{G}$ viewed as a graph, i.e., if for vertices $u$ and $v$ in $C$, resp.\ $C_i$, there is a path from $u$ to $v$ in $C$, resp.\ $C_i$, then $u$ is mapped to a node that appears to the right of the node that $v$ is mapped to.
By construction and by the induction hypothesis, the tree $\mathcal{T}$ embeds $\mathcal{T}_\mathcal{H}$ and the only nodes that are not images of nodes in $\mathcal{T}_\mathcal{H}$ are leaves. Moreover, $\mathcal{T}$ is a progress measure tree with mapping $\mu$, by the induction hypothesis and by the construction, which is compatible with the Steven reachability strategies on $A$ and on the $A_i$'s.
The lemma follows from the fact that the Strahler number of a tree increases by at most 1 when leaves are added to it.
\end{proof}
\begin{lemma}
\label{lemma:dir2}
If $\mathcal{G}$ has a Steven progress measure of Strahler number $k$, then it has a Steven attractor decomposition of Strahler number at most $k$.
\end{lemma}
\begin{proof}
We will prove the following by induction, which proves the lemma:
\subparagraph*{Induction Hypothesis on $n$:} Given an $(n,d)$-small parity game $\mathcal{G}$ where $d$ is even and a progress measure tree $\mathcal{T}$ on $\mathcal{G}$, there exists a Steven attractor decomposition whose tree embeds in $\mathcal{T}$.
\begin{remark}
\label{remark:embed}
Given a progress measure mapping $\mu$ on $\mathcal{G}$ and its corresponding progress measure tree $\mathcal{T}$, and given a trap $R$ for Audrey in $\mathcal{G}$, the restriction of $\mu$ to the vertices in $R$ is a progress measure whose tree is the one induced by the nodes that are images of the vertices of $R$ under $\mu$.
\end{remark}
\subparagraph*{Base Case.} For games with one vertex, any progress measure tree on $\mathcal{G}$ and any tree of a Steven attractor decomposition are $\seq{}$. Therefore the induction hypothesis is satisfied.
\subparagraph*{Induction step.}
Let $\mathcal{G}$ be an $(n,d)$-small parity game where $d$ is the least even integer no smaller than any priority in $\mathcal{G}$ and let $\mathcal{T}$ be a progress measure tree on $\mathcal{G}$.
\medskip
\noindent \textit{Case 1: If the highest priority in $\mathcal{G}$ is even, i.e. equal to $d$.}
Let $A$ be the Steven attractor of the set of vertices of priority $d$. Let $\mathcal{G}' = \mathcal{G}\setminus A$. As $\mathcal{G}'$ is a trap for Audrey in $\mathcal{G}$, the tree $\mathcal{T}'$ induced by the nodes that are images of the vertices of $\mathcal{G}'$ in $\mathcal{T}$ is a progress measure tree of $\mathcal{G}'$. By the induction hypothesis, there exists a Steven attractor decomposition $\mathcal{H}$ of $\mathcal{G}'$ whose tree $\mathcal{T}_\mathcal{H}$ embeds in $\mathcal{T}'$. By appending $A$ to $\mathcal{H}$, one gets a Steven attractor decomposition of $\mathcal{G}$ with the same tree $\mathcal{T}_\mathcal{H}$, which then embeds in $\mathcal{T}$.
\medskip
\noindent \textit{Case 2: If the highest priority in $\mathcal{G}$ is odd, i.e. equal to $d-1$.}
No vertex is mapped to the root in the progress measure tree $\mathcal{T}$.
Let $\mathcal{T}_0, \mathcal{T}_1, \ldots, \mathcal{T}_j$ be the subtrees, children of the root of $\mathcal{T}$.
Let us note that vertices of priority $d-1$ cannot be mapped to nodes in $\mathcal{T}_0$ as they would not have progressive outgoing edges if that was the case. Let $S_0$ be the set of vertices mapped to nodes in $\mathcal{T}_0$ and let $A_0$ be the Steven attractor of $S_0$ in $\mathcal{G}$. We can assume that $S_0$ is non-empty (otherwise we remove $\mathcal{T}_0$ from $\mathcal{T}$ and start again).
Let $\mathcal{G}' = \mathcal{G}\setminus A_0$. As $\mathcal{G}'$ is a subgame, trap for Audrey, the tree $\mathcal{T}'$ with subtrees $\mathcal{T}_1, \ldots, \mathcal{T}_j$ is a progress measure tree on $\mathcal{G}'$. By induction, one gets a Steven attractor decomposition:
$$\mathcal{H}' \: = \: \seq{\emptyset, (S_1, \mathcal{H}_1, A_1), \dots, (S_j, \mathcal{H}_j, A_j)}$$
whose tree embeds in $\mathcal{T}'$.
Now, let us prove that $S_0$ is a trap for Audrey. Let $u$ be in $S_0$ and $v$ be one of its successors. For $(u,v)$ to be progressive, $v$ has to be mapped to a node in $\mathcal{T}_0$ and is then in $S_0$. Since there is always an outgoing progressive edge for Steven's vertices and all edges of Audrey's vertices are progressive, we can conclude that $S_0$ is a trap for Audrey, hence a subgame, and that $\mathcal{T}_0$ is a progress measure tree on it. By induction, one gets a Steven attractor decomposition $\mathcal{H}_0$ of $S_0$, whose tree embeds in $\mathcal{T}_0$.
We have proved that:
$$\mathcal{H} \: = \: \seq{\emptyset,(S_0,\mathcal{H}_0,A_0), (S_1, \mathcal{H}_1, A_1), \dots, (S_j, \mathcal{H}_j, A_j)}$$
is a Steven attractor decomposition of $\mathcal{G}$ whose tree embeds in $\mathcal{T}$.
\end{proof}
Lemma~\ref{lemma:dir2} gives one direction of the theorem. For the reverse direction, consider a parity game $\mathcal{G}$ and a Steven attractor decomposition $\mathcal{H}$ of Strahler number $k$. This decomposition induces a winning strategy for Steven (with exactly one edge going out of any vertex owned by Steven in $\mathcal{G}$). Consider the restriction of $\mathcal{G}$ to this Steven strategy. This is a game where all the vertices belong to Audrey, and which has $\mathcal{H}$ as a Steven attractor decomposition. We can apply Lemma~\ref{lemma:dir1} and obtain a Steven progress measure of Strahler number at most $k + 1$. The progress measure thus obtained is also a progress measure of $\mathcal{G}$, which concludes the proof.
\end{proof}
\section{Dominions, Attractor Decompositions, and Their Trees}
\label{section:tuning}
\subparagraph*{Strategies, Traps, and Dominions.}
A \emph{parity game}~\cite{EJ91} $\mathcal{G}$ consists of a finite directed
graph~$(V, E)$, a partition $(V_{\mathrm{Even}}, V_{\mathrm{Odd}})$ of the set of
vertices~$V$, and a function $\pi : V \to \eset{0, 1, \dots, d}$ that
labels every vertex~$v \in V$ with a non-negative integer~$\pi(v)$
called its \emph{priority}.
We say that a cycle is \emph{even} if the highest vertex priority on
the cycle is even; otherwise the cycle is \emph{odd}.
We say that a parity game is \emph{$(n, d)$-small} if it has at
most~$n$ vertices and all vertex priorities are at most~$d$.
For a set~$S$ of vertices, we write $\mathcal{G} \cap S$
for the substructure of~$\mathcal{G}$ whose graph is the subgraph of~$(V, E)$
induced by the sets of vertices~$S$.
Sometimes, we also write $\mathcal{G} \setminus S$ to denote
$\mathcal{G} \cap (V \setminus S)$.
We assume throughout that every vertex has at least one outgoing edge,
and we reserve the term \emph{subgame} to substructures $\mathcal{G} \cap S$,
such that every vertex in the subgraph of $(V, E)$ induced by~$S$ has
at least one outgoing edge.
A (positional) \emph{Steven strategy} is a set $\sigma \subseteq E$
of edges such that:
\begin{itemize}
\item
for every $v \in V_{\mathrm{Even}}$, there is an edge $(v, u) \in \sigma$,
\item
for every $v \in V_{\mathrm{Odd}}$, if $(v, u) \in E$ then $(v, u) \in \sigma$.
\end{itemize}
For a non-empty set of vertices $R$, we say that a Steven
strategy~$\sigma$ \emph{traps Audrey in $R$} if
$w \in R$ and $(w, u) \in \sigma$ imply $u \in R$.
We say that a set of vertices~$R$ is a
\emph{trap for Audrey}~\cite{Zie98} if there is a Steven strategy that
traps Audrey in~$R$.
Observe that if~$R$ is a trap in a game~$\mathcal{G}$ then $\mathcal{G} \cap R$ is a
subgame of~$\mathcal{G}$.
For a set of vertices $D \subseteq V$, we say that a Steven
strategy~$\sigma$ is a
\emph{Steven dominion strategy on $D$} if
$\sigma$ traps Audrey in~$D$ and
every cycle in the subgraph $(D, \sigma)$ is even.
Finally, we say that a set~$D$ of vertices is a
\emph{Steven dominion}~\cite{JPZ08} if there is a Steven dominion
strategy on it.
Audrey strategies, trapping Steven, and Audrey dominions are defined in
an analogous way by swapping the roles of the two players.
We note that the sets of Steven dominions and of Audrey dominions are each
closed under union, and hence the largest Steven and Audrey dominions
exist, and they are the unions of all Steven and Audrey dominions,
respectively.
Moreover, every Steven dominion is disjoint from every Audrey
dominion.
\subparagraph*{Attractor Decompositions.}
In a parity game~$\mathcal{G}$, for a target set of vertices~$B$
(``bullseye'') and a set of vertices~$A$ such that $B \subseteq A$,
we say that a Steven strategy~$\sigma$ is a
\emph{Steven reachability strategy to $B$ from~$A$} if every infinite
path in the subgraph $(V, \sigma)$ that starts from a vertex in~$A$
contains at least one vertex in~$B$.
For every target set~$B$, there is the largest
(with respect to set inclusion) set from which there is a Steven
reachability strategy to~$B$ in~$\mathcal{G}$;
we call this set the
\emph{Steven attractor to~$B$ in~$\mathcal{G}$}~\cite{Zie98}.
\emph{Audrey reachability strategies} and \emph{Audrey attractors} are
defined analogously.
We highlight the simple fact that if~$A$ is an attractor for a player
in~$\mathcal{G}$ then its complement $V \setminus A$ is a trap for them.
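For illustration only, the Steven attractor can be computed by the standard
backward fixed-point iteration; the following Python sketch (a naive
quadratic-time version, with names of our own choosing) makes this explicit:
\begin{verbatim}
def steven_attractor(vertices, edges, owner, targets):
    # owner[v] is "Even" (Steven) or "Odd" (Audrey); edges is a set of
    # pairs (v, u).  A vertex joins the attractor if Steven owns it and
    # some successor is already inside, or Audrey owns it and all of its
    # successors are already inside.
    succ = {v: [u for (x, u) in edges if x == v] for v in vertices}
    attractor = set(targets)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attractor:
                continue
            if owner[v] == "Even" and any(u in attractor for u in succ[v]):
                attractor.add(v)
                changed = True
            elif owner[v] == "Odd" and all(u in attractor for u in succ[v]):
                attractor.add(v)
                changed = True
    return attractor
\end{verbatim}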
If $\mathcal{G}$ is a parity game in which all priorities do not exceed a
non-negative even number~$d$ then we say that
$\mathcal{H} \: = \:
\seq{A, (S_1, \mathcal{H}_1, A_1), \dots, (S_\ell, \mathcal{H}_\ell, A_\ell)}$
is a \emph{Steven $d$-attractor decomposition}~\cite{DJL18,DJL19,JM20}
of~$\mathcal{G}$ if:
\begin{itemize}
\item
$A$ is the Steven attractor to the (possibly empty) set of vertices
of priority~$d$ in~$\mathcal{G}$;
\end{itemize}
and setting $\mathcal{G}_1 = \mathcal{G} \setminus A$, for all $i = 1, 2, \dots, \ell$,
we have:
\begin{itemize}
\item
$S_i$ is a non-empty trap for Audrey in~$\mathcal{G}_i$ in which every
vertex priority is at most~$d-2$;
\item
$\mathcal{H}_i$ is a Steven $(d-2)$-attractor decomposition of
subgame~$\mathcal{G} \cap S_i$;
\item
$A_i$ is the Steven attractor to $S_i$ in~$\mathcal{G}_i$;
\item
$\mathcal{G}_{i+1} = \mathcal{G}_i \setminus A_i$;
\end{itemize}
and the game $\mathcal{G}_{\ell+1}$ is empty.
If $d = 0$ then we require that $\ell = 0$.
The following proposition states that if a subgame induced by a trap
for Audrey has a Steven attractor decomposition then the trap is a
Steven dominion.
Indeed, a routine proof argues that the union of all the Steven
reachability strategies, implicit in the attractors listed in the
decomposition, is a Steven dominion strategy.
\begin{proposition}[\cite{Zie98,DJL18,JM20}]
\label{prop:decomposition-dominion-even}
If $d$ is even, $R$ is a trap for Audrey in~$\mathcal{G}$, and there is
a Steven $d$-attractor decomposition of~$\mathcal{G} \cap R$, then $R$ is a
Steven dominion in~$\mathcal{G}$.
\end{proposition}
Attractor decompositions for Audrey can be defined in the analogous
way by swapping the roles of players as expected, and then a dual
version of the proposition holds routinely.
The following theorem implies that every vertex in a parity game is
either in the largest Steven dominion or in the largest Audrey
dominion---it is often referred to as the
\emph{positional determinacy theorem} for parity games.
\begin{theorem}[\cite{EJ91,McN93,Zie98,JM20}]
\label{thm:attractor-decompositions-of-largest-dominia}
For every parity game~$\mathcal{G}$, there is a partition of the set of
vertices into a trap for Audrey~$W_{\mathrm{Even}}$ and a trap for
Steven~$W_{\mathrm{Odd}}$, such that there is a Steven attractor
decomposition of $\mathcal{G} \cap W_{\mathrm{Even}}$ and an Audrey attractor
decomposition of $\mathcal{G} \cap W_{\mathrm{Odd}}$.
\end{theorem}
\subparagraph*{Ordered Trees and Their Strahler Numbers.}
Ordered trees are defined inductively;
the trivial tree $\seq{}$ is an ordered tree and so is a sequence
$\seq{T_1, T_2, \dots, T_\ell}$, where $T_i$ is an ordered
tree for every $i = 1, 2, \dots, \ell$.
The trivial tree has only one node called the root, which is a leaf;
and a tree of the form $\seq{T_1, T_2, \dots, T_\ell}$ has the root
with $\ell$ children, the root is not a leaf, and the $i$-th child of the
root is the root of ordered tree~$T_i$.
Because the trivial tree~$\seq{}$ has just one node, we sometimes
write~$\circ$ to denote it.
If $T$ is an ordered tree and $i$ is a positive integer, then
we use the notation $T^i$ to denote the sequence
$T, T, \dots, T$ consisting of $i$ copies of tree~$T$.
Then the expression $\seq{T^i} = \seq{T, \dots, T}$ denotes
the tree whose root has~$i$ children, each of which is the root of a
copy of~$T$.
We also use the $\cdot$ symbol to denote concatenation of sequences,
which in the context of ordered trees can be interpreted as sequential
composition of trees by merging their roots;
for example,
$\seq{\seq{\circ^3}} \cdot \seq{\circ^4, \seq{\seq{\circ}}^2}
=
\seq{\seq{\circ^3}, \circ^4, \seq{\seq{\circ}}^2}
=
\seq{\seq{\circ, \circ, \circ}, \circ, \circ, \circ, \circ, \seq{\seq{\circ}}, \seq{\seq{\circ}}}$.
For an ordered tree~$T$, we write $\height{T}$ for its
\emph{height} and $\leaves{T}$ for its
\emph{number of leaves}, which are defined by the following routine
induction:
the trivial tree~$\seq{} = \circ$ has $1$ leaf and its height is~$1$;
the number of leaves of tree $\seq{T_1, T_2, \dots, T_\ell}$ is
the sum of the numbers of leaves of trees~$T_1$, $T_2$, \dots,
$T_\ell$;
and its height
is $1$ plus the maximum height of trees~$T_1$, $T_2$, \dots, $T_\ell$.
For example, the tree
$\seq{\seq{\circ^3}, \circ^4, \seq{\seq{\circ}}^2}$
has $9$ leaves and height~$4$.
We say that an ordered tree is \emph{$(n, h)$-small}
if it has at most $n$ leaves and its height is at most~$h$.
The \emph{Strahler number} $\Strah{T}$ of a tree~$T$ is defined to
be the largest height of a perfect binary tree that is a minor
of~$T$.
Alternatively, it can be defined by the following structural
induction:
the Strahler number of the trivial tree~$\seq{} = \circ$ is~$1$;
and if $T = \seq{T_1, \dots, T_\ell}$ and
$m$ is the largest Strahler number of trees~$T_1, \dots, T_\ell$,
then $\Strah{T} = m$ if there is a unique $i$ such that
$\Strah{T_i} = m$,
and $\Strah{T} = m+1$ otherwise.
For example,
we have
$\Strah{\seq{\seq{\circ^3}, \circ^4, \seq{\seq{\circ}}^2}} = 2$
because $\Strah{\circ} = \Strah{\seq{\seq{\circ}}} = 1$ and
$\Strah{\seq{\circ^3}} = 2$.
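For illustration only, the structural induction above translates directly
into the following Python sketch, where an ordered tree is encoded as a
(possibly empty) list of its subtrees, so that the trivial tree is []:
\begin{verbatim}
def strahler(tree):
    # Strahler number of an ordered tree given as a list of subtrees.
    if not tree:
        return 1
    values = [strahler(child) for child in tree]
    m = max(values)
    return m if values.count(m) == 1 else m + 1

# The example from the text has Strahler number 2:
o = []
assert strahler([[o, o, o], o, o, o, o, [[o]], [[o]]]) == 2
\end{verbatim}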
\begin{proposition}
\label{prop:Strahler-small}
For every $(n, h)$-small tree~$T$, we have
$\Strah{T} \leq h$ and $\Strah{T} \leq \lfloor \lg n \rfloor + 1$.
\end{proposition}
\subparagraph*{Trees of Attractor Decompositions.}
The definition of an attractor decomposition is inductive and we
define an ordered tree that reflects the hierarchical structure of an
attractor decomposition.
If $d$ is even and
$\mathcal{H} = \seq{A, (S_1, \mathcal{H}_1, A_1), \dots, (S_\ell, \mathcal{H}_\ell, A_\ell)}$
is a Steven $d$-attractor decomposition then we define the
\emph{tree of attractor decomposition~$\mathcal{H}$}~\cite{DJL19,JM20},
denoted by $T_{\mathcal{H}}$, to be the trivial ordered tree~$\seq{}$ if
$\ell = 0$, and otherwise, to be the ordered tree
$\seq{T_{\mathcal{H}_1}, T_{\mathcal{H}_2}, \dots, T_{\mathcal{H}_\ell}}$, where for every
$i = 1, 2, \dots, \ell$, tree $T_{\mathcal{H}_i}$ is the tree of attractor
decomposition~$\mathcal{H}_i$.
Trees of Audrey attractor decompositions are defined analogously.
Observe that the sets $S_1, S_2, \dots, S_\ell$ in an attractor
decomposition as above are non-empty and pairwise disjoint, which
implies that trees of attractor decompositions are small relative to
the number of vertices and the number of distinct priorities in a
parity game.
The following proposition can be proved by routine structural
induction.
\begin{proposition}[\cite{DJL19,JM20}]
\label{prop:tree-of-decomposition-is-small}
If $\mathcal{H}$ is an attractor decomposition of an $(n, d)$-small parity
game
then its tree $T_{\mathcal{H}}$ is $(n, \ceil{d/2}+1)$-small.
\end{proposition}
We define the
\emph{Strahler number of an attractor decomposition~$\mathcal{H}$}, denoted
by~$\Strah{\mathcal{H}}$, to be the Strahler number
$\Strah{T_{\mathcal{H}}}$ of its tree~$T_{\mathcal{H}}$.
We define the \emph{Strahler number of a parity game} to be the
maximum of the smallest Strahler numbers of attractor decompositions
of the largest Steven and Audrey dominions, respectively.
\section{Introduction}
\input{tex_final/01_intro.tex}
\section{Related Work}
\input{tex_final/02_related.tex}
\section{Domain Randomization-Enhanced Depth Simulation}
\input{tex_final/03_dreds}
\section{STD Dataset}
\input{tex_final/04_dts}
\section{Method}
\input{tex_final/05_method.tex}
\section{Tasks, Benchmarks and Results}
\input{tex_final/06_experiment.tex}
\section{Conclusions}
\input{tex_final/07_conclusion.tex}
\clearpage
\bibliographystyle{splncs04}
\subsection{DREDS Dataset}
\vspace{-2mm}
We present the DREDS-CatKnown dataset, where the category-level objects are from ShapeNetCore~\cite{chang2015shapenet}, and the DREDS-CatNovel dataset, where we transfer random materials to the objects of GraspNet-1Billion~\cite{fang2020graspnet}. Figure \ref{fig: DREDS_example} shows examples and annotations of the DREDS dataset. For each virtual scene, we provide the RGB image, stereo IR images, simulated depth, ground truth depth, NOCS map, surface normal, instance mask, \emph{etc}.
\begin{figure}[htbp]
\centering
\centering
\includegraphics[trim=0 0 0 0,clip, width=\linewidth]
{figure/DREDS_example.pdf}
\caption{\textbf{Paired RGB and simulated depth examples and annotations of DREDS-CatKnown and DREDS-CatNovel datasets.}}
\label{fig: DREDS_example}
\end{figure}
\subsection{STD Dataset}
\vspace{-1mm}
\textbf{Example of CAD Models.} We obtain CAD models of 42 category-level objects and 8 category-novel objects using a 3D reconstruction algorithm. For most of the objects, especially the specular and transparent ones, we spray dye and decorate the objects with ink to enhance the reconstruction performance. The 50 CAD models are shown in Figure \ref{fig: cad_model}.
\vspace{-6mm}
\begin{figure}[htbp]
\centering
\centering
\includegraphics[trim=0 0 0 0,clip, width=\linewidth]
{figure/cad_model.pdf}
\vspace{-6mm}
\caption{\textbf{CAD models of the STD object set.} The 1st to 7th
rows show 42 objects in 7 categories, and the last row shows 8 objects in novel categories. }
\label{fig: cad_model}
\vspace{-5mm}
\end{figure}
\textbf{Data Annotation.}
It is quite time-consuming to annotate such a large amount of real data. We propose to annotate the 6D poses of the objects in the first frame of each scene. The annotated 6D poses are then propagated to the subsequent frames according to the camera poses with respect to the first frame. We calculate the camera poses using COLMAP~\cite{schoenberger2016sfm}. For annotation, we develop a program with a GUI that enables the user to move the CAD model and switch back and forth between the 2D image and the 3D point cloud space to determine its pose, which facilitates labeling specular and transparent objects whose point clouds are severely missing or incorrect. After annotating the 6D poses, we can easily render other annotations like the ground truth depth, instance mask, \emph{etc}. Figure \ref{fig: STD_example} shows the examples and annotations of the STD dataset.
\begin{figure}[htbp]
\centering
\centering
\includegraphics[trim=0 0 0 0,clip, width=\linewidth]
{figure/STD_example.pdf}
\caption{\textbf{Examples and annotations of STD-CatKnown and STD-CatNovel datasets.} The ground truth depth maps are labeled only in the area of 42 objects in 7 categories and 8 objects in novel categories. Moreover, the NOCS maps are not annotated in STD-CatNovel dataset because we do not define the normalized object coordinate space for novel categories.}
\label{fig: STD_example}
\vspace{-6mm}
\end{figure}
\subsection{Depth Restoration}
\textbf{Qualitative Comparison to State-of-the-art Methods.}
Figure \ref{fig: Qualitative comparison to state-of-the-art methods} shows the qualitative comparison on the STD dataset, demonstrating that our method predicts more accurate depth in areas with missing or incorrect values while preserving the depth values of the correct areas of the raw depth map.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0 0 0 0,clip, width=\linewidth]{figure/depth_restoration.pdf}
\caption{\textbf{Qualitative comparison to state-of-the-art methods.} For a fair comparison, all the methods are trained on the train split of DREDS-CatKnown. Red boxes highlight the specular or transparent objects.}
\vspace{-5mm}
\label{fig: Qualitative comparison to state-of-the-art methods}
\end{figure}
\textbf{Cross-Sensor Evaluation.} In this work, depth sensor simulation and real-world data capture are both based on the Intel RealSense D415. To investigate the robustness of the proposed SwinDRNet on other types of depth sensors, we evaluate the performance on data from two scenes of the STD-CatKnown dataset, captured by an Intel RealSense D435. Table \ref{table:depth_restoration_d435} shows a comparison of the results evaluated on D415 and D435 data after training on the DREDS-CatKnown dataset. We observe that SwinDRNet has similar performance on data from these two different depth sensors in each scene, which verifies the good cross-sensor generalization ability of SwinDRNet.
\begin{table}
\scriptsize
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.2}
\begin{center}
\vspace{-4mm}
\caption{\textbf{Quantitative results for cross-sensor evaluation.} The performance of SwinDRNet is evaluated on RGB-D data captured by Intel RealSense D415 and D435 in each of the two scenes.}
\label{table:depth_restoration_d435}
\begin{tabular}{cc|cccccc}
\hline
Scenes & Sensors & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\ \hline
\multirow{2}{*}{1} & D415 & 0.017/0.017 & 0.015/0.016 & 0.009/0.010 & 94.62/94.30 & 98.34/98.60 & 99.94/99.95 \\
& D435 & 0.021/0.023 & 0.022/0.025 & 0.013/0.015 & 89.30/86.23 & 97.95/97.85 & 99.95/99.98 \\ \hline
\multirow{2}{*}{2} & D415 & 0.013/0.018 & 0.011/0.014 & 0.008/0.011 & 97.93/96.02 & 99.47/98.94 & 100.00/100.00 \\
& D435 & 0.016/0.024 & 0.015/0.024 & 0.010/0.017 & 95.25/89.29 & 99.16/97.69 & 100.00/100.00 \\ \hline
\end{tabular}
\vspace{-5mm}
\end{center}
\end{table}
\vspace{-10mm}
\subsection{Category-level Pose Estimation}
\textbf{Qualitative Comparison to Baseline Methods.}
Figure \ref{fig: pose} shows the qualitative results of different experiments on the DREDS and STD datasets. We can see that the quality of our predictions is generally better than that of the others. The figure also shows that NOCS~\cite{wang2019normalized}, SGPA~\cite{chen2021sgpa}, and our method all perform better with the help of the restored depth, especially for specular and transparent objects like the mug, bottle, and bowl, which indicates that depth restoration does help the category-level pose estimation task.
\textbf{Quantitative Comparison to Restored Depth Inputs.} We further evaluate the influence of different restored depths for category-level pose estimation, which is presented in Table \ref{table:ablation_studies_diff_depth_pose}. The proposed SwinDRNet+NOCSHead network receives the restored depth from SwinDRNet and the competing depth restoration methods for pose fitting. Quantitative results under all metrics demonstrate the superiority of SwinDRNet over other baseline methods in boosting the performance of category-level pose estimation.
\begin{table}
\vspace{-5mm}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative results for category-level pose estimation using different restored depths from SwinDRNet and the competing baseline methods.} The left of '/' shows the results evaluated on all objects, and the right of '/' shows the results evaluated on specular and transparent objects.
\vspace{-3mm}}
\label{table:ablation_studies_diff_depth_pose}
\begin{tabular}{c|c c c c c c c c}
\hline
Methods & IoU25 & IoU50 & IoU75 & $5^{\circ}2$cm & $5^{\circ}5$cm & $10^{\circ}2$cm & $10^{\circ}5$cm & $10^{\circ}10$cm \\
\hline
& \multicolumn{8}{c}{DREDS-CatKnown (Sim)} \\ \hline
NLSPN & \textbf{94.7}/98.1 & 84.6/90.3 & 65.9/71.2 &39.4/39.4 & 40.3/40.4 & 65.2/67.8& 67.6/70.4 & 67.6/70.4 \\ \hline
LIDF & 94.4/97.9 & 83.3/89.5 & 59.3/66.4 & 33.7/37.4 & 36.3/39.8& 57.9/63.7& 64.3/69.8 & 64.6/70.0\\ \hline
Ours& \textbf{94.7}/\textbf{98.2}&\textbf{84.8}/\textbf{90.8}& \textbf{68.0}/\textbf{74.0}&\textbf{49.1}/\textbf{51.5} & \textbf{50.1}/\textbf{52.9} & \textbf{69.8}/\textbf{73.9} & \textbf{72.4}/\textbf{77.0} & \textbf{72.5}/\textbf{77.1}\\ \hline
& \multicolumn{8}{c}{STD-CatKnown (Real)} \\ \hline
NLSPN & 92.3/99.5 & 87.7/94.8 & 73.5/73.5 & 45.2/31.5 & 46.2/33.3 & 72.5/57.1 &75.1/60.9 & 75.1/60.9 \\ \hline
LIDF &92.3/99.1& 87.2/93.4 & 67.0/68.5 &34.6/35.4& 37.1/40.2& 64.7/60.8 & 70.4/\textbf{69.0} & 70.5/\textbf{69.2}\\ \hline
Ours& \textbf{92.4}/\textbf{99.7}&\textbf{88.0}/\textbf{95.0} &\textbf{ 75.9}/\textbf{78.8} & \textbf{52.9}/\textbf{40.0} & \textbf{53.8}/\textbf{41.3} & \textbf{77.1}/\textbf{66.3} & \textbf{79.1}/68.7& \textbf{79.1}/68.7\\ \hline
\end{tabular}
\end{center}
\vspace{-8mm}
\end{table}
\begin{figure}[htbp]
\centering
\centering
\includegraphics[trim=0 0 0 0,clip, width=\linewidth]
{figure/pose3.pdf}
\caption{\textbf{Qualitative results of pose estimations on DREDS and STD datasets.} The ground truths are shown in green while the estimations are shown in red. \emph{only} means using raw depth in the whole experiment, \emph{Refined} means using restored depth for training and inference in SGPA and for pose fitting in NOCS and our method.}
\label{fig: pose}
\vspace{-10mm}
\end{figure}
\vspace{-5mm}
\subsection{Robotic Grasping}
The illustration of a real robot experiment for specular and transparent object grasping is shown in Figure \ref{fig: robot}. We carry out table clearing using the Franka Emika Panda robot arm with a parallel-jaw gripper, and a RealSense D415 depth sensor for RGBD image capture.
\begin{figure}[htbp]
\vspace{-2mm}
\centering
\includegraphics[scale=0.25]{figure/robot-arxiv.pdf}
\caption{\textbf{The setting of real robot experiment for specular and transparent object grasping.}}
\label{fig: robot}
\end{figure}
\vspace{-3mm}
\subsection{Ablation Study}
To analyze the components of the proposed SwinDRNet, as well as domain randomization and the scale of the proposed DREDS dataset, we conduct ablation studies with different configurations.
\textbf{Analysis of the Modules of SwinDRNet.} We first evaluate the effect of different modules of SwinDRNet with three configurations: 1) taking the concatenated RGBD images as input, without the RGB-D fusion and confidence interpolation module; 2) removing the confidence module from SwinDRNet; 3) the complete SwinDRNet. As shown in Table \ref{table:ablation_studies_modules}, the depth restoration performance improves when using these two modules. Note that the networks with and without the confidence interpolation module obtain similar depth restoration performance. However, in Table~\ref{table:ablation_studies_modules_for_pose_estimation}, we observe that SwinDRNet with this module achieves higher performance on object pose estimation, because the module keeps the correct geometric features from the original depth input, which benefits the downstream task. The results above indicate the effectiveness of the RGB-D fusion and confidence interpolation module of SwinDRNet.
\textbf{Analysis of Material Randomization.} We analyze the effect of material randomization on depth restoration. We create a dataset of the same size as the fully randomized DREDS-CatKnown dataset, in which the original materials from ShapeNetCore~\cite{chang2015shapenet} are directly applied to the objects without any transfer or randomization of specular, transparent, or diffuse materials.
Table \ref{table:mat_random} shows the depth restoration results, evaluated on specular and transparent objects. Without material randomization, the performance drops significantly: without seeing sufficient material variation during training, the network cannot treat real-world data as a variation of its synthetic training data. This demonstrates the significance of material randomization.
\textbf{Analysis of the Scale of Training Data.} In Table \ref{table:ablation_scale}, we show the performance dependence on the dataset scale. Compared to the full scale, the depth restoration performance of SwinDRNet trained on the half scale also degraded, demonstrating the necessity of the scale of the DREDS dataset for the method.
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Ablation studies for the effect of different modules on depth restoration.} \checkmark denotes prediction with the module.
\vspace{-3mm}
}
\label{table:ablation_studies_modules}
\begin{tabular}{c c|c c c c c c}
\hline
Fusion & Confidence & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\
\hline
& & \multicolumn{6}{c}{STD-CatKnown} \\ \hline
& & 0.019/0.027 & 0.019/0.032 & 0.0123/0.021 & 91.09/79.20 & 98.92/97.73 & \textbf{99.95}/\textbf{99.91}\\ \hline
\checkmark & & \textbf{0.014}/\textbf{0.017} & \textbf{0.013}/0.017 & 0.009/0.012 & 96.33/94.18 & \textbf{99.36}/\textbf{99.01} & 99.92/\textbf{99.91}\\ \hline
\checkmark & \checkmark & 0.015/0.018 &\textbf{0.013}/\textbf{0.016} &\textbf{0.008}/\textbf{0.011} &\textbf{96.66}/\textbf{94.97} &99.03/98.79 & 99.92/99.85\\ \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-10mm}
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\vspace{-6mm}
\caption{\textbf{The effect of confidence for category-level pose estimation.}
\vspace{-3mm}}
\label{table:ablation_studies_modules_for_pose_estimation}
\begin{tabular}{c|c c c c c c c c}
\hline
Confidence & IoU25 & IoU50 & IoU75 & $5^{\circ}$2cm & $5^{\circ}5$cm & $10^{\circ}2$cm & $10^{\circ}5$cm & $10^{\circ}10$cm \\
\hline
& \multicolumn{8}{c}{STD-CatKnown} \\ \hline
& \textbf{92.4} & \textbf{88.0} & 75.6& 51.0 & 51.9 & 76.0 & 78.2 & 78.3\\ \hline
\checkmark & \textbf{92.4} &\textbf{88.0} &\textbf{75.9}&\textbf{52.9} & \textbf{53.8} &\textbf{77.1} & \textbf{79.1} & \textbf{79.1}\\ \hline
\end{tabular}
\end{center}
\vspace{-6mm}
\end{table}
\vspace{-10mm}
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative results for material randomization on depth restoration task.} The left of '/' shows the results evaluated on all objects, and the right of '/' evaluated on specular and transparent objects. Note that only one result is reported on STD-CatNovel, because all the objects are specular or transparent.
\vspace{-3mm}}
\label{table:mat_random}
\resizebox{1.\columnwidth}{!}{
\begin{tabular}{c|cccccc}
\hline
Model & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\ \hline
& \multicolumn{6}{c}{STD-CatKnown (Real)} \\ \hline
Fixed material & 0.024/0.038 & 0.024/0.045 & 0.015/0.029 & 86.20/65.63 & 96.12/90.94 & 99.87/99.72 \\ \hline
Full randomization &\textbf{0.015/0.018} &\textbf{0.013/0.016} &\textbf{0.008/0.011} &\textbf{96.66/94.97} &\textbf{99.03/98.79} &\textbf{99.92/99.85} \\ \hline
& \multicolumn{6}{c}{STD-CatNovel (Real)} \\ \hline
Fixed material & 0.038 & 0.051 & 0.027 & 67.52 & 84.86 & 98.51 \\ \hline
Full randomization & \textbf{0.025} & \textbf{0.033} & \textbf{0.017} & 81.55 & \textbf{93.10} & \textbf{99.84} \\ \hline
\end{tabular}
}
\end{center}
\vspace{-8mm}
\end{table}
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Ablation study for the scale of training data on depth restoration.} SwinDRNet is trained on DREDS-CatKnown and evaluated on the specular and transparent objects of STD.
}
\begin{tabular}{c|cccccc}
\hline
Scale & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\ \hline
& \multicolumn{6}{c}{STD-CatKnown (Real)} \\ \hline
Half & 0.021 & 0.020 & 0.014 & 92.71 & 98.54 & 99.83 \\\hline
Full &\textbf{0.018}&\textbf{0.016}&\textbf{0.011}&\textbf{94.97}&\textbf{98.79}&\textbf{99.84}\\ \hline
& \multicolumn{6}{c}{STD-CatNovel (Real)} \\ \hline
Half & 0.028 & 0.037 & 0.020 & 80.37 & 91.16 & 99.79 \\ \hline
Full & \textbf{0.025} & \textbf{0.033} & \textbf{0.017} & \textbf{81.55} & \textbf{93.10} & \textbf{99.84} \\ \hline
\end{tabular}
\label{table:ablation_scale}
\end{center}
\end{table}
\subsection{Depth Estimation and Restoration}
The increasing popularity of RGBD sensors has encouraged much research on depth estimation and restoration.
Many works~\cite{eigen2014depth,jiao2018look,long2021adaptive} directly estimate the depth from a monocular RGB image, but fail to restore accurate point cloud geometries because the color image provides few geometric constraints.
Other studies~\cite{park2020non,xiong2020sparse,qu2021bayesian} restore the dense depth map given the RGB image and the sparse depth from LiDAR, but the estimated depth still suffers from low quality due to the limited geometric guidance of the sparse input.
Recent research focuses on commercial depth sensors, trying to complete and refine the depth values from the RGB and noisy dense depth images.
Sajjan \emph{et al.}~\cite{sajjan2020clear} proposed a two-stage method for transparent object depth restoration, which first estimates surface normals, occlusion boundaries, and segmentations from RGB images, and then calculates the refined depths via global optimization. However, the optimization is time-consuming and relies heavily on the preceding network predictions.
Zhu \emph{et al.}~\cite{zhu2021rgb} proposed an implicit transparent object depth completion model, including the implicit representation learning from ray-voxel pairs and the self-iterating refinement, but voxelization of the 3D space results in heavy geometric discontinuity of the refined point cloud.
Our method falls into this category and outperforms those methods, ensuring fast inference time and better geometries to improve the performance of downstream tasks.
\subsection{Depth Sensor Simulation}
To close the sim-to-real gap, recent research focuses on generating simulated depth maps with realistic noise distributions. The work of~\cite{landau2015simulating} simulated the pattern projection and capture system of Kinect to obtain simulated IR images and perform stereo matching, but could not simulate the sensor noise caused by object materials and scene environments. The authors of~\cite{planche2017depthsynth} proposed an end-to-end framework to simulate the mechanism of various types of depth sensors; however, the rasterization method limits the photorealism of the rendering and the physical correctness of the simulation. In addition,~\cite{planche2021physics} presented a differentiable structured-light depth sensor simulation pipeline, but it cannot simulate transparent materials due to limitations of the renderer.
Recently,~\cite{zhang2022close} proposed a physics-grounded active stereovision depth sensor simulator for various sim-to-real applications, but focused on instance-level objects and the robot arm workspace.
In contrast, our DREDS pipeline generates realistic RGBD images covering various materials and scene environments, which allows the proposed model to generalize to unseen category-level object instances and novel categories.
\subsection{Domain Randomization}
Domain randomization bridges the sim-to-real gap through data augmentation. Tobin \emph{et al.}~\cite{tobin2017domain} first explored transferring to real environments by generating training data through domain randomization.
Subsequent works~\cite{tremblay2018training,yue2019domain,prakash2019structured} generate synthetic data with sufficient variation by manually setting randomized features. Other studies~\cite{zakharov2019deceptionnet} perform randomization using neural networks. These works have verified the effectiveness of domain randomization on tasks such as robotic manipulation~\cite{peng2018sim}, object detection and pose estimation~\cite{khirodkar2019domain}, \emph{etc}. In this work, we combine the depth sensor simulation pipeline with domain randomization, which, for the first time, enables direct generalization to unseen diverse real instances on specular and transparent object depth restoration.
\subsection{Overview}
In this work, we propose a simulated RGBD data generation pipeline, namely Domain Randomization Enhanced Depth Simulation (DREDS), for the tasks of depth restoration, object perception, and robotic grasping. We build a depth sensor simulator that models the mechanism of an active stereo vision depth camera system based on physically based rendering, along with the domain randomization technique to handle real-world variations.
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Comparisons of specular and transparent depth restoration datasets.} S, T, and D refer to specular, transparent, and diffuse materials, respectively. \#Objects refers to the number of objects. SN+CG means the objects are selected from ShapeNet and ClearGrasp (the numbers are not mentioned).}
\label{table:depth_restoration_dataset}
\begin{tabular}{c|c|c|c|c}
\hline
Dataset & Type & \#Objects & Type of Material & Size\\ \hline
ClearGrasp-Syn~\cite{sajjan2020clear} & Syn & 9 & T & 50K \\
Omniverse~\cite{zhu2021rgb} & Syn & SN+CG & T+D & 60K \\
ClearGrasp-Real~\cite{sajjan2020clear} & Real & 10 & T & 286 \\
TODD~\cite{xu2021seeing} & Real & 6 & T & 1.5K \\ \hline
\textbf{DREDS} & Sim & 1,861 & S+T+D & 130K \\
\textbf{STD} & Real & 50 & S+T+D & 27K \\ \hline
\end{tabular}
\end{center}
\end{table}
Leveraging domain randomization and active stereo sensor simulation, we present DREDS, the large-scale simulated RGBD dataset, containing photorealistic RGB images and depth maps with the real-world measurement noise and error, especially for the hand-scale objects with specular and transparent materials.
The proposed DREDS dataset bridges the sim-to-real domain gap and helps RGBD algorithms generalize to unseen objects. A comparison of DREDS with existing specular and transparent depth restoration datasets is summarized in Table \ref{table:depth_restoration_dataset}.
\subsection{Depth Sensor Simulation}
A classical active stereo depth camera system contains an infrared (IR) projector, left and right IR stereo cameras, and a color camera. To measure the depth, the projector emits an IR pattern with dense dots to the scene. Subsequently, the two stereo cameras capture the left and right IR images, respectively. Finally, the stereo matching algorithm is used to calculate per-pixel depth values based on the discrepancy between the stereo images, to get the final depth scan.
Our depth sensor simulator follows this mechanism, containing light pattern projection, capture, and stereo matching. The simulator is mainly built upon Blender~\cite{Blender}.
\textbf{Light Pattern Capture via physically based rendering.}
For real-world specular and transparent objects, the IR light from the projector may not be received by the stereo cameras, due to the reflection on the surface or the refraction through the transparent objects, resulting in inaccurate and missing depths.
To simulate the physically correct IR pattern emission and capture process, we thus adopt physically based ray tracing, a technique that mimics the real light transportation process, and supports various surface materials especially specular and transparent materials.
Specifically, a textured spotlight projects a binary pattern image into the virtual scene, and the binocular IR images are then rendered from the stereo cameras. We simulate IR images via visible-light rendering, where both the light pattern and the reduced environment illumination contribute to the IR rendering. From the perspective of physics, the difference between IR and visible light lies in the reflectivity and refractive index of the object. We note that the wavelength (850 nm) of the IR light used in depth sensors, \emph{e.g.} RealSense D415, is close to that of visible light (400--800 nm), so the resulting effects are already well covered by the randomization of object reflectivity and refractive index used in DREDS, which constructs a superset of real IR images. To mimic the portion of IR in the environmental light, we reduce its intensity. Finally, all RGB values are converted to intensity, which gives our final IR image.
\textbf{Stereo Matching.}
We perform stereo matching to obtain the disparity map, which can be transferred to the depth map leveraging the intrinsic parameters of the depth sensor. In detail, we compute a matching cost volume over the left and right IR images along the epipolar line and find the matching results with minimum matching cost. Then we perform sub-pixel detection to generate a more accurate disparity map using the quadratic curve fitting method. To generate a more realistic depth map, we perform post-processing, including left/right consistency check, uniqueness constraint, median filtering, \emph{etc}.
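As an illustrative sketch of this step (not the exact implementation used in our simulator), the following Python snippet performs SAD-based block matching over a rectified IR pair with quadratic sub-pixel refinement; the window size, disparity range, and the omission of the post-processing steps are simplifying assumptions.
\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

def block_matching(ir_left, ir_right, max_disp=64, win=7):
    """Toy SAD block matching with quadratic sub-pixel refinement.
    ir_left, ir_right: rectified (H, W) IR images as float arrays."""
    H, W = ir_left.shape
    kernel = np.ones((win, win)) / (win * win)
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(ir_left[:, d:] - ir_right[:, :W - d])
        # aggregate the per-pixel cost over the matching window
        cost[d, :, d:] = convolve2d(diff, kernel, mode="same")
    disp = np.argmin(cost, axis=0).astype(np.float64)
    # quadratic fit around the integer minimum -> sub-pixel disparity
    for y in range(H):
        for x in range(W):
            d = int(disp[y, x])
            if 0 < d < max_disp - 1 and np.isfinite(cost[d - 1, y, x]):
                c0 = cost[d - 1, y, x]
                c1 = cost[d, y, x]
                c2 = cost[d + 1, y, x]
                denom = c0 - 2.0 * c1 + c2
                if denom > 1e-9:
                    disp[y, x] = d + 0.5 * (c0 - c2) / denom
    return disp  # depth = focal_length * baseline / disparity
\end{verbatim}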
\subsection{Simulated Data Generation with Domain Randomization}
Based on the proposed depth sensor simulator, we formulate the simulated RGBD data generation pipeline as
$D = Sim(\mathcal{S}, \mathcal{C})$,
where $\mathcal{S} = \{\mathcal{O}, \mathcal{M}, \mathcal{L}, \mathcal{B}\}$ denotes scene-related simulation parameters in the virtual environment, including $\mathcal{O}$ the setting of the objects with random categories, poses, arrangements, and scales, $\mathcal{M}$ the setting of random object materials ranging from specular and transparent to diffuse, $\mathcal{L}$ the setting of environment lighting from varying scenes with different intensities, and $\mathcal{B}$ the setting of the background floor with diverse materials. $\mathcal{C}$ denotes the camera parameters, consisting of the intrinsic and extrinsic parameters, the pattern image, the baseline distance, \emph{etc}. Taking these settings as input, the proposed simulator $Sim$ generates the realistic RGB and depth images $D$.
To construct scenes with sufficient variations so that the proposed method can generalize to the real, we adopt domain randomization to enhance the generation, considering all these aspects. See supplementary materials for more details.
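For concreteness, the sketch below illustrates how one randomized scene description $(\mathcal{S}, \mathcal{C})$ could be sampled before rendering; all value ranges, material lists, and file names here are illustrative assumptions rather than the exact settings used to generate DREDS.
\begin{verbatim}
import random

MATERIALS = ["specular", "transparent", "diffuse"]  # assumed classes

def sample_scene_parameters(object_pool, hdri_pool):
    """Sample one randomized scene description (S, C)."""
    objects = []
    for _ in range(random.randint(4, 12)):          # assumed object count
        objects.append({
            "mesh": random.choice(object_pool),
            "material": random.choice(MATERIALS),
            "scale": random.uniform(0.8, 1.2),
            "position": [random.uniform(-0.3, 0.3),
                         random.uniform(-0.3, 0.3),
                         random.uniform(0.05, 0.3)],
            "euler_deg": [random.uniform(0, 360) for _ in range(3)],
        })
    scene = {
        "objects": objects,
        "lighting": {"environment": random.choice(hdri_pool),
                     "intensity": random.uniform(0.3, 1.5)},
        "floor_material": random.choice(MATERIALS + ["textured"]),
    }
    camera = {
        "baseline_m": 0.055,                        # assumed baseline
        "pattern_image": "ir_dot_pattern.png",      # assumed file name
        "fov_deg": random.uniform(55, 75),
        "trajectory": "poses sampled on a hemisphere around the scene",
    }
    return scene, camera
\end{verbatim}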
\subsection{Simulated Dataset: DREDS}
Making use of domain randomization and depth simulation, we construct the large-scale simulated dataset, DREDS.
In total, the DREDS dataset consists of two subsets: 1) \textbf{DREDS-CatKnown}: 100,200 training and 19,380 testing RGBD images made of 1,801 objects spanning 7 categories from ShapeNetCore~\cite{chang2015shapenet}, with randomized specular, transparent, and diffuse materials; and 2) \textbf{DREDS-CatNovel}: 11,520 images of 60 category-novel objects, transformed from GraspNet-1Billion~\cite{fang2020graspnet} (which provides CAD models and annotated poses) by changing the object materials to specular or transparent, to verify the ability of our method to generalize to new object categories. Examples of paired simulated RGBD images from the DREDS-CatKnown and DREDS-CatNovel subsets are shown in Figure~\ref{fig:simexam}.
\subsection{Real-world Dataset: STD}
To further examine the proposed method in real scenes, we curate a real-world dataset composed of Specular, Transparent, and Diffuse objects, which we call the STD dataset. Similar to the DREDS dataset, the STD dataset contains 1) \textbf{STD-CatKnown}: the subset with category-level objects, for the evaluation of depth restoration and category-level pose estimation tasks, and 2) \textbf{STD-CatNovel}: the subset with category-novel objects for evaluating the generalization ability of the proposed SwinDRNet method.
Figure~\ref{fig:realexam} shows the scene examples and annotations of STD dataset.
\subsection{Data Collection}
We collect an object set covering specular, transparent, and diffuse materials. Specifically, for the STD-CatKnown dataset, we collect 42 instances from 7 known ShapeNetCore~\cite{chang2015shapenet} categories, as well as several category-unseen objects from the YCB dataset~\cite{calli2017yale} and our own collection as distractors. For the STD-CatNovel dataset, we pick 8 specular and transparent objects from unseen categories.
For each object except the distractors, we utilize the photogrammetry-based reconstruction tool, Object Capture API~\cite{macOSAPI}, to obtain a clean and accurate 3D mesh for ground-truth pose annotation, from which we can derive the ground-truth depth and object masks.
We capture data from 30 different scenes (25 for STD-CatKnown, 5 for STD-CatNovel) with various backgrounds and illuminations, using RealSense D415. In each scene, over 4 objects with random arrangements are placed in a cluttered way. The sensor moves around the objects in an arbitrary trajectory.
In total, we take 22,500 RGBD frames for STD-CatKnown, and 4,500 for STD-CatNovel.
Overall, the proposed real-world STD dataset consists of 27K RGBD frames, 30 diverse scenes, and 50 category-level and category-novel objects, facilitating further research on generalizable object perception and grasping.
\subsection{SwinDRNet for Depth Restoration}
\label{sec:depth_restore}
\textbf{Overview.} To restore the noisy and incomplete depth, we propose a SwinTransformer~\cite{liu2021swin} based depth restoration network, namely \textbf{SwinDRNet}.
SwinDRNet takes as input an RGB image $\mathcal{I}_{c} \in \mathbb{R}^{H\times W\times 3}$ along with its aligned depth image $\mathcal{I}_{d} \in \mathbb{R}^{H\times W}$ and outputs a refined depth $\hat{ {\mathcal{I}_d}} \in \mathbb{R}^{H\times W}$ that restores the erroneous areas of the depth image and completes the invalid areas, where $H$ and $W$ are the input image height and width.
We notice that prior works, \emph{e.g.} PVN3D~\cite{he2020pvn3d}, usually leverage a heterogeneous architecture that extracts CNN features from RGB and extracts PointNet++~\cite{qi2017pointnet++} features from depth. We, for the first time, devise a homogeneous and mirrored architecture that only leverages SwinT to extract and hierarchically fuse the RGB and depth features.
As shown in Figure \ref{fig: overview of SwinDR}, the architecture of SwinDRNet is a two-stream fused encoder-decoder and can be further divided into three phases:
in the first phase of feature extraction, we leverage two separate SwinT backbones to extract hierarchical features $\{\mathcal{F}_{c}^i\}$ and $\{\mathcal{F}_{d}^i\}$ from the input RGB image $\mathcal{I}_{c}$ and depth $\mathcal{I}_{d}$, respectively;
in the second phase of RGBD feature fusion, we propose a fusion module $M_{f}$ that utilizes cross-attention transformers to combine the features from the two streams and generate fused hierarchical features $\{\mathcal{H}^i\}$;
and finally, in the third phase, we propose two decoder modules: the depth decoder module $D_{depth}$ decodes the fused features into a raw depth prediction, and the confidence decoder module $D_{conf}$ outputs a confidence map for the predicted raw depth. From these outputs we compute the final restored depth by using the confidence map to select accurate depth predictions in the noisy and invalid areas of the input depth, while keeping the originally correct areas as much as possible.
\textbf{SwinT-based Feature Extraction.} To accurately restore the noisy and incomplete depth, we need to leverage visual cues from the RGB image that helps depth completion as well as geometric cues from the depth that may save efforts at areas with correct input depths.
To extract rich features, we propose to utilize SwinT~\cite{liu2021swin} as our backbone, since it is a very powerful and efficient network that can produce hierarchical feature representations at different resolutions and has linear computational complexity with respect to input image size.
Given our inputs contain two modalities -- RGB and depth,
we deploy two separate SwinT networks, $SwinT_\text{color}$ and $SwinT_\text{depth}$, to extract features from $\mathcal{I}_c$ and $\mathcal{I}_d$, respectively.
For each one of them, we basically follow the design of SwinT.
Taking $SwinT_\text{color}$ as an example: we first divide the input RGB image $\mathcal{I}_{c} \in \mathbb{R}^{H\times W\times 3}$ into non-overlapping patches, also called tokens, $\mathcal{T}_{c} \in \mathbb{R}^{{\frac{H}{4}}\times {\frac{W}{4}}\times 48}$; we then pass $\mathcal{T}_{c}$ through the four stages of SwinT to generate the multi-scale features $\{\mathcal{F}_{c}^i\}$, which are especially useful for dense depth prediction thanks to the hierarchical structure. The encoder process can be formulated as:
\begin{equation} \{\mathcal{F}_c^i\}_{i=1,2,3,4} = SwinT_\text{color}(\mathcal{T}_{c}), \end{equation}
\begin{equation} \{\mathcal{F}_d^i\}_{i=1,2,3,4} = SwinT_\text{depth}(\mathcal{T}_d), \end{equation}
where
$\mathcal{F}^i \in \mathbb{R}^{{\frac{H}{2^{i+1}}}\times {\frac{W}{2^{i+1}}}\times 2^{i-1}C}$ and $C$ is the output feature dimension of the linear embedding layer in the first stage of SwinT.
\textbf{Cross-Attention Transformer based RGB-D Feature Fusion.} Given the hierarchical features $\{\mathcal{F}_{c}^i\}$ and $\{\mathcal{F}_{d}^i\}$ from the two-stream SwinT backbone, our RGB-D fusion module $M_f$ leverages cross-attention transformers to fuse the corresponding $\mathcal{F}_{c}^i$ and $\mathcal{F}_{d}^i$ into $\mathcal{H}^i$.
For attending feature $\mathcal{F_A}$ to $\mathcal{F_B}$, a common cross-attention transformer $T_{CA}$ first calculates the query vector $Q_A$ from $\mathcal{F}_A$ and the key $K_B$ and value $V_B$ vectors from feature $\mathcal{F}_B$:
\begin{equation} \ Q_A = \mathcal{F}_A \cdot W_q, ~~K_B = \mathcal{F}_B \cdot W_k, ~~V_B = \mathcal{F}_B \cdot W_v, \end{equation}
where $W$s are the learnable parameters,
and then computes the cross-attention feature $\mathcal{H}_{\mathcal{F}_A\rightarrow \mathcal{F}_B}$ from $\mathcal{F}_A$ to $\mathcal{F}_B$:
\begin{equation} \ \mathcal{H}_{\mathcal{F}_A\rightarrow \mathcal{F}_B} = T_{CA}(\mathcal{F}_A, \mathcal{F}_B)
=\text{softmax}\left(\frac{Q_A \cdot K_B^T}{\sqrt{d_K}}\right) \cdot V_B, \end{equation}
where $d_K$ is the dimension of $Q$ and $K$.
In our module $M_f$, we leverage bidirectional cross-attention by deploying two cross-attention transformers to obtain the cross-attention features from both directions, and then concatenate them with the original features to form the fused hierarchical features $\{\mathcal{H}^i\}$, as shown below:
\begin{equation} \ \mathcal{H}^i =
\mathcal{H}_{\mathcal{F}_c^i\rightarrow \mathcal{F}_d^i}
\bigoplus \mathcal{H}_{\mathcal{F}_d^i\rightarrow \mathcal{F}_c^i} \bigoplus \mathcal{F}_c^i \bigoplus \mathcal{F}_d^i, \end{equation}
where $\bigoplus$ represents concatenation along the channel axis.
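For concreteness, a minimal NumPy sketch of this bidirectional fusion at a single scale (single attention head, no output projection or multi-head splitting) is given below; the tensor shapes and parameter names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(F_a, F_b, Wq, Wk, Wv):
    """Attend features F_a (N, C) to F_b (N, C): returns H_{A->B}."""
    Q = F_a @ Wq                      # queries from A
    K = F_b @ Wk                      # keys from B
    V = F_b @ Wv                      # values from B
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return attn @ V

def fuse_rgbd(F_c, F_d, params):
    """One scale: H = [H_{c->d}, H_{d->c}, F_c, F_d] (channel concat)."""
    H_cd = cross_attention(F_c, F_d, *params["c2d"])
    H_dc = cross_attention(F_d, F_c, *params["d2c"])
    return np.concatenate([H_cd, H_dc, F_c, F_d], axis=-1)
\end{verbatim}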
\textbf{Final Depth Prediction via Confidence Interpolation.} The credible areas of the input depth map (\emph{e.g.}, the edges of specular or transparent objects in contact with the background or with diffuse objects) play a critical role in providing information about the spatial arrangement. Inspired by previous works~\cite{van2019sparse,hu2021penet}, we make use of a confidence map between the raw and predicted depth maps. However, unlike~\cite{van2019sparse,hu2021penet}, which predict confidence maps across modalities, we focus on preserving the correct original values to generate more realistic depth maps with less distortion. The final depth map can be formulated as:
\begin{equation} \hat{\mathcal{I}}_d = C \bigotimes \tilde{\mathcal{I}}_{d} + (1-C) \bigotimes \mathcal{I}_{d} \end{equation} where $\bigotimes$ represents elementwise multiplication, and $\hat{\mathcal{I}}_d$ and $\tilde{\mathcal{I}}_{d}$ denote the final restored depth and the output of depth decoder head, respectively.
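Operationally, this composition amounts to the following element-wise blend (a minimal sketch; \texttt{conf} denotes the confidence map $C$):
\begin{verbatim}
import numpy as np

def compose_final_depth(conf, decoder_depth, raw_depth):
    """hat_I_d = C * tilde_I_d + (1 - C) * I_d, element-wise."""
    return conf * decoder_depth + (1.0 - conf) * raw_depth
\end{verbatim}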
\textbf{Loss Functions.}
For SwinDRNet training, we supervise both the final restored depth $\hat{\mathcal{I}}_d$ and the output of depth decoder head $\tilde{\mathcal{I}}_{d}$, which is formulated as:
\begin{equation}
\mathcal{L} = \omega_{\tilde{\mathcal{I}}_d}\mathcal{L}_{\tilde{\mathcal{I}}_d} + \omega_{\hat{\mathcal{I}}_d}\mathcal{L}_{\hat{\mathcal{I}}_d},
\end{equation}
where $\mathcal{L}_{\hat{\mathcal{I}}_d}$ and $\mathcal{L}_{\tilde{\mathcal{I}}_d}$ are the losses of $\hat{\mathcal{I}}_d$ and $\tilde{\mathcal{I}}_{d}$, respectively, and $\omega_{\hat{\mathcal{I}}_d}$ and $\omega_{\tilde{\mathcal{I}}_d}$ are weighting factors. Each of the two losses can be formulated as:
\begin{equation}
\mathcal{L}_i = \omega_{n}\mathcal{L}_n + \omega_{d}\mathcal{L}_d + \omega_{g}\mathcal{L}_g,
\end{equation}
where $\mathcal{L}_n$, $\mathcal{L}_d$ and $\mathcal{L}_g$ are the L1 losses between the predicted and ground truth surface normals, depths, and depth gradient maps, respectively, and $\omega_{n}$, $\omega_{d}$ and $\omega_{g}$ are the corresponding loss weights. We further assign a higher weight to the loss within the foreground region, to push the network to concentrate more on the objects.
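The following sketch summarizes the training loss described above; the weight values and the foreground up-weighting factor are illustrative assumptions.
\begin{verbatim}
import numpy as np

def weighted_l1(pred, gt, w):
    return np.mean(w * np.abs(pred - gt))

def branch_loss(pred, gt, fg_mask, w_n=1.0, w_d=1.0, w_g=1.0,
                fg_weight=2.0):
    """One branch: w_n*L_n + w_d*L_d + w_g*L_g, up-weighted on objects.
    pred, gt: dicts with keys 'depth' (H,W), 'normal' (H,W,3),
    'grad' (H,W,2).  fg_mask: (H,W) float mask, 1 inside objects."""
    w = 1.0 + (fg_weight - 1.0) * fg_mask          # per-pixel weights
    return (w_d * weighted_l1(pred["depth"], gt["depth"], w)
            + w_n * weighted_l1(pred["normal"], gt["normal"], w[..., None])
            + w_g * weighted_l1(pred["grad"], gt["grad"], w[..., None]))

def swindrnet_loss(pred_final, pred_decoder, gt, fg_mask,
                   w_hat=1.0, w_tilde=0.4):
    """L = w_tilde * L(decoder depth) + w_hat * L(restored depth)."""
    return (w_tilde * branch_loss(pred_decoder, gt, fg_mask)
            + w_hat * branch_loss(pred_final, gt, fg_mask))
\end{verbatim}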
\subsection{Downstream Tasks}
\label{sec:downstream}
\textbf{Category-level 6D Object Pose Estimation.}
Inspired by~\cite{wang2019normalized}, we use the same backbone as SwinDRNet and add two decoder heads to predict the coordinates of the NOCS map and the semantic segmentation mask. Following~\cite{wang2019normalized}, we then perform pose fitting between the restored object point clouds in the world coordinate space and the predicted object point clouds in the normalized object coordinate space to obtain the 6D object pose.
\textbf{Robotic Grasping.} By combining SwinDRNet with the object grasping task, we can analyze the effect of depth restoration on robotic manipulation. We adopt the end-to-end network, GraspNet-baseline~\cite{fang2020graspnet}, to predict 6-DoF grasping poses directly from the scene point cloud. Given the restored depth map from SwinDRNet, the scene point cloud is computed and sent to GraspNet-baseline, which predicts the grasp candidates. Finally, the gripper of the parallel-jaw robot arm executes the target rotation and position selected from those candidates.
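As a hedged sketch of the hand-off between depth restoration and grasping, the snippet below back-projects the restored depth map to a camera-frame point cloud using pinhole intrinsics; the intrinsic parameter names are assumptions and the GraspNet-baseline interface itself is not shown.
\begin{verbatim}
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, mask=None):
    """Back-project an (H, W) depth map (meters) to an (N, 3) point
    cloud in the camera frame, using pinhole intrinsics fx, fy, cx, cy."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    valid = (z > 0).reshape(-1)
    if mask is not None:
        valid = valid & mask.reshape(-1)
    return points[valid]
\end{verbatim}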
\subsection{Depth Restoration}
\textbf{Evaluation Metrics.}
We follow the metrics for transparent object depth completion in~\cite{zhu2021rgb}: 1) \textbf{RMSE}: the root mean squared error, 2) \textbf{REL}: the mean absolute relative difference, 3) \textbf{MAE}: the mean absolute error, and 4) the percentage of $d_i$ satisfying $\max(\frac{d_i}{d_i^*}, \frac{d_i^*}{d_i}) < \bm{\delta}$, where $d_i$ denotes the predicted depth, $d_i^*$ is the ground truth, and $\delta \in {\{1.05, 1.10, 1.25\}}$. We resize the prediction and ground truth to $126 \times 224$ resolution for fair comparison, and evaluate over all object areas and over the challenging areas (specular and transparent objects), respectively.
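These metrics can be computed as in the following sketch (restricted to the evaluated region); the masking and resizing details are simplified.
\begin{verbatim}
import numpy as np

def depth_metrics(pred, gt, mask):
    """pred, gt: (H, W) depth maps; mask: boolean region to evaluate."""
    p, g = pred[mask], gt[mask]
    out = {
        "RMSE": np.sqrt(np.mean((p - g) ** 2)),
        "REL": np.mean(np.abs(p - g) / g),
        "MAE": np.mean(np.abs(p - g)),
    }
    ratio = np.maximum(p / g, g / p)
    for t in (1.05, 1.10, 1.25):
        out["delta_%.2f" % t] = 100.0 * np.mean(ratio < t)
    return out
\end{verbatim}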
\textbf{Baselines.}
We compare our method with several state-of-the-art methods, including LIDF \cite{zhu2021rgb}, the SOTA method for depth completion of transparent objects, and NLSPN~\cite{park2020non}, the SOTA method for depth completion on the NYUv2~\cite{uhrig2017sparsity} dataset. All baselines are trained on the train split of DREDS-CatKnown and evaluated on four types of testing data: 1) the test split of DREDS-CatKnown: simulated images of category-known objects; 2) DREDS-CatNovel: simulated images of category-novel objects; 3) STD-CatKnown: real images of category-known objects; and 4) STD-CatNovel: real images of category-novel objects.
\textbf{Results.} The quantitative results reported in Table \ref{table:depth_restoration_ourdataset} show that our method achieves the best performance among the compared methods on the DREDS and STD datasets, and generalizes well to category-novel objects both in the simulation environment and in the real world.
In addition to the performance gains, our method (30 FPS) is significantly faster than LIDF (13 FPS) and the two-branch baseline that uses PointNet++ on depth (6 FPS). Although it is slightly slower than NLSPN (35 FPS), SwinDRNet achieves real-time depth restoration, and our code still has room for optimization and speedup. All methods are evaluated on an NVIDIA RTX 3090 GPU.
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative comparison to state-of-the-art methods on DREDS and STD.} $\downarrow$ means lower is better, $\uparrow$ means higher is better. The left of '/' shows the results evaluated on all objects, and the right of '/' shows the results evaluated on specular and transparent objects. Note that only one result is reported on STD-CatNovel, because all the objects are specular or transparent.}
\label{table:depth_restoration_ourdataset}
\begin{tabular}{c|cccccc}
\hline
Methods & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\
\hline
& \multicolumn{6}{c}{DREDS-CatKnown (Sim)} \\ \hline
NLSPN &\textbf{0.010}/0.011 & 0.009/0.011 &0.006/0.007 &97.48/96.41 &99.51/99.12 &99.97/99.74\\ \hline
LIDF &0.016/0.015 &0.018/0.017 &0.011/0.011 &93.60/94.45 &98.71/98.79 &99.92/99.90\\ \hline
Ours &\textbf{0.010/0.010} &\textbf{0.008/0.009} &\textbf{0.005/0.006} &\textbf{98.04/97.76} &\textbf{99.62/99.57} &\textbf{99.98/99.97}\\ \hline
& \multicolumn{6}{c}{DREDS-CatNovel (Sim)} \\ \hline
NLSPN &0.026/0.031 &0.039/0.054 &0.015/0.021 &78.90/69.16 & 89.02/83.55&97.86/96.84\\ \hline
LIDF &0.082/0.082 &0.183/0.184 &0.069/0.069 &23.70/23.69 &42.77/42.88 &75.44/75.54\\ \hline
Ours &\textbf{0.022/0.025} &\textbf{0.034/0.044} &\textbf{0.013/0.017} &\textbf{81.90/75.27} &\textbf{92.18/89.15} &\textbf{98.39/97.81}\\ \hline
& \multicolumn{6}{c}{STD-CatKnown (Real)} \\ \hline
NLSPN & 0.114/0.047 &0.027/0.031 & 0.015/0.018&94.83/89.47 &98.37/97.48 &99.38/99.32\\ \hline
LIDF &0.019/0.022 &0.019/0.023 &0.013/0.015 &93.08/90.32 &98.39/97.38 &99.83/99.62\\ \hline
Ours &\textbf{0.015/0.018} &\textbf{0.013/0.016} &\textbf{0.008/0.011} &\textbf{96.66/94.97} &\textbf{99.03/98.79} &\textbf{99.92/99.85}\\ \hline
& \multicolumn{6}{c}{STD-CatNovel (Real)} \\ \hline
NLSPN &0.087 & 0.050&0.025 &\textbf{81.95} &90.36 &96.06 \\ \hline
LIDF & 0.041 & 0.060 & 0.031 & 53.69 & 79.80 & 99.63 \\ \hline
Ours & \textbf{0.025} & \textbf{0.033} & \textbf{0.017} & 81.55 & \textbf{93.10} & \textbf{99.84} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative results for Sim-to-Real.} \emph{Synthetic} means taking the cropped synthetic depth images for training, and \emph{Simulated} means taking the simulated depth images from the train split of DREDS-CatKnown for training.}
\label{table:depth_restoration_sim2real}
\begin{tabular}{c|cccccc}
\hline
Trainset & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\ \hline
& \multicolumn{6}{c}{STD-CatKnown (Real)} \\ \hline
Synthetic & 0.0467/0.056 & 0.0586/0.070 & 0.0377/0.047 & 49.12/39.42 & 86.50/79.85 & 98.98/97.66 \\ \hline
Simulated & \textbf{0.015/0.018} &\textbf{0.013/0.016} &\textbf{0.008/0.011} &\textbf{96.66/94.97} &\textbf{99.03/98.79} &\textbf{99.92/99.85}\\ \hline
& \multicolumn{6}{c}{STD-CatNovel (Real)} \\ \hline
Synthetic & 0.065 & 0.101 & 0.053 & 21.04 & 55.87 & 96.96\\ \hline
Simulated & \textbf{0.025} & \textbf{0.033} & \textbf{0.017} & \textbf{81.55} & \textbf{93.10} & \textbf{99.84} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative results for domain transfer.} \emph{The previous best results} means that the best previous method is trained on ClearGrasp and Omniverse, and evaluated on ClearGrasp. \emph{Domain transfer} means that SwinDRNet is trained on DREDS-CatKnown and evaluated on ClearGrasp.}
\label{table:depth_restoration_domain_transfer}
\begin{tabular}{c|cccccc}
\hline
Model & RMSE$\downarrow$ & REL$\downarrow$ & MAE$\downarrow$ & $\delta_{1.05}\uparrow$ & $\delta_{1.10}\uparrow$ & $\delta_{1.25}\uparrow$ \\ \hline
& \multicolumn{6}{c}{ClearGrasp real-known} \\ \hline
The previous best results & 0.028 & 0.033 & 0.020 & 82.37 & 92.98 & 98.63 \\ \hline
Domain transfer & \textbf{0.022} & \textbf{0.017} & \textbf{0.012} & \textbf{91.46} & \textbf{97.47} & \textbf{99.86} \\ \hline
& \multicolumn{6}{c}{ClearGrasp real-novel} \\ \hline
The previous best results & 0.025 & 0.036 & 0.020 & 79.5 & 94.01 & 99.35 \\ \hline
Domain transfer & \textbf{0.016} & \textbf{0.008} & \textbf{0.005} & \textbf{96.73} & \textbf{98.83} & \textbf{99.78} \\ \hline
\end{tabular}
\end{center}
\end{table}
\textbf{Sim-to-Real and Domain Transfer.}
We perform sim-to-real and domain transfer experiments to verify the generalization ability of the DREDS dataset. For the sim-to-real experiments, SwinDRNet is trained on DREDS-CatKnown but with different depth inputs during training: one variant follows~\cite{zhu2021rgb} and takes the cropped synthetic depth images as input, and the other takes the simulated depth images. The results evaluated on STD in Table \ref{table:depth_restoration_sim2real} reveal the powerful potential of our depth simulation pipeline, which significantly closes the sim-to-real gap and generalizes to new categories. For the domain transfer experiments, we train SwinDRNet on the train split of the DREDS-CatKnown dataset and evaluate on the ClearGrasp dataset. The results reported in Table \ref{table:depth_restoration_domain_transfer} show that the model trained only on DREDS-CatKnown easily generalizes to the new domain ClearGrasp and outperforms the previous best results trained directly on ClearGrasp and Omniverse~\cite{zhu2021rgb} (LIDF trained on Omniverse and ClearGrasp), which verifies the generalization ability of our dataset.
\subsection{Category-level Pose Estimation}
\textbf{Evaluation Metrics.}
We evaluate using two kinds of metrics: 1) \textbf{3D IoU}, which computes the intersection over union of the ground truth and predicted 3D bounding boxes; we report thresholds of 25$\%$ (IoU25), 50$\%$ (IoU50), and 75$\%$ (IoU75). 2) \textbf{Rotation and translation errors}, which compute the rotation and translation errors between the ground truth pose and the predicted pose; we report 5$^{\circ}$2cm, 5$^{\circ}$5cm, 10$^{\circ}$2cm, 10$^{\circ}$5cm, and 10$^{\circ}$10cm.
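A sketch of the per-instance error computation behind these thresholds is shown below; the handling of symmetric categories (\emph{e.g.}, bottles) is omitted for brevity.
\begin{verbatim}
import numpy as np

def pose_errors(R_pred, t_pred, R_gt, t_gt):
    """Rotation error (degrees) and translation error (same unit as t)."""
    cos_angle = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    trans_err = np.linalg.norm(t_pred - t_gt)
    return rot_err_deg, trans_err

def pose_accuracy(errors, rot_thresh_deg, trans_thresh_m):
    """Fraction of instances within both thresholds, e.g. 5 deg / 2 cm."""
    ok = [(r <= rot_thresh_deg) and (t <= trans_thresh_m)
          for r, t in errors]
    return 100.0 * np.mean(ok)
\end{verbatim}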
\textbf{Baselines.} We choose two models as baselines to show the usefulness of the restored depth for category-level pose estimation and the effectiveness of SwinDRNet+NOCSHead: 1) \textbf{NOCS}~\cite{wang2019normalized}, which takes an RGB image as input to predict the per-pixel normalized coordinate map and obtains the pose with the help of the depth map; and 2) \textbf{SGPA}~\cite{chen2021sgpa}, the state-of-the-art method, which leverages one object and its corresponding category prior and dynamically adapts the prior to the observed object. The adapted prior is then used to reconstruct the 3D canonical model of the specific object for pose fitting.
\textbf{Results.} To verify the usefulness of the restored depth, we report the results of the three methods using raw or restored (output of SwinDRNet) depth in Table \ref{table:pose_estimation}. \emph{-only} means using raw depth throughout the experiment, and \emph{Refined depth+} means using restored depth for pose fitting in NOCS and SwinDRNet+NOCSHead. Because SGPA deforms the point cloud to obtain its results and is therefore sensitive to depth quality, we use restored depth for both its training and inference. We observe that restored depth improves the performance of all three methods by large margins under all metrics on both datasets. These performance gains suggest that depth restoration is truly useful for category-level pose estimation. Moreover, SwinDRNet+NOCSHead outperforms NOCS and SGPA under all metrics.
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Quantitative results for category-level pose estimation.} \emph{-only} means using raw depth in the whole experiment, \emph{Refined} means using restored depth for training and inference in SGPA and for pose fitting in NOCS and our method.}
\label{table:pose_estimation}
\begin{tabular}{c|c c c c c c c c}
\hline
Methods & IoU25 & IoU50 & IoU75& $5^{\circ}2$cm & $5^{\circ}5$cm & $10^{\circ}2$cm & $10^{\circ}5$cm & $10^{\circ}10$cm \\
\hline
& \multicolumn{8}{c}{DREDS-CatKnown (Sim)} \\ \hline
NOCS-only & 85.7 & 66.0 & 23.0 &21.3 & 25.4 & 40.0 & 47.9 & 49.0\\ \hline
SGPA-only & 79.5 & 66.7 & 49.1&29.5 & 32.5 & 48.7 & 54.7 & 55.7\\ \hline
Refined depth + NOCS & 86.7 & 73.2 & 40.7 & 30.4 & 31.8 & 54.1 & 57.5 & 57.6 \\ \hline
Refined depth + SGPA & 82.3 & 72.0 & 60.5 & 45.9 & 46.8 & 66.4 & 68.4& 68.5 \\ \hline
Ours-only & 94.3 &82.5 & 57.9& 34.5 & 37.6 & 55.7 & 62.6 & 63.2\\ \hline
Refined depth + Ours & \textbf{94.7} &\textbf{84.8} &\textbf{68.0} & \textbf{49.1} & \textbf{50.1} & \textbf{69.8} & \textbf{72.4} & \textbf{72.5}\\ \hline
& \multicolumn{8}{c}{STD-CatKnown (Real)} \\ \hline
NOCS-only & 83.2 & 66.9 & 16.9 & 20.4 & 26.0 & 37.9 & 52.5 & 53.5\\ \hline
SGPA-only & 77.6 & 67.1 & 46.6& 30.0 & 32.3 & 47.7 & 53.3 & 53.9\\ \hline
Refined depth + NOCS & 82.6 & 72.6 & 35.6 & 28.5 & 30.0 & 54.4 & 57.6 & 57.7 \\ \hline
Refined depth + SGPA & 78.8 & 71.6 & 62.8 & 49.3 & 49.7 & 70.5 & 71.5 & 71.6 \\ \hline
Ours-only & \textbf{92.4} & 87.4 & 61.7 &37.9 & 42.6 &57.8 & 70.6 & 71.0\\ \hline
Refined depth + Ours & \textbf{92.4} &\textbf{88.0}& \textbf{75.9} &\textbf{52.9} & \textbf{53.8} &\textbf{77.1} & \textbf{79.1} & \textbf{79.1}\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Robotic Grasping}
\textbf{Experiments Setting.}
We conduct real robot experiments to evaluate the effect of depth restoration on robotic grasping tasks. In our physical setup, we use a 7-DoF Panda robot arm from Franka Emika with a parallel-jaw gripper. A RealSense D415 depth sensor is mounted on a tripod in front of the arm. We run 6 rounds of table-clearing experiments. For each round, 4 to 5 specular and transparent objects are randomly picked from the STD objects to construct a cluttered scene. For each trial, the robot arm executes the grasping pose with the highest score and removes the grasped object; a round ends when the workspace is cleared or 10 attempts are reached.
\textbf{Evaluation Metrics.}
Real grasping performance is measured using the following metrics: 1) \textbf{Success Rate}: the ratio of the number of successfully grasped objects to the number of attempts, 2) \textbf{Completion Rate}: the ratio of the number of successfully removed objects to the original number of objects in a scene.
\textbf{Baselines.}
We use the 6-DoF grasping pose prediction network GraspNet-baseline with its released pretrained model. \emph{GraspNet} means GraspNet-baseline directly takes the captured raw depth as input, while \emph{SwinDRNet+GraspNet} means the network receives the refined point cloud from SwinDRNet, which is trained only on the DREDS-CatKnown dataset.
\begin{table}
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\caption{\textbf{Results of real robot experiments.} \emph{\#Objects} denotes the sum of grasped object numbers in all rounds.
\emph{\#Attempts} denotes the sum of robotic grasping attempt numbers in all rounds.}
\label{table:grasping_realrobot}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{c|cccc}
\hline
Methods & \#Objects & \#Attempts & Success Rate & Completion Rate\\ \hline
GraspNet & 19 & 49 & 38.78\% & 40\% \\ \hline
SwinDRNet+GraspNet & 25 & 26 & \textbf{96.15\%} & \textbf{100\%}\\ \hline
\end{tabular}
\end{center}
\end{table}
\textbf{Results.}
Table~\ref{table:grasping_realrobot} reports the performance of the real robot experiments. \emph{SwinDRNet+GraspNet} obtains a high success rate and completion rate, while \emph{GraspNet} performs considerably worse. Without depth restoration, it is difficult for the robot arm to grasp specular and transparent objects due to the severely incomplete and inaccurate raw depth. The proposed SwinDRNet significantly improves the performance of specular and transparent object grasping.
|
2,869,038,156,805 | arxiv | \section{Introduction}
The chirped-pulse amplification (CPA) technique has enabled the construction of compact, very high-intensity femtosecond laser systems, allowing experimental observation of the physics of relativistic laser-plasma interactions. This understanding of laser-plasma interaction processes is being applied to the development of compact proton and ion accelerator systems. Such compact and relatively inexpensive accelerators may allow wider adoption of hadron therapy for cancer treatment by the medical community. In addition to medical applications, these ultra-short and ultra-intense proton beams may potentially be applied to fast-ignition fusion research, particle production, and compact alternatives to conventional particle injectors. Among the various desirable characteristics of an accelerated beam, mono-energeticity and collimation are crucial. However, existing laser-plasma acceleration techniques do not yet demonstrate the capability to meet the requirements of many of these applications. The scaling laws with picosecond and femtosecond laser pulses and different plasma/target parameters have been extensively studied\cite{scaling-laws-2006}\cite{scaling-laws-2007}.
\begin{figure}
\begin{center}
\includegraphics[width=3.25in]{CHIRP_Snowplow_cartoon_concept.pdf}
\end{center}
\caption{(a) Conceptual diagram depicting a chirped laser pulse propagating towards a rising plasma density gradient. (b) 1-D OSIRIS\cite{osiris-code-2002} simulations showing ChITA-snowplow in electron density and the corresponding electric field at $t=1996\frac{1}{\omega_0}$ with a circularly-polarized (CP) laser of normalized magnetic vector potential, $a_0=3.54$, laser frequency chirp parameters of $\epsilon_0=0.4$, $\theta=2000\frac{c}{\omega_0}$ ($\Rightarrow 850fs$) and plasma density gradient scale length of $\alpha=50\frac{c}{\omega_0}$ ($\simeq 6.4\mu m$). (c) The ChITA-snowplow at a later time, $t=3892.2\frac{1}{\omega_0}$, propagating further longitudinally into the rising plasma density gradient. The longitudinal distance is represented in $\mu m$ assuming a Ti:Sapphire laser with a wavelength of $0.8\mu m$}
\label{Snowplow-concept}
\end{figure}
The existing regimes include Target Normal Sheath Acceleration (TNSA), Collision-less Electrostatic Shock Acceleration (CESA) and Radiation Pressure Acceleration (RPA). The CESA and RPA techniques offer the potential to generate relatively mono-energetic proton beams. However, recent experiments using the CESA and RPA techniques have had to rely on high-energy $CO_2$ lasers (picosecond pulse lengths), which are gas (active-medium) lasers and, unlike the Ti:Sapphire crystal CPA lasers, may not be conducive to miniaturization.
In this paper, we present a preliminary analysis of a unique scheme that accelerates protons and light ions mono-energetically by reflecting them off a continuously driven acceleration structure which we refer to as a ChITA-{\it snowplow}. The controlled positive frequency chirp of a relativistic laser pulse interacting with the critical density layer of the plasma {\it continuously} drives the snowplow longitudinally forward. The control of the frequency chirp is assumed to be experimentally realizable, considering that CPA laser pulses have residual frequency chirp and that there is significant research dedicated to higher-harmonic generation. The radiation pressure of the laser pushes the critical-layer electrons longitudinally forward, displacing them from the heavier background ions and creating a region of charge imbalance which develops the snowplow electric field. The propagating snowplow potential (when above a threshold) reflects the protons and light ions ahead of it to twice its velocity. The laser-plasma parameters of this scheme can be used to control the snowplow velocity and thereby tune the proton energies.
\section{ChITA scheme - analytical model}
We analyse the propagation of a frequency-chirped relativistic laser pulse into a rising plasma density gradient. The laser is modeled with a normalized magnetic vector potential $a_0$ ($=\frac{e|\vec{A}|}{m_ec^2}=\frac{\vec{p}_\perp^{~e}(t)}{m_ec}$) and a frequency of $\omega_0$ at its head. The rate of rise of the frequency of the laser pulse is controlled and can be described by the frequency-rise chirp-fraction $\epsilon_0$ (eq.\ref{laser-chirp}) and its rise scale-length $\theta$. For every time interval of $\frac{\theta}{c}$ the laser frequency increases by $\Delta\omega_0$.
\begin{align}
\nonumber \epsilon_0 &= \frac{\Delta\omega_0}{\omega_0} \\
\epsilon(x,t) &= \epsilon_0\left(\frac{ct-x}{\theta}\right)H(ct-x)
\label{laser-chirp}
\end{align}
\noindent The Heaviside step function $H(ct-x)$ ensures that the frequency chirping effect is observed at a point x in space only after the head of the pulse has reached that point, x (when $(ct-x)>0$).
The dispersion relation for the {\it transverse mode} of electromagnetic radiation (a pure sinusoid of frequency $\omega_0$) interacting with a collision-less cold plasma of electron-plasma frequency $\omega_p = \sqrt{\frac{4\pi e^2 n_e}{m_e}}$ is given in eq.\ref{transverse-mode-dispersion}. When the electromagnetic radiation encounters the condition $\omega_0=\omega_p$, its wave-vector $k=0$ and it cannot propagate into the plasma beyond the critical density, $n_c=\frac{m_e\omega_0^2}{4\pi e^2}$.
\begin{equation}
\omega_0^2=\omega_p^2+k^2c^2
\label{transverse-mode-dispersion}
\end{equation}
However, if the intensity of the laser field is high enough that it causes the plasma electrons to undergo relativistic quiver oscillations, increasing their mass by the Lorentz factor, the laser pulse can propagate beyond the critical density. This is called Relativistically Induced Transparency\cite{rit-prl-1971}. The Lorentz factor of the electrons quivering relativistically in the laser field is $\gamma_{\perp}^{e^-}=\sqrt{1+\frac{\vec{p}_{\perp}.\vec{p}_{\perp}}{m_e^2c^2}}=\sqrt{1+a_0^2}$.
This relativistic electron mass increase thereby changes the plasma frequency to $\omega_p^{\gamma} = \sqrt{\frac{4\pi e^2 n_e}{m_e \sqrt{1+a_0^2}}}$.
In the ChITA scheme the frequency of the laser pulse is increased to enable the pulse to propagate further into the rising plasma density, whose increasing $n_e(x)$ results in an increasing electron fluid oscillation frequency, $\omega_{pe}(x)$. If the laser frequency $\omega(x,t)=\omega_0(1+\epsilon(x,t))$ is increased such that the laser field can only be shielded by plasma electron fluid oscillating at a higher $\omega_{pe}$, then the laser with increasing frequency can propagate further longitudinally. We refer to this process as {\it Chirp Induced Transparency}. The control of the frequency rise is important to maintain resonance with the plasma electron fluid, enabling optimal transfer of energy from the laser to the plasma electrons. The propagating critical layer, which we refer to as the snowplow, is depleted of electrons; these electrons, pushed by the laser radiation pressure, pile up in a steepened density spike just beyond the critical layer, giving rise to the snowplow electric field. The propagating snowplow potential reflects and accelerates the protons and light ions ahead of it. We refer to this acceleration method as {\it Chirp Induced Transparency Acceleration} (ChITA).
The laser pulse incident at the plasma interface is assumed to be a long flat pulse with approximately 5 laser periods ($30\frac{1}{\omega_0}$) in the rise and fall of the pulse. The laser pre-pulse (picoseconds to nanoseconds long) creates the heavy-ion metal plasma by ablation. The diffusion of plasma away from the metal-air interface over the pre-pulse time-scales creates the plasma density gradient before the main pulse arrives.
The plasma density profile created by laser pre-pulse ablating the metallic foil (``over-dense" plasma) is assumed to be linearly rising with the rise-scale length of $\alpha$ as in eq.\ref{plasma-density-gradient}.
\begin{equation}
n_e(x) = n_c\left(\frac{x}{\alpha}\right)
\label{plasma-density-gradient}
\end{equation}
\noindent This model simplifies the analysis and understanding of the interaction process. The scale length can be experimentally controlled by varying the intensity and duration of the pre-pulse.
In consideration of the relativistic laser frequency-chirp (time varying frequency, $\omega(t)$) and its interaction with the rising plasma-density gradient, the plasma frequency at the relativistically corrected critical density ($k=0$) is a function of space and time, as in eq.\ref{plasma-frequency-variation}.
\begin{align}
\omega(x,t) = \omega_p^{\gamma} &= \sqrt{\frac{4\pi e^2 n_e(x)}{\gamma m_e}} = \sqrt{\frac{4\pi e^2 n_c(x/\alpha)}{m_e \sqrt{1+a_0^2}}} \\
\omega(x,t) = \omega_0\left(1+\epsilon(x,t)\right) &= \sqrt{\frac{4\pi e^2 n_c}{m_e}}\sqrt{\left(\frac{(x/\alpha)}{\gamma}\right)}\\
\left(1+\epsilon_0\left(\frac{ct-x}{\theta}\right)\right)^2 &= \frac{(x/\alpha)}{\gamma}
\label{plasma-frequency-variation}
\end{align}
By solving eq.\ref{plasma-frequency-variation}, we obtain the time-dependent expression for the location of the moving critical density driven by the frequency-chirp. This is the density at which the laser is shielded and stopped by the plasma electrons, and thereby the density at which the energy exchange between the laser and the plasma proceeds through a resonant process: the electron quiver frequency in the laser electric field equals the plasma electron fluid frequency.
\begin{align}
\nonumber & x_{sp}(t) =\\
& \frac{2 c t \alpha \gamma \epsilon_0 ^2 + 2 \alpha \gamma \epsilon_0 \theta + \theta ^2 \pm \theta \sqrt{4 c t \alpha \gamma \epsilon_0 ^2 + 4 \alpha \gamma \epsilon_0 \theta + \theta ^2}} {2 \alpha \gamma \epsilon_0 ^2}
\label{x-sp-location}
\end{align}
Partially differentiating eq.\ref{x-sp-location} with respect to time, we obtain the velocity of the chirp driven snow-plow, eq.\ref{v-sp-velocity}.
\begin{align}
& v_{sp}(t)=c-\frac{c \theta }{\sqrt{4 c t \alpha \gamma \epsilon_0 ^2+\theta (4 \alpha \gamma \epsilon_0 + \theta )}}
\label{v-sp-velocity}
\end{align}
\noindent We substitute the parameters of Fig.\ref{Snowplow-concept} into the ChITA-snowplow velocity expression, eq.\ref{v-sp-velocity}. This gives a ChITA-snowplow velocity at time $t=0$ of $v_{sp}=0.066c$.
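As a quick numerical check of eq.\ref{v-sp-velocity}, the short script below evaluates $v_{sp}(0)$ for the parameters of Fig.\ref{Snowplow-concept} in normalized units ($c=1$); it reproduces the quoted value of $0.066c$ and the corresponding reflected-proton velocity of $0.132c$.
\begin{verbatim}
import math

c = 1.0                       # speeds in units of c
a0 = 3.54                     # normalized vector potential
gamma = math.sqrt(1.0 + a0**2)
eps0 = 0.4                    # chirp fraction
theta = 2000.0                # chirp rise scale length [c/omega_0]
alpha = 50.0                  # plasma density scale length [c/omega_0]

def v_sp(t):
    return c - c * theta / math.sqrt(4*c*t*alpha*gamma*eps0**2
                                     + theta*(4*alpha*gamma*eps0 + theta))

print(v_sp(0.0))              # ~0.066 (i.e., 0.066 c)
print(2.0 * v_sp(0.0))        # reflected proton velocity, ~0.132 c
\end{verbatim}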
\section{Simulation results}
To verify the formation of the ChITA-snowplow and its frequency-chirp driven propagation, we use the OSIRIS 1D Particle-In-Cell (PIC) code to simulate a chirped laser pulse interacting with a plasma density gradient. The background plasma ions are assumed to be fixed. The simulation is Eulerian and set up with 20 cells per $\frac{c}{\omega_0}$, 40 particles per cell, and the normalization $\frac{\omega_p}{\omega_0}=1$. The results presented in this paper are with a circularly polarized (CP) laser pulse; linearly polarized (LP) laser pulses have also been verified. The simulation evolves with time-steps of $\Delta t=0.0499\frac{1}{\omega_0}$, and a sliding window time average over one laser period is applied before fields and phase-space data are dumped. The trace protons are modeled using a test species at $10^{-4}\times n_c$.
Using 1-D simulations we find that the ChITA-snowplow for these laser-plasma conditions propagates approximately at the analytically predicted velocity, $v_{sp}=0.066c$. It can also be observed from the trace-proton longitudinal phase space, Fig.\ref{Chirp-Snowplow-1D-long-phase-space}(a)(b), that the protons are reflected off the propagating snowplow at $2\times v_{sp}=0.132c$, thereby gaining a momentum of $0.132m_pc$. In Fig.\ref{Chirp-Snowplow-1D-long-phase-space}(a), an initial bunch launched close to $0.19m_pc$ due to laser-pulse rise-time effects can also be observed \cite{ieee-pac-2011}. In simulations where the laser is not chirped, $\epsilon_0=0$, propagation occurs initially only during the time-period that can be attributed to the rise-time snowplow \cite{ieee-pac-2011}; there is negligible forward propagation of the critical layer in the flat part of the laser pulse.
\begin{figure}
\begin{center}
\includegraphics[width=3.25in]{CHIRP_Snowplow_Long_phase_space.pdf}
\end{center}
\caption{(a) Longitudinal phase space of the trace protons (set up as test particles at $10^{-4}\times n_c$) being reflected by the ChITA-snowplow potential corresponding to the density snapshot of Fig.\ref{Snowplow-concept}(b), with identical laser-plasma parameters, at $t=1996\frac{1}{\omega_0}$. An initial bunch is launched at a momentum of $0.19m_pc$ by rise-time effects \cite{ieee-pac-2011}, but once the pulse rises to its flat part the ChITA-snowplow takes over and protons are launched at a momentum of $0.132m_pc$. (b) $t=3892.2\frac{1}{\omega_0}$, corresponding to Fig.\ref{Snowplow-concept}(c). The ChITA-snowplow has advanced longitudinally and continues to reflect protons mono-energetically at a momentum of $0.132m_pc$.}
\label{Chirp-Snowplow-1D-long-phase-space}
\end{figure}
|
2,869,038,156,806 | arxiv | \section{Introduction}
In the \emph{Equivalence Class Sorting} problem, we are given
a set, $S$, of $n$ elements and an equivalence relation, and we are asked
to group the elements of the set into their equivalence classes by only
making pairwise equivalence tests
(e.g., see~\cite{Jayapaul2015}).
For example,
imagine a convention of $n$ political interns where each
person at the convention belongs to one of $k$ political parties,
such as Republican, Democrat, Green, Labor, Libertarian, etc.,
but no intern wants to openly
express his or her party affiliation unless they know they are
talking with someone of their same party.
Suppose further that each party has a secret handshake that
two people can perform that allows them to
determine whether they are in the same political party (or
they belong to different unidentified parties).
We are interested in this paper in the computational complexity of
the equivalence class sorting problem in distributed and parallel settings,
where we would like to minimize the total number of parallel comparison
rounds and/or the total number of comparisons needed in order to classify
every element in $S$.
An important property of the equivalence class sorting problem is that it is
not possible to order the elements in $S$ according to some total ordering
that is consistent with the equivalence classes. Such a restriction could
come from a general lack of such an ordering or from security or privacy
concerns. For example, consider the following applications:
\begin{itemize}
\item
\emph{Generalized fault diagnosis}. Suppose that each of $n$ different
computers are in one of $k$ distinct malware states, depending on whether
they have been infected with various computer worms.
Each worm does not wish to reveal its presence, but it
nevertheless has an ability to detect when another computer
is already infected with it (or risk autodetection by an exponential
cascade, as occurred with the Morris worm~\cite{Morris}).
But a worm on one computer
is unlikely to be able to detect a different kind of worm
on another computer.
Thus, two computers can only compare each
other to determine if they have exactly the same kinds of infections or not.
The generalized fault diagnosis problem, therefore, is to have the $n$
computers classify themselves into $k$ malware groups depending on their
infections, where the only testing method available is for two computers to
perform a pairwise comparison that tells
them that they are either in the same malware
state or they are in different states.
This is a generalization of the classic fault diagnosis problem, where
there are only two states, ``faulty'' or ``good,'' which is studied in
a number of interesting papers, including one from the very first
SPAA conference (e.g., see
\cite{Beigel:1989,Beigel:1993,b492587,Goodrich2008199,PU:46643,p4039201}).
\item
\emph{Group classification via secret handshakes}.
This is a cryptographic analogue to
the motivating example given above of interns at a political convention.
In this case, $n$ agents are each assigned to one of $k$
groups, such that any two agents can perform a cryptographic
``secret handshake'' protocol that results in them learning only whether
they belong to the same group or not
(e.g., see~\cite{Castelluccia2004,Jarecki2007,Sorniotti2010619,Xu:2004}).
The problem is to perform an efficient number of pairwise secret-handshake
tests in a few parallel rounds so that each agent identifies itself with
the others of its group.
\item
\emph{Graph mining}.
Graph mining is the study of structure in collections of
graphs~\cite{Cook:2006}.
One of the algorithmic problems in this
area is to classify which of a collection of
$n$ graphs
are isomorphic to one another (e.g., see~\cite{Parthasarathy2010}).
That is, testing if two graphs are in the same group involves performing
a graph isomorphism comparison of the two graphs, which is a computation that
tends to be nontrivial but is nevertheless computationally feasible in some
contexts (e.g., see~\cite{graphs}).
\end{itemize}
Note that each of these applications contains two
important features that
form the essence of the equivalence class sorting problem:
\begin{enumerate}
\item
In each application,
it is not possible to sort elements according to a known total order,
either because no such total order exists or because it would break
a security/privacy condition to provide such a total order.
\item
The equivalence or nonequivalence between two
elements can be determined only through pairwise comparisons.
\end{enumerate}
There are nevertheless some interesting differences between
these applications, as well, which motivate
our study of two different versions of the equivalence class sorting problem.
Namely, in the first two applications, the comparisons done in any given
round in an algorithm must be disjoint, since the elements themselves
are performing the comparisons. In the latter two
applications, however, the elements are the objects of
the comparisons, and
we could, in principle, allow for comparisons involving multiple
copies of the same element in each round.
For this reason, we allow for two versions of the equivalence class sorting
problem:
\begin{itemize}
\item
\emph{Exclusive-Read (ER)} version. In this version, each element in $S$ can
be involved in at most a single comparison of itself and another
element in $S$ in any given comparison round.
\item
\emph{Concurrent-Read (CR)} version. In this version, each element in $S$ can
be involved in multiple comparisons of itself and other elements in $S$
in any comparison round.
\end{itemize}
In either version, we are interested in minimizing the number of parallel
comparison rounds and/or the total number of comparisons needed
to classify every element of $S$ into its group.
Because we expect the
number of parallel comparison rounds and the total number of comparisons
to be the main performance bottlenecks,
we are interested here in studying the equivalence class sorting problem in
Valiant's parallel comparison model~\cite{Valiant},
which only counts steps in which
comparisons are made.
This is a synchronous computation
model that does not count any steps done between comparison steps,
for example, to aggregate groups of equivalent elements based on
comparisons done in previous steps.
\subsection{Related Prior Work}
In addition to the references cited above that motivate
the equivalence class sorting problem or study the
special case when the number of groups, $k$, is two,
Jayapaul {\it et al.}~\cite{Jayapaul2015} study the general
equivalence class sorting problem,
albeit strictly from a sequential perspective.
For example, they show that one can solve the equivalence class sorting
problem using $O(n^2/\ell)$ comparisons, where $\ell$ is the size of the smallest
equivalence class.
They also show that this problem has a lower bound of $\Omega(n^2/\ell^2)$ even
if the value of $\ell$ is known in advance.
The equivalence class sorting problem is, of course,
related to comparison-based
algorithms for computing the majority or mode of a set of elements,
for which there is an extensive set of prior research
(e.g., see~\cite{Alonso1993253,Alonso2013495,DOBKIN1980255,ref11}).
None of these algorithms for majority or mode result in efficient parallel
algorithms for the equivalence class sorting problem, however.
\subsection{Our Results}
In this paper, we study the equivalence class sorting (ECS)
problem from a parallel perspective, providing a
number of new results, including the following:
\begin{enumerate}
\item
The CR version of the ECS problem can be solved in $O(k + \log\log n)$
parallel rounds using $n$ processors, where $k$ is the number of equivalence classes.
\item
The ER version of the ECS problem can be solved in $O(k\log n)$
parallel rounds using $n$ processors, where $k$ is the number of equivalence classes.
\item
The ER version of the ECS problem can be solved in $O(1)$
parallel rounds using $n$ processors, for the case when $\ell$ is at least $\lambda n$, for
a fixed constant $0<\lambda\le 0.4$,
where $\ell$ is the size of the smallest equivalence class.
\item
If every equivalence class is of size $f$, then solving the ECS problem
requires $\Omega(n^2/f)$ total comparisons.
This improves a lower bound of $\Omega(n^2/f^2)$ by
Jayapaul {\it et al.}~\cite{Jayapaul2015}.
\item
Solving the ECS problem requires $\Omega(n^2/\ell)$ total comparisons,
where $\ell$ is the size of the smallest equivalence class.
This improves a lower bound of $\Omega(n^2/\ell^2)$ by
Jayapaul {\it et al.}~\cite{Jayapaul2015}.
\item
In Section~\ref{sec:sort-dists},
we study how to efficiently solve
the ECS problem when the input is drawn from a
known distribution on equivalence classes. In this setting, we assume
$n$ elements have been sampled and fed as input to the algorithm.
We establish a relationship between the mean of the distribution
and the algorithm's total number of comparisons,
obtaining upper bounds with high
probability for a variety of interesting distributions.
\item
We provide the results of several
experiments to validate the
results from Section~\ref{sec:sort-dists} and study how total comparison
counts change as parameters of the distributions change.
\end{enumerate}
Our methods are based on several novel techniques, including
a two-phased compounding-comparison technique for the parallel upper bounds and
the use of a new coloring argument for the lower bounds.
\section{Parallel Algorithms} \label{sec:para-alg}
In this section, we provide efficient parallel algorithms for solving
the equivalence class sorting (ECS) problem in Valiant's parallel
model of computation~\cite{Valiant}.
We focus on both the exclusive-read (ER) and concurrent-read (CR) versions
of the problem, and
we assume we have $n$ processors, each of which
can be assigned to one equivalence comparison test to perform in a given
parallel round.
Note, therefore, that any lower bound, $T(n)$, on the
total number of comparisons needed to solve the ECS problem (e.g., as
given by Jayapaul {\it et al.}~\cite{Jayapaul2015}
and as we discuss in Section~\ref{sec:lower-bounds}), immediately implies
a lower bound of $\Omega(T(n)/n)$ for the number of parallel rounds
of computation using $n$ processors per round.
For instance,
these lower bounds imply that
the number of parallel rounds for solving the ECS problem with $n$ processors
must be $\Omega(n/\ell)$ and $\Omega(k)$, respectively,
where $k$ is the number of equivalence classes and $\ell$ is the size of the
smallest equivalence class.
With respect to upper bounds, recall that
Jayapaul {\it et al.}~\cite{Jayapaul2015}
studied the ECS problem from a sequential perspective.
Unfortunately, their algorithm cannot
be easily parallelized, because the comparisons performed in a ``round'' of
their algorithm depend on the results from other comparisons in that same
round.
Thus, new parallel ECS algorithms are needed.
\subsection{Algorithms Based on the Number of Groups}
In this subsection, we describe CR and ER algorithms based on knowledge
of the number of groups, $k$.
If two sets of elements are sorted into their equivalence classes,
merging the two answers into the answer for the union requires at
most $k^2$ equivalence tests by simply performing a comparison
between every pair of equivalence class one from the first answer
and one from the second. This idea leads to the following
algorithm, which uses a two-phased compounding-comparison technique to
solve the ECS problem:
\begin{enumerate}
\item Initialize a list
of $n$ answers containing the individual input elements.
\item While the number of processors per answer is less than $4k^2$,
merge pairs of answers by performing $k^2$ tests.
\item While there is more than one answer, let $ck^2$ be the number of processors available per answer and merge $c$ answers together by performing at most ${c \choose 2} k^2$ tests between each of the answers.
\end{enumerate}
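To make the merge step concrete, the following sketch merges two partial answers (each a list of equivalence classes) using at most $k^2$ pairwise tests, one representative per class; here \texttt{same\_class} stands for the abstract equivalence test and is an assumed interface. In the parallel setting, all tests in the double loop are independent and can be issued together when enough processors are available.
\begin{verbatim}
def merge_answers(answer_a, answer_b, same_class):
    """answer_a, answer_b: lists of groups (lists of elements) already
    sorted into classes.  Uses at most |answer_a|*|answer_b| <= k^2
    equivalence tests."""
    merged = [list(group) for group in answer_a]
    for group_b in answer_b:
        rep_b = group_b[0]                     # one representative suffices
        for group_a in merged:
            if same_class(group_a[0], rep_b):  # one test per pair of classes
                group_a.extend(group_b)
                break
        else:
            merged.append(list(group_b))       # class not seen in answer_a
    return merged
\end{verbatim}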
We analyze this algorithm in the following two lemmas
and we illustrate it in Figure~\ref{fig:algo-phases}.
\begin{figure*}[tbp]
\centering
\includegraphics[scale=0.8]{crew-algo-table.pdf}
\caption{A visualization of the parallel algorithm with a table on the right keeping track of relevant numbers for each loop iteration.}
\label{fig:algo-phases}
\end{figure*}
\begin{lemma}\label{lem:first-while}
The first while loop takes $O(k)$ rounds to complete.
\end{lemma}
\begin{proof}
In each round the number of equivalence classes in an answer at
most doubles until it reaches the upper bound of $k$. In loop
iteration $i \leq \lceil \log k \rceil$, the answers are size at
most $2^i$ and there are $2^i$ processors per answer. Therefore
it takes at most $2^i$ rounds to merge two answers. The number of
rounds to reach the $\lceil \log k \rceil$ loop iteration is $O(k)$.
For loop iterations $\lceil \log k \rceil < i \le \lceil \log (4k^2) \rceil$, the answers have size at most $k$, but there are still
at most $2^i$ processors per answer. The number of rounds needed
for these iterations is also $O(k)$, as it forms a geometric sum that
adds up to be $O(k)$.
This part of the algorithm is illustrated in the bottom half of
Figure~\ref{fig:algo-phases}.
\end{proof}
\begin{lemma}\label{lem:second-while}
The second while loop takes $O(\log \log n)$ rounds to complete.
\end{lemma}
\begin{proof}
When entering the second while, there are more processors per answer
than needed to merge just two answers at a time. If an answer has
access to $ck^2$ processors, then a group of ${c \choose 2}$ answers
can merge into one answer in a single round. This means that if
there are $n/(ck^2)$ answers at the start of a round, then we merge
groups of $c^2/2$ answers into one answer and there are $n/(c^3k/2)$
answers remaining. Because $c\geq 4$ by the condition of the first
while loop, in the iteration $i$ of the second while loop, there
are at most $n/(2^{2^i}k)$ answers. And so the second while loop
will terminate after $O(\log \log n)$ rounds with the single answer
for the entire input.
This is illustrated in the top half of Figure~\ref{fig:algo-phases}.
\end{proof}
Combining these two lemmas, we get the following.
\begin{theorem}
The CR version of the equivalence class sorting problem on $n$ elements and $k$ equivalence classes can be solved in $O(k + \log \log n)$ parallel rounds of equivalence tests,
using $n$ processors in Valiant's parallel comparison model.
\end{theorem}
\begin{proof}
Lemmas~\ref{lem:first-while}~and~\ref{lem:second-while}.
\end{proof}
We also have the following.
\begin{theorem}
The ER version of the equivalence class sorting problem on $n$ elements and $k$ equivalence classes can be solved in $O(k\log n)$ parallel rounds of equivalence tests,
using $n$ processors in Valiant's parallel comparison model.
\end{theorem}
\begin{proof}
Merging two answers in the ER version of the ECS problem
always takes at most $k$ rounds, since the $k^2$ tests between two answers can be scheduled so that each equivalence class is involved in at most one test per round. Repeatedly merging answers will arrive at one answer in $\log n$ iterations. So equivalence class sorting can be done in $O(k\log n)$ parallel rounds of equivalence tests.
\end{proof}
\subsection{Algorithms Based on the Smallest Group Size}
In this subsection, we describe ER algorithms based on knowledge
of $\ell$, the size of the smallest equivalence class. We assume in this
section that $\ell\ge \lambda n$, for some constant $\lambda>0$, and we
show how to solve the ECS problem in this scenario using $O(1)$
parallel comparison rounds.
Our methods are generalizations of previous methods for
the parallel fault diagnosis problem when there are only two classes, ``good''
and ``faulty''~\cite{Beigel:1989,Beigel:1993,b492587,Goodrich2008199}.
Let us assume, therefore, that there are at least 3 equivalence classes.
We begin with a theorem from Goodrich~\cite{Goodrich2008199}.
\begin{theorem}[Goodrich~\cite{Goodrich2008199}]
\label{thm-fault}
Let $V$ be a set of $n$ vertices, and let $0<\gamma,\lambda<1$.
Let $H_d=(V,E)$ be a directed graph
defined by the union of $d$ independent
randomly-chosen\footnote{That is, $H_d$ is defined by the
union of cycles determined by $d$ random
permutations of the $n$ vertices in $V$, so $H_d$ is, by definition,
a simple directed graph.}
Hamiltonian cycles on $V$
(with all such cycles equally likely).
Then, for all subsets $W$ of $V$ of $\lambda n$ vertices,
$H_d$ induces at least one strongly connected component on $W$ of
size greater than $\gamma\lambda n$,
with probability at least
\[
1 - e^{n[(1+\lambda) \ln 2 + d(\alpha \ln \alpha + \beta \ln \beta
- (1-\lambda) \ln (1-\lambda))] + O(1)} ,
\]
where $\alpha=1-\frac{1-\gamma}{2} \lambda$
and $\beta=1-\frac{1+\gamma}{2} \lambda$.
\end{theorem}
In the context of the present paper,
let us take $\gamma=1/4$, so
$\alpha=1-(3/8)\lambda$ and $\beta=1-(5/8)\lambda$.
Let us also assume that $\lambda\le 0.4$, since we are considering the
case when the number of equivalence classes is at least $3$; hence, the smallest
equivalence class is of size at most $n/3$.
Unfortunately, using standard approximations for the natural
logarithm is not sufficient for us to employ the above probability bound
for small values of $\lambda$.
So instead we use the following inequalities,
which hold for $x$ in the range $[0,0.4]$ (e.g., see~\cite{Kozma}),
and are based on the Taylor series for the natural logarithm:
\[
-x - \frac{x^2}{2} - \frac{x^3}{2} \le \ln (1-x)
\le -x - \frac{x^2}{2} - \frac{x^3}{4}.
\]
These bounds allow us to bound the main term, $t$, in the above probability
of Theorem~\ref{thm-fault}
(for $\gamma=1/4$) as follows:
\begin{eqnarray*}
t &=&
\alpha \ln \alpha\ +\ \beta \ln \beta
- (1-\lambda) \ln (1-\lambda) \\
&=&
(1-\frac{3}{8}\lambda) \ln (1-\frac{3}{8}\lambda)\ +\ (1-\frac{5}{8}\lambda)
\ln (1-\frac{5}{8}\lambda) \\
&& -\ (1-\lambda) \ln (1-\lambda) \\
&\le&
(1-\frac{3}{8}\lambda) \left(-\frac{3}{8}\lambda -
\frac{1}{2}\left(\frac{3}{8}\lambda\right)^2
- \frac{1}{4}\left(\frac{3}{8}\lambda\right)^3\right) \\
&&+\ (1-\frac{5}{8}\lambda) \left(-\frac{5}{8}\lambda -
\frac{1}{2}\left(\frac{5}{8}\lambda\right)^2
- \frac{1}{4}\left(\frac{5}{8}\lambda\right)^3\right) \\
&&-\ (1-\lambda) \left(-\lambda -
\frac{\lambda^2}{2} - \frac{\lambda^3}{2}\right) \\
&\le& -\frac{3743}{8192}\lambda^4 + \frac{19}{256} \lambda^3 -
\frac{15}{64} \lambda^2,
\end{eqnarray*}
which, in turn, is at most
\[
-\frac{\lambda^2}{8},
\]
for $0<\lambda\le 0.4$.
Thus, since this bound is negative
for any constant $0<\lambda\le 0.4$, we can set $d$ to be a constant
(depending on $\lambda$) so that
Theorem~\ref{thm-fault} holds with high probability.
Our ECS algorithm, then, is as follows:
\begin{enumerate}
\item
Construct a graph, $H_d$, as in Theorem~\ref{thm-fault},
as described above, with $d$ set
to a constant so that
the theorem holds for the fixed $\lambda$ in the range
$(0,0.4]$ that is given.
Note that this step does not require
any comparisons; hence, we do not count the time for this step in our
analysis (and the theorem holds with high probability in any case).
\item
Note that $H_d$ is a union of $d$ Hamiltonian cycles.
Thus, let us perform all the comparisons in $H_d$ in $2d$ rounds.
Furthermore, we can do this set of comparisons
even for the ER version of the problem.
Moreover, since $d$ is $O(1)$, this step involves a constant number of parallel
rounds (of $O(n)$ comparisons per round).
\item
For each strongly connected component, $C$, in $H_d$ consisting of elements
of the same equivalence class, compare the elements in $C$ with the other
elements in $S$, taking $|C|$ at a time.
By Theorem~\ref{thm-fault}, $|C|\ge \lambda n/8$. Thus, this step can
be performed
in $O(1/\lambda)=O(1)$ rounds for each connected component; hence it
requires $O(1)$ parallel rounds in total.
Moreover, after this step completes, we will necessarily have identified
all the members of each equivalence class.
\end{enumerate}
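The following Python sketch issues the same equivalence tests as the algorithm above, sequentially and without the round structure: it tests along $d$ random Hamiltonian cycles, groups elements connected by ``equivalent'' answers, and then compares a representative of each sufficiently large group against every element. The oracle \texttt{equivalent}, the parameter \texttt{lam} (standing for $\lambda$), and the choice of threshold are our own interface choices; treating the equivalent edges as undirected is a simplification that does not change which tests are issued.
\begin{verbatim}
import random

def ecs_constant_rounds(elements, equivalent, lam, d=8):
    n = len(elements)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    # Steps 1-2: equivalence tests along the edges of d random
    # Hamiltonian cycles on the n elements.
    for _ in range(d):
        perm = random.sample(range(n), n)
        for a, b in zip(perm, perm[1:] + perm[:1]):
            if equivalent(elements[a], elements[b]):
                parent[find(a)] = find(b)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    # Step 3: every class is promised to contain a component of size at
    # least lam*n/8; compare one representative of each such component
    # against all elements to recover the full classes.
    classes, classified = [], set()
    for members in comps.values():
        if len(members) >= lam * n / 8 and members[0] not in classified:
            rep = members[0]
            cls = [i for i in range(n) if equivalent(elements[rep], elements[i])]
            classes.append(cls)
            classified.update(cls)
    return classes

# Example: three classes of equal size, so lam = 1/3.
labels = [i % 3 for i in range(300)]
print([len(c) for c in ecs_constant_rounds(labels, lambda a, b: a == b, lam=1/3)])
\end{verbatim}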
We summarize as follows.
\begin{theorem}
Suppose $S$ is a set of $n$ elements,
such that the smallest equivalence class in $S$ is of size at least $\lambda n$,
for a fixed constant, $\lambda$, in the range $(0,0.4]$.
Then the
ER version of the equivalence class sorting problem on $S$ can be solved
in $O(1)$ parallel rounds using $n$ processors in Valiant's parallel comparison
model.
\end{theorem}
This theorem is true regardless of whether or not $\lambda$ is known. If the value of $\lambda$ is not known, it is possible to repeatedly run the ECS algorithm starting with an arbitrary constant of $0.4$ for $\lambda$ and halving the constant whenever the algorithm fails. Once the value is less than the unknown $\lambda$, the algorithm will succeed and the number of rounds will be independent of $n$ and a function of only the constant $\lambda$.
As we show in the next section, this performance is optimal when
$\ell\ge \lambda n$, for a fixed constant $\lambda\in(0,0.4]$.
\section{Lower Bounds} \label{sec:lower-bounds}
The following lower bound questions were left open by
Jayapaul {\it et al.}~\cite{Jayapaul2015}:
\begin{itemize}
\item
If every equivalence class has size $f$, is the
total number of comparisons needed to solve
the equivalence class sorting problem
$\Theta(n^2/f)$ or $\Theta(n^2/f^2)$?
\item
Is the total number of
comparisons
for finding an element in the smallest equivalence class $\Theta(n^2/\ell)$ or $\Theta(n^2/\ell^2)$?
\end{itemize}
Speaking loosely, these lower bounds can be thought of as a question of how difficult it is for an element to locate its equivalence class. The $\Theta(n^2/f)$ and $\Theta(n^2/\ell)$ bounds can be interpreted as saying the average element needs to compare to at least one element in most of the other equivalence classes before it finds an equivalent element. Because there must be ${x \choose 2}$ comparisons between $x$ equivalence classes, the $\Theta(n^2/f^2)$ and $\Theta(n^2/\ell^2)$ bounds say we do not need too many more comparisons than the very minimal number needed just to differentiate the equivalence classes. It seems unlikely that so few comparisons are required, and we prove that this intuition is correct by proving lower bounds of $\Omega(n^2/f)$ and $\Omega(n^2/\ell)$ comparisons.
Note that these lower bounds are on the total number of comparisons needed to accomplish a task; that is, they bound the work a parallel algorithm would need to perform. By dividing by $n$, they also give simple bounds on the number of rounds needed in either the ER or CR models.
With respect to such lower bound questions as these,
let us maintain the state of an algorithm's
knowledge about element relationships in a simple graph. At each
step, the vertex set of this graph is a partition of the elements
where each set is a partially discovered equivalence class for $S$.
Thus, each element in $S$ is associated with exactly one vertex in this graph
at each step of the algorithm, and a vertex can have multiple elements
from $S$ associated with it.
If a pair of elements was compared and found to not be equal, then
there should be an edge between the two vertices containing those
elements. So initially the graph has a vertex for each element and
no edges. When an algorithm tests equivalence for a pair of elements,
then, if the elements are not equivalent, the appropriate edge is
added (if it is absent) and, if the elements are equivalent, the two
corresponding vertices are contracted into a new vertex whose set
is the union of the two. A depiction of this is shown in
\autoref{fig:equiv-test}. An algorithm has finished sorting once
this graph is a clique and the vertex sets are the corresponding
equivalence classes.
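The bookkeeping described above can be captured by the following small Python sketch, which we use only to fix the terminology of the adversary arguments; \texttt{record} plays back one equivalence test, contracting vertices on an ``equal'' answer and adding an edge otherwise.
\begin{verbatim}
class KnowledgeGraph:
    # Vertices are partially discovered equivalence classes; an edge
    # records a "not equal" answer between two classes.  Sorting is
    # finished once the graph on the remaining vertices is a clique.
    def __init__(self, elements):
        self.members = {x: {x} for x in elements}  # representative -> class
        self.rep = {x: x for x in elements}        # element -> representative
        self.edges = set()                         # frozensets of two reps
    def record(self, x, y, equal):
        rx, ry = self.rep[x], self.rep[y]
        if rx == ry:
            return
        if equal:   # contract the vertex of y into the vertex of x
            for z in self.members[ry]:
                self.rep[z] = rx
            self.members[rx] |= self.members.pop(ry)
            self.edges = {frozenset(rx if r == ry else r for r in e)
                          for e in self.edges if e != frozenset((rx, ry))}
        else:       # remember that the two classes differ
            self.edges.add(frozenset((rx, ry)))
    def finished(self):
        k = len(self.members)
        return len(self.edges) == k * (k - 1) // 2

g = KnowledgeGraph(range(5))
g.record(0, 3, equal=True)    # contract the vertices of 0 and 3
g.record(0, 1, equal=False)   # add an edge between {0, 3} and {1}
\end{verbatim}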
\begin{figure*}[t]
\centering
\includegraphics[scale=0.8]{equiv-test.pdf}
\caption{We test if $x$ and $y$ are in the same equivalence class. If they are, their vertices are contracted together. If they are not, an edge is added.}
\label{fig:equiv-test}
\end{figure*}
An \emph{equitable $k$-coloring} of a graph is a proper coloring such that the size of each color class is either $\lfloor n/k \rfloor$ or $\lceil n/k \rceil$. A \emph{weighted equitable $k$-coloring} of a vertex-weighted graph is a proper coloring such that the sum of the weights in each color class is either $\lfloor n/k \rfloor$ or $\lceil n/k \rceil$. Examples of these can be seen in \autoref{fig:equitable-colorings}.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.8]{equitable-colorings.pdf}
\caption{On the left we have a graph with an equitable $3$-coloring and on the right we have a graph with a weighted equitable $3$-coloring.}
\label{fig:equitable-colorings}
\end{figure*}
An adversary for the problem of equivalence class
sorting when every equivalence class has the same size $f$ (so $f$ divides $n$) must maintain that the graph has a weighted equitable $n/f$-coloring where the weights are the sizes of the vertex sets. The adversary we describe here will maintain such a coloring and additionally mark the elements and the color classes in a special way. It proceeds as follows.
First, initialize an arbitrary equitable coloring on the starting
graph that consists of $n$ vertices and no edges.
For each comparison of two elements done by the adversary algorithm, let us
characterize how we react based on the following case analysis:
\begin{itemize}
\item
If either of the elements is unmarked and this comparison would
increase its degree to higher than $n/4f$, then mark it as having
``high'' element degree.
\item
If either element is still unmarked, they currently have the same
color, and there is another unmarked vertex such that it is not
adjacent to a vertex with the color involved in the comparison and
no vertex with its color is adjacent to the unmarked vertex in the
comparison (i.e. we can have it swap colors with one of the vertices
in the comparison), then swap the color of that element and the unmarked
element in the comparison.
\item
If either element is still unmarked, they currently have the same
color, and there is no other unmarked vertex with a different
unmarked color not adjacent to the color of the two elements being
compared, then mark all elements with the color involved in the
comparison as having ``high'' color degree and mark the color as having
``high'' degree.
\item At this point,
either both elements are marked and we answer based on their color,
or one of the elements is unmarked and they have different colors,
so we answer ``not equal'' to the adversary algorithm.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[scale=1.2]{adversary.pdf}
\caption{Three cases of how the adversary works to mark vertices and swap colors. The dashed line indicates the two elements being compared. Marked vertices are denoted with stars.}
\label{fig:adversary}
\end{figure*}
At all times, the vertices that contain unmarked elements all have
weight one, because the adversary only answers equivalent for
comparisons once both vertices are marked. When a color class is
marked, all elements in that color class are marked as having ``high''
color degree. A few of the cases the adversary goes through are
depicted in Figure~\ref{fig:adversary}.
\begin{lemma}\label{lem:comp-count}
If $n/8$ elements are marked during the execution of an algorithm, then $\Omega(n^2/f)$ comparisons were performed.
\end{lemma}
\begin{proof}
There are three types of marked vertices: those with ``high'' element
degree marks, those with ``high'' color degree marks,
and those with both marks.
The color classes must have been marked as having ``high'' degree
when a comparison was being performed between two elements of that
color class and there were no unmarked color candidates to swap
colors with. Because one of the elements in the comparison had
degree less than $n/4f$, only a quarter of the elements have a color
class it cannot be swapped with. So if there were at least $n-n/4$
unmarked elements in total, then the elements in the newly marked color
class must have been in a comparison $n/2$ times.
The ``high'' element degree
elements were involved in at least $n/4f$ comparisons each.
So if $i$ color classes were marked and $j$ elements were
only marked with ``high'' element degree, then
the marked elements must have been a part of a test at least
$ni/2 + nj/4f \geq (i + j/f)n/4$ times.
Once $if + j \geq n/8$, then at least $n^2/64f$ equivalence tests
were performed.
\end{proof}
\begin{theorem}
If every equivalence class has the same size $f$, then sorting requires at least $\Omega(n^2/f)$ equivalence comparisons.
\end{theorem}
\begin{proof}
When an algorithm finishes sorting, each vertex will have weight
$f$ and so the elements must all be marked. Thus, by
\autoref{lem:comp-count}, at least $\Omega(n^2/f)$ comparisons must
have been performed.
\end{proof}
We also have the following lower bound as well.
\begin{theorem}
Finding an element in the smallest equivalence class, whose size is $\ell$, requires at least $\Omega(n^2/\ell)$ equivalence comparisons.
\end{theorem}
\begin{proof}
We use an adversary argument similar to the previous one, but we start
with $\ell$ vertices colored a special \emph{smallest class color
(scc)} and separate the remaining $n-\ell$ vertices into $\lfloor(n
- \ell)/(\ell + 1)\rfloor$ color classes of size $\frac{n-\ell}{\lfloor(n
- \ell)/(\ell + 1)\rfloor}$ or $\frac{n-\ell}{\lfloor(n - \ell)/(\ell +
1)\rfloor} + 1$.
There are two changes to the previous adversary responses. First,
the degree requirement for having ``high'' degree is now $n/4\ell$.
Second, if an scc element is about to be marked as having ``high'' degree,
we attempt to swap its color with any valid unmarked vertex. Otherwise,
we proceed exactly as before.
If an algorithm attempts to identify an element as belonging to the
smallest equivalence class, no scc elements are marked, and
there have been fewer than $n/8$ elements marked, then the identified
element must be able to be swapped with a different color and the
algorithm made a mistake. Therefore, to derive a lower bound
for the total number of comparisons,
it suffices to derive a lower bound for the number of equivalence tests
until an scc element is marked.
The scc color class cannot be marked as having ``high'' color degree
until at least one scc element has high element degree. However,
as long as fewer than $n/8$ elements are marked, we will never mark
an scc element with ``high'' degree. So at least $n/8$ elements need
to be marked as having ``high'' element degree or ``high'' color degree and,
by the same type of counting as in Lemma~\ref{lem:comp-count},
$\Omega(n^2/\ell)$ equivalence tests are needed. \end{proof}
\section{Sorting Distributions} \label{sec:sort-dists}
In this section, we study a version of the equivalence class sorting
problem where we are
given a distribution, $D$, on a countable set, $S$,
and we wish to enumerate the set in order of most likely to
least likely, $s_0, s_1, s_2,\dots$.
For example, consider the following distributions:
\begin{itemize}
\item
Uniform: In this case, $D$ is a distribution on $k$ equivalence classes,
with each equivalence class being equally likely for every element of $S$.
\item
Geometric: Here, $D$ is a distribution such that the $i$th
most probable equivalence class has probability $p^i(1-p)$.
Each element ``flips'' a biased coin where ``heads'' occurs with probability
$p$ until it comes up ``tails.'' Then that element is in equivalence
class $i$ if it flipped $i$ heads.
\item
Poisson:
In this case, $D$ is a model of the number of times an event occurs in
an interval of time, with an expected number of events determined by a
parameter $\lambda$.
Equivalence class $i$ is defined to be all the samples
that have the same number of events occurring, where the probability of
$i$ events occurring is
\[
\frac{\lambda^i e^{-\lambda}}{i!}\ .
\]
\item
Zeta:
This distribution, $D$, is related to Zipf's law, and models when the
sizes of the equivalence classes follow a power law, based on
a parameter, $s>1$, which is common
in many real-world scenarios, such as the frequency of words in natural
language documents.
With respect to equivalence classes, the $i$th equivalence class
has probability
\[
\frac{i^{-s}}{\zeta(s)},
\]
where $\zeta(s)$ is the Riemann zeta function (which normalizes the probabilities
to sum to 1).
\end{itemize}
So as to number equivalence classes from most likely to least likely,
as $i=0,1,\ldots$,
define $D_\mathbb{N}$ to be a distribution on the natural numbers such that
\[
\Pr_{x\sim D_\mathbb{N}} \left[ x = i\right] = \Pr_{y\sim D} \left[ y = s_i\right].
\]
Furthermore,
so as to ``cut off'' this distribution at $n$,
define $D_\mathbb{N}(n)$ to be a distribution on the natural numbers less
than or equal to $n$ such that, for $0 \leq i<n$,
\[
\Pr_{x\sim D_\mathbb{N}(n)} \left[ x = i\right] = \Pr_{y\sim D_\mathbb{N}} \left[ y = i\right]
\]
and
\[
\Pr_{x\sim D_\mathbb{N}(n)} \left[ x = n\right] = \Pr_{y\sim D_\mathbb{N}} \left[ y \geq n\right].
\]
That is, we are ``piling up'' the tail of the $D_\mathbb{N}$ distribution
on $n$.
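Operationally, a draw from $D_\mathbb{N}(n)$ is a draw of a class index from $D_\mathbb{N}$ with every index of at least $n$ clamped to $n$. A minimal NumPy sketch follows; the geometric example and the sampler interface are our own choices.
\begin{verbatim}
import numpy as np

def sample_DN_n(sample_index, n, size):
    # Draw `size` class indices from D_N and pile the tail onto n.
    draws = np.array([sample_index() for _ in range(size)])
    return np.minimum(draws, n)

# Example: geometric distribution, class i with probability p^i (1-p).
rng = np.random.default_rng(0)
p = 0.5
samples = sample_DN_n(lambda: rng.geometric(1 - p) - 1, n=100, size=10)
\end{verbatim}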
The following theorem shows that we can use $D_\mathbb{N}(n)$ to bound
the number of comparisons in an ECS algorithm
when the equivalence classes are drawn from $D$.
In particular, we focus here on
an algorithm by Jayapaul {\it et al.}~\cite{Jayapaul2015} for equivalence
class sorting,
which involves a round-robin testing regimen, such
that each element, $x$, initiates
a comparison with the next element, $y$, with an unknown relationship to $x$,
until all equivalence classes are known.
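Our reading of this round-robin scheme is sketched below in Python, purely as a sequential simulation for counting comparisons; the details of the original algorithm of Jayapaul {\it et al.} may differ. Each element initiates at most one test per round, with the next element (in cyclic order) whose relation to it cannot yet be inferred from the answers so far.
\begin{verbatim}
def round_robin_ecs(elements, equivalent):
    n = len(elements)
    rep = list(range(n))                 # representative of each element
    members = {i: [i] for i in range(n)}
    unequal = set()                      # frozensets of two representatives
    def known(i, j):
        return rep[i] == rep[j] or frozenset((rep[i], rep[j])) in unequal
    comparisons, done = 0, False
    while not done:
        done = True
        for x in range(n):               # each element initiates one test
            for step in range(1, n):
                y = (x + step) % n
                if known(x, y):
                    continue
                done = False
                comparisons += 1
                if equivalent(elements[x], elements[y]):
                    rx, ry = rep[x], rep[y]
                    for z in members[ry]:
                        rep[z] = rx
                    members[rx] += members.pop(ry)
                    unequal = {frozenset(rx if r == ry else r for r in e)
                               for e in unequal if e != frozenset((rx, ry))}
                else:
                    unequal.add(frozenset((rep[x], rep[y])))
                break
    return list(members.values()), comparisons

# Example: classes, cost = round_robin_ecs([0, 1, 0, 2, 1, 0],
#                                          lambda a, b: a == b)
\end{verbatim}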
\begin{theorem}\label{thm:dist-runtime}
Given a distribution, $D$, on a set of equivalence classes,
then $n$ elements whose equivalence classes are
independently drawn from $D$ can be equivalence class sorted using
a total number of comparisons stochastically dominated by twice the sum of $n$ draws
from the distribution $D_\mathbb{N}(n)$.
\end{theorem}
\begin{proof}
Let $V_i$ denote the random variable that is equal to the natural number
corresponding to the equivalence class of element $i$
in $D_\mathbb{N}(n)$.
We denote the number of
elements in equivalence class $i$ as $Y_i$.
Let us denote
the number of equivalence tests performed by
the algorithm
by Jayapaul {\it et al.}~\cite{Jayapaul2015}
using the random variable, $R$.
By a lemma from~\cite{Jayapaul2015},
for any pair of equivalence classes, $i$ and $j$,
the round-robin ECS algorithm
performs at most $2 \min(Y_i,Y_j)$ equivalence tests in total.
Thus, the total number of equivalence tests in our distribution-based
analysis is upper bounded by
\begin{eqnarray*}
R & \leq & \sum_{i=0}^\infty \sum_{j=0}^{i-1} 2\min(Y_i,Y_j)\\
& = & 2\sum_{i=0}^n \sum_{j=0}^{i-1} \min(Y_i,Y_j)
+ 2\sum_{i=n+1}^\infty \sum_{j=0}^{i-1} \min(Y_i,Y_j)\\
&\leq& 2\sum_{i=0}^n \sum_{j=0}^{i-1} Y_i + 2\sum_{i=n+1}^\infty nY_i\\
& \leq & 2\left(\sum_{i=0}^n i Y_i + \sum_{i=n+1}^\infty n Y_i\right) = 2 \sum_{i=1}^{n} V_i
\end{eqnarray*}
The second line in the above simplification
is a simple separation of the double summation
and the third line follows because $\sum_{j=0}^{i-1} \min(Y_i,Y_j)$ is zero
if $Y_i$ is zero and at most $n$, otherwise.
So the total number of comparisons in the algorithm is
bounded by twice the sum of $n$ draws from $D_\mathbb{N}(n)$.
\end{proof}
Given this theorem, we can apply it to a number of distributions to
show that the total number of comparisons performed is linear with
high probability.
\begin{theorem}
If $D$ is a discrete uniform, a geometric, or a Poisson distribution on a set of equivalence classes, then it is possible to equivalence class sort
using a linear total number of comparisons
with exponentially high probability.
\end{theorem}
\begin{proof}
The sum of $n$ draws from
$D_\mathbb{N}(n)$ is stochastically dominated
by the sum of $n$ draws from $D_\mathbb{N}$.
Let us consider each distribution in turn.
\begin{itemize}
\item
Uniform:
The sum of $n$ draws from a discrete uniform distribution
is bounded by $n$ times the maximum value.
\item
Geometric:
Let $p$ be the parameter of a geometric distribution and let
$X = \sum_{i=0}^{n-1} X_i$ where the $X_i$ are drawn from $\mathop{Geom}(p)$,
which is, of course, related to the Binomial distribution, $\mathop{Bin}(n,p)$,
where one flips $n$ coins with probability $p$ and records
the number of ``heads.''
Then, by a Chernoff bound
for the geometric distribution (e.g., see~\cite{mitzenmacher2005probability}),
\begin{eqnarray*}
\Pr[X - (1/p)n > k] & = & \Pr[\mathop{Bin}(k + (1/p)n,p) < n] \\
& \leq & e^{-2\frac{(pk + n - n)^2}{k + (1/p)n}}\\
\Pr[X > (2/p)n] & \leq & e^{-np}
\end{eqnarray*}
\item
Poisson:
Let $\lambda$ be the parameter of a Poisson distribution and let $Y = \sum_{i=0}^{n-1} Y_i$ where the $Y_i$ are drawn from $\mathop{Poisson}(\lambda)$.
Then, by a Chernoff bound
for the Poisson distribution (e.g., see~\cite{mitzenmacher2005probability}),
\begin{eqnarray*}
\Pr[Y > (\lambda (e-1) + 1) n] & = & \Pr[e^Y > e^{(\lambda (e-1) + 1)n}] \\
& \leq & \frac{(E[e^{Y_i}])^n}{e^{(\lambda (e-1) + 1)n}} \\
& = & \frac{e^{\lambda(e - 1)n}}{e^{(\lambda (e-1) + 1)n}} = e^{-n}
\end{eqnarray*}
\end{itemize}
So, in each case with exponentially high probability, the sum of $n$ draws
from the distribution is $O(n)$ and the round-robin
algorithm does $O(n)$ total equivalence tests.
\end{proof}
We next address the zeta distribution.
\begin{theorem}
Given a zeta distribution with parameter $s>2$, $n$ elements whose equivalence classes are independently drawn from the zeta distribution can be equivalence class sorted with $O(n)$ total work in expectation.
\end{theorem}
\begin{proof}
When $s > 2$, the mean of the zeta distribution is
\[
\frac{\zeta(s-1)}{\zeta(s)},
\]
which is a constant.
So the sum of $n$ draws from the distribution is expected to be linear.
Therefore, the expected total number of
comparisons in the round-robin algorithm is linear.
\end{proof}
Unfortunately, for zeta distributions it is not immediately
clear whether it is possible to improve the above theorem
so that the total number of comparisons is shown to be linear
when $2 \geq s > 1$, or to obtain high-probability versions of these bounds.
This uncertainty
motivates us to look experimentally at how
different values of $s$ cause the runtime to behave.
Likewise, our high-probability bounds on the total number
of comparisons in the round-robin algorithm for the other distributions
invite experimental analysis as well.
\section{Experiments} \label{sec:sort-exper}
In this section, we report on experimental validations of
the theorems from the
previous section and investigations of the behavior of running
the round-robin algorithm on the
zeta distribution. For the uniform, geometric, and Poisson distributions,
we ran ten tests on sizes of $10,000$ to $200,000$ elements
incrementing in steps of $10,000$. For the zeta distribution,
because setting $s < 2$ seems to lead to a super linear number of
comparisons, we reduced the test sizes by a factor of $10$ and ran
ten tests each on sizes from $1,000$ to $20,000$ in increments of
$1,000$. For each distribution we used the following parameter
settings for various experiments:
\begin{center}
\begin{tabular}{ l l }
Uniform: & $k = 10,25,100$\\
Geometric: & $p = \frac{1}{2},\frac{1}{10},\frac{1}{50}$\\
Poisson: & $\lambda = 1,5,25$\\
Zeta: & $s = 1.1,1.5,2,2.5$
\end{tabular}
\end{center}
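The class labels for these experiments can be drawn as in the NumPy sketch below, using the parameter names of the table above; feeding the labels, with equality of labels as the equivalence test, into a round-robin simulation such as the one sketched in Section~\ref{sec:sort-dists} produces one data point.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def draw_classes(dist, param, n):
    # Draw n equivalence-class labels from the named distribution.
    if dist == "uniform":     # k equally likely classes
        return rng.integers(0, param, size=n)
    if dist == "geometric":   # class i with probability p^i (1-p)
        return rng.geometric(1 - param, size=n) - 1
    if dist == "poisson":
        return rng.poisson(param, size=n)
    if dist == "zeta":        # class i >= 1 with probability i^-s / zeta(s)
        return rng.zipf(param, size=n)
    raise ValueError(dist)

labels = draw_classes("zeta", 2.0, 10_000)
\end{verbatim}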
The results of these tests are plotted in Figure~\ref{fig:exper-results}.
Best fit lines were fitted whenever we have theorems
stating that there will be a linear number of comparisons with
high probability or in expectation (i.e., everything except for
zeta with $s < 2$).
We include extra plots of the zeta distribution tests
with the $s=1.1$ data and
the $s = 1.1,1.5$ data removed to better see the other data sets.
\begin{figure*}[t]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{uniform.png}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{geometric.png}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{poisson.png}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{zeta_all.png}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{zeta_no11.png}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[scale=0.35]{zeta_no1115.png}
\end{subfigure}
\caption{The results of the experiments are plotted and best fit lines are placed when we have a linear number of comparisons with high probability or in expectation.}
\label{fig:exper-results}
\end{figure*}
We can see from the data that the number of comparisons for
the uniform, geometric, and Poisson distributions are so tightly concentrated
around the best fit line that only one data point is visible.
Contrariwise, the data points for the zeta distributions do not cluster nearly as nicely.
Even when we have linear expected comparisons with $s=2$, the data points
vary by as much as $10\%$.
\section{Conclusion}
In this paper we have
studied the equivalence class sorting problem,
from a parallel perspective,
giving several new algorithms,
as well as new lower bounds and distribution-based analysis.
We leave as open problems the following interesting questions:
\begin{itemize}
\item
Is it possible to find all equivalence
classes in the ER version of the ECS problem
in $O(k)$ parallel rounds, for $k\ge 3$, where $k$ is the number
of equivalence classes?
Note that the answer is ``yes'' for $k=2$, as it follows from previous
results for the parallel fault diagnosis
problem~\cite{Beigel:1989,Beigel:1993,b492587}.
\item
Is it possible to bound the number of comparisons away from
$O(n^2)$ for the zeta distribution when $s<2$ even just in expectation?
\item
Is it possible to prove a high-probability concentration bound for
the zeta distribution, similar to the concentration bounds we proved
for other distributions?
\end{itemize}
\subsection*{Acknowledgments}
This research was supported in part by
the National Science Foundation under grant 1228639
and a gift from the 3M Corporation.
We would like to thank David Eppstein and Ian Munro
for several helpful discussions
concerning the topics of this paper.
\label{sec:introduction}
The Discrete-Time Quantum Walk (DTQW, or quantum walk for
short)\cite{kempe_2003}, a quantum mechanical generalization of the
random walk, has in recent years received more and more attention
from both the theoretical and the experimental side. The main drive to
understand the properties of the DTQW comes from its possible use for
quantum information processing, be it quantum search
algorithms\cite{Shenvi03}, or even general purpose quantum
computation\cite{dtqw_universal}. Experiments on quantum walks range
from realizations on trapped
ions\cite{travaglione_02,roos_ions,schmitz_ion}, to cold atoms in
optical lattices\cite{meschede_science,alberti_electric_experiment},
and on light on an optical
table\cite{gabris_prl,schreiber_science,peruzzo_science_2010,
white_photon_prl,sciarrino_twoparticle}, but there are many
other experimental proposals\cite{rydberg_walk,kalman_09}.
The distinguishing feature of quantum walks is that on regular graphs,
they spread faster than their classical counterparts: the
root-mean-square distance of the walker from the origin increases with
the number $N$ of steps as $\mathcal{O}(N)$, rather than
$\mathcal{O}(\sqrt{N})$ as in the classical case. This can be put to good
use for algorithms based on quantum walks\cite{Shenvi03} that find a
marked state among $N$ states in only $\mathcal{O}(\sqrt{N})$ steps,
outperforming their classical counterparts -- the same scaling
advantage as of the Grover algorithm\cite{grover_prl}, which can also
be understood as a DTQW. The intuitive explanation for this ballistic
scaling is that a DTQW can be seen as a stroboscopic simulator for an
effective Hamiltonian, and thus, in a clean system, its eigenstates
are plane waves.
If we understand a DTQW to be a stroboscopic simulator for a
Hamiltonian, we can expect that static disorder can impede the
spreading of the walk, even bringing it to a complete standstill,
through Anderson localization\cite{tiggelen_phystoday}.
This prediction has been mathematically proven for some types of
one-dimensional DTQWs\cite{joye_10,ahlbrecht_2011}, and even observed
in an optical implementation \cite{gabris_anderson}. However, even in
one dimension, some types of disorder lead to a slow, subdiffusive
spreading of the walk rather than complete
localization\cite{obuse_delocalization}; this phenomenon can also be
explained in terms of the effective
Hamiltonian\cite{obuse_delocalization,brouwer_delocalization}. Two-dimensional
DTQWs are also expected to suffer Anderson
localization\cite{Svozilik2012}, although in some cases disorder causes
diffusion\cite{jonathan_2014}.
In this paper we address the question: is there a way to create an
efficient transport channel in a 2-dimensional split-step DTQW (2DQW)
that defeats localization even if static disorder is present? We take
a DTQW on a square lattice, with two special sites: $A$, where the
walk is started from, and $B$, where we want the walker to ultimately
end up, rather than escaping to infinity or remaining in the vicinity
of $A$. To create a channel, we cut links on the lattice, thus
restricting the movement of the walker. The first idea, cutting out a
narrow island, with $A$ on the one end, and $B$ on the other, is
rendered ineffective by static disorder. We find a somewhat
counterintuitive strategy that does work, however: cutting the links
along a single line connecting $A$ to $B$ creates a conveyor belt for
the walker, transporting it efficiently and ballistically from $A$ to
$B$ even in the presence of considerable amount of static disorder.
The way that a cut along a line on the lattice of the quantum walk
forms a robust conveyor belt for the walker is reminiscent of how
electrons are transported along line defects by edge states in
topological insulators\cite{rmp_kane}. This seems to be a promising
direction for an understanding of the transport mechanism, since the
effective Hamiltonians of DTQWs can be engineered to realize all
classes of topological phases in 1 and 2
dimensions\cite{kitagawa_exploring}. However, the effective
Hamiltonian of the 2DQW is topologically
trivial\cite{kitagawa_exploring}. Thus, if there is a bulk topological
invariant protecting these states from disorder, it is not covered by
standard theory\cite{schnyder_tenfold}.
The topological structure of DTQWs is in fact richer than that of
time-independent Hamiltonians, and exploration of that structure is
far from complete. The telltale signs of extra topology are protected
edge states at the edges of bulks where the topological invariants of
the effective Hamiltonian predict none. An example is one-dimensional
DTQWs with chiral symmetry, where such edge states have been detected
in an optical experiment\cite{kitagawa_observation}, and have been
predicted to exist between two bulks with the same effective
Hamiltonian\cite{asboth_prb}. In that case, the extra topological
structure responsible for the protection of these states has been
found, and can be described based on time-delayed effective
Hamiltonians\cite{asboth_2013}, scattering
matrices\cite{scattering_walk2014}, or as winding numbers of one part
of the timestep operator\cite{asboth_2014}. Edge states between two
bulks in the 2DQW have been found
numerically\cite{kitagawa_introduction}, but the extra topological
invariants that they indicate are unknown.
In this paper we show that there are chiral (one-way propagating)
edge states along a cut in a 2DQW, and identify the bulk topological
invariant responsible for their appearance. We map the quantum walk to
a periodically driven Hamiltonian, and thus identify the invariant as
the winding number found by Rudner et al.\cite{rudner_driven}, which
we refer to as Rudner invariant.
The paper is structured as follows. We introduce the type of 2DQW we
consider, together with the prescription of how to cut links on the
graph, in Section \ref{sec:definitions}. Then, in Section
\ref{sec:first_transport}, we consider two strategies to enhance
transport in the 2DQW: in a clean case, the straightforward, ``island
cut'' approach works fine, but in the presence of disorder, only the
less intuitive, ``line cut'' approach gives efficient transport. We
show that there are edge states along the line cut in Section
\ref{sec:edge_states}. In Section \ref{sec:top_invariants} we find the
bulk topological invariants responsible for the edge states. In
Sect.~\ref{sec:robustn-conv-belt} we consider the effects of disorder
on the edge state transport.
\section{Definitions }
\label{sec:definitions}
Of the wide variety of two-dimensional quantum walks, we choose the
split-step walk on a square lattice (2DQW), defined in
Ref.~\onlinecite{kitagawa_exploring}, for its simplicity: it requires
only two internal states for the walker. In
this section we recall the definition of the 2DQW,
introduce the conditional wavefunction method, which allows us to treat
transport in the quantum walk setting, and discuss how to cut links in the
quantum walk and how disorder is introduced.
\begin{figure}
\includegraphics[width=6cm]{Fig1}
\caption{Layout of the 2-dimensional quantum walk, with a source at
$A$, a detector at the target site at $B$, and detectors at the
edges. For the conditional wavefunction, the detectors play the role
of absorbers.
}
\label{fig:alice_bob}
\end{figure}
\subsection{Walker wavefunction and time evolution operator}
\label{sec:walk-wavef-time}
We consider a particle, or \emph{walker}, on a square lattice, with
two internal states, which we refer to as spin. The wavefunction of
the walker can be written as
\begin{align}
\ket{\Psi} &= \sum_{\vec{r}\in \mathcal{D}} (
\Psi(\vec{r},\uparrow)\ket{\vec{r},\uparrow} +
\Psi(\vec{r},\downarrow)\ket{\vec{r},\downarrow}).
\end{align}
Here $\vec{r}=(x,y)$ is a 2-dimensional vector of integers, which labels
the nodes of the lattice, taken from
$\mathcal{D}=\{(x,y)|x=1,\ldots,x_\text{max}, y = 1,\ldots,y_\text{max}\}$.
The walker is initialized at site $A=(x_A,y_A)$ as
\begin{align}
\ket{\Psi(t=0)} &= \ket{A,\uparrow}.
\end{align}
The dynamics of the walker takes place in discrete time $t\in \mathbb{N}$,
and is determined by
\begin{align}
\ket{\Psi(t+1)} &= U \ket{\Psi(t)};\\
U &= S_y R_2 S_x R_1.
\label{eq:U_def}
\end{align}
The operator $R_j$, with $j=1,2$, denotes a rotation of the spin about
the $y$ axis,
\begin{align}
R_j &= \sum_{\vec{r}\in \mathcal{D}} \ket{\vec{r}}\bra{\vec{r}} \otimes
e^{-i\theta_j(\vec{r})\sigma_y}.
\end{align}
The angles $\theta_1$ and $\theta_2$ of the first and second
rotation can depend on the position $\vec{r}=(x,y)$ of the walker.
The operators $S_x$ and $S_y$ denote spin-dependent translations along
links between the sites on the lattice,
\begin{align}
S_x &= \sum_{\vec{r}\in \mathcal{D}} \ket{\vec{r}+\hat{x},\uparrow}\bra{\vec{r},\uparrow} +
\ket{\vec{r},\downarrow}\bra{\vec{r}+\hat{x},\downarrow};\\
S_y &= \sum_{\vec{r}\in \mathcal{D}} \ket{\vec{r}+\hat{y},\uparrow}\bra{\vec{r},\uparrow} +
\ket{\vec{r},\downarrow}\bra{\vec{r}+\hat{y},\downarrow},
\end{align}
where $\hat{x}=(1,0)$ and $\hat{y}=(0,1)$.
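For reference, a minimal NumPy sketch of the clean walk operator on a small torus (periodic boundaries, uniform angles, no absorbers or cuts) is given below; the basis ordering and the dense-matrix representation are our own choices, not part of the definition above.
\begin{verbatim}
import numpy as np

def walk_operator(Lx, Ly, theta1, theta2):
    # Dense one-step operator U = S_y R_2 S_x R_1 on an Lx x Ly torus.
    # Basis index: idx(x, y, s), s = 0 for spin-up, s = 1 for spin-down.
    N = Lx * Ly
    def idx(x, y, s):
        return 2 * ((x % Lx) * Ly + (y % Ly)) + s
    def rotation(theta):       # exp(-i theta sigma_y) on every site
        r = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return np.kron(np.eye(N), r)
    def shift(dx, dy):         # spin-up hops by +(dx,dy), spin-down by -(dx,dy)
        S = np.zeros((2 * N, 2 * N), dtype=complex)
        for x in range(Lx):
            for y in range(Ly):
                S[idx(x + dx, y + dy, 0), idx(x, y, 0)] = 1
                S[idx(x, y, 1), idx(x + dx, y + dy, 1)] = 1
        return S
    return shift(0, 1) @ rotation(theta2) @ shift(1, 0) @ rotation(theta1)

U = walk_operator(6, 6, 0.35 * np.pi, 0.15 * np.pi)
assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))   # U is unitary
\end{verbatim}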
\subsection{Conditional wavefunction}
We want to measure how efficient transport is to a given site, $B =
(x_B,y_B)$, as opposed to propagation to the boundary of the system,
denoted by the sites $C_j$, as shown in Fig.~\ref{fig:alice_bob}. We
place a detector at site $B$, and at the boundary sites $C_j$. After
every timestep, each detector performs a dichotomic measurement on the
wavefunction: if the walker is at the detector, it is detected, if
not, it is undisturbed. To calculate the resulting probability
distribution for the transmission times, we compute the
\emph{conditional wavefunction} $\ket{\Psi(t)}$, conditioned on no
detection events up to time $t$. To obtain the time evolution of the
conditional wavefunction, at the end of each timestep the components
of the wavefunction at the sites $B$ and $C_j$ are projected out,
\begin{align}
\ket{\Psi(t)} &= \Big(1-\ket{B}\bra{B}-\sum_j\ket{C_j}\bra{C_j} \Big)
U \ket{\Psi(t-1)}.
\label{eq:timestep_def}
\end{align}
Note that measurements are performed at each step, but since the
measurement record is kept, the whole process is still completely
coherent.
The norm of the walker's wavefunction, $\braket{\Psi(t)}{\Psi(t)}$, is
the probability that the particle is still in the system after $t$
steps. Due to the postselection involved in the timestep,
Eq.~\eqref{eq:timestep_def}, this norm decreases over time as the
walker is found at $B$ (successful transmission) or leaks out at the
edges (transmission failure).
The probability of success, i.e., of detecting the walker at $B$ at
time $t$, is given by
\begin{align}
p_t &= \sum_{s=\uparrow,\downarrow}
\abs{\bra{B,s} U \ket{\Psi(t-1)}}^2.
\label{eq:def_pt}
\end{align}
The arrival probability at time $t$ is the sum of the absorption probabilities up to time $t$, and is given by
\begin{align}
\label{eq:def_of_arrival_prob_Pt}
P_t=\sum_{t'=1}^t p_{t'}.
\end{align}
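A sketch of the conditional evolution follows, assuming a one-step matrix $U$ and the basis indexing of the previous sketch (both are our own conventions): after every application of $U$ the weight on the target site is recorded, and the components on $B$ and on the boundary sites are set to zero.
\begin{verbatim}
import numpy as np

def arrival_probability(U, psi0, target_idx, absorber_idx, T):
    # Iterate the conditional time step: apply U, record the probability
    # on the target indices (p_t), then project out target and absorbers.
    remove = list(target_idx) + list(absorber_idx)
    psi = np.asarray(psi0, dtype=complex).copy()
    P = np.zeros(T)
    for t in range(T):
        psi = U @ psi
        p_t = np.sum(np.abs(psi[list(target_idx)]) ** 2)
        P[t] = p_t if t == 0 else P[t - 1] + p_t
        psi[remove] = 0.0
    return P    # cumulative arrival probability P_t

# Usage (with the walk_operator and idx conventions of the previous sketch):
#   U = walk_operator(Lx, Ly, theta1, theta2)
#   psi0 = unit vector on site A with spin-up
#   P = arrival_probability(U, psi0, [indices of B], [indices of the C_j], T)
\end{verbatim}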
\subsection{Disorder through the rotation angles}
We will consider the effects of disorder that enters the system
through the angles $\theta$. The rotation angles become position
dependent, uncorrelated random variables, chosen from a uniform
distribution,
\begin{align}
\theta_j(\vec r) \in [\theta_j-\delta, \theta_j + \delta].
\label{eq:angle_disorder}
\end{align}
In this paper we will consider
time-independent (i.e.,
static, or quenched) disorder: the angles $\theta$ depend only on
position, but not on time.
The effects of disorder will be addressed in
section~\ref{sec:robustn-conv-belt}.
\subsection{Cutting links}
To enhance transport, we consider modifying the graph on which the
walk takes place by cutting some of the links. If the link between
sites $(x,y)$ and $(x+1,y)$ is cut, the $\ensuremath{\uparrow}$ component of the
wavefunction is not transported from site $(x,y)$ to $(x+1,y)$ during
the $S_x$ shift operation and similarly the $\ensuremath{\downarrow}$ component from
$(x+1,y)$ is not shifted to $(x,y)$. The analogous definition for cut
links holds for the $S_y$ operation between sites $(x,y)$ and
$(x,y+1)$.
If
we were dealing with a lattice Hamiltonian instead of a lattice
timestep operator, cutting a link could be done by just setting the
corresponding hopping amplitude to 0. In the case of the timestep
operator, however, maintaining the unitarity of the time evolution --
orthogonal states always have to stay orthogonal\cite{asboth_prb} --
is more involved.
The only sensible unitary and short-range way to do that is to induce
a spin flip instead of a hop, with possibly an additional phase
factor. This extra phase plays an important role in the 1D quantum
walk, where it affects the quasienergy of the end
states\cite{asboth_prb}. For 2D quantum walks, however, this extra
phase factor is unimportant. For convenience, we flip the spin
using $-i\sigma_y$.
The complete shift operator $S_d$, with $d=x$ or $y$, including the
prescription for cutting the links, reads
\begin{align}
S_d&=\sum_{\vec r \in \mathcal{L}_d}
\left(
\ket{\vec r + \hat d, \ensuremath{\uparrow}} \bra{\vec r,\ensuremath{\uparrow}} + \ket{\vec r,
\ensuremath{\downarrow}}\bra{\vec r + \hat d, \ensuremath{\downarrow}}
\right)
\nonumber\\&\quad
+ \sum_{\vec r \in \mathcal{C}_d}
\left(
\ket{\vec r, \ensuremath{\downarrow}}\bra{\vec r , \ensuremath{\uparrow}} - \ket{\vec r + \hat d, \ensuremath{\uparrow}}\bra{\vec r + \hat d, \ensuremath{\downarrow}}
\right).
\label{shift_op_x_with_cut}
\end{align}
Here $\mathcal{L}_d$ is the set of vectors $\vec r$ such that the link
between node at $\vec r$ and the node at $\vec r + \hat d$ is not cut,
while its complement $\mathcal{C}_d$ is the set of vectors to nodes
$\vec r$ for which the link connecting them to node $\vec r + \hat d$
has been cut, with $\hat d$ denoting the unit vector in the direction
$d$ (i.e., $\hat{x}$ or $\hat{y}$).
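A NumPy sketch of the modified shift, following Eq.~\eqref{shift_op_x_with_cut}, is given below; the set \texttt{cut\_links} and the index map \texttt{idx} (as in the earlier sketch) are our own interface choices.
\begin{verbatim}
import numpy as np

def shift_with_cuts(Lx, Ly, dx, dy, cut_links, idx):
    # Spin-dependent shift S_d: across an intact link the walker hops as
    # usual; across a cut link the hop is replaced by the on-site spin
    # flip -i*sigma_y.  cut_links holds the sites r whose link to r + d
    # is cut; idx(x, y, s) maps site and spin to a basis index.
    S = np.zeros((2 * Lx * Ly, 2 * Lx * Ly), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            if (x, y) in cut_links:
                S[idx(x, y, 1), idx(x, y, 0)] = 1                      # |r,dn><r,up|
                S[idx(x + dx, y + dy, 0), idx(x + dx, y + dy, 1)] = -1 # -|r+d,up><r+d,dn|
            else:
                S[idx(x + dx, y + dy, 0), idx(x, y, 0)] = 1
                S[idx(x, y, 1), idx(x + dx, y + dy, 1)] = 1
    return S
\end{verbatim}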
\section{Transport in the presence of a cut}
\label{sec:first_transport}
We now address the question: which links should we cut to optimize the
transport from A to B? The first idea that comes to mind to ensure
efficient transport is to cut out a narrow island from the lattice: at
one end of the island is $A$, the source, and at the other end $B$,
the site where we want the walker to be transported to. However, as we
will see, in the presence of disorder, there is a much more efficient
construction.
\subsection{The island cut}
\begin{figure}
\includegraphics[width=6cm]{Fig2}
\caption{
To increase the efficiency of transport from $A$ to $B$, the first
idea is to cut an island that will form a transport channel, as
indicated by the dashed line. All links crossing the dashed line are
cut; a particle attempting to hop across a cut link will have its spin
flipped instead of hopping. }
\label{fig:alice_bob_island}
\end{figure}
Perhaps the most straightforward way to ensure that the walker gets
from $A$ to $B$ is to restrict its motion to a narrow island
connecting these two sites, by cutting links as illustrated in
Fig.~\ref{fig:alice_bob_island}. In a clean system, this strategy
achieves the desired effect. Simulations on large system sizes, shown
in Fig.~\ref{fig:localization_barrier}.a, show a high success
probability, independent of system size (island length), with a time
required for transport proportional to the length of the island,
indicating ballistic transport.
The simple strategy of cutting out an island to guide the walker to
$B$ no longer works if there is quenched disorder in the rotation
angles. As shown in Fig.~\ref{fig:localization_barrier}(c), the time
evolution of the walker's wavefunction now shows signs of
localization. With a disorder of $\delta\theta = 0.07 \pi$, the
average distance from the origin stops growing after some time,
independent of system size.
\begin{figure}
\includegraphics[width=4.4cm]{Fig3a}%
\includegraphics[width=4.4cm]{Fig3c}
\includegraphics[width=4.4cm]{Fig3b}%
\includegraphics[width=4.4cm]{Fig3d}
\caption{
Mean displacement of the quantum walker (continuous lines)
and transmission probability (dotted lines) in the geometry of the
``island'' of Fig.~\ref{fig:alice_bob_island}, for a horizontal
island of fixed width of 1, and varying island length of 40 (thick
light gray), 80 (medium gray), and 160 (thin black). Mean rotation
angles are set to $\theta_1=0.35\pi$, $\theta_2 =0.15\pi$. Without
disorder, $\delta=0$, the wavefunction spreads ballistically (a) and
the transmission probability reaches a value close to 1 as the
wavepacket arrives (b). To illustrate the effects of disorder, we
set $\delta=0.2 \pi$, and use a single disorder realization, varying
only the distance $n$ between $A$ and $B$ (and correspondingly, the
length of the island). For large enough system ($n=160$), the mean
distance from $A$ saturates at around 30 (c), and in this case there
is virtually no transmission ((d): $P_t<10^{-4}$ for $n=160$ for all
times $t$).}
\label{fig:localization_barrier}
\end{figure}
\subsection{The single line cut}
There is a somewhat counterintuitive strategy to defeat localization,
and ensure efficient transport from $A$ to $B$ even with static
disorder. This involves cutting links along a line from $A$ to $B$, as
shown in Fig.~\ref{fig:alice_bob_1cut}.
As shown in Fig.~\ref{fig:transport_with_cut}, in spite of the
disorder, the single cut ensures ballistic propagation of the quantum
walker and greatly enhances the transmission probability: the line of
cut links acts like a conveyor belt for the quantum walker.
Although for the detailed numerics we used cuts that are along a
straight line, numerical examples convincingly show that the shape of
the cut can delay the transport, but not inhibit it. For an example,
see the Appendix \ref{app:star}.
\begin{figure}
\includegraphics[width=6cm]{Fig4}%
\caption{An alternative strategy to create a channel between $A$ and
$B$ is a single cut.}
\label{fig:alice_bob_1cut}
\end{figure}
\begin{figure}
\includegraphics[width=4.4cm]{Fig5a}%
\includegraphics[width=4.4cm]{Fig5b}
\caption{ Mean displacement of the quantum walker (a, continuous
lines) and transmission probability (b, dotted lines) in the
geometry of the ``single cut'' of Fig.~\ref{fig:alice_bob_1cut}, for
a horizontal cut. A single realization of a disordered quantum walk
is taken, with mean rotation angles $\theta_1=0.35\pi$, $\theta_2
=0.15\pi$, and disorder $\delta=0.3 \pi$. The $A-B$ distance $n$ is
varied: $n=40$ (thick light gray), $n=80$ (medium gray), and $n=160$
(thin black). The walker propagates ballistically along the cut (a),
and arrives at $B$ with a high probability (b).}
\label{fig:transport_with_cut}
\end{figure}
The rest of this paper is devoted to this conveyor belt mechanism.
Our principal aims will be to answer the following two questions: Why
does the conveyor mechanism work? How robust is it?
\section{Edge states along a cut}
\label{sec:edge_states}
In this section we show that the single cut transports the walker
efficiently from the source $A$ to the target site $B$ because the
quantum walk has unidirectional (chiral) edge states along the cut. We
find the edge states along the cut using the effective Hamiltonian.
The effective Hamiltonian $\Heff$ of a DTQW is defined as
\begin{align}
\Heff &= i \log U,
\label{eq:def_heff}
\end{align}
where $U$, as in Eq.~\eqref{eq:U_def}, is the unitary timestep
operator of the quantum walk without the projectors corresponding to
the measurements. We fix the branch cut of the logarithm to be along
the negative part of the real axis. If we only look at the DTQW at
integer times $t$, we cannot distinguish a DTQW from the time
evolution that would be produced by the time-independent lattice
Hamiltonian $\Heff$, since,
\begin{align}
\ket{\Psi(t)} &= U^t \ket{\Psi(0)}= e^{-i \Heff t} \ket{\Psi(0)}
\,\,\text{ for } t \in \mathbb{N}.
\end{align}
Every DTQW is thus a stroboscopic simulator for its effective
Hamiltonian $\Heff$.
We now consider the quasienergy dispersion relation of a clean system
in the vicinity of (below) a horizontal cut, as shown in
Fig.~\ref{fig:edge_state_def}. We make use of translation
invariance, and use $k$ to denote the quasimomentum along $x$, a
conserved quantity. We take system of width $1$ ($x=1$) and height $L$
($y=1,\ldots,L$), with modified periodic boundary conditions along
both directions. Along $x$, twisted boundary conditions are taken,
i.e., periodic boundary conditions with an extra phase factor of
$e^{\mp ik}$ for right/left shifts, with $k$ denoting the
quasimomentum we are interested in. Along $y$, we leave the periodic
boundary conditions, but cut the link connecting site $(1,L)$ with
$(1,1)$, and we insert an absorber at $(1,1)$. We diagonalize the
timestep operator $U$ on this system, obtaining the eigenvalues
$\lambda_n = \abs{\lambda_n} e^{-i\varepsilon_n}$ and the
corresponding eigenvectors $\ket{\Psi}_n$. The magnitudes
$\abs{\lambda_n} \le 1$ give us information about the lifetime of the
states, while the phases $\varepsilon_n$ can be identified with the
quasienergies. Repeating this procedure for $-\pi < k \le \pi $ gives
us the dispersion relation of a clean strip with a cut at the top and
absorbers at the bottom.
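A NumPy sketch of this construction is given below. The sign conventions for the quasimomentum phases and for the quasienergy are our own choices (at most they mirror the dispersion in $k$); the magnitude and weight thresholds are those quoted above.
\begin{verbatim}
import numpy as np

def strip_step_operator(k, L, theta1, theta2):
    # U(k) on a width-1 strip of height L: twisted boundary conditions
    # along x (phases exp(-ik), exp(+ik) for the spin-dependent x shift),
    # the y link between top and bottom row cut (spin flip -i*sigma_y),
    # and an absorber on the bottom row.  Basis index: 2*y + spin.
    i = lambda y, s: 2 * y + s
    rot = lambda th: np.kron(np.eye(L), np.array([[np.cos(th), -np.sin(th)],
                                                  [np.sin(th),  np.cos(th)]]))
    Sx = np.kron(np.eye(L), np.diag([np.exp(-1j * k), np.exp(1j * k)]))
    Sy = np.zeros((2 * L, 2 * L), dtype=complex)
    for y in range(L - 1):                    # intact vertical links
        Sy[i(y + 1, 0), i(y, 0)] = 1
        Sy[i(y, 1), i(y + 1, 1)] = 1
    Sy[i(L - 1, 1), i(L - 1, 0)] = 1          # cut link: spin flip on top row
    Sy[i(0, 0), i(0, 1)] = -1                 #           and on bottom row
    P = np.eye(2 * L)
    P[i(0, 0), i(0, 0)] = P[i(0, 1), i(0, 1)] = 0   # absorber on bottom row
    return P @ Sy @ rot(theta2) @ Sx @ rot(theta1)

L, th1, th2 = 40, 0.35 * np.pi, 0.15 * np.pi
for k in np.linspace(-np.pi, np.pi, 101):
    vals, vecs = np.linalg.eig(strip_step_operator(k, L, th1, th2))
    for lam, psi in zip(vals, vecs.T):
        if abs(lam) < 0.9:                    # drop short-lived states
            continue
        eps = -np.angle(lam)                  # quasienergy
        edge = np.sum(np.abs(psi[-6:]) ** 2) / np.sum(np.abs(psi) ** 2) > 0.9
        # collecting (k, eps, edge) over k gives a dispersion of the
        # type shown in the figure referenced below
\end{verbatim}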
We show the numerically obtained dispersion relation of the 2DQW on a
strip with an edge in Fig.~\ref{fig:stripe_dispersions}.
We omitted states with short lifetimes, whose eigenvalue of $U$ has
magnitude $\abs{\lambda}<0.9$. We used thick (blue) lines to highlight edge
states, defined as states for which $\abs{\braket{L}{\Psi}}^2+
\abs{\braket{L-1}{\Psi}}^2 + \abs{\braket{L-2}{\Psi}}^2 > 0.9$.
Whenever the gaps around $\varepsilon=0$ and $\varepsilon=\pi$ are
open, one can clearly see edge states traversing these gaps. The edge
states are unidirectional (i.e., chiral), and propagate in the same
direction in the two gaps.
We obtained simple analytical formulas for the dispersion relations of
the edge states along the horizontal cut, for $\varepsilon\approx 0$
and $\varepsilon\approx \pi$, using the transfer matrix method. We
relegate the details to the Appendix \ref{app:edge_state}, and
summarize the main results here. When $\sin(\theta_1+\theta_2) > 0$,
the edge states are around $k = \varepsilon = 0$ and
$k=\varepsilon=\pm\pi$ (as in Fig.~\ref{fig:stripe_dispersions}a-d),
when $\sin(\theta_1 + \theta_2) < 0$, they are around $k=\pm\pi,
\varepsilon=0$ and $k=0, \varepsilon=\pm\pi$ (as in
Fig.~\ref{fig:stripe_dispersions}f). Near the center of the gaps, the
edge-state group velocity reads
\begin{align}
v=\frac{d \varepsilon}{d k} &=
\sin(\theta_2-\theta_1) \text{ sign }\left[\sin(\theta_1+\theta_2) \right].
\label{eq:group_velocity}
\end{align}
The edge states decay exponentially towards the bulk as
$\Psi \propto e^{-\abs{y}/\xi}$, where $y$ is the distance from the edge.
Using the analytical
calculations of Appendix \ref{app:edge_state}, we obtain the
penetration depth $\xi$ of the edge states into the bulk as
\begin{align}
\xi &=-\left( \log
\frac{1-\abs{\sin(\theta_1+\theta_2)}}{\abs{\cos(\theta_1-\theta_2)}}
\right)^{-1}.
\label{eq:edge_penetration}
\end{align}
Although the penetration depth and the magnitude of the group velocity
can depend on the orientation of the edge, the direction of
propagation of these chiral edge states constitutes a topological
invariant. We show this topologically protected quantity as a function
of the parameters $\theta_1$ and $\theta_2$ by boldface numbers in
Fig.~\ref{fig:phase_space_windings}.
The direction of propagation (chirality) of the edge states is
topologically protected: it can only be changed if the rotation angles
$\theta_j$ are themselves changed so that the system is taken across a
gap closing point. There are two different scenarios here,
corresponding to gap closings where $\theta_1-\theta_2=n\pi$ (lines
slanting upwards in Fig.~\ref{fig:phase_space_windings}, e.g., labels
(a)-(c) in Figs.~\ref{fig:stripe_dispersions} and
\ref{fig:phase_space_windings}), and where $\theta_1+\theta_2=n\pi$
(lines slanting downwards in Fig.~\ref{fig:phase_space_windings},
e.g., labels (d)-(f) in Figs.~\ref{fig:stripe_dispersions} and
\ref{fig:phase_space_windings}). In the first case, during the gap
closing, the number of edge states constituting the edge mode does not
change, their penetration depth, Eq.~\eqref{eq:edge_penetration}, stays
finite, it is only the edge mode velocity that goes to zero and then
changes sign, see Eq.~\eqref{eq:group_velocity}. In the second case,
the velocity of the edge mode does not change as the gap is closed; it
is the number of edge states that goes to zero and then grows
again. In this case, the penetration depth $\xi$ diverges as the gap
is closed. The two scenarios of this paragraph correspond to edge
states at a zigzag or an armchair edge in the Haldane
model\cite{haldane_model} (e.g., Fig.~5.~of
Ref.~\onlinecite{PhysRevB.89.205408}).
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Fig6}
\caption{For the analytical calculation, we consider a simple geometry
with reflecting edge on top, and absorbers on the bottom. An
infinite strip (left) can be treated as a 1-dimensional
chain with twisted boundary conditions, i.e. with periodic
boundaries along $x$ with an extra phase of $e^{\mp ik}$ for
right/left hopping. The top three rows, with dark (blue) background
are defined as the edge region.}
\label{fig:edge_state_def}
\end{figure}
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig7ac}
\includegraphics[width=0.95\columnwidth]{Fig7df}
\caption{Dispersion relation of a 2DQW on a strip with cut links
on top, and absorbers at the bottom. Quasienergies of long-lived
states (magnitude of Floquet eigenvalue higher than $0.9$) are shown,
with edge states (more than 80\% of the weight on the top three
rows) highlighted in thick (blue). The bulk gap is closed and reopened
by setting the rotation angles to
(a): $\theta_1=0.35\pi$,
$\theta_2=0.15\pi$, (b): $\theta_1=\theta_2=0.25\pi$, (c):
$\theta_1=0.15\pi$, $\theta_2=0.35\pi$.
(d): $\theta_1=0.65\pi$,
$\theta_2=0.15\pi$, (e): $\theta_1=0.75\pi$, $\theta_2=0.25\pi$, (f):
$\theta_1=0.85\pi$, $\theta_2=0.35\pi$.
}
\label{fig:stripe_dispersions}
\end{figure}
\begin{figure}
\includegraphics[width=5cm]{Fig8}
\caption{Parameter space of the split-step 2D discrete-time quantum
walk. Along continuous (dotted) lines, the bulk quasienergy gap
around 0 ($\pi$) quasienergy closes. Each gapped domain supports
edge states near a cut, at both quasienergies 0 and $\pi$. The
number of these edge states (equal to the Rudner winding number as
per Eq.~\eqref{eq:def_rudner_winding}) is written in bold. The Chern
number, which is always 0 due to the sublattice symmetry, is shown in
normal typeface. }
\label{fig:phase_space_windings}
\end{figure}
\section{Topological invariant of the 2-dimensional split-step quantum
walk }
\label{sec:top_invariants}
In a free lattice system with unitary dynamics, the number of
unidirectional (chiral) edge states in the bulk energy gap cannot be
altered by any local changes in the dynamics, as long as the bulk
energy gap is open. Thus, the number of such edge states constitutes a
topological invariant for each bulk gap. In time-independent lattice
Hamiltonians, this invariant can be obtained from the bulk Hamiltonian
as the sum of the Chern numbers of all the bands with energy below the
gap. The Chern number for the bands of the 2DQW, however, is always
zero, due to a discrete sublattice symmetry of the timestep operator,
as we show in Appendix \ref{app:sublattice_symmetry}. Thus, there has
to be some other bulk topological invariant of the 2DQW. This extra
topological invariant is also indicated by the fact that edge states
appear at an interface between two domains of the 2DQW with the same
Chern number\cite{kitagawa_introduction}. We will now identify this
bulk topological invariant.
\subsection{The Rudner Invariant in periodically driven quantum systems}
A candidate for the topological invariant of the 2DQW is
the winding number of periodically driven 2-dimensional lattice
Hamiltonians found by Rudner et al.\cite{rudner_driven}, which we
summarize here. Consider a periodically driven lattice Hamiltonian,
\begin{align}
H(t+1, k_x,k_y) &= H(t,k_x,k_y).
\end{align}
The unitary time evolution operator for one complete period reads
\begin{align}
U(k_x,k_y) &= \mathbb{T} e^{-i\int_0^1 H(k_x,k_y,t) dt}.
\end{align}
Next, define a loop in the following way,
\begin{align}
U_2(t, k_x,k_y) = \left\{ \begin{array}{rl}
\mathbb{T} e^{-2i\int_0^t H(k_x,k_y,2t') dt'} &\mbox{ if $t<\frac{1}{2}$} \\
e^{2i(t-1/2) \Heff} U(k_x,k_y)
&\mbox{ if $t\ge\frac{1}{2}$}
\end{array} \right.
\end{align}
This corresponds to going forward in time until $t=1/2$ with the full
Hamiltonian, and then backwards in time with the effective
Hamiltonian, as in Eq.~\eqref{eq:def_heff}, whose branch cut is chosen
at $\varepsilon=\pi$. Thus, $U_2(t=0)=U_2(t=1)=1$, and $U_2(t=1/2)=U$.
The winding number associated with $U_2$ is
\begin{align}
W[U_2] &= \frac{1}{8 \pi^2} \int dt dk_x dk_y \text{Tr } \big(
U_2^{-1} \partial_t U_2 \cdot \nonumber \\
\quad &\quad [U_2^{-1}\partial_{k_x} U_2,
U_2^{-1}\partial_{k_y} U_2] \big).
\label{eq:def_rudner_winding}
\end{align}
As Rudner et al.\cite{rudner_driven} show, the periodically driven
system will have a number $W$ of chiral edge states in addition to
those predicted by the Chern numbers of the bands. These edge states
appear in each gap, including the gap around $\varepsilon=\pi$ (if
there is a gap there; if not, the branch cut of the logarithm in
Eq.~\eqref{eq:def_heff} needs to be shifted to be in a gap).
\subsection{Rudner invariant from an equivalent lattice Hamiltonian }
Rudner's invariant is defined for periodically driven lattice
Hamiltonians, not DTQWs. To define this invariant for the 2DQW, we
need a realization of the 2DQW as time periodic Hamiltonian. We
construct such a realization analogously to the one-dimensional
case\cite{asboth_tarasinski_delplace}.
We consider a square lattice of unit cells, each containing two sites,
denoted by filled circles $\bullet$ and empty circles $\circ$, as
shown in Fig.~\ref{fig:driven_H}. These sites are identified with states of the walker as
\begin{align}
c^\dagger_{x,y,\bullet} \ket{0} &= \ket{x,y,\uparrow};&
c^\dagger_{x,y,\circ} \ket{0} &= -i \ket{x,y,\downarrow}.
\end{align}
We take a nearest neighbor hopping Hamiltonian on this lattice,
without any onsite terms,
\begin{align}
H(t) &= \sum_{x,y} \big(
u(t) \hat{c}_{x,y,\bullet}^\dagger
\hat{c}_{x,y,\circ}
\,+\,v(t) \hat{c}_{x,y,\bullet}^\dagger
\hat{c}_{x-1,y,\circ} \nonumber \\
\quad &\quad \,\,+\,\,w(t) \hat{c}_{x,y,\bullet}^\dagger
\hat{c}_{x,y-1,\circ}
\,+ h.c.
\,\big).
\label{eq:ssh_walk}
\end{align}
We distinguish between three kinds of hoppings. \emph{Intracell}
hoppings, along the black lines in the grey unit cells in
Fig.~\ref{fig:driven_H}, have amplitudes
$u(t)$. \emph{Horizontal intercell} hoppings, along the dotted red
lines in Fig.~\ref{fig:driven_H}, have amplitudes
$v(t)$. Finally, \emph{vertical intercell} hoppings, along the dashed
blue lines in Fig.~\ref{fig:driven_H}, have amplitudes
$w(t)$.
To realize the 2DQW, we use a non-overlapping sequence of pulses where
at any time, only one type of hopping is switched on.
A pulse of intracell
hopping $u$ of area $\pi/2$, followed by a pulse of intercell hopping
$v$, of area $-\pi/2$, realizes the operation $S_x$; if the pulse of
$u$ is followed by a pulse of $w$ of area $-\pi/2$, we obtain
$S_y$. The pulse sequence realizing a timestep of the 2DQW then
consists of 6 pulses, shown in Fig.~\ref{fig:driven_H}, and summarized
using the Heaviside function $\chi(x)=(\text{sign}(x)+1)/2$ as
\begin{align}
G(t) &= 6 \chi\left(t+\frac{1}{12}\right)\chi\left(\frac{1}{12}-t\right) ;\\
u(t) &=
\theta_1 G\Big(t-\frac{1}{12}\Big) + \frac{\pi}{2} G\Big(t-\frac{3}{12}\Big) \nonumber \\
\quad &\quad+ \theta_2 G\Big(t-\frac{7}{12}\Big) + \frac{\pi}{2}
G\Big(t-\frac{9}{12}\Big);\\
v(t) &= -\frac{\pi}{2} G\Big(t-\frac{5}{12}\Big);\\
w(t) &= -\frac{\pi}{2} G\Big(t-\frac{11}{12}\Big);
\end{align}
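For concreteness, the one-period evolution operator defined by this pulse
sequence can be assembled numerically as a product of the six square pulses.
The following minimal sketch (in Python; the basis identification and the
overall sign conventions are assumptions of the sketch, which do not affect
the quasienergy spectrum) evaluates the resulting bulk quasienergies on a
momentum grid:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)

def pulse(area, kind, kx, ky):
    # exp(-i*area*h) for one square pulse; h*h = 1 for all three hopping types
    if kind == 'u':
        h = sx
    elif kind == 'v':
        h = np.cos(kx)*sx + np.sin(kx)*sy
    else:  # 'w'
        h = np.cos(ky)*sx + np.sin(ky)*sy
    return np.cos(area)*np.eye(2) - 1j*np.sin(area)*h

def U_period(kx, ky, th1, th2):
    # the six pulses of the drive sequence, in time order
    # (operators of later pulses multiply from the left)
    seq = [(th1, 'u'), (np.pi/2, 'u'), (-np.pi/2, 'v'),
           (th2, 'u'), (np.pi/2, 'u'), (-np.pi/2, 'w')]
    U = np.eye(2, dtype=complex)
    for area, kind in seq:
        U = pulse(area, kind, kx, ky) @ U
    return U

th1, th2 = 0.35*np.pi, 0.15*np.pi
ks = np.linspace(-np.pi, np.pi, 61)
eps = np.array([[-np.angle(np.linalg.eigvals(U_period(kx, ky, th1, th2)))
                 for ky in ks] for kx in ks])
print("bulk gap around 0 :", np.min(np.abs(eps)))
print("bulk gap around pi:", np.min(np.pi - np.abs(eps)))
\end{verbatim}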
\begin{figure}
\includegraphics[width=3cm]{Fig9a}
\hspace{0.5cm}
\includegraphics[width=4.5cm]{Fig9b}
\caption{Left: the lattice on which the 2DQW is realized as a continuously
driven Hamiltonian. Gray shaded unit cells include two sites
each. The three types of hoppings allowed are intracell (black),
horizontal intercell (red) and vertical intercell (blue). Right: the drive
sequence for the lattice Hamiltonian. }
\label{fig:driven_H}
\end{figure}
For this continuously driven Hamiltonian, we calculate the Rudner
invariant numerically, discretizing the integral of
Eq.~\eqref{eq:def_rudner_winding}, and find quantized values to a
great precision. The results are shown in
Fig.~\ref{fig:phase_space_windings}. We checked numerically that these
invariants correctly predict the edge states at reflective edges, and
also reproduce the edge states between different bulk phases of
Ref.~\onlinecite{kitagawa_introduction}.
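The discretization we use can be illustrated by the following schematic
sketch, which assembles $U_2(t,k_x,k_y)$ from the pulse sequence defined
above and evaluates the integral of Eq.~\eqref{eq:def_rudner_winding} on a
coarse grid. The grid size, and the use of the principal matrix logarithm to
place the branch cut at $\varepsilon=\pi$, are choices of the sketch; the
grid has to be refined until the result converges to an integer.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
SEQ = lambda t1, t2: [(t1, 'u'), (np.pi/2, 'u'), (-np.pi/2, 'v'),
                      (t2, 'u'), (np.pi/2, 'u'), (-np.pi/2, 'w')]

def pulse(area, kind, kx, ky):
    h = (sx if kind == 'u' else
         np.cos(kx)*sx + np.sin(kx)*sy if kind == 'v' else
         np.cos(ky)*sx + np.sin(ky)*sy)
    return np.cos(area)*np.eye(2) - 1j*np.sin(area)*h

def U_partial(s, kx, ky, t1, t2):
    # time-ordered evolution up to time s; pulse j occupies [j/6, (j+1)/6]
    U = np.eye(2, dtype=complex)
    for j, (area, kind) in enumerate(SEQ(t1, t2)):
        frac = min(max(6.0*s - j, 0.0), 1.0)
        if frac <= 0.0:
            break
        U = pulse(frac*area, kind, kx, ky) @ U
    return U

def U2(t, kx, ky, t1, t2):
    # forward with H(t) at double speed, then back with H_eff (branch cut at pi)
    if t < 0.5:
        return U_partial(2.0*t, kx, ky, t1, t2)
    U = U_partial(1.0, kx, ky, t1, t2)
    Heff = 1j*logm(U)
    return expm(2j*(t - 0.5)*Heff) @ U

def rudner_winding(t1, t2, N=14):
    ts = np.linspace(0.0, 1.0, N, endpoint=False)
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    Us = np.array([[[U2(t, kx, ky, t1, t2) for ky in ks] for kx in ks]
                   for t in ts])
    D = lambda a: (np.roll(Us, -1, axis=a) - np.roll(Us, +1, axis=a)) / 2.0
    Ui = np.linalg.inv(Us)
    At, Ax, Ay = Ui @ D(0), Ui @ D(1), Ui @ D(2)
    # the integration measure cancels against the finite-difference denominators
    integrand = np.trace(At @ (Ax @ Ay - Ay @ Ax), axis1=-2, axis2=-1)
    return np.real(np.sum(integrand)) / (8.0*np.pi**2)

print(rudner_winding(0.35*np.pi, 0.15*np.pi))
\end{verbatim}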
\subsection{Cut links as a bulk phase: the 4-step 2D discrete-time quantum walk}
To obtain a more complete picture of the conveyor belt mechanism, it
is instructive to view the line where the links are cut as the
limiting case of a long thin domain of a more general quantum walk
with modified parameters. To obtain this more general quantum walk, we
start from the continuous-time periodically driven Hamiltonian,
Eq.~\eqref{eq:ssh_walk}. There is a straightforward way to cut the link
in the $x$ $(y)$ direction: simply omit the pulse of $v(t)$ $(w(t)) $
from the sequence. This leads us to consider periodically driven
systems composed of pulses of arbitrary area, as represented in
Fig.~\ref{fig:Hamiltonian_partial},
\begin{align}
u(t) &=
\left(\theta_1 \!+ \frac{\pi}{2} \right)
G\Big(t-\frac{1}{8}\Big)
+
\left(\theta_2 \!+ \frac{\pi}{2} \right)
G\Big(t-\frac{5}{8}\Big);\\
v(t) &=
\left(\phi_1 - \frac{\pi}{2} \right)
G\Big(t-\frac{3}{8}\Big);\\
w(t) &=
\left(\phi_2 - \frac{\pi}{2} \right)
G\Big(t-\frac{7}{8}\Big).
\end{align}
We can interpret this pulse sequence as a continuous-time
realization of a discrete-time quantum walk. This is the 4-step
walk, defined by
\begin{align}
U = S_y\,e^{-i \phi_2 \sigma_y } \,S_y\,e^{-i \theta_2 \sigma_y } \,
S_x\, e^{-i \phi_1 \sigma_y } \, S_x\, e^{-i \theta_1 \sigma_y }.
\label{eq:4step_2D}
\end{align}
This walk is easiest represented on a Lieb lattice, as shown in
Fig.~\ref{fig:Hamiltonian_partial}. At the beginning and end of each
cycle, the walker is on one of the (gray) lattice sites with
coordination number 4, while during the timestep, it can also occupy
the (red and blue) sites with coordination number 2.
The 4-step walk has two topological invariants: the Chern number $C$,
and the Rudner winding number $W$. Its Chern number can be nonzero,
because at the end of the timestep the walker can also return to its
starting point, and so it does not have the sublattice property
detailed in the Appendix \ref{app:sublattice_symmetry}. We find that,
depending on the angles $\phi_1,\phi_2,\theta_1,\theta_2$, the
invariants can take on the values $-1,0,+1$, as shown in
Fig.~\ref{fig:phase_space_partial}. In particular, the trivial
insulator, with $C=W=0$, is realized in the areas in parameter space
defined by $n \pi - \abs{\phi_1-\phi_2} < \theta_1-\theta_2 < n \pi +
\abs{\phi_1-\phi_2} $, for $n=0$ (including $U=-1$) and $n=\pm 1$
(including $U=1$). The phase with all links cut corresponds to
$\theta_1=\theta_2 = -\phi_1 = -\phi_2= -\pi/2$; in this case, the
time evolution operator does nothing to the state.
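The phase boundaries of Fig.~\ref{fig:phase_space_partial} can be located
numerically by scanning the bulk quasienergy gaps of Eq.~\eqref{eq:4step_2D}
over the Brillouin zone. In the minimal sketch below, $S_x$ and $S_y$ are
taken as spin-dependent translations by half a lattice constant (one
convention consistent with recovering the split-step walk at
$\phi_1=\phi_2=0$); this choice, like the scan range, is an assumption of
the sketch.
\begin{verbatim}
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], complex)
R  = lambda a: np.cos(a)*np.eye(2) - 1j*np.sin(a)*sy         # exp(-i a sigma_y)
Sx = lambda kx: np.diag([np.exp(-0.5j*kx), np.exp(0.5j*kx)])  # half-step shifts
Sy = lambda ky: np.diag([np.exp(-0.5j*ky), np.exp(0.5j*ky)])

def U4(kx, ky, th1, th2, ph1, ph2):
    # U = Sy R(phi2) Sy R(theta2) Sx R(phi1) Sx R(theta1)
    return (Sy(ky) @ R(ph2) @ Sy(ky) @ R(th2) @
            Sx(kx) @ R(ph1) @ Sx(kx) @ R(th1))

def gaps(th1, th2, ph1, ph2, n=41):
    ks = np.linspace(-np.pi, np.pi, n)
    eps = np.array([np.angle(np.linalg.eigvals(U4(kx, ky, th1, th2, ph1, ph2)))
                    for kx in ks for ky in ks])
    return np.min(np.abs(eps)), np.min(np.pi - np.abs(eps))  # gaps at 0 and pi

ph1, ph2 = -np.pi/10, np.pi/5            # the parameters used in Fig. 11
for dth in np.linspace(-np.pi/2, np.pi/2, 11):
    g0, gpi = gaps(np.pi/4 + dth/2, np.pi/4 - dth/2, ph1, ph2)
    print(f"theta1-theta2 = {dth:+.2f}: gap(0) = {g0:.3f}, gap(pi) = {gpi:.3f}")
\end{verbatim}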
\begin{figure}
\includegraphics[width=2.5cm]{Fig10a}
\includegraphics[width=6cm]{Fig10b}
\caption{The 4-step quantum walk is set on a Lieb lattice (left). The
driving sequence of the corresponding continuously driven
Hamiltonian consists of nonoverlapping pulses of arbitrary area
(right). }
\label{fig:Hamiltonian_partial}
\end{figure}
\begin{figure}
\includegraphics[width=6.5cm]{Fig11}
\caption{Parameter space of the 4-step 2DQW as defined in
Eq.~\eqref{eq:4step_2D}. Gapped domains, with Rudner winding numbers
$W$ (boldface) and Chern numbers (normal typeface), are separated by
lines, along which the bulk quasienergy gap around $\varepsilon=0$
(continuous lines) or around $\varepsilon=\pi$ (dotted lines)
closes. Since sublattice symmetry of the walk is broken by the
extra rotations through angles $\phi_1, \phi_2$, the gaps can close
independently, and the Chern number can take on nonzero values. The
angles shown on the left are $\phi_+ = \abs{\phi_2+\phi_1}$,
$\phi_-= \abs{\phi_2-\phi_1}$, assuming both of these are less than
$\pi$. In the example shown, $\phi_1=-\pi/10$ and $\phi_2 = \pi/5$.}
\label{fig:phase_space_partial}
\end{figure}
\section{Robustness of the conveyor belt in the presence of disorder}
\label{sec:robustn-conv-belt}
We now investigate how the transport along the cut is affected by static
disorder in the rotation angles $\theta_1 $ and
$\theta_2$, as defined in Eq.~\eqref{eq:angle_disorder}.
\subsection{Effects of static disorder}
\label{sec:effects-stat-disord}
We choose a system of dimensions $(4 M\times 2 M) $. The walker is
initialised at the position $A=(M, M) $. The position of the final
(absorbing) point B is chosen to be $(3M-1, M-1)$. The cut severs all
the links between the sites $(x, M)$ and $(x, M-1)$ for $M\leq
x\leq 3M$. Thus there is a path of cut links connecting the initial
and final site. For $M=10$ the system is plotted for three different
times in Fig.~\ref{fig:wavfn_cut_init}, thereby showing the initial
wavefunction, the wavefunction as it propagates along the conveyor and
the state after the majority of the wavefunction has been absorbed.
The boundaries of the system are absorbing boundaries.
This geometry is chosen
such that the walker cannot reach the absorbing boundary too quickly.
\begin{figure}[tb]
\centering
\def \columnwidth {\columnwidth}
\includegraphics[width=\columnwidth]{Fig12a}
\includegraphics[width=\columnwidth]{Fig12b}
\includegraphics[width=\columnwidth]{Fig12c}
\caption{Wavefunction for a particular disorder realisation and the
cut for a system size given by $M=10$. The values of rotation angles are $\theta_1=0.35\pi, \theta_2=0.15\pi, \delta=0.1\pi$. The cut is represented by the black line. The starting
point for the wave function is just above the black line on the
left. The final point at which the wave function gets absorbed is
represented by the orange dot on the right-hand side. The top
plot shows the wave function at $t = 0$. The
middle plot is for $t = 2M = 20$, when a good fraction of the walker is
on the conveyor. The bottom plot shows the wave function at
$t = 10M = 100$, long after the bulk of the walker has been
absorbed by the orange point B.}
\label{fig:wavfn_cut_init}
\end{figure}
We quantify the efficiency of the transport along the cut by looking
at the arrival probability $P_t$, as in
Eq. (\ref{eq:def_of_arrival_prob_Pt}) and the total survival
probability, i.e., the norm of the conditional wavefunction,
$\braket{\Psi(t)}{\Psi(t)}$. If these add up to 1, no part of the
walker is absorbed by the boundary. If the walker is transported
ballistically along the defect we expect the total arrival probability
to suddenly increase by an appreciable amount at the time $ t=2M / v$,
where $v $ is the transport velocity of the walker, given in the clean
limit by eq. (\ref{eq:group_velocity}). A delay in the onset of the
arrival at the final point B indicates a slowdown of the transport. On
the other hand, if the total survival probability decreases without
the probability at the final point B increasing, this also indicates a
loss of transport efficiency. It indicates that diffusion towards the
boundary increases in importance, whereas ballistic transport along
the cut decreases in importance. For different disorder strengths
$\delta$ we have plotted the results of such a calculation in
Fig.~\ref{fig:probs_vary_dtheta}.
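Schematically, the simulations behind these figures proceed as in the
following sketch; the split-step convention $U=S_y R(\theta_2)\,S_x
R(\theta_1)$, the treatment of severed links as a local,
unitarity-preserving spin flip, the uniform distribution of the static angle
disorder and the initial spin state are assumptions of the sketch and may
differ in detail from our actual implementation. Averaging the recorded
curves over many disorder realisations yields the probabilities shown in
Fig.~\ref{fig:probs_vary_dtheta}.
\begin{verbatim}
import numpy as np

M = 10                                  # system of size (4M x 2M), as in Fig. 12
Lx, Ly = 4*M, 2*M
theta1, theta2, delta = 0.35*np.pi, 0.15*np.pi, 0.1*np.pi
A, B = (M, M), (3*M - 1, M - 1)         # initial site and absorbing point B
cut = slice(M, 3*M + 1)                 # columns whose vertical links are severed
rng = np.random.default_rng(0)
th1 = theta1 + delta*rng.uniform(-1, 1, (Lx, Ly))   # static disorder (uniform)
th2 = theta2 + delta*rng.uniform(-1, 1, (Lx, Ly))

def coin(psi, th):                      # R(theta) = exp(-i theta sigma_y)
    c, s = np.cos(th), np.sin(th)
    up, dn = psi[..., 0], psi[..., 1]
    return np.stack([c*up - s*dn, s*up + c*dn], axis=-1)

def shift_x(psi):                       # spin-up -> +x, spin-down -> -x (assumed)
    return np.stack([np.roll(psi[..., 0], +1, axis=0),
                     np.roll(psi[..., 1], -1, axis=0)], axis=-1)

def shift_y(psi):                       # as shift_x, but severed links reflect
    up, dn = psi[..., 0].copy(), psi[..., 1].copy()
    bu = up[cut, M-1].copy(); up[cut, M-1] = 0.0    # would cross the cut upwards
    bd = dn[cut, M].copy();   dn[cut, M]   = 0.0    # would cross the cut downwards
    up, dn = np.roll(up, +1, axis=1), np.roll(dn, -1, axis=1)
    dn[cut, M-1] += bu                  # reflected amplitude: spin flips,
    up[cut, M]   += bd                  # position stays (keeps the step unitary)
    return np.stack([up, dn], axis=-1)

psi = np.zeros((Lx, Ly, 2), complex)
psi[A[0], A[1], 0] = 1.0                # assumed spin-up initial state
P_arr, arrival, survival = 0.0, [], []
for t in range(10*M):
    psi = shift_y(coin(shift_x(coin(psi, th1)), th2))
    P_arr += np.sum(np.abs(psi[B[0], B[1], :])**2)  # absorb at point B
    psi[B[0], B[1], :] = 0.0
    psi[0, :, :] = psi[-1, :, :] = 0.0              # absorbing boundaries
    psi[:, 0, :] = psi[:, -1, :] = 0.0
    arrival.append(P_arr)
    survival.append(np.sum(np.abs(psi)**2))
\end{verbatim}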
\begin{figure}[tb]
\centering
\def \columnwidth {\columnwidth}
\includegraphics[width=\columnwidth]{Fig13a}
\includegraphics[width=\columnwidth]{Fig13b}
\caption{Top: Arrival and survival probabilities for
$\theta_1=0.35\pi, \theta_2=0.15\pi$, $M=30$ and different amounts
of rotation angle disorder $\delta$. Solid lines are cumulative
arrival probabilities at the point B and dashed lines are the
remaining wave function amplitudes, thus the probability of
survival up to time $t $. The solid and dashed lines of the same
colour correspond to the same system. We have averaged over 100
different disorder realisations. Bottom: Plot showing the arrival
probability as a function of time for different system sizes and
different disorder strengths. The time axis is scaled with the
system size. The curves for different system sizes collapse on one
another, showing that the propagation along the cut is
ballistic. The plot also shows that we may choose a system size of
$M=30 $ in order to further investigate the system.}
\label{fig:probs_vary_dtheta}
\end{figure}
One may obtain an overview of the behaviour as a function of
$\theta_1,\theta_2$ and $\delta$ by simply
looking at the total survival probability and the total arrival
probability for $t\gg 2M$, by which time a ballistically propagating
walker should have arrived at the final point B. This allows us to see
whether the transport along a conveyor
is efficient for a range of parameters.
\begin{figure}[tb]
\centering
\def \columnwidth {\columnwidth}
\includegraphics[width=\columnwidth]{Fig14}
\caption{The arrival probability at point B after 322 time steps,
averaged over 100 disorder configurations for $\theta_1=\frac\pi4$
as a function of $\theta_2$ and $\delta$. The system has the same
geometry as in Fig.~\ref{fig:wavfn_cut_init}, but is three times
larger, having $M=30$. In the plot the azimuthal angle represents
$\theta_2$ and the radius is related to $\delta$ by
$r=1-2\delta/\pi$, such that the largest possible value of
$\delta=\pi/2$ is taken at the centre at $r=0$, at which point
$\theta_1$ and $\theta_2$ are irrelevant. The black dashed line
marks the regime at which $\delta$ becomes large enough for both
types of topological invariants to be locally present in the
system. Beyond that line transport begins to be suppressed. The
magenta diamonds mark the points at which the group velocity
becomes too small for the walker to arrive within the simulation
time. }
\label{fig:surv_and_arr_prob_fn_th1_dth1}
\end{figure}
In Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1} we have plotted the
final arrival probability for $\theta_1=\frac\pi4$, different values
of $\theta_2$ and a range of disorder strengths. We see that if
disorder is strong enough, the ballistic transport along the defect is
suppressed, and thus no part of the walker arrives at point B.
A naive expectation is that disorder can start to affect the edge
states only if it is large enough so that different
topological invariants can be present in different parts of the system.
This occurs for
\begin{align}
\label{eq:criterion_of_delta_too_large}
\delta>\delta_{max}=
\begin{cases}
\abs{\frac12(\theta_2-\pi/4)} ,& \theta_2<\frac\pi2\\
\abs{\frac12(3\pi/4-\theta_2)} ,& \theta_2\ge\frac\pi2
\end{cases}
\end{align}
The curve $\delta_{max}(\theta)$ is plotted as the dashed black line
in Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1}: the numerical data
are more or less in agreement with the naive expectation.
The arrival probability also reduces to zero as $\theta_2$ approaches
$\theta_2=\frac\pi4$ and $\theta_2=\frac{3\pi}4$, independent of the
disorder. At the point $\theta_2=\frac\pi4=\theta_1$, we have
$\sin(\theta_1-\theta_2)=0$ and thus the group velocity along the
conveyor is zero, cf. Eq.~\eqref{eq:group_velocity}. Since the walker
has to traverse a distance of $2M$ and the simulation time only runs
up to $t_{max}$, the walker will not arrive if
$v<v_{crit}=2M/t_{max}$. For
Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1} $v_{crit}=0.19$. From
eq.~\eqref{eq:group_velocity} it then follows that the arrival
probability should be zero even in the clean limit when $\theta_2$ is
within a distance $\delta\theta_2^{crit}=0.06\pi$ of
$\theta_2=\frac\pi4$. These points are marked as magenta diamonds in
Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1}. This estimate agrees
well with the position at which the arrival probability vanishes
in Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1}.
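The two numbers quoted here follow from a short estimate (assuming the
clean-limit group velocity $v=|\sin(\theta_1-\theta_2)|$ of
Eq.~\eqref{eq:group_velocity}):
\begin{verbatim}
import numpy as np
M, t_max = 30, 322
v_crit = 2*M / t_max                 # walker must cover 2M sites in t_max steps
dtheta_crit = np.arcsin(v_crit)      # assuming v = |sin(theta1 - theta2)|
print(v_crit, dtheta_crit/np.pi)     # ~0.19 and ~0.06 (in units of pi)
\end{verbatim}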
Around the point
$\theta_2=\frac{3\pi}4$ on the other hand the group velocity does not
vanish. Instead, according to eq.~\eqref{eq:edge_penetration} the penetration depth of the edge state into the bulk
$\xi$ diverges. Thus
the overlap of the initial state of the quantum walk with the conveyor
vanishes, as initially the quantum walker is localised to a single
lattice site. Also the overlap of the conveyor state with the final
absorbing point disappears. Together with eq.~\eqref{eq:edge_penetration} this implies that the arrival
probability $P_\infty$ around $\theta_2=\frac{3\pi}4$ will vanish as
\begin{align}
P_\infty = 2 \,\delta\theta_z^2
\label{eq:P_arr_around_34pi}
\end{align}
where $\delta\theta_z=\theta_2-\frac{3\pi}4$. We have numerically
checked this behaviour for the clean system and find that
eq.~\eqref{eq:P_arr_around_34pi} provides a good fit without any
adjustable parameters. So we observe qualitatively quite different
behaviour around the points $\theta_2=\frac\pi4$ and
$\theta_2=\frac{3\pi}4$. For $\theta_2=\frac\pi4$, $P_\infty$ vanishes
abruptly and stays zero over a finite range of $\theta_2$, namely, between the
two magenta diamonds in
Fig.~\ref{fig:surv_and_arr_prob_fn_th1_dth1}. On the other hand,
$P_\infty$ vanishes gradually around $\theta_2=\frac{3\pi}4$ and is
only strictly zero at one point.
\section{Conclusions}
In this work we have shown that in the 2-dimensional split-step
discrete-time quantum walk, a cut on the underlying lattice creates a
transport channel for the walker that is robust against
time-independent disorder. The mechanism for the transport is given by
edge states that form in the vicinity of the cut. We derived
analytical formulas for some properties of the edge states, and found
the bulk topological invariant that predicts their emergence. This
invariant is the winding of the quasienergy\cite{rudner_driven}.
The edge states we found are resistant to a moderate amount of
time-independent disorder, but, as we have seen, above a certain
threshold they no longer exist. It is an interesting challenge to
study the details of this transition. In other words: how does
disorder destroy the topological phase? An important step in this
direction is understanding the effect of disorder on the 2DQW without
edges, our results on which are published
elsewhere\cite{jonathan_2014}.
There are quite promising perspectives for detecting the type of edge
states we found in quantum walk experiments. In fact, edge states due
to the Chern numbers have already been seen in a continuous-time
quantum walk experiment: there, the walker was a pulse of light
coupled into an array of waveguides etched into a block of dielectric,
a ``photonic topological insulator''\cite{rechtsman_photonic_2013}.
Modifying the pattern of the waveguides would allow for a direct
realization of the 2DQW. A more direct realization, which would also
allow the study of interactions, would be on ultracold atoms trapped
in an optical lattice\cite{alberti_electric_experiment}.
\begin{acknowledgments}
We acknowledge useful discussions with Mark Rudner, Carlo Beenakker
and Cosma Fulga.
We also acknowledge the use of the
Leiden computing facilities.
This research was realized in the frames of TAMOP
4.2.4. A/1-11-1-2012-0001 ''National Excellence Program -- Elaborating
and operating an inland student and researcher personal support
system'', subsidized by the European Union and co-financed by the
European Social Fund. This work was also supported by the Hungarian
National Office for Research and Technology under the contract
ERC\_HU\_09 OPTOMECH, by the Hungarian Academy of Sciences (Lend\"ulet
Program, LP2011-016), and by the Hungarian Scientific Research Fund
(OTKA) under Contract Nos. K83858 and NN109651.
This work was funded by NORDITA.
\end{acknowledgments}
\section{Introduction}
With its mass close to the electroweak scale, the top quark is very special. It might intimately
be connected to the underlying mechanism of electroweak symmetry breaking (EWSB). Consequently, studying
top-quark production and decays at colliders might provide a portal to New Physics (NP). The
Large Hadron Collider (LHC), providing proton--proton collisions currently at 13 TeV centre-of-mass energy,
can be seen as a top-quark factory. It allows one to search for anomalous top-quark production and decay processes,
considered as low energy modifications of the Standard Model (SM) parametrized by effective operators~\cite{Barger:2011pu,Choi:2012fc,Franzosi:2015osa,Zhang:2016omx,Englert:2016aei,Cirigliano:2016nyn}, or,
as the direct production of intermediate resonances, which have been hunted for a long time at different
experiments~\cite{Aaltonen:2009tx,Khachatryan:2015sma,Chatrchyan:2013lca}.
Heavy scalar resonances that decay into a pair of top quarks are predicted by several NP scenarios,
in particular the Two Higgs Doublet Model (THDM), supersymmetric theories and models of dynamical EWSB.
In this paper, we provide a framework to reinterpret the SM $t\bar{t}$ differential cross section measurements as exclusion limits for signatures of NP resonances decaying into $t\bar{t}$.
The framework relies on the comparison between particle-level data with state-of-the-art event simulation
and the interpretation of deviations in terms of NP models. It is based on four main ingredients
\begin{enumerate}
\item
A Monte Carlo event generator which allows the precise and realistic description of particle-level observables.
In order to theoretically describe top-quark pair production at the LHC, we make use of state-of-the-art
event simulations provided by the \textsc{Sherpa}~\cite{Gleisberg:2008ta} event-generator framework. This implies
the usage of techniques to match leading and next-to-leading order QCD matrix elements with parton showers
and merging different parton-multiplicity final states.
\item The precise measurement of SM processes from fiducial kinematical regions provided as differential
particle-level observables by LHC experiments, and available through the \textsc{Rivet}
package~\cite{Buckley:2010ar}. Here we used the ATLAS analyses of top-quark pair production in the
boosted~\cite{Aad:2015hna} and resolved~\cite{Aad:2015mbv} regimes.
\item A general parametrization of NP whose predictions for colliders can be computed efficiently.
We adopt a Lagrangian which describes scalar resonances that can be CP-even or odd and color singlet
or octet. We devise a \emph{reweighting} method to describe the model prediction in the $m(t\bar{t})$
distribution for a wide range of the parameter space in a fast and efficient manner.
\item A statistical interpretation to decide what regions of parameter space of the model are ruled
out at a given confidence level. We adopt here a simplified $\chi^2$ analysis.
\end{enumerate}
A similar method to constrain NP with SM measurements in several other channels has recently been
presented in Ref.~\cite{Butterworth:2016sqg}.
These approaches are complementary to model-specific searches in the respective final states.
They provide systematic methods for the theory community to derive more realistic exclusion limits
for any particular model, not relying on the experiment-specific assumptions.
In the rest of the paper we explain these four points in detail.
In Sec. II we describe the set-up of our event simulation. In Sec. III
we give details on the analyses used in the boosted and the resolved regime and validate our SM predictions
by comparing them to experimental data. In Sec. IV we introduce our simplified model of beyond the SM scalar
resonances and describe the implementation in our simulation framework, based on an event-by-event \emph{reweighting}.
In Sec. V we present a statistical analysis to assess the region in parameter space accessible by the LHC
experiments and provide interpretations in terms of some specific models. We finally conclude in Sec. VI.
\section{Simulation framework}
\label{sec:simulation}
When searching for imprints of resonant contributions in top-quark pair production at the LHC, a detailed
understanding of the SM production process is vital. In particular, as there are non-trivial interference
effects between NP signals and SM amplitudes that determine the shape of the resulting top-pair invariant-mass
distribution. In order to obtain realistic and reliable predictions for the top-pair production process,
we make use of state-of-the-art particle-level simulations, based on higher-order matrix elements matched
to parton-shower simulations and hadronization.
Our analysis focuses on observables in the semi-leptonic decay channel of top-quark pair production, i.e.
\begin{equation}
pp\to t\bar{t} \to b\bar{b}jj\ell\nu\text{ + jets}\,,
\label{eq:process}
\end{equation}
where $\ell$ denotes muons or electrons, $\nu$ the corresponding neutrinos, $b$ are bottom quarks and $j$
light quarks or gluons. These decay products and the associated radiation might be reconstructed as
well-separated objects, i.e. light-flavour jets, $b$-jets and a lepton, or, in the boosted regime, as a large-area jet,
containing the hadronic decay products, additional jets and a lepton. In either case, to realistically simulate
the associated QCD activity, higher-order QCD corrections need to be considered.
To describe the SM top-pair production process we use the \textsc{Sherpa}\ event-generation
framework~\cite{Gleisberg:2003xi,Gleisberg:2008ta}. We employ the techniques to match LO and NLO QCD matrix
elements to \textsc{Sherpa}'s dipole shower~\cite{Schumann:2007mg} and to merge processes of variable partonic
multiplicity~\cite{Hoeche:2009rj,Hoeche:2012yf}. Leading-order and real-emission correction matrix elements
are obtained from \textsc{Comix}~\cite{Gleisberg:2008fv}. Virtual one-loop amplitudes, contributing at NLO QCD, are
obtained from the \textsc{Recola}\ generator~\cite{Actis:2016mpe,Biedermann:2017yoi} that employs the \textsc{Collier}\
library~\cite{Denner:2016kdg}. Top-quark decays are modelled at leading-order accuracy through \textsc{Sherpa}'s
decay handler, that implements Breit-Wigner smearing for the intermediate resonances and preserves spin
correlations between production and decay~\cite{Hoche:2014kca}. We treat bottom-quarks as
massive in the top-quark decays and the final-state parton-shower evolution~\cite{Krauss:2016orf}.
To validate the SM predictions we also consider leading-order simulations in the \textsc{MadGraph\_aMC\@NLO} framework~\cite{Alwall:2014hca}.
The hard-process' partonic configurations get showered and hadronized through \textsc{Pythia8}~\cite{Sjostrand:2014zea}.
The spin-correlated decays of top quarks are implemented through the \textsc{MadSpin} package \cite{Artoisenet:2012st}.
Samples of different partonic multiplicity are merged according to the $k_T$-MLM prescription described in \cite{Alwall:2007fs}.
For the top-quark and $W$-boson, the following mass values are used
\begin{equation}
m_t=172\;{\rm GeV}\,,\quad m_W=80.39\;{\rm GeV}\,,
\end{equation}
and the corresponding widths are calculated at leading order, assuming for the remaining electroweak input parameters
$m_Z=91.19\;{\rm GeV}$ and $G_\mu=1.16637\times 10^{-5}\;{\rm GeV}^{-2}$. In the following section
we present a comparison of our simulated predictions against ATLAS measurements and discuss their systematics. Alongside, we give details on the QCD input parameters and calculational choices used there.
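For reference, the widths implied by these inputs follow from the standard
tree-level expressions, neglecting the $b$-quark mass, CKM mixing and
higher-order corrections (whether this reproduces the internal generator
calculation in every detail is an assumption):
\begin{verbatim}
import math

GF, mt, mW = 1.16637e-5, 172.0, 80.39        # GeV units
x = (mW / mt)**2
Gamma_t = GF*mt**3/(8*math.sqrt(2)*math.pi) * (1 - x)**2 * (1 + 2*x)
Gamma_W = 9 * GF*mW**3/(6*math.sqrt(2)*math.pi)  # 3 lepton + 2x3 quark channels
print(f"Gamma_t ~ {Gamma_t:.2f} GeV, Gamma_W ~ {Gamma_W:.2f} GeV")
\end{verbatim}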
\section{Analysis framework}
\label{sec:analysis}
In what follows we describe the event selections used to identify the top-quark pair-production
process, used later on to study the imprint of resonant NP contributions. Thereby,
we closely follow the strategies used by the LHC experiments. Our simulated events from
\textsc{Sherpa}\ and \textsc{MadGraph\_aMC\@NLO}\ are produced in the \textsc{HepMC} output format \cite{Dobbs:2001ck} and
passed to \textsc{Rivet}~\cite{Buckley:2010ar} where we implement our particle-level selections.
We consider two analyses, based on measurements performed using the ATLAS detector of the
differential $t\bar{t}$ production cross sections in proton-proton collisions at $\sqrt{s} = 8\,\text{TeV}$
with an integrated luminosity of $L=20.3\,{\rm fb}^{-1}$
\cite{Aad:2015hna,Aad:2015mbv}.
Both analyses select events in the \emph{leptons+jets} decay channel. The two measurements indicated
in the following as \emph{Resolved} and \emph{Boosted} are optimized for different regions
of phase space. The \emph{Boosted} analysis, cf. Ref.~\cite{Aad:2015hna}, is designed to
enhance the selection and reconstruction efficiency of highly-boosted top quarks with transverse
momentum $p_T > $ 300 GeV, that might originate from the decay of a heavy resonance with
mass $m> 600\,{\rm GeV}$. In such events the decay products of the hadronic top overlap, due to
the high Lorentz boost. Consequently, they cannot be reconstructed as three distinct jets. The
\emph{Resolved} analysis, based on Ref.~\cite{Aad:2015mbv}, measures the differential cross section
as a function of the full kinematic spectrum of the $t\bar{t}$ system and is useful to identify and
reconstruct rather light resonances.
The selection requirements are applied on leptons and jets at particle level, i.e. after hadronization.
In our simulated data we do not apply any detector resolution, i.e. smearing, effects. All the leptons used
in the analyses, i.e. $e, \mu, \nu_{e}$ and $\nu_{\mu}$ must not originate from hadrons, neither directly
nor through a $\tau$-lepton decay. In this way the leptons are guaranteed to originate from $W$-boson
decays without a specific matching requirement. The four-momenta of the charged leptons are modified by
adding the four-momenta of all photons found in a cone of $\Delta R$ = 0.1 around the leptons'
direction, thus representing dressed leptons. The missing transverse energy of the events ($E_T^{miss}$)
is defined from the four-vector sum of the neutrinos not resulting from hadron decays.
Jets are clustered using the anti-$k_{T}$ algorithm \cite{Cacciari:2008gp} with a radius of $R=0.4$ for
small-R jets and $R=1.0$ for the large-R jets, using all stable particles, excluding the selected
dressed leptons, as input. All small-R jets considered during the selections are required to have
$p_T > $ 25 GeV and $|\eta| < $ 2.5, while for large-R jets we demand $p_T > $ 300 GeV and $|\eta| < $ 2.
The small-R jets are considered $b$-tagged if a $b$-hadron with $p_T > $ 5 GeV is associated to the jet
through a ghost-matching procedure \cite{Cacciari:2008gn,Cacciari:2007fd}. To remove most of the
contribution coming from the interaction of the proton remnants, i.e. the underlying event, and to
reduce the dependence on the generator, large-R jets are groomed following a trimming procedure with
parameters $R_{sub} = 0.3$ and $f_{cut}$ = 0.05, for details of the procedure see Ref.~\cite{Krohn:2009th}.
Both the \emph{Resolved} and the \emph{Boosted} selections require a single lepton with $p_T > $ 25 GeV
and $|\eta| < $ 2.5. In the \emph{Resolved} analysis, apart from the leptons, the events are required to
have at least four small-R jets and at least two of them have to be $b$-tagged. In the \emph{Boosted}
analysis the events are required to have $E_T^{miss} > $ 20 GeV and $E_T^{miss}+m^W_{T}> 60\;{\rm GeV}$, with
$m^W_T = \sqrt{2 p_T^l E_T^{miss} (1 - \cos\Delta \phi)}$, the transverse mass of the leptonically decaying
$W$-boson, where $\Delta \phi$ denotes the azimuthal angle between the lepton and the $E_{T}^{miss}$ vector.
The presence of at least one small-R jet with $\Delta R({\rm lepton},\,\text{small-R jet}) < 1.5$ is required.
In case more than one jet fulfills this requirement, the jet with the higher $p_T$ is considered as the
jet originating from the leptonic top decay, dubbed the \emph{lep-jet} candidate. Furthermore, the
presence of a trimmed large-R jet with mass $m_j^{R=1.0}> 100$ GeV and $\sqrt{d_{12}} > 40$ GeV is required, where
$\sqrt{d_{12} }$ is the $k_t$ distance~\cite{Aad:2013gja,Butterworth:2002tt} between the two subjets in the
last step of the jet reclustering, \emph{i.e.} $\sqrt{d_{12}} = \min(p_{T1}, p_{T2})\,\Delta R_{1,2}$. If more
than one large-R jet fulfills these requirements the one with highest transverse momentum is considered as
the \emph{had-jet} candidate. The \emph{had-jet} candidate must furthermore satisfy certain kinematic
requirements: $\Delta \phi({\emph{had-jet}},\,{\rm lepton}) > 2.3$ and
$\Delta R({\emph{had-jet}},\,{\emph{lep-jet}}) > 1.5$. The final requirement in the \emph{Boosted} selection
is that at least one $b$-tagged jet in $\Delta R ({\emph{had-jet}},\,{\rm jet}) < 1$ is found or
that the \emph{lep-jet} candidate is $b$-tagged. The \emph{Resolved} and \emph{Boosted} event selections
are summarized in Tab.~\ref{tab:cuts}.
\setlist{nolistsep}
\begin{table}[h!]
\centering
\begin{tabular}{p{0.33\textwidth}|p{0.33\textwidth}p{0.33\textwidth}}
\toprule
\multicolumn{3}{c} {event selections} \\
\hline
\multicolumn{3}{c} {Exactly one lepton ($\mu$ or $e$) with $p_T > 25$ GeV and $|\eta| < 2.5$ } \\
\hline
\emph{Resolved analysis} & \multicolumn{2}{l}{\emph{Boosted analysis}} \\
\vphantom{nulla} & \multicolumn{2}{l}{$E_T^{miss} > 20$ GeV and $E_T^{miss}+m^W_{T} > $ 60 GeV} \\
$\ge$ 4 small-R jet:
\begin{itemize}[label={-}]
\item $p_T > 25$ GeV, $|\eta| < $ 2.5
\end{itemize}
& \multicolumn{2}{p{0.66\textwidth}}{$\ge$ 1 large-R jet:
\begin{itemize}[label={-}]
\item $p_T > 300$ GeV, $|\eta| < $ 2
\item $\sqrt{d_{12}} > $ 40 GeV
\item $m_j^{R=1.0} > 100$ GeV
\item $\Delta \phi($ large-R jet, lepton$) > 2.3 $
\end{itemize}}\\
& \multicolumn{2}{p{0.66\textwidth}}{$\ge$ 1 small-R jet:
\begin{itemize}[label={-}]
\item $p_T > 25$ GeV, $|\eta| < $ 2.5
\item $\Delta R ($lepton, small-R jet$) < 1.5$
\item $\Delta R($small-R jet, large-R jet$) > 1.5$
\end{itemize}}\\
$\ge$ 2 $b$-tagged jets & \multicolumn{2}{p{0.66\textwidth}}{$\ge$ 1 $b$-tagged jet:
\begin{itemize}[label={-}]
\item $\Delta R ($large-R jet, b-tagged jet$) < $ 1 or,
\item the small-R jet is $b$-tagged.
\end{itemize}}\\
\hline\hline
\end{tabular}
\caption{Event selections applied in the \emph{Resolved} and \emph{Boosted} analyses.}
\label{tab:cuts}
\end{table}
For the selected events the $t\bar{t}$ system is reconstructed based on the event topology:
\begin{itemize}
\item \textbf{Resolved analysis:} The leptonic top is reconstructed using the $b$-tagged jet nearest in $\Delta R$ to the lepton
and the missing-momentum four vector, the hadronic top is reconstructed using the other $b$-tagged jet and the
two light jets with invariant mass closest to the $W$ mass.
\item \textbf{Boosted analysis:} The leptonic top is reconstructed using the \emph{lep-jet} candidate, the lepton
and the missing-momentum four vector, the \emph{had-jet} candidate is directly considered as the hadronic top.
\end{itemize}
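As an illustration of the pairing logic in the \emph{Resolved} case, the
reconstruction can be sketched as follows. This is a minimal Python sketch
operating on four-vectors, not the actual \textsc{Rivet}\ analysis code; it
assumes exactly two $b$-tagged jets and uses the missing-momentum
four-vector as the neutrino candidate.
\begin{verbatim}
import itertools
import numpy as np

def p4(pt, eta, phi, m):
    px, py, pz = pt*np.cos(phi), pt*np.sin(phi), pt*np.sinh(eta)
    return np.array([np.sqrt(m*m + px*px + py*py + pz*pz), px, py, pz])

def minv(p):
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

def delta_r(a, b):
    eta = lambda p: np.arcsinh(p[3]/np.hypot(p[1], p[2]))
    phi = lambda p: np.arctan2(p[2], p[1])
    dphi = (phi(a) - phi(b) + np.pi) % (2*np.pi) - np.pi
    return np.hypot(eta(a) - eta(b), dphi)

def resolved_tops(lepton, met, bjets, lightjets, mW=80.39):
    # leptonic top: lepton + missing momentum + b-jet closest to the lepton
    b_lep = min(bjets, key=lambda b: delta_r(b, lepton))
    t_lep = lepton + met + b_lep
    # hadronic top: the other b-jet + light-jet pair with mass closest to m_W
    b_had = next(b for b in bjets if b is not b_lep)
    j1, j2 = min(itertools.combinations(lightjets, 2),
                 key=lambda jj: abs(minv(jj[0] + jj[1]) - mW))
    return t_lep, b_had + j1 + j2

# example with dummy kinematics (GeV)
lep = p4(40., 0.3, 0.1, 0.0)
met = p4(35., -0.2, 2.8, 0.0)
bs  = [p4(80., 0.5, 0.4, 5.0), p4(95., -0.9, -2.6, 5.0)]
ljs = [p4(60., -1.0, -2.0, 1.0), p4(45., -0.6, 2.9, 1.0), p4(30., 1.4, -1.2, 1.0)]
t1, t2 = resolved_tops(lep, met, bs, ljs)
print(minv(t1 + t2))                      # reconstructed m(ttbar)
\end{verbatim}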
In order to validate our simulations of SM top-quark pair-production we compare our predictions
against ATLAS data for the \emph{Boosted} and \emph{Resolved} selection, supplemented by studies of systematic
variations. To begin with, we check the impact of the grooming procedure on the reconstructed hadronic-top
candidate mass, \emph{i.e.} the mass of the \emph{had-jet} candidate in the \emph{Boosted} event selection.
We consider event samples from \textsc{Sherpa}\ and \textsc{MadGraph\_aMC\@NLO}, based on the leading-order matrix element for top-quark
pair production, labelled as $0j$. In these calculations, \emph{i.e.} without merging-in higher-multiplicity matrix elements,
we set the renormalization ($\mu_R$) and factorization scale ($\mu_F$) to
\begin{equation}
\mu_R^2=\mu_F^2 = \frac14 [ m_t^2 + \frac12 (p_{T,t}^2 + p_{T,\bar{t}}^2) ]\,,
\end{equation}
with $p_{T,t}$ ($p_{T,\bar{t}}$) the transverse momentum of the decaying (anti) top quark.
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{h_hadtop_diffXsec_untrim_mass_sum.pdf}
\includegraphics[width=0.49\textwidth]{h_hadtop_diffXsec_trim_mass_sum.pdf}
\caption{Invariant mass distribution of the hadronic-top candidates in the \emph{Boosted} event selection.
The theoretical predictions from \textsc{Sherpa}\ and \textsc{MadGraph\_aMC\@NLO}+\textsc{Pythia}8 are based on LO matrix elements
dressed with parton showers, left panel without and right panel with applying the trimming procedure.}
\label{fig:Trimming}
\end{figure}
In Fig.~\ref{fig:Trimming} we present the resulting invariant-mass distributions obtained from \textsc{Sherpa}\ and \textsc{MadGraph\_aMC\@NLO}
before and after applying the grooming procedure. Comparing the untrimmed distributions (left panel) both samples
exhibit a clear peak at the nominal top-quark mass. However, due to parton-shower radiation and non-perturbative
corrections from hadronization and underlying event the peak is rather broad and sizeable differences are observed
when comparing the predictions from \textsc{Sherpa}\ and \textsc{MadGraph\_aMC\@NLO}+\textsc{Pythia}8. Note that the uncertainty bands shown represent
the statistical uncertainty of the samples only. When applying the trimming procedure to the \emph{had-jet} candidates
the mass distributions agree to a much better degree, both in the tails of the distribution and the peak region. Therefore,
trimming of the large-R jets significantly reduces the dependence on the generator and the details of its parton-shower
formalism and the modelling of non-perturbative effects.
In Figs.~\ref{fig:scaleUncertaintyLO} and \ref{fig:scaleUncertaintyNLO} we compare predictions from \textsc{Sherpa}\ based on
LO and NLO matrix elements against data measured by the ATLAS experiment for the \emph{Boosted} (left panels) and the
\emph{Resolved} (right panels) event selections. For the MEPS@LO sample we merge LO QCD matrix elements for
$t\bar{t}+0,1,2,3$jet production dressed with the \textsc{Sherpa}\ dipole parton shower~\cite{Hoeche:2009rj}. The merging-scale
parameter is set to $Q_{\rm cut}=20\,{\rm GeV}$. The MEPS@NLO sample combines QCD matrix elements at NLO for
$t\bar{t}+0,1$jet and $t\bar{t}+2,3$jets at LO according to the methods described in
\cite{Hoche:2010kg,Hoeche:2012yf}, again using a merging scale of $Q_{\rm cut}=20\,{\rm GeV}$. Both methods share
the event-wise reconstruction of an underlying $jj\to t\bar t$ core process through consecutive clusterings of the
external legs. For this reconstructed core process the renormalization and factorization scales are set
to $\mu_R=\mu_F=\mu_{\rm core}$, with
\begin{equation}
\mu^2_{\rm core} = \frac14 [ m_t^2 + \frac12 (p_{T,t}^2 + p_{T,\bar{t}}^2) ]\,.
\end{equation}
For the reconstructed clusterings the strong coupling is evaluated at the respective splitting scale. The
scale $\mu_{\rm core}$ is furthermore used as the resummation, \emph{i.e.} parton-shower starting scale, denoted $\mu_Q$. To
assess the scale uncertainty of the predictions we perform variations by common factors of $2$ and $1/2$ for
the core scale and the local splitting scales, using the event-reweighting technique described in
\cite{Bothmann:2016nao}. In the figures the resulting uncertainty estimate is represented by the red band, while
the blue band indicates the statistical uncertainty.
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{h_hadtop_diffXsec_pt_sum_MEPS_LO_uncertainty.pdf}
\includegraphics[width=0.49\textwidth]{mtt_8TeV_MEPS_LO_uncertainty.pdf}
\caption{Comparison of predictions based on \textsc{Sherpa}\ MEPS@LO simulations to data measured
by the ATLAS experiment. The left panel shows the $p_T$ of the hadronic top in the \emph{Boosted} selection,
data taken from ~\cite{Aad:2015hna}.
In the right panel the reconstructed invariant mass of the $t\bar{t}$ system in the \emph{Resolved} event selection is
depicted, with data taken from~\cite{Aad:2015mbv}.}
\label{fig:scaleUncertaintyLO}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{h_hadtop_diffXsec_pt_sum_MEPS_NLO_uncertainty.pdf}
\includegraphics[width=0.49\textwidth]{mtt_8TeV_MEPS_NLO_uncertainty.pdf}
\caption{As Fig.~\ref{fig:scaleUncertaintyLO} but based on \textsc{Sherpa}\ MEPS@NLO simulations.}
\label{fig:scaleUncertaintyNLO}
\end{figure}
For the boosted-top selection we show the transverse-momentum distribution of the hadronic-top candidate in the left
panels of Figs.~\ref{fig:scaleUncertaintyLO} and \ref{fig:scaleUncertaintyNLO}, respectively. Notably, both samples,
\emph{i.e.} the MEPS@LO and the MEPS@NLO prediction, describe the ATLAS measurement \cite{Aad:2015hna} very well, both in
terms of the production rate and in particular concerning the shape of the distribution. For the MEPS@LO result the
scale uncertainty is quite significant, reaching up to 50\%. However, the dominant effect is a mere rescaling of the
total production rate; the shape of the distribution stays almost unaltered. This is also observed for the MEPS@NLO
sample, however, the scale uncertainty reduces to $\pm20\%$.
For the resolved-decay selection we compare the \textsc{Sherpa}\ MEPS@(N)LO predictions for the reconstructed invariant mass
of the $t\bar t$ system against data from the ATLAS experiment~\cite{Aad:2015mbv}, see right panels of
Figs.~\ref{fig:scaleUncertaintyLO} and \ref{fig:scaleUncertaintyNLO}. Note that the data and the theoretical predictions
are normalized to their respective fiducial cross section. The MEPS@LO and MEPS@NLO results agree very well with the data.
For this normalized distribution the scale uncertainties largely cancel. For the MEPS@LO sample this results in an
uncertainty estimate of $\pm 2\%$. For the MEPS@NLO sample the shape modifications induced by the scale variations
amount to $\pm 5\%$.
For both observables considered, the MEPS@(N)LO predictions from \textsc{Sherpa}\ yield a very satisfactory description of
the data. No significant alteration of the distributions shape is observed upon inclusion of the QCD one-loop
corrections in the MEPS@NLO sample. However, in particular the uncertainty on the production rate reduces significantly.
For the normalized top-pair invariant mass distribution we consider the more realistic $\pm 5\%$ estimate from the
MEPS@NLO calculation. By normalizing the distribution to the cross section in a certain mass window, this
uncertainty might in fact be reduced further, cf. Ref.~\cite{Czakon:2016vfr}, where, ultimately, an uncertainty
estimate of ${\cal{O}}(1\%)$ was quoted for the corresponding NNLO QCD prediction.
In what follows we want to study the imprint of New Physics resonant contributions on the top-pair invariant
mass distribution. To this end we currently rely on a leading-order description of the signal, interfering with the
corresponding SM amplitudes. However, from the considerations above we can conclude that the MEPS@LO calculation
of the SM production process captures the dominant QCD corrections, which are of real-radiation type. To illustrate
this further, we present in Fig.~\ref{fig:NLOxLO} a comparison of MEPS@LO samples using different parton-multiplicity
matrix elements for the mass and the transverse momentum of the $t\bar{t}$ system in the \emph{Boosted} selection.
These results get compared to the corresponding MEPS@NLO prediction described above.
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{h_ttbar_diffXsec_mass_sum.pdf}
\includegraphics[width=0.49\textwidth]{h_ttbar_diffXsec_pt_sum.pdf}
\caption{Comparison of MEPS@LO predictions based on different maximal parton-multiplicity matrix elements and the
MEPS@NLO calculation for the \emph{Boosted} event selection. The left panel shows the top-pair invariant mass,
the right panel the transverse momentum of the $t\bar{t}$ system. }
\label{fig:NLOxLO}
\end{figure}
For the top-pair invariant mass all the predictions with at least one extra hard jet agree within their statistical errors. In particular, even the
MEPS@LO sample, based on merging the LO matrix elements for $t\bar{t}+0,1$jet only, reproduces the MEPS@NLO result well and greatly improves on the 0jet sample.
As might be expected, for the transverse momentum of the $t\bar{t}$ system, the inclusion of higher-multiplicity
matrix elements improves the agreement with the MEPS@NLO result. The MEPS@LO calculation based on $t\bar{t}+0,1$jet
predicts a somewhat softer spectrum, \emph{i.e.} is lacking configurations corresponding to multiple hard emissions.
However, the bulk of the events in the \emph{Boosted} selection is reasonably modeled by this simple LO merging
setup and describes the data presented above very well. We will therefore rely on this setup when invoking New
Physics contributions.
In the following we also introduce a simple \emph{Parton Analysis}, used to quantify the effect of the NP without
any smearing due to the reconstruction of the top quarks. In the \emph{Parton Analysis} no cuts are applied to the
events and the two top quarks are identified, before any decay, using truth-level information from the generator.
\section{Simplified model}
\label{sec:model}
Several models of NP predict resonances decaying to top-quarks. Scalar resonances in particular have
large branching ratios in this decay channel due to the fact that their couplings with fermions are often
proportional to the fermion masses. In this case, the resonance is dominantly produced at the LHC via
gluon fusion through loops of colored particles. These colored particles can be either light compared
to the resonance (like the top quark itself), in which case the structure of the loop is resolved as
illustrated in Fig.~\ref{fig:diag}(a), or they can be heavy, in which case a point-like interaction
sketched in Fig.~\ref{fig:diag}(b) can describe the interactions.
\begin{figure}
\includegraphics[width=0.8\textwidth]{signal-diagrams.pdf} \\
(a) \hspace{6.5cm} (b)
\caption{Feynman diagrams for production of a scalar resonance with subsequent decay into top-quarks, mediated by a \emph{resolved}
loop (a) or via high-scale New Physics (b).}
\label{fig:diag}
\end{figure}
It has been shown in~\cite{Manohar:2006ga} that the most general scalar extension of the SM which couples to fermions and maintains
naturally small flavour changing neutral currents is provided by scalars with the same quantum numbers of the Higgs doublet
or that transform as a color octet $(\bf 8, 2)_{1/2}$ under the SU(3)$\times$SU(2)$\times$U(1) SM gauge group.
Color neutral and octet scalars arise also naturally in several models of dynamical EWSB, such as in the seminal
Farhi-Susskind model~\cite{Farhi:1980xs} and models where the top is partially composite~\cite{Belyaev:2016ftv}.
Although the specific origin of the scalar-top couplings is important, since it determines the relation to other couplings and their magnitudes, we here adopt a more phenomenological, simplified approach relevant for top-quark pair production, in which the \emph{left-handed} top is stripped from its doublet and couples directly to the scalars.
In our simplified model we assume the only \emph{light} state \emph{running} in the loop to be the top-quark.
This is a good approximation if two conditions are fulfilled: (i) the bottom-quark contribution is suppressed;
and (ii) the extra states contributing significantly to the gluon--scalar couplings are heavy
(at least as heavy as the scalar resonance itself). These conditions hold in many models beyond the SM.
In the THDM~\cite{Branco:2011iw}, for example, there are no new particles at higher scales apart from the extended
scalar sector. Moreover, the bottom-quark loop is usually suppressed in the cases relevant for $t\bar{t}$ production.
Specializations of the THDM such as the Minimal Supersymmetric Standard Model (MSSM) where the super-partners are heavy enough to be integrated out can also be described in this framework.
Composite models typically predict relatively degenerate spectra of first excitations, thus they can usually be described by the effective point-like interaction.
Similarly, for the color octet in the model of Manohar and Wise~\cite{Manohar:2006ga} the scalars are produced purely by top and bottom loops.
In some other models intermediate states much lighter than the first scalar excitations are present, \emph{e.g.} top partners and stops may be light in some models of partial compositeness and SUSY -- in these cases our approximation is not applicable.
Under this assumption we can describe the scalar sector interactions relevant for $t\bar{t}$ production via the following Lagrangian:
\begin{eqnarray}
\mathcal{L}_\phi &=& i c^\eta_t\frac{m_t}{v} \bar{t}\gamma_5 t \eta + c^\sigma_t \frac{m_t}{v} \bar{t} t \sigma
+ i c^{\tilde{\eta}}_t\frac{m_t}{v} \bar{t}\gamma_5 \frac{\lambda^a}{2} t \tilde{\eta}^a
+ c^{\tilde{\sigma}}_t\frac{m_t}{v} \bar{t} \frac{\lambda^a}{2} t {\tilde{\sigma}}^a \nonumber \\
&+& c_g^\sigma \frac{\alpha_S}{12\pi v}\sigma G^a_{\mu\nu}G^{a\mu\nu}
- c_g^\eta \frac{\alpha_S}{8\pi v}\eta G^a_{\mu\nu}\widetilde{G}^{a\mu\nu} \nonumber \\
&-& c_g^{\tilde{\eta}} \frac{\alpha_S}{8\pi v}\, d^{abc}\, \tilde{\eta}^a G^{b}_{\mu\nu} \widetilde{G}^{c\mu\nu}
+ c_g^{\tilde{\sigma}} \frac{\alpha_S}{12\pi v}\, d^{abc}\, {\tilde{\sigma}}^a G^{b}_{\mu\nu} G^{c\mu\nu} \,.
\label{eq:ygg}
\end{eqnarray}
It contains a CP-odd isosinglet scalar $\eta$, a CP-even isosinglet scalar $\sigma$, a CP-odd color octet scalar $\tilde{\eta}$ and a CP-even octet scalar $\tilde{\sigma}$ which we collectively call $\phi$.
$G^{\mu\nu}$ is the gluon field-strength tensor, $\widetilde{G}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}G_{\rho\sigma}$, $\lambda^a$ are the SU(3) generators and $d^{abc}=\frac{1}{4}{\rm Tr}[\lambda^a\{\lambda^b,\lambda^c\}]$ is the fully symmetric SU(3) tensor.
The top-quark loops generate form factors that describe the gluon-scalar interaction.
The loop triangles contribute to the trilinear $gg\phi$ vertices in the form
\begin{eqnarray}
\eta g_a^\mu(k_1) g_b^\nu(k_2) &:&\quad \frac{\alpha_S c_t^\eta}{2\pi v} A^{A}_{1/2}\left(\frac{s}{4m_t^2}\right)
\epsilon^{\mu\nu\lambda\sigma}k_{1\lambda} k_{2\sigma}\delta_{ab} \,,\\
\sigma g_a^\mu(k_1) g_b^\nu(k_2) &:&\quad \frac{\alpha_S c_t^\sigma}{3\pi v} A^{S}_{1/2}\left(\frac{s}{4m_t^2}\right)
(k_1^\nu k_2^\mu-k_1\cdot k_2 g^{\mu\nu})\delta_{ab} \,,\label{eq:formfactors}
\end{eqnarray}
with
\begin{eqnarray}
A^{A}_{1/2}(\tau)&=&f(\tau)/\tau \,, \label{eq:AA}\\
A^{S}_{1/2}(\tau)&=&\frac{3}{2\tau^2}\left(\tau+(\tau-1)f(\tau)\right) \,, \label{eq:AS}\\
f(\tau) &=&
\begin{cases}
\text{arcsin}^2(\sqrt{\tau}), & \quad \tau\leq 1 \\
-\frac{1}{4}\left[\log\left(\frac{1+\sqrt{1-\tau^{-1}}}{1-\sqrt{1-\tau^{-1}}} \right)-i\pi \right]^2, & \quad \tau>1\,.
\end{cases}
\label{eq:ftau}
\end{eqnarray}
Similar expressions for the color octet top-quark loop generated form factor can be found \emph{e.g.} in~\cite{Hayreter:2017wra}.
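In practice the loop functions of Eqs.~\eqref{eq:AA}--\eqref{eq:ftau} are
evaluated numerically; a minimal sketch reads (the heavy-top limits
$A^{A}_{1/2},A^{S}_{1/2}\to 1$ provide a simple cross-check):
\begin{verbatim}
import numpy as np

def f(tau):
    # Eq. (f(tau)); tau = s/(4 m_t^2)
    if tau <= 1.0:
        return np.arcsin(np.sqrt(tau))**2
    beta = np.sqrt(1.0 - 1.0/tau)
    return -0.25*(np.log((1.0 + beta)/(1.0 - beta)) - 1j*np.pi)**2

def A_A(tau):                       # pseudo-scalar form factor
    return f(tau)/tau

def A_S(tau):                       # scalar form factor
    return 1.5/tau**2 * (tau + (tau - 1.0)*f(tau))

print(A_A(1e-4), A_S(1e-4))         # both -> 1 in the heavy-top limit
print(A_A(4.0), A_S(4.0))           # complex above the t tbar threshold
\end{verbatim}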
As a matter of fact, resonant top pair production is accompanied by other signatures. In particular, diphoton,
dijet, $\gamma Z$, $ZZ$ and $W^+W^-$ signatures are generated via diagrams induced by a top-quark loop, and in
general by high-scale physics. Tree-level $ZZ$, $W^+ W^-$ decay channels are typically present for a scalar state,
while decays into lighter fermions are typically suppressed. Color octets decays into $g\gamma$ and $g Z$ might
give striking signatures. The detailed analysis of these channels is not in the scope of this work, however, we
provide some qualitative discussion about the regions in parameter space where they can be competitive in
sensitivity to $t\bar{t}$ search.
Loop (or anomaly) induced decays are typically suppressed and might be competitive to $t\bar{t}$ searches only
for small Yukawa couplings $c_t$. They are often the only possible decay channels for pseudo-scalars besides
that into $t\bar{t}$. As an example, consider some partial widths of a color-singlet pseudo-scalar
\begin{eqnarray}
\Gamma_{\eta\to t\bar{t}} &=& \frac{3}{8\pi}
\frac{m_t^2}{v^2}(c^\eta_t)^2\, m_{\eta}\sqrt{1-4 m_t^2/m_{\eta}^2} \,, \\
\Gamma_{\eta \to gg} &\simeq& \frac{\alpha_s^2 m_{\eta}^3}{32 \pi^3 v^2}
\left| c_t^\eta A^{A}_{1/2}\left(\frac{m_\eta^2}{4m_t^2}\right)+ c_g^\eta \right|^2 \,,\\
\label{eq:gamma-eta-gg}
\Gamma_{\eta \to \gamma\gamma} &\simeq& \frac{\alpha^2 m_{\eta}^3}{256 \pi^3 v^2}
\left| c_t^\eta 3(2/3)^2 A^{A}_{1/2}\left(\frac{m_\eta^2}{4m_t^2}\right)+ c_\gamma^\eta \right|^2\,.
\label{Eq:partialwidths}
\end{eqnarray}
Here we parametrize the photon interaction with $\eta$ by the following gauge invariant operators
\begin{equation}
\mathcal{L}_{\phi,\gamma} = - c_W^\eta \frac{\alpha}{8\pi v}\eta W^i_{\mu\nu}\widetilde{W}^{i\mu\nu} - c_B^\eta \frac{\alpha}{8\pi v}\eta B_{\mu\nu}\widetilde{B}^{\mu\nu}\,,
\end{equation}
with $c_\gamma^\eta\equiv c_W^\eta+c_B^\eta$.
These operators also give rise to decays into weak bosons, which are, however, not competitive in
sensitivity with diphoton searches (unless there is some cancellation in $c_W+c_B$).
From the above expressions it can be noticed that the $gg$ partial width is much larger than the $\gamma\gamma$ one;
nevertheless, the corresponding dijet search is less competitive than the diphoton channel due to
the clean signature of the latter.
On the other hand, scalar resonances tend to decay into weak bosons at tree level, with
large contributions to their decay width and good sensitivity in the corresponding channels.
The color octets have more unexplored signatures, like \emph{e.g.} $g\gamma$, studied for example in Ref.~\cite{Aad:2015ywd,Aaboud:2017nak}.
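To get a feeling for the interplay of these channels, the partial widths of
the color-singlet pseudo-scalar listed above can be evaluated directly. The
sketch below does this for one benchmark point; the numerical values of
$\alpha_s$ and $\alpha$, as well as the chosen mass and couplings, are
illustrative choices and not part of the analysis performed later.
\begin{verbatim}
import numpy as np

v, mt = 246.0, 172.0
alps, alpha = 0.10, 1.0/127.0        # illustrative values

def f(tau):
    if tau <= 1.0:
        return np.arcsin(np.sqrt(tau))**2
    b = np.sqrt(1.0 - 1.0/tau)
    return -0.25*(np.log((1.0 + b)/(1.0 - b)) - 1j*np.pi)**2

A_A = lambda tau: f(tau)/tau

def widths(m, ct, cg=0.0, cgam=0.0):
    tau = m**2/(4.0*mt**2)
    G_tt = 3.0/(8.0*np.pi)*(mt/v)**2*ct**2*m*np.sqrt(1.0 - 4.0*mt**2/m**2)
    G_gg = alps**2*m**3/(32.0*np.pi**3*v**2)*abs(ct*A_A(tau) + cg)**2
    G_aa = (alpha**2*m**3/(256.0*np.pi**3*v**2)
            * abs(ct*3*(2.0/3.0)**2*A_A(tau) + cgam)**2)
    return G_tt, G_gg, G_aa

Gtt, Ggg, Gaa = widths(m=750.0, ct=1.0)
tot = Gtt + Ggg + Gaa
print([round(G/tot, 4) for G in (Gtt, Ggg, Gaa)])   # t tbar dominates for c_t ~ 1
\end{verbatim}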
\subsection{Model description and simulation}
Our goal is to achieve accurate predictions for a wide parameter range of our generic model in an efficient and fast way.
For this purpose, the Lagrangian given in \eq{eq:ygg} has been implemented into the \textsc{FeynRules}~\cite{Alloul:2013bka} package to produce
a corresponding UFO model file~\cite{Degrande:2011ua}.
The required helicity amplitudes have been extracted to C++ codes via the Madgraph~\cite{Alwall:2011uj} program and incorporated in the \textsc{Rivet}\, analyses in order to perform a \emph{reweighting} method and reproduce the signal line-shape.
To this end, each event of the \textsc{Sherpa}\ SM event sample is given a weight, $w$, proportional to the ratio of the amplitudes,
\begin{equation}
w=\frac{\overline{|\mathcal{M}_{\rm SM}+\mathcal{M}_\phi|^2}}{\overline{|\mathcal{M}_{\rm SM}|^2}},
\end{equation}
where $\overline{|\mathcal{M}_{\rm SM}|^2}$ is the SM amplitude squared summed and averaged over color and spin. In the numerator the amplitude $\mathcal{M}_\phi$ corresponding to the resonant diagrams depicted in Fig.~\ref{fig:diag} is added on top of the SM diagrams. The subsequent decays of the top quarks are included, neglecting non-resonant diagrams. Therefore, the full process in \eq{eq:process} -- including possible extra hard radiation -- is considered with full spin correlation of the top-quark decays.
We note that our signal includes not only the purely resonant contribution. The complete squared amplitude can be split into three contributions:
\begin{equation}
|\mathcal{M}_{\rm SM}+\mathcal{M}_\phi|^2=
|\mathcal{M}_{\rm SM}|^2+|\mathcal{M}_{\phi}|^2
+2{\rm Re}\, \mathcal{M}_{\rm SM}^*\mathcal{M}_{\phi}
\equiv B_\mathcal{M}+S_\mathcal{M}+I_\mathcal{M}\,.
\end{equation}
The last term defines the SM background ($B_\mathcal{M}$), the pure signal ($S_\mathcal{M}$) and the interference between signal and SM ($I_\mathcal{M}$).
We use as the test observable the
$m(t\bar{t})$ distribution of the signal hypothesis $H$ normalized bin-by-bin to the SM QCD prediction,
\begin{equation}
r(H)\equiv\frac{d\sigma_H/dm}{d\sigma_{\rm SM}/dm}\,.
\label{eq:r}
\end{equation}
The signal hypothesis differential cross section $d\sigma_H/dm$ is defined as the total differential cross section subtracted by the SM prediction.
Such a normalized distribution is less affected by systematic errors, \emph{i.e.} theoretical uncertainties~\cite{Czakon:2017wor}.
In order to assess the importance of the interference we study both the full signal including interference $d\sigma_{S+I}/dm$ and the pure signal hypothesis neglecting interference $d\sigma_S/dm$.
To simplify the notation in the remainder of the text we use the following definitions:
\begin{equation}
d\sigma_S/dm\equiv S_\sigma,\quad d\sigma_I/dm\equiv I_\sigma, \quad d\sigma_{\rm SM}/dm\equiv B_\sigma\,.
\end{equation}
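Schematically, the event-by-event weights and the construction of $r$
amount to the following bookkeeping. In this toy sketch the amplitudes are
Breit-Wigner stand-ins, whereas in the actual analysis $\mathcal{M}_{\rm
SM}$ and $\mathcal{M}_\phi$ are evaluated by the exported helicity-amplitude
code for the kinematics of each event.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

# toy stand-ins for the event kinematics and amplitudes
m = 350.0 + rng.exponential(250.0, size=200000)          # toy m(ttbar) sample
M_sm = np.ones_like(m, dtype=complex)                    # flat stand-in amplitude
m_phi, w_phi, g = 1500.0, 45.0, 0.4                      # toy resonance parameters
M_phi = g*m_phi**2/(m**2 - m_phi**2 + 1j*m_phi*w_phi)    # Breit-Wigner stand-in

w_full  = np.abs(M_sm + M_phi)**2/np.abs(M_sm)**2                # (B+S+I)/B
w_noint = (np.abs(M_sm)**2 + np.abs(M_phi)**2)/np.abs(M_sm)**2   # (B+S)/B

bins = np.linspace(350.0, 2500.0, 44)
B,  _ = np.histogram(m, bins)
SI, _ = np.histogram(m, bins, weights=w_full)
S,  _ = np.histogram(m, bins, weights=w_noint)

r_SI = (SI - B)/np.maximum(B, 1)    # r(S+I)
r_S  = (S  - B)/np.maximum(B, 1)    # r(S), interference neglected
\end{verbatim}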
Interference between signal (Fig.~\ref{fig:diag}) and QCD diagrams are known to be important in this process. In fact,
they can completely change the line-shape of the resonance from a pure Breit-Wigner peak to a peak-dip structure, or
even dip-peak, pure dip or an enhanced peak~\cite{Gaemers:1984sj,Dicus:1994bm,Frederix:2007gi,Gori:2016zto,Jung:2015gta,Djouadi:2016ack}. QCD corrections to this effect have recently been computed~\cite{Bernreuther:2015fts,Hespel:2016qaf,BuarqueFranzosi:2017jrj} and shown to be important.
A pilot experimental analysis investigating such interference effects has been presented recently~\cite{Aaboud:2017hnm}.
The form factors in \eq{eq:formfactors} have been implemented in the helicity amplitudes used in the reweighting step.
However, the corresponding box diagram contributing to the four-gluon--scalar coupling was kept as an effective vertex without
momentum dependence. For the color octet the form factor is approximated by a fixed momentum flowing through the loop
that is equal to the mass of the resonance. The interference between top-quark loops and point-like interactions is also
manifest in the calculation.
Higher-order QCD corrections are partially taken into account through the radiation of extra gluons in the MEPS@LO simulation.
The contribution from real-emission $t\bar{t}j$ matrix elements also gets reweighted with the NP theory hypotheses.
We note, however, that the method neglects the signal's color-singlet
color-flow configuration when attaching parton showers, which affects only the subsequent radiation pattern.
We nevertheless found that these effects are small in the description of the top-pair mass distribution.
In Fig.~\ref{fig:rwgtXfull} we show the distribution of variable $r(S)$ defined in \eq{eq:r} for a color-singlet
pseudo-scalar of mass $1.5$ TeV in the pure signal hypothesis, comparing the \textsc{Sherpa} ~\emph{reweighted} events with a dedicated simulation of the full
process with \textsc{MadGraph\_aMC\@NLO}+\textsc{Pythia}8.
In the latter, the color-flow configuration corresponding to the signal diagrams is used as
seed for the subsequent parton shower. The discrepancy near the resonance peak is
about 10\%, with the reweighted prediction underestimating the yield.
For this comparison we removed the top-quark loop form factor, considering only the effective scalar--gluon coupling. The distributions were derived according to the \emph{Parton Analysis} framework
described in \sec{sec:analysis}. In the more realistic boosted analysis we expect the reweighting
method to predict a more smeared distribution, since the extra connected color lines favor
additional hard radiation connecting the top quarks with the initial gluons. We will neglect these effects and employ the \emph{reweighting}
method in what follows to make predictions over a large region of the model parameter space, while avoiding
time- and CPU-intensive event generation and spurious MC statistical errors.
Our results are expected to give conservative limits, since for color-singlet resonances the signal color flow induces
less smearing of the resonance peak.
\begin{figure}
\includegraphics[width=0.49\textwidth]{PY8xRW.pdf}
\caption{Comparison of the predictions for the top-pair invariant mass from the reweighting method (\textsc{Sherpa} + RW) and
a dedicated full simulation with \textsc{MadGraph\_aMC\@NLO}+\textsc{Pythia}8 for a color-singlet pseudo-scalar of mass $m=1.5$ TeV. The \emph{Parton Analysis} was adopted.}
\label{fig:rwgtXfull}
\end{figure}
\section{Results}
\label{sec:statistics}
Resonant top-quark pair production at the LHC has already been analyzed for several of the models mentioned above.
Color neutral resonances decaying into $t\bar{t}$ have been studied in several works for a large number of models~\cite{Gaemers:1984sj,Dicus:1994bm,Gori:2016zto,Jung:2015gta,Djouadi:2016ack}, even including interference effects at NLO in QCD~\cite{Bernreuther:2015fts,Hespel:2016qaf,BuarqueFranzosi:2017jrj}.
The case of a color-octet signal has been considered in~\cite{Frederix:2007gi,FileviezPerez:2008ib,Frolov:2016gvu,Hayreter:2017wra}, including other production channels, \emph{e.g.} via $b\bar{b}$ initial states, or even double scalar production~\cite{Gerbush:2007fe,Kim:2008bx,Schumann:2011ji}.
Our approach differs from previous studies in that we directly compare to data which have been shown
to agree well with the SM prediction and can therefore be used to put direct limits on the model parameters,
in the same spirit as~\cite{Butterworth:2016sqg}.
Indeed, the recent ATLAS measurement of the top-quark pair differential cross section at $\sqrt{s}=13 \,\text{TeV}$ shows good agreement with various SM Monte Carlo generators~\cite{Aaboud:2017fha}.
However, there are no measurements of the $t\bar{t}$ invariant mass in the boosted regime at this energy yet.
Moreover, the uncertainties are still quite large, since only the 2015 data, corresponding to $3.2\,{\rm fb}^{-1}$,
were used, but we expect that an update of the analysis will become available in the near future, with
improved systematic and statistical uncertainties (comparable to the ones presented in this paper),
allowing one to derive real exclusion limits. We assume in what follows that the data will be well described
by the SM expectation, and take the SM prediction from \textsc{Sherpa}~ as mock data.
The method proposed allows theorists to derive realistic exclusion limits on a variety of NP scenarios
without a dedicated and expensive experimental analysis. It opens a new path to search for NP,
with the experiments providing precision measurements of SM processes. With respect to dedicated experimental
searches, it can serve as a check and as an alternative (less expensive) approach to look for more general
parametrizations of deviations caused by New Physics. For instance, in the ATLAS and CMS collaborations'
analyses~\cite{Khachatryan:2015sma,Chatrchyan:2013lca,Chatrchyan:2012cx,Sirunyan:2017uhk,Aaboud:2017hnm}, only
leptophobic Z' bosons (present for instance in topcolor scenarios), Kaluza-Klein excitations of the gluon and heavy states in THDMs were searched for. Moreover, interference effects were considered only in Ref.~\cite{Aaboud:2017hnm}.
With our technique we are able to provide limits for a wide range of models.
In order to assess the possibility of observing the signals described above, we perform a simple $\chi^2$ analysis
using the bins of the $r$ distribution. We consider the mass window $m_\phi-200\,{\rm \,\text{GeV}}< m(t\bar{t})< m_\phi+200\,{\rm \,\text{GeV}}$ and compute
\begin{equation}
\chi^2_{N}=\sum_{i=1}^{N}\frac{r_i(H)^2}{\sigma_i^2}\,,
\end{equation}
with $N$ the number of bins taken into account, according to the assumed resolution of the measurement.
$r_i(H)$ is the $r(H)$ distribution integrated over bin $i$ and $H$ is the hypothesis (either $S$ or $S+I$).
$\sigma^2_i$ is the variance on each bin of the distribution.
The variance is derived according to the rules of propagation of uncertainties and is estimated by
\begin{equation}
\sigma^2 = \frac{1}{B_\sigma}\left(1+\frac{H_\sigma^2}{B_\sigma^2}\right)
+ \epsilon_{\rm SYS}^2\left(1+\frac{H_\sigma^2}{B_\sigma^2}\right)
+ \epsilon_{\rm TH}^2\frac{(H_\sigma+B_\sigma)^2}{B_\sigma^2} \,.
\end{equation}
We keep the indices $i$ implicit in the expression. The first term accounts for the statistical error, the second for systematic uncertainties of experimental origin, and the third for theoretical uncertainties.
We assume a flat distribution for the theoretical and systematic uncertainties, and that the statistical uncertainty is dominated by the background, with a small signal-to-background ratio.
We take $\epsilon_{\rm TH}=1\%$ for both $H_\sigma=S_\sigma$ and $H_\sigma=S_\sigma+I_\sigma$, assuming other errors are strongly correlated and cancel when taking the ratio distribution.
The experimental uncertainty is more important and we consider three benchmark estimates for $\epsilon_{\rm SYS}$:
\begin{enumerate}
\item In Ref.~\cite{Aaboud:2017hnm} the total systematics on the background were estimated as 10\% and 11\%.
As a pessimistic case we consider $\epsilon_{\rm SYS}=10\%-15\%$.
\item As an optimistic scenario we lower this value, anticipating a future improved understanding of the uncertainties
and the reduction in uncertainty associated with the normalization. Since we are using a normalized distribution, many of the uncertainties estimated in the previous benchmark are strongly correlated and cancel out. For this case we use
$\epsilon_{\rm SYS}=5\%-10\%$.
\item As the most optimistic case we assume experimental uncertainties can be drastically reduced to the level of the theoretical ones, which according to Ref.~\cite{Czakon:2017wor} results in $\epsilon_{\rm SYS}=1\%-2\%$.
\end{enumerate}
We consider $N=1$ for a \emph{bad} resolution case, assuming the experiment can resolve only the full window of 400 GeV in $m(t\bar{t})$, and $N=10$ assuming a mass resolution in $m(t\bar{t})$ of $40$ GeV.
We consider $\chi^2\geq 2$ as a criterion for exclusion, which corresponds roughly to an exclusion at 95\% confidence level.
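As an illustration, the exclusion test described above can be sketched as follows; the binned spectra and the interpretation of the statistical term (taken here as the reciprocal of the expected number of background events per bin) are placeholders and should be replaced by the actual predictions:
\begin{verbatim}
import numpy as np

# Schematic implementation of the chi^2 test defined above.
# H, B: binned signal-hypothesis (S or S+I) and SM spectra; r_i = H_i / B_i.
# The 1/B_sigma statistical term is interpreted here as 1/N_expected per bin.
def variance(H, B, n_exp, eps_sys, eps_th):
    ratio2 = (H / B)**2
    return (1.0 / n_exp) * (1.0 + ratio2) \
         + eps_sys**2 * (1.0 + ratio2) \
         + eps_th**2 * (H + B)**2 / B**2

def chi2(H, B, n_exp, eps_sys=0.10, eps_th=0.01):
    r = H / B
    return np.sum(r**2 / variance(H, B, n_exp, eps_sys, eps_th))

# toy numbers: N = 10 bins of 40 GeV around the resonance mass
B     = np.full(10, 50.0)        # placeholder SM cross section per bin [fb]
H     = np.full(10, 5.0)         # placeholder signal per bin [fb]
n_exp = B * 20.0                 # expected SM events for L = 20 fb^-1
print(chi2(H, B, n_exp) >= 2.0)  # exclusion criterion
\end{verbatim}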
This simple analysis is intended to be a first approximation to a full statistical data analysis that will be carried
out eventually. In particular we assume the same uncertainty for every bin without correlation between them, and we assume only two
cases of resolution independent of the bin. In the following we discuss some benchmark scenarios and the respective results.
\subsection{Pseudo-scalar color octet}
In the first scenario we consider, the resonance $\phi$ represents a pseudo-scalar color octet ($\widetilde{\eta}$) whose total width is dominated by the decays into top-quark pairs and gluon pairs,
\begin{equation}
\Gamma_{\rm TOT}=\Gamma_{tt}+\Gamma_{gg}\,.
\label{eq:decaywidth}
\end{equation}
In Fig.~\ref{fig:Octet500} we show the resulting $r$ distribution assuming a color octet resonance with mass $m_{\tilde \eta}=500$ GeV and the parameters $c_t=1$, $c_g=1$ (left) and $c_g=-1$ (right) at parton level, \emph{i.e.} using the \emph{Parton Analysis}
described in \sec{sec:analysis}. We show both the full line-shape, which comprises signal and interference with QCD background (S+I), and the pure signal (S) for comparison. The importance of
taking into account interference effects can clearly be noticed.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{plots_topBSM_SI_13TeV_d10e1noMCL20b40_cParton129_m500ct1.0cg1.0}.pdf}
\includegraphics[width=0.49\textwidth]{{plots_topBSM_SI_13TeV_d10e1noMCL20b40_cParton065_m500ct1.0cg-1.0}.pdf}
\caption{Normalized top-pair mass distributions, $r\equiv\frac{d\sigma/dm}{d\sigma_{\rm SM}/dm}$ for a pseudo-scalar color octet resonance with $m_{\tilde\eta}=500\,{\rm \,\text{GeV}}$, $c_t=1$ and $c_g=1$ ($c_g=-1$) on the left (right) using the \emph{Parton} analysis. Signal plus interference (S+I) is in blue and pure signal (S) in red.}
\label{fig:Octet500}
\end{figure}
Similarly, in Fig.~\ref{fig:Octet1700}, we present the effect of a resonance with mass $m_{\tilde \eta}=1700\,{\rm GeV}$ and couplings $c_t=1$, $c_g=1$ (left) and $c_g=-1$ (right), reconstructed using the \emph{Boosted Analysis}.
The excess reaches more than 10\%, which indicates that even a pessimistic estimate of the uncertainties is sufficient to exclude the existence of this state for values of $c_g$ of order 1. We thus use the most pessimistic value for the systematic error, $\epsilon_{\rm SYS}=10\%-15\%$.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{plots_topBSM_SI_13TeV_d10e1noMCL20b40_cBoosted135_m1700ct1.0cg1.0}.pdf}
\includegraphics[width=0.49\textwidth]{{plots_topBSM_SI_13TeV_d10e1noMCL20b40_cBoosted071_m1700ct1.0cg-1.0}.pdf}
\caption{Normalized top-pair mass distributions $r$ reconstructed with the \emph{Boosted} analysis for a pseudo-scalar color octet resonance with $m_{\tilde \eta}=1700\,{\rm \,\text{GeV}}$, $c_t=1$ and $c_g=1$ ($c_g=-1$) on the left (right). The color scheme is the same as in Fig.~\ref{fig:Octet500}.}
\label{fig:Octet1700}
\end{figure}
In Fig.~\ref{fig:Octetlimit} the corresponding exclusion limits are shown, assuming a fixed value of $c_{t}=1$. The bands correspond to a systematic uncertainty on the measurement running from $10\%$ to $15\%$.
The limits are evaluated considering the interference effect (dashed lines) or neglecting it (continuous lines). The interference has a significant effect in the low mass region ($m_{\tilde{\eta}} < 1.3\,{\rm TeV}$).
The excluded region corresponds to larger values of $|c_g|$. We show the exclusion for integrated luminosities of $L=20\,{\rm fb}^{-1}$ (blue line) and $L=100\,{\rm fb}^{-1}$ (black). In the \emph{left-panel} we use $10$ bins of $40\,{\rm GeV}$ width in the invariant-mass distribution to compute $\chi^2_{10}=2$ while on the \emph{right-panel} we use only a single 400 GeV bin centered around the resonance mass, $\chi^2_{1}=2$.
The comparison between the left and right panels shows the importance of a good resolution and of a line-shape analysis.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{ExclCL_ScanP2m1500_13TeV}.pdf}
\includegraphics[width=0.49\textwidth]{{ExclCLSignificance_ScanP2m1500_13TeV}.pdf}
\caption{Exclusion limits ($\chi^2=2$) in $(m_{\tilde \eta},c_g)$ parameter space for a pseudo-scalar color octet assuming $c_t=1$. The band represents the different assumptions for the systematic uncertainty, varying from 10\% to 15\%. Integrated luminosities are $L=20\,{\rm fb}^{-1}$ (blue line) and $L=100\,{\rm fb}^{-1}$ (black), as well as considering interference (dashed line) and neglecting it (solid line).}
\label{fig:Octetlimit}
\end{figure}
We expect striking signatures in other channels, but these have so far received little attention. For instance, the $\gamma$+jets analysis of Ref.~\cite{Aaboud:2017nak} did not consider a color octet.
\subsection{Pseudo-scalar singlet}
For the benchmark scenario of a pseudo-scalar color singlet we again assume that the width of the resonance is dominated by the top and
gluon decays, as in \eq{eq:decaywidth}.
We show in Fig.~\ref{fig:Boostedm1500} the normalized $m(t\bar{t})$ distribution $r$ assuming $m_\eta=1500\,{\rm GeV}$. In the left-hand (right-hand) panel we consider $c_g=1$ ($c_g=-1$). The line-shapes of this scenario are highly non-trivial,
they strongly depend on the mass and couplings, and can feature pure dips, pure peaks and intermediate peak-dip or dip-peak structures. A sample of different line-shapes is shown in \app{app:lineshapes}.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{plots_ScanPct1_13TeV_d10e1noMCL20_cBoosted134_m1500ct1.0cg1.0}.pdf}
\includegraphics[width=0.49\textwidth]{{plots_ScanPct1_13TeV_d10e1noMCL20_cBoosted070_m1500ct1.0cg-1.0}.pdf}
\caption{Normalized top-pair mass distributions $r$ reconstructed with the \emph{Boosted} analysis for a pseudo-scalar color singlet resonance with $m_\eta=1500\,{\rm \,\text{GeV}}$, $c_t=1$ and $c_g=1$ ($c_g=-1$) on the left (right). The color-scheme is the same as in Fig.~\ref{fig:Octet500}.
}
\label{fig:Boostedm1500}
\end{figure}
In Fig.~\ref{fig:Exclct1} we show the exclusion limits in the $(m_\eta,c_g)$ parameter space plane for $c_t=1$. The band represents the different assumptions for the systematic uncertainty, 5\% and 10\%.
The effect of interference is important for low masses $m_\eta\lesssim 1.2\,{\rm \,\text{TeV}}$, where systematic uncertainties also dominate and have a large impact on the exclusion power. The use of the full line-shape in the statistical analysis improves the exclusion power mostly at low masses, where more distinctive line-shapes are present. For masses above $m_\eta\gtrsim 2\,{\rm \,\text{TeV}}$, luminosities higher than $L=100\,{\rm fb}^{-1}$ are needed.
In Fig.~\ref{fig:Exclm1500} we show the corresponding exclusion limits in the $(c_t,c_g)$ plane for a fixed mass $m_\eta=1.5\,{\rm \,\text{TeV}}$.
The effect of interference is important for large top couplings, $c_t\gtrsim 1.2$, which is directly related to the size of the width. The use of full line-shape gives a mild improvement in the exclusion power.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{Exclchi2ScanPct1.pdf}
\includegraphics[width=0.49\textwidth]{ExclsignifScanPct1.pdf}
\caption{Exclusion limits ($\chi^2=2$) in $(m_\eta,c_g)$ parameter space and $c_t=1$ for a pseudo-scalar color singlet. The band represents the different assumptions for the systematic uncertainty, varying from 5\% to 10\%. The color and style scheme for the lines are the same as in Fig.~\ref{fig:Octetlimit}. }
\label{fig:Exclct1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{Exclchi2ScanPm1500.pdf}
\includegraphics[width=0.49\textwidth]{ExclsignifScanPm1500.pdf}
\caption{Equivalent to Fig.~\ref{fig:Exclct1} for the $(c_t,c_g)$ plane for a fixed mass of $m_\eta=1.5\,{\rm \,\text{TeV}}$ }
\label{fig:Exclm1500}
\end{figure}
For very low masses the \emph{Resolved} analysis can be slightly more powerful than the \emph{Boosted} one.
In Fig.~\ref{fig:Resolved} on the left we show an example of a line-shape and on the right the exclusion limit
provided by the \emph{Resolved} analysis.
Compared to \fig{fig:Exclct1}, it can be noticed that the low mass region $m_\eta\lesssim 600\,\text{GeV}$ is better covered by the \emph{Resolved} selection. We note as well that the case of negative $c_g$ is less strongly excluded, because larger cancellations between the top-quark loop and the effective vertex occur at these masses.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{plots_ScanPct1_13TeV_d10e1noMCL20_cResolved129_m500ct1.0cg1.0}.pdf}
\includegraphics[width=0.49\textwidth]{Exclchi2ScanPct1-Resolved.pdf}
\caption{\emph{Left:} Normalized top pair mass distributions $r$ for $c_g=c_t=1$ and $m_\eta=500\,{\rm \,\text{GeV}}$. \emph{Right:} Exclusion limit ($\chi^2=2$) in $(m_\eta,c_g)$ parameter space for $c_t=1$. The color scheme is the same as in Fig.~\ref{fig:Exclct1}. In both panels the \emph{Resolved} analysis has been employed.}
\label{fig:Resolved}
\end{figure}
Diphoton and dijet searches might be relevant in extreme regions of parameter space, \emph{i.e.}
for very small $c_t\sim 0.2$, and large masses, due to the dependence of the $\gamma\gamma$ and $gg$ partial widths
on $m^3$ as opposed to the linear dependence of the $t\bar{t}$ decay width.
In \fig{fig:dijet-diphoton} we show the 95\%CL excluded region derived from the limits provided by the ATLAS
collaboration in the dijet search~\cite{Aaboud:2017yvp}. We used the case $\sigma_G/m_G=0$ and assumed
an acceptance of 50\%. In the same figure we show the 95\%CL excluded region in the diphoton channel using
the exclusion limits by the ATLAS analysis in Ref.~\cite{Aaboud:2017yyg}. We used the case $\Gamma_X/M_X=6\%$
and the spin-0 selection. To derive cross sections we used the N$^3$LO result for the Higgs production cross section
$\sigma_h$~\cite{Anastasiou:2016hlm} and rescaled it by the ratio of the LO decay widths,
\begin{equation}
\sigma_\eta = \sigma_h\frac{\Gamma_{\eta\to gg}}{ \Gamma_{h\to gg}}=
\sigma_h\frac{\left| c_t^\eta A^{A}_{1/2}\left(\frac{m_\eta^2}{4m_t^2}\right)+ c_g^\eta \right|^2}{\left|A^{S}_{1/2}\left(\frac{m_\eta^2}{4m_t^2}\right)\right|^2}.
\end{equation}
$\Gamma_{\eta\to gg} $ is given in \eq{eq:gamma-eta-gg} and the form factors in eqs.~(\ref{eq:AA}--\ref{eq:ftau}).
The shaded area in the figure represents the region where $\sigma_\eta\times{\rm BR}$ exceeds the excluded cross section in the respective references, with BR the corresponding branching ratio.
We notice that these channels become competitive in sensitivity with the $t\bar{t}$ analysis at low $c_t$ and large mass, but only if $c_\gamma$ is particularly large.
In particular, even for $c_t=1$, for $m>3 \,\text{TeV}$ the dijet search seems to be more sensitive to New Physics.
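A schematic numerical implementation of this rescaling is given below. The loop functions are written in a common convention for the (pseudo-)scalar fermion-loop form factors; their normalization relative to $c_g$ depends on the conventions of \eq{eq:ygg} and of eqs.~(\ref{eq:AA}--\ref{eq:ftau}), so the sketch should be understood as illustrative only, with a placeholder value for $\sigma_h$:
\begin{verbatim}
import numpy as np

# Illustrative rescaling of a Higgs-like ggF cross section by the ratio of gg
# partial widths.  The loop functions below follow a standard convention and
# may differ from eqs. (AA)-(ftau) by overall normalization factors.
def f_tau(tau):
    if tau <= 1.0:
        return np.arcsin(np.sqrt(tau))**2
    x = np.sqrt(1.0 - 1.0 / tau)
    return -0.25 * (np.log((1.0 + x) / (1.0 - x)) - 1j * np.pi)**2

def A_pseudo(tau):                       # A^A_{1/2}
    return 2.0 * f_tau(tau) / tau

def A_scalar(tau):                       # A^S_{1/2}
    return 2.0 * (tau + (tau - 1.0) * f_tau(tau)) / tau**2

def sigma_eta(sigma_h, m_eta, c_t, c_g, m_top=173.0):
    tau = (m_eta / (2.0 * m_top))**2
    return sigma_h * abs(c_t * A_pseudo(tau) + c_g)**2 / abs(A_scalar(tau))**2

# example point: m_eta = 1.5 TeV, c_t = 0.2, c_g = 1; sigma_h is a placeholder
# for the N3LO SM-like production cross section evaluated at m_eta (in fb)
print(sigma_eta(sigma_h=10.0, m_eta=1500.0, c_t=0.2, c_g=1.0))
\end{verbatim}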
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{limits-dijet-diphoton-ca4-ct02.pdf}
\includegraphics[width=0.49\textwidth]{limits-dijet-diphoton-ca6-ct1.pdf}
\caption{95\%CL excluded region in parameter space in diphoton~\cite{Aaboud:2017yyg} and dijet searches~\cite{Aaboud:2017yvp}. On the \emph{left panel} $c_\gamma=4$ and $c_t=0.2$. On the \emph{right}, $c_\gamma=6$ and $c_t=1$. }
\label{fig:dijet-diphoton}
\end{figure}
\subsubsection*{Interpretation for Composite Higgs models with Top Partial Compositeness}
As an ultra-violet realization of the pseudo-scalar scenario we consider the composite models M3, M8 and M9 of Ref.~\cite{Belyaev:2016ftv}. These models contain two additional confining fermions, $\psi$ and $\chi$, which form several composite states, among which a top partner that can generate the top-quark mass through the partial-compositeness mechanism. In addition, they feature two iso-singlet pseudo-scalar mass eigenstates $a$ and $\eta'$.
In general, the observation of such a pseudo-scalar state decaying into top quarks can shed light on the mechanism of fermion mass generation~\cite{Alanne:2016rpe}.
These models have extra parameters which determine the couplings, given by a pair of integers $(n_\psi,n_\xi)$ and by the relation between the mixing angle $\alpha$ and the ratio of scales and U(1) charges, $\zeta$.
We do not enter into a discussion of the details of these models and their parameters here but invite the reader to consult Ref.~\cite{Belyaev:2016ftv}.
We choose $\alpha=\zeta$ and the values of $(n_\psi,n_\xi)$ which provide the largest couplings to the tops, $(n_\psi,n_\xi)=(2,0),\,(-4,2)$ and $(4,2)$.
We neglect contributions to the resonance width from the decays into $Z$, $W$ and $\gamma$, which are sub-dominant.
The relevant couplings are summarized in Tab.~\ref{tab:models}.
\begin{table}
\caption{
Summary of the couplings of pseudo-scalar color-singlet state $a$ in the considered composite models. $c_t$ and $c_g$ are given in units of $v/F_\pi$. $c_t$ is shown for the three benchmarks $(n_\psi,n_\xi)=(2,0)/(-4,2)/(4,2)$. }
\label{tab:models}
\begin{tabular}{ l c c }
model & $\quad c_t[v/F_\pi]\quad$ & $c_g[v/F_\pi]$ \\
\hline\hline
M3 & 0.934/1.09/-2.65 & 5.44 \\
M8 & 0.926 /1.54/-2.16 & 1.54 \\
M9 & 0.293/-0.195/-1.37 & 8.6 \\
\hline
\end{tabular}
\end{table}
In Fig.~\ref{fig:Exclm1500-model} we show the value of $c_t$ and $c_g$ for each model together with the exclusion region (above the black curve) for a fixed mass $ m_a=1.5\,{\rm \,\text{TeV}}$. We consider an integrated luminosity of $L=20\,{\rm fb}^{-1}$ for the exclusion limit and a systematic error $\epsilon_{\rm SYS}=5\%$.
The different line colors in the figure refer to the different models: red is M8, yellow M9 and brown M3. The styles of the lines represent the fermionic charges: $(n_\psi,n_\xi)=(2,0)$ (solid line), (-4,2) (dashed) and (4,2) (dot-dashed).
Each line scans the values of $F_\pi$ from $v$ (most external points and largest couplings) to $8v$ (most internal points and smallest couplings); the dots represent the values $F_\pi=n\,v$, with $n$ an integer between 1 and 8 inclusive.
Also shown for reference, in the upper region, are the couplings of the $\eta_{63}$ state of the Farhi-Susskind one-family model~\cite{Farhi:1980xs}.
From the figure we can extract, for the different scenarios, the minimal value of the compositeness scale, $F_\pi^{min}$, above which the state $a$ would still not have been observed. For instance, for model M8 (red lines), $v\lesssim F_\pi^{min}\lesssim 2v$ depending on the values of $(n_\psi,n_\xi)$. The model M3 is more constrained, with $6v\lesssim F_\pi^{min}\lesssim 7v$ for (-4,2) and $F_\pi^{min}\sim 5v$ for (2,0) or (4,2).
Model M9 has low values of $c_t$ but values $ F_\pi\gtrsim 6v$ can be excluded for the case (-4,2), while the other scenarios are hard to access in the $t\bar{t}$ search.
Other decay channels have been analyzed in Ref.~\cite{Belyaev:2016ftv}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\textwidth]{Exclchi2_ScanP2m1500_13TeV-a.pdf}
\end{center}
\caption{Thick black lines show the exclusion limits for the partial-compositeness models considered here. An integrated luminosity of $L=20\,{\rm fb}^{-1}$ and an
uncertainty of $\epsilon_{\rm SYS}=5\%$ are assumed. The model lines refer to models M8 (in red), M9 (yellow) and M3 (brown)
introduced in the text. The styles of the lines represent the fermionic charges: $(n_\psi,n_\xi)=(2,0)$ (solid line),
(-4,2) (dashed) and (4,2) (dot-dashed). Each line scans the values of $F_\pi$ from $v$ (most external and largest couplings)
to $8v$ (most internal and smallest couplings); the dots represent the values $F_\pi=n\,v$, with $n$ an integer between 1 and 8 inclusive.}
\label{fig:Exclm1500-model}
\end{figure}
\subsection{Broad scalar color singlet}
In this benchmark scenario we assume a CP-even color-singlet scalar that can, apart from top quarks and gluons, also decay into other particles and
is thus much broader than in the previous scenarios. We choose a total width of 20\% of the resonance mass, $\Gamma_\sigma=20\%\, m_\sigma$.
The rationale for choosing a larger width is that the scalar tends to decay also to weak bosons. Indeed, we expect a large sensitivity in this decay channel, which might be competitive with top-pair production.
In this scenario the signal is very weak and thus hard to observe unless the systematic uncertainty is improved to values below
5\% or larger couplings $c_g>3$ are considered. In Fig.~\ref{fig:scalar} on the left we show the line-shape for $m_\sigma=900\,{\rm \,\text{GeV}}$, $c_t=c_g=1$. It can be noticed that the deviations are always below 5\%. On the right panel we show the $\chi^2_{10}=2$ contours in the $(m_\sigma,c_g)$ parameter space plane for $c_t=1$. Varying the assumed systematic uncertainties between $\epsilon_{\rm SYS}=1\%-2\%$ determines the band of the exclusion limit.
The integrated luminosities are $L=20\,{\rm fb}^{-1}$ (blue line) and $L=300\,{\rm fb}^{-1}$ (black). Limits are given considering interference (dashed lines)
and neglecting it (solid lines). A large interference effect can be noticed, which is in fact larger than the pure signal.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{{plots_ScanSct1w20-2_13TeV_d10e1noMCL20_cBoosted003_m900ct1.0cg-3.0}.pdf}
\includegraphics[width=0.49\textwidth]{Exclchi2ScanSct1w20-2.pdf}
\caption{\emph{Left:} Normalized top-pair mass distributions $r$ for a color-singlet scalar with $c_g=c_t=1$, $m_\sigma=900\,{\rm \,\text{GeV}}$
and $\Gamma_\sigma=20\%\,m_\sigma$.
\emph{Right:} Exclusion limit ($\chi^2=2$) in the $(m_\sigma,c_g)$ parameter space and $c_t=1$ for such a scalar state. The color scheme is the same as in Fig.~\ref{fig:Exclct1}. In both panels the \emph{Boosted} analysis has been adopted. }
\label{fig:scalar}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work we have provided a framework to reinterpret the SM $t\bar{t}$ differential cross section
measurements in terms of exclusion limits for signatures of NP scalar resonances decaying into
$t\bar{t}$. The method relies on the detailed simulation of the SM prediction at particle level
with the \textsc{Sherpa}\ Monte Carlo, the subsequent analysis in the \textsc{Rivet}\ framework, which can be
directly compared with the measured distributions provided by the experimental collaborations,
a modeling of the NP scenarios efficient enough to allow a scan over a large range in parameter
space, and finally a statistical analysis to determine the excluded regions.
In the simulation of top-pair production we take into account higher-order QCD corrections through
matching LO or NLO matrix elements to parton showers and merging partonic processes of varying
multiplicity. To validate our simulation we compare to data from the ATLAS collaboration, finding
very good agreement. As New Physics contributions we consider CP-even and CP-odd scalar resonances,
being either color singlets or octets. To model the signal we devise an efficient and fast
\emph{reweighting} method, allowing us to scan large regions of parameter space without the need for
full re-simulation and re-analysis at each parameter point. For our simplified model we have
derived exclusion limits based on a simple $\chi^2$ analysis, which can subsequently be used to
set limits on other specific models; we consider a model of partial compositeness as an example.
We showed the importance of properly accounting for interference between the New Physics signal and
the SM background when setting the exclusion limit, as well as of using a full line-shape analysis,
since the line-shape is not necessarily a simple Breit-Wigner due to the interference effects.
By confronting SM precision measurements with hypotheses for New Physics models, stringent exclusion
limits on the parameters of the latter can be obtained, providing sensitivity complementary to
direct searches. The methodology laid out here can be readily applied to observables other than the
top-pair invariant mass considered in this work. It relies on a solid understanding of the respective
SM expectation and the uncertainties related to the theoretical predictions and the experimental
data.
\section*{Acknowledgments}
This work was supported by the European Union through the FP7 training network MCnetITN
(PITN-GA-2012-315877) and the Horizon2020 Marie Sk{\l}odowska-Curie network MCnetITN3 (722104).
Federica Fabbri especially wants to thank MCnetITN for the opportunity to hold a short-term
studentship at the II. Institute for Physics at G\"ottingen University.
There are both theoretical (e.g. \citealt{Hopkins2008}) and observational (e.g. \citealt{Sanders1988,Yan2010}) arguments that
support the notion that luminous star-forming galaxies (hereafter: `Starbursts')
and luminous, unobscured Active Galactic Nuclei (AGN; hereafter luminous AGN or `QSO') are basically the same systems caught in different stages of the co-eval growth of (massive) galaxies and the Super Massive Black Holes (SMBH) sitting in their centres.
In particular, Starbursts should trace objects caught in the rapid SMBH growth phase characterized by efficient Star Formation (SF), in a dust-enshrouded, dense environment, while the unobscured QSOs are systems radiating at the Eddington limit, where the SMBH is almost fully assembled.
Given that both SF and AGN activity are thought to be sustained by the availability of cold gas in galaxies
(see e.g \citealt{Menci2008,Vito2014_gas}), millimeter observations of molecular transitions are needed to directly probe the presence and state of this gas.
In the past decade, observations of cold molecular gas reservoirs at high redshift (see \citealt{CW2013} for a comprehensive review) turned out to be crucial in studying the gas content and consumption rate in both normal and peculiar systems.
For example, the gas properties of ``normal'' galaxies are being investigated in increasing detail up to high-z \citep{Tacconi2013,Genzel2014_gas,Sargent2014}, and as a function of many of the structural and physical properties of the systems (e.g. Star Formation Rate, SFR; stellar mass; colors; see e.g. \citealt{Genzel2014_gas,Sargent2014}). This has become possible thanks to the large investment of time at millimeter arrays, mainly the Plateau de Bure Interferometer (PdBI).
In particular, it has been reported that, among massive systems, (M$_{\star}>10^{10}$ M$_\odot$), the gas fraction increases across the main sequence (MS; defined between the SFR and the stellar mass of galaxies) at fixed redshift (see \citealt{Magdis2012a,Magdis2012b,Saintonge2012,Tacconi2013,Sargent2014}) and is hence closely related to the Specific Star Formation rate (sSFR). This fits in a scenario where the redshift evolution of the sSFR is consistent with being driven by the gas fraction (see also \citealt{Lilly2013}).
Similar conclusions are reached in works involving dust fitting methods to derive the gas mass (see e.g. \citealt{Santini2014}).
The first molecular studies of local Ultra Luminous Infrared Galaxies (ULIRGs) and Submillimeter Galaxies (SMGs) at higher redshifts, i.e. targeting objects in the `Starburst' phase,
showed that these systems typically have a low molecular gas content with respect to their current SFR, or alternatively higher star-formation efficiencies.
Indeed, defining empirically the Star Formation Efficiency (SFE) as the ratio of the IR luminosity to the CO luminosity (in units of L$_{\rm \odot}$/(K km s$^{-1}$ pc$^2$)), `Starbursts' have SFE$>$200 (see e.g. \citealt{Daddi2010,Genzel2010}), larger than the values observed in normal star-forming galaxies with the same molecular gas content (\citealt{Tacconi2010}, SFE$\sim50-200$).
In other words, their consumption time scale is shorter than that of normal galaxies, and they will exhaust their gas reservoirs on a short timescale ($\lsimeq100$ Myr). This is consistent with the hypothesis that `Starbursts' in general (and ULIRGs/SMGs in particular) are objects at the peak of their SF activity in the heavily obscured phase.
On the other hand, high values of the L$_{\rm IR}$/L'(CO) ratio have been also observed in high-z unobscured QSO host galaxies (SFE$>200$; e.g. \citealt{Solomon2005,Riechers2011,Riechers2011_QSOz3}), although, being a subsequent phase of `Starbursts' in the evolutionary sequence, their SFR is expected to be already substantially suppressed.
In this case a significant fraction of the gas could have been previously removed during the `blow-out' phase, and the observed high SFE in unobscured QSOs can be ascribed to regions of residual, on-going SF, pointing towards a possible effect of `positive feedback' on the galaxy from the AGN \citep{Silk2013,Zubovas2014}.
\begin{figure*}[!t]
\centering
\includegraphics[width=8.7cm,angle=0]{spectrum_CO_new.png}
\includegraphics[width=6.8cm]{map_inte_new.png}
\caption{{\it Left panel}: spectrum of XID2028 integrated over the beam. The solid line shows a Gaussian fit with FWZI=770 km s$^{-1}$ centered at the frequency corresponding to the redshift of the source. {\it Right panel}: integrated map of CO(3-2), in the channels corresponding to the ``systemic'' peak of the line. Contour levels are 1$\sigma$ each ($\sigma$=0.23 Jy km s$^{-1}$). The synthesised beam is shown in the bottom-left corner. The black cross marks the phase center (i.e. the ACS nucleus). The blue and red crosses mark the positions of the blue and red line components, as derived from our spectroastrometric analysis.
}
\label{coline}
\end{figure*}
What is still missing for a full understanding of the results of the aforementioned studies, in terms of the role of the physical processes which govern the co-eval BH-galaxy growth, is a full characterization of the gas properties
of objects
caught in the short-lived ``transition'' phase between the Starburst and QSO stages.
This phase is expected to be characterized by gas reservoirs not yet depleted and by complex kinematics, including strong winds and outflows.
\citet{Brusa2010} proposed that sources in the `blow-out' phase at z$\sim1.5$ can be isolated on the basis of their observed X-ray-to-optical-to-NIR colors and presented the source XID2028 (z=1.5927), detected in the XMM-COSMOS survey, as the prototype of this class.
XID2028 is a luminous (L$_{\rm bol}\sim2\times10^{46}$ erg s$^{-1}$), mildly obscured QSO hosted in a massive galaxy, with M$_{*}\sim4.5\times10^{11}$ M$_\odot$ and a SFR$\sim270$ M$_\odot$ yr$^{-1}$ as measured by {\it Herschel} from PEP and SPIRE data \citep{Lutz2011,Bethermin2012}.
At its center, XID2028 has a supermassive black hole with mass M$_{\rm BH}\sim3\times10^9$ M$_\odot$ \citep{Bongiorno2014}, which is accreting at
$\sim5$\% of its Eddington luminosity.
The presence of a massive outflow in the ionized gas component of XID2028, traced by the [O III]$\lambda$5007 emission,
has been unambiguously and independently confirmed by X-shooter slit spectroscopy \citep{Brusa2015,Perna2015} and SINFONI J-band IFU observations:
in fact, XID2028 hosts one of the most massive ($\dot{M}_{ion}>250$ M$_\odot$ yr$^{-1}$, with v$>1500$ km s$^{-1}$) and most extended (out to scales of $\sim13$ kpc) outflows detected in a high-z QSO \citep{Cresci2015}.
Most importantly, the outflow lies exactly in the center of a cavity in the star-forming regions of the host galaxy (as traced by the narrow H$\alpha$ emission-line map and rest-frame U-band imaging; see \citealt{Cresci2015}), thus suggesting that the wind is removing the gas from the host galaxy (`negative feedback'),
and at the same time is also triggering star formation through outflow-induced pressure at its edges (`positive feedback'; e.g. \citealt{Zubovas2014}). XID2028 therefore represents a test case to study QSO `feedback in action'.
However, the evidence of feedback in this source mostly comes from measurements of the on-going star formation traced by the narrow H$\alpha$ emission line, which in principle may be affected by, e.g., differential extinction effects in the host galaxy.
Direct observations of the cold gas component in this galaxy are needed to assess
whether the ionized outflow has an impact on the cold gas reservoir.
With this aim, here we present observations of the CO(3-2) transition of XID2028, redshifted to 2mm, obtained with the PdBI Interferometer.
We compare the gas mass derived from CO with that inferred from the dust mass, based on Far Infrared (FIR) data.
These two methods allow us to investigate whether AGN feedback has already been effective in diminishing the cold gas mass in the host, or whether the feedback phase is still associated with cold-gas-rich galaxies similarly to MS star-forming galaxies, with important consequences for galaxy-AGN coevolutionary models.
The paper is organised as follows: Section 2 presents the PdBI observations and data analysis, Section 3 discusses the results, while Section 4 summarizes our conclusions.
Throughout the paper, we adopt the cosmological parameters $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$=0.3 and $\Omega_{\Lambda}$=0.7 (Spergel 2003). In quoting magnitudes, the AB system will be used, unless otherwise stated. We adopt a Chabrier Initial Mass Function to derive stellar masses and SFRs for the target and comparison samples. The physical scale is 1"$\sim8.5$ kpc at the redshift of the source.
\section{Millimeter observations}
\subsection{Data reduction}
XID2028 was observed with receivers tuned to a frequency of 133.37 GHz, corresponding to the expected frequency of the CO(3-2) emission line, with the PdBI
array in the most compact (D) configuration. The observations were split in
3 tracks (31-May, 1, 6, June 2014). The
system temperature (T$_{sys}$) was between 100 and 300 K, and water vapor 4-6
mm. The quasar 1005+058 (0.3 Jy at 133.7 GHz) was used as a phase and
amplitude calibrator. MWC349 (with a flux of 1.8 Jy) was used for absolute flux calibration,
which yields an absolute flux accuracy of about 5\% at the observed frequency.
Calibration and mapping were done in the GILDAS environment. The flagging of the phase visibilities was
fixed at $<45\degree$ rms.
The total observing time was 5.6 hrs (3.06 hrs on source),
for a total of 3673 visibilities available, before applying any flag.
We then removed one scan (3994 in 01-June track) due to problems with the tracking.
Data from antenna 1 from the 06-June track were not used in the final dataset due to the presence of a tuning parasite that produced a spurious signal at a frequency (133.21 GHz) close to the observed frame frequency of the CO(3-2) transition. After flagging of bad visibilities, the total on source time is 2.54 hours (six-antenna equivalent), and the 1$\sigma$ sensitivity is 1.36 mJy/beam in 20 MHz channels, for a total of 3052 visibilities.
The clean beam of the observations is 4.5"x3.4", with an angle of 38 degrees.
The phase center of the data set was set to the HST position of the QSO nucleus (RA=10:02:11.29, DEC=+01:37:06.79).
\subsection{Analysis}
We estimated the 2 mm continuum by collapsing the line-free channels of the data set and fitting the visibilities.
The continuum is not detected with a 3$\sigma$ upper limit on its flux of 0.3 mJy.
The redshift of the host galaxy (z=1.5927) was adopted to convert the frequency to velocity space.
Figure~\ref{coline} shows the line spectrum integrated over the beam.
The line displays two peaks: one centered around the systemic redshift (FWHM$\sim$$550\pm200$ km s$^{-1}$ from a Gaussian fit), and another centered at $\sim1000$ km s$^{-1}$ (henceforth referred to as ``red feature''). The peak at the systemic position is significant at $5\sigma$, while the ``red feature'' is at a lower significance ($\sim$3$\sigma$).
Moreover, the ``red feature'' peaks at $\sim133.0$ GHz, close to a known parasite signal at 132.9 GHz coming from antenna 4 and identified in all tracks.
We created a new table flagging data from Antenna 4. The total exposure time decreased to 1.8 hr and the total number of visibilities to be used for the scientific analysis also considerably decreased. The red feature is no longer significant (S/N$<3$). However, the significance of the detection of the systemic line also decreased, to S/N$\sim$4. For this reason we decided to keep the full dataset in the analysis and, in the absence of deeper, higher-resolution observations which could confirm the presence of a second dynamically distinct component, we will consider the red feature as spurious.
The zero-spacing flux estimated by fitting the averaged visibilities in the velocity range from -340 to +440 km s$^{-1}$ with a point source function is {\it S'(CO)}=1.6$\pm$0.3 mJy (5.3$\sigma$), and returns a centroid at (RA,DEC=10:02:11.24 01:37:05.48).
The integrated flux over the full velocity range of the systemic line (with Full Width at Zero Intensity, FWZI$\sim770$ km s$^{-1}$) is therefore $\int S'(CO)dv$=
1.23$\pm$0.23 Jy km s$^{-1}$.
This measurement depends only on the data calibration (including the flagging of the antennas) and not on any other assumption, such as masking, extraction region, or an ad-hoc centroid.
The quoted errors take into account the statistical errors of the {\it uv} plane fit and the errors on the absolute flux calibration (5\%).
The right panel of Figure~\ref{coline} shows the integrated map over the
systemic line emission.
We verified that the flux extracted from the integrated map (S=1.55$\pm0.3$ mJy) over a region slightly larger than the beam is in agreement with the one estimated by fitting the visibilities.
Figure~\ref{FigHST} shows the HST/ACS image (background) with the contours from the K-band image superimposed (blue), which should trace the extension of the host galaxy. The black contours are from the map obtained on the line detected at the systemic position (e.g. from Figure 1 right, in steps of S/N, starting from 1$\sigma$) and the black cross marks the line centroid.
From both Figures 1 and 2 it is clear that the line peak is offset by $\sim1\arcsec$ from the QSO nucleus position.
From previous observations with the same phase calibrator (1005+058), we can exclude errors in the absolute astrometry.
The error associated with the beam and the S/N of the source translates into a positional uncertainty of 0.46" x 0.36".
We note, however, that the displacement may be due to the limited {\it uv} coverage of the data, and that a CO-offset is typical of low S/N data (see e.g. \citealt{Casey2011}). Better signal-to-noise ratio and {\it uv} coverage are needed to refine the location of the gas reservoir.
A dynamical mass can be estimated from the CO line width assuming a size ($\rm R$) and an inclination ($i$) of a rotating molecular gas disc.
The size can be inferred using the spectroastrometric technique \citep[and references therein]{Gnerucci2011,Carniani2013}, applied to the CO data cube. By integrating the CO data in the red (0,+400 km s$^{-1}$) and blue (-400,0 km s$^{-1}$) line channels, we measure a difference in the line centroids of $\sim1.5\pm0.2\arcsec$ (with an error of 0.3 pixels for each detection). The centroids of these detections are also shown in Figure 1 (right panel) as blue and red crosses to mark the blue and red line channels, respectively.
The measured shift corresponds to $\sim$13 kpc at the source redshift and translates to R$\sim$$6.5\pm0.8$ kpc, in agreement with the extension seen in the K-band data (see also \citealt{Cresci2015}).
Applying Equation 5 of \citet{Gnerucci2011}, we infer M$_{\rm dyn}$(sin i)$^2$=4.5$\times10^{11}$ M$_\odot$ and, assuming an inclination of 60 deg, M$_{\rm dyn}$$\sim$6.0$\pm2.3\times10^{11}$ M$_\odot$ once all the uncertainties in the quantities are taken into account. We will discuss in the following how this compares with the total mass derived from M$_{\star}$+M$_{\rm gas}$.
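As an order-of-magnitude cross-check, the quoted value can be reproduced assuming a simple virial-like estimator of the form $M_{\rm dyn}\sin^2 i \simeq v^2 R/G$, with $v$ the CO line width; this assumed form is illustrative and not necessarily identical to the estimator of \citet{Gnerucci2011}:
\begin{verbatim}
import numpy as np

# Order-of-magnitude check of the dynamical mass quoted in the text, assuming a
# simple virial-like estimator M_dyn sin^2(i) ~ v^2 R / G with v the CO FWHM.
# Illustrative approximation only, not necessarily Eq. 5 of Gnerucci et al. (2011).
G     = 6.674e-11               # m^3 kg^-1 s^-2
M_SUN = 1.989e30                # kg
KPC   = 3.086e19                # m

v = 550e3                       # CO FWHM [m/s]
shift_arcsec = 1.5              # spectroastrometric offset between red and blue channels
R = 0.5 * shift_arcsec * 8.5 * KPC   # half the offset, with 8.5 kpc/arcsec at z~1.59

m_dyn_sin2i = v**2 * R / G / M_SUN                       # ~4.5e11 Msun
m_dyn = m_dyn_sin2i / np.sin(np.radians(60.0))**2        # ~6e11 Msun for i = 60 deg
print(f"{m_dyn_sin2i:.2e}", f"{m_dyn:.2e}")
\end{verbatim}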
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{hst_cont.png}
\caption{HST/ACS image (F814W filter) with superimposed K-band contours from CFHT (blue, arbitrary levels chosen to trace the whole K-band emission). Black contours represent CO(3-2) emission from the integrated map in the channels corresponding to the ``systemic'' peak of the line (same levels as in right panel of Figure 1; starting from 1 sigma). The black cross (with associated ellipse) marks the line centroid. The image is about 10" across. The beam size is 4.5"x3.4", with an angle of 38 degrees.
}
\label{FigHST}
\end{figure}
\section{Results and discussion}
Deriving the luminosity L'(CO[1-0]) of the ground-state transition (which is generally regarded as the best indicator of the total gas reservoir) requires an assumption on r$_{31}$, the luminosity ratio between
the CO(3-2) and the CO(1-0) transitions, which depends on the nature of the systems: a ratio of $\sim0.7-1$ is typically reported for SMG galaxies and QSOs (see \citealt{CW2013} and references therein),
while an average ratio of $\sim0.42$ has been determined for MS star-forming galaxies at a similar redshift as XID2028 (e.g. \citealt{Daddi2015}).
The CO(3-2) luminosity of XID2028 is L'(CO[3-2])=1.9$\pm0.4\times10^{10}$ K km s$^{-1}$ pc$^2$ ($\sim9.3\pm2\times10^5$ L$_\odot$), following \citet{Solomon2005}. This value lies between the average values observed in this molecular transition for U/LIRGs (L'(CO[3-2])=$2.6\pm0.5$$\times10^{9}$ K km s$^{-1}$ pc$^2$) and SMGs (L'(CO[3-2])=$4.4\pm1.1$$\times10^{10}$ K km s$^{-1}$ pc$^2$), as reported in the work of \citet{Iono2009}. On the other hand, the SFR ($\sim270$ M$_\odot$ yr$^{-1}$) and M$_\star$ ($\sim4.5\times10^{11}$ M$_\odot$) of XID2028 are consistent with those observed in a MS galaxy at z$\sim1.5$ (see \citealt{Mainieri2011,Bongiorno2014,Brusa2015}).
The CO(2-1) transition in XID2028 is not detected down to a sensitivity of 0.23 mJy/beam over the full 770 km/s line width
(corresponding to a 3$\sigma$ upper limit on the line-integrated flux of 0.53 Jy km/s), from a separate, 3mm-band PdBI observation of XID2028 in October 2014 (M. Sargent, private communication). This suggests a near-thermal CO excitation state\footnote{r$_{31}\gsimeq1.0$ assuming CO[2-1] is thermalized, and r$_{31}\gsimeq0.9$ assuming CO[2-1] is sub-thermally excited with $r_{21}=0.84$, the standard value for MS objects.}, and therefore r$_{31}$ around unity, i.e. larger than the standard value usually adopted for MS galaxies and more consistent with the QSO/Starburst scenario.
Given the complex nature of the system, we derive the CO(1-0) luminosity under the {\it conservative} assumption that r$_{31}$=0.7 (consistent with the constraints we have from millimeter data alone), and we will apply different $\alpha_{\rm CO}$ factors to derive molecular gas masses under the QSO/ULIRG and MS assumptions, discussing the implications of the findings in the different cases.
The inferred L'(CO[1-0]) luminosity for XID2028 (abbreviated as L'(CO) in the following) is therefore L'(CO)=2.6$\times10^{10}$ K km s$^{-1}$ pc$^2$ ($\sim1.2\times10^{6}$ L$_\odot$).
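For reference, the line luminosity follows from the standard relation $L'_{\rm CO}=3.25\times10^{7}\,S_{\rm CO}\Delta v\,\nu_{\rm obs}^{-2}\,D_L^{2}\,(1+z)^{-3}$ K km s$^{-1}$ pc$^2$, with $S_{\rm CO}\Delta v$ in Jy km s$^{-1}$, $\nu_{\rm obs}$ in GHz and $D_L$ in Mpc (see \citealt{Solomon2005}). A minimal numerical check with the adopted cosmology is given below; small differences with respect to the quoted values reflect rounding of the inputs:
\begin{verbatim}
import numpy as np

# Numerical check of the quoted line luminosities (Solomon & Vanden Bout relation):
# L'_CO [K km/s pc^2] = 3.25e7 * S_dv[Jy km/s] * nu_obs[GHz]^-2 * D_L[Mpc]^2 * (1+z)^-3
H0, Om, OL = 70.0, 0.3, 0.7
z, nu_obs, S_dv, r31 = 1.5927, 133.37, 1.23, 0.7

zz  = np.linspace(0.0, z, 4001)
E   = np.sqrt(Om * (1.0 + zz)**3 + OL)
D_C = 299792.458 / H0 * np.trapz(1.0 / E, zz)        # comoving distance [Mpc]
D_L = (1.0 + z) * D_C                                # luminosity distance, ~11.7 Gpc

L32 = 3.25e7 * S_dv * nu_obs**-2 * D_L**2 * (1.0 + z)**-3   # ~1.8e10, cf. 1.9e10 quoted
L10 = L32 / r31                                             # ~2.6e10 K km/s pc^2
print(f"D_L = {D_L:.0f} Mpc, L'(CO[3-2]) = {L32:.2e}, L'(CO[1-0]) = {L10:.2e}")
\end{verbatim}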
\subsection{Star Formation Efficiency}
Figure~\ref{SFE} (left panel) shows L'(CO) against the total Infrared Luminosity (L$_{\rm IR}$, computed between 8-1000 $\mu$m)
for XID2028 (red circle). The IR luminosity of XID2028 is very well constrained by Herschel/PACS+SPIRE data (logL$_{\rm IR}$=12.47; see \citealt{Brusa2015,Perna2015}) and has been estimated by fitting all photometric bands at rest-frame wavelengths $>50\mu$m with the \citet{Dale2002} Starburst templates, using the same technique as in \citet{Santini2009}.
Although recent works on FIR emission of AGN show that even the flux observed at rest frame wavelengths longer than 60$\mu$m can be AGN-dominated (e.g. \citealt{Mullaney2011}), in XID2028 the QSO contribution is expected to be negligible, as shown in Figure 2 of \citet{Perna2015}, where the most recent SED fitting decomposition for this object is presented.
The observed IR luminosity corresponds to a SFR of $\sim270^{+50}_{-100}$ M$_\odot$ yr$^{-1}$, using the SFR-IR luminosity relation \citep{Kennicutt1998}, and taking into account the uncertainties on the flux normalization of the starburst component related to the AGN-host SED decomposition.
We compare this measurement with the compilation of low- and high-redshift normal star-forming galaxies with measured $\alpha_{\rm CO}$ presented in \citet{Sargent2014} and the SMG-sample from \citet{Bothwell2013}, involving both outliers and galaxies consistent with the locus of the main sequence at their redshift.
We also plot the unobscured QSOs at 1$<z<$4 presented in \citet{Riechers2011}\footnote{In the case of lensed quasars, the values are corrected for the amplification, as reported in Riechers 2011.}. For these sources, the IR luminosities are extracted from the \citet{CW2013} compilation.
Finally, in Figure 3 we show SW022550 and SW022513 at z$\sim3.4$ \citep{Polletta2011}, ULASJ1539 at z$\sim2.5$ \citep{Feruglio2014},
and the MIPS-selected sources at z$\sim$2 from \citet{Yan2010}. All these systems have been proposed to be in the ``transition phase'' between a heavily obscured Starburst phase and the unobscured QSO phase.
The SFE of XID2028 (SFE$\sim110$) is on the lower side of the SFEs measured for high-z SMGs and unobscured QSOs (SFE$\sim100-1000$). Instead, it is consistent with the values reported (albeit with much larger uncertainties, due to the lack of complete multiwavelength coverage and reliable measurements of L$_{\rm IR}$) for the obscured QSO systems proposed to be in the ``transition phase'' mentioned above.
\begin{figure*}[!t]
\centering
\includegraphics[width=8.7cm]{LIR_LCO.png}
\includegraphics[width=8.0cm]{Mmol_SFR.png}
\caption{{\it Left panel}: L'(CO[1-0]) against the IR luminosity (8-1000$\mu$m) showing a compilation of MS galaxies at 0$<$z$<$2.5 from the \citet{Sargent2014} work (Sa14; grey symbols), local ULIRGs from \citet[So97, crosses], SMGs from Bothwell et al. (2013; Bo13, filled brown squares), 1$<$z$<$4 QSOs with data from Riechers (2011) and Carilli \& Walter (2013; R11/C\&W13, light blue circles).
The measurement for XID2028 obtained assuming $r_{31}=0.7$ is shown as a red circle. Obscured QSOs proposed to be in the transition phase presented in \citet{Aravena2008}, \citet{Polletta2011}, \citet{Feruglio2014} and the MIPS-selected sources from Yan et al. (2010) are also marked, as labeled.
{\it Right panel}: Inverse, integrated Kennicutt-Schmidt relation between SFR and molecular gas mass. The color
points correspond to the position of XID2028 with different CO-to-H$_2$ conversion factors: $\alpha_{\rm CO}$=3.6 (top; blue),
and $\alpha_{\rm CO}$=0.8 (bottom; red). The green star shows the ISM mass inferred from the dust SED. All the values for XID2028 are slightly offset in the x-axis for clarity. Other points are taken from the same samples presented in the left panel. For the Bo13, R11/C\&W13 and Ya10 samples we plot the median value with associated 16\% and 84\% percentiles. For the obscured QSOs all the authors used $\alpha_{\rm CO}$=0.8.
In both panels, the solid black line is the best-fit relation for MS galaxies, the dashed line defines the locus of strong SB galaxies with
approximately 15 times shorter depletion time (M$_{\rm mol}$/SFR) than MS galaxies \citep{Sargent2014}.
}
\label{SFE}
\end{figure*}
\subsection{Molecular gas mass from CO data}
Estimating the molecular gas mass based on the CO luminosity critically hinges on the CO-to-H$_2$ conversion factor $\alpha_{\rm CO}$, defined as the ratio of the molecular gas mass (M$_{\rm mol}$) to the integrated CO(1-0) luminosity ($\alpha_{\rm CO}$=M$_{\rm mol}$/L'(CO)[1-0] in units of M$_\odot$/(K km s$^{-1}$ pc$^{2}$)). This value depends on the ISM conditions, and two distinct assumptions are often adopted:
$\alpha_{\rm CO}\sim4$ for extended SF disks/MS galaxies of solar metallicity, and $\alpha_{\rm CO}\sim0.8$ for compact luminous systems (\citealt{Downes1998}; see \citealt{CW2013} and \citealt{Bolatto2013} for in-depth discussions).
From a morphological point of view we do not have a clear classification of the host galaxy. Given that the HST image suffers from substantial extinction (A$_V\sim3$; see discussion in \citealt{Perna2015}) and, in any case, is dominated by the central active nucleus, it cannot be used for a reliable morphological analysis. However, no clear signatures of merging structures are visible in the rest-frame U band. The low-resolution (with respect to HST) K-band image is instead consistent with both an elliptical galaxy and a spiral galaxy, possibly interacting with a system to the north-east (see Figure 2).
Even if the MS is mainly populated by ``normal'' spiral and disk galaxies (see e.g. \citealt{Wuyts2011a}), we note that XID2028 would lie among the population which occupies the upper envelope of the MS at z$\sim1.5$. These galaxies may also have cuspier light profiles, intermediate between disky galaxies and red and dead systems (see \citealt{Wuyts2011b}, their Figure 1, right panel). In any case, if after point-source subtraction this galaxy were shown to have an early-type or disturbed host morphology, it would actually be highly consistent with the statistical findings of \citet{Wuyts2011b}.
In the vast majority of studies targeting SMG, QSO and ULIRG systems (see e.g. \citealt{Aravena2008,Riechers2011_QSOz3,Polletta2011,Feruglio2014}, among others), $\alpha_{\rm CO}=0.8$ has been adopted even in the absence of better information on the physical properties of the system (e.g. the compactness of the source). Under the assumption of the Starburst/QSO scenario, we obtain for XID2028 a gas mass M$_{\rm mol}\sim$2.1$\pm0.4\times10^{10}$ M$_\odot$.
To infer the molecular gas mass under the MS hypothesis, we consider a metallicity dependent conversion factor $\alpha_{\rm CO}$ (see e.g.\citealt{Genzel2012,Bolatto2013}).
In the following we will assume for XID2028 a value of $12+log(O/H)=9.07$, the metallicity inferred from the so-called Fundamental Metallicity Relation (FMR, \citealt{Mannucci2010}), that relates the metal content with the stellar mass and the SF of the galaxy independently of redshift \citep{Cresci2012}.
Applying the relations describing redshift-dependent variations of $\alpha_{\rm CO}$ in the SFR-M$_\star$ plane of \citet{Sargent2014} to XID2028 one would expect\footnote{A virtually identical conversion factor would be inferred using the relation between metallicity and $\alpha_{\rm CO}$ calibrated in \citet{Genzel2012} once the offsets between different metallicity calibrations are taken into account.} $\alpha_{\rm CO}\sim3.6$,
and the corresponding molecular gas mass would then be M$_{\rm mol}\sim9.5\pm1.9\times10^{10}$ M$_\odot$.
The two values for M$_{\rm gas}$ inferred under the two different assumptions are plotted in Figure 3b (with the statistical errors associated to the line detection), where the molecular gas mass is shown as a function of the SFR (for the same samples presented in Fig. 3a).
\subsection{Molecular gas mass from FIR emission}
We also adopt an independent method to compute the total gas mass in this source, using the dust mass derived from the FIR photometry. For this purpose, we assume a metallicity-dependent
gas-to-dust ratio, following the calibration presented by \citet{Santini2014} and recently extended to AGN samples in the work by \citet{Vito2014_gas}.
This estimate is independent of $\alpha_{\rm CO}$, although it depends on the metallicity ($Z$) of the system and on the
assumption that the dust-to-gas ratio scales linearly with $Z$ through a constant factor \citep{Draine2007}.
The dust mass is obtained via SED decomposition of the AGN and host galaxy contributions, using a combination of the \citet{Silva2004} AGN templates and the \citet{DraineLi2007} dust templates to fit the 100-500$\mu$m range. For objects at z$>$1, submillimeter data are in principle required to properly sample the dust emission free from AGN contamination. However, we note that for XID2028 the best fit SED decomposition performed following \citet{Vito2014_gas} is consistent with the upper limit of the continuum at 2mm (see Section 2.2).
We obtain a total dust mass of M$_{\rm dust}=7.7\pm4.2\times10^{8}$ M$_\odot$.
Assuming the FMR metallicity (see above) and the \citet{Santini2014} calibration, this translates into a M$_{\rm gas}\sim4.5\pm2.4\times10^{10}$ M$_\odot$ (without considering the uncertainty on the dust-to-gas-ratio calibrations, e.g. a factor of $\sim2$, see \citealt{Sandstrom2013}).
We note that using the \citet{Leroy2011} metallicity dependence of the dust-to-gas ratio would yield consistent results, within the errors.
The value inferred from the dust fit approach is plotted as a green star in Fig. 3b.
If we use this estimate for M$_{\rm gas}$, and the observed L'(CO), we can derive an {\it effective} $\alpha_{\rm CO}$ for this source, $\alpha_{\rm CO(dust)}\sim2.4\times r_{31}$ ($\alpha_{\rm CO(dust)}\sim$1.7 given our adopted excitation correction r$_{31}$=0.7).
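For convenience, the dust-based numbers quoted above can be verified with simple ratios; a quick check using the values as given in the text (without re-deriving the gas-to-dust calibration) is:
\begin{verbatim}
# Quick consistency check of the dust-based gas mass and effective alpha_CO,
# using the values quoted in the text (the gas-to-dust ratio is the one implied
# by the adopted metallicity-dependent calibration, not recomputed here).
M_dust = 7.7e8          # Msun
M_gas  = 4.5e10         # Msun, from the dust method
L_co10 = 2.6e10         # K km/s pc^2 (assuming r_31 = 0.7)
L_co32 = 1.9e10         # K km/s pc^2

print("implied gas-to-dust ratio:", M_gas / M_dust)   # ~58
print("alpha_CO(dust):", M_gas / L_co10)              # ~1.7
print("coefficient of r_31:", M_gas / L_co32)         # ~2.4, i.e. alpha_CO(dust) ~ 2.4 x r_31
\end{verbatim}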
\begin{figure*}[!t]
\centering
\includegraphics[width=18cm]{fig4_final.png}
\caption{
Gas fraction $\mu_{\rm mol}$ {\it (Left panel)} and depletion timescale {\it (Right panel)} plotted versus the sSFR-excess for the same samples and with the same color code presented in Figure 3. The values for XID2028 are slightly offset in the x-axis for clarity. All quantities are normalized to the expected values for normal and starburst galaxies predicted by the calibration presented in \citet{Sargent2014}. The black line traces the expected variation (median) with sSFR for a MS spiral galaxy with identical mass and redshift as XID2028 (see \citealt{Sargent2014}). The step at sSFR/$<$sSFR$>\sim$4 reflects the transition from the main sequence locus to the sSFR-regime where high-SFE starbursts dominate. XID2028 lies a factor $\sim$2 to $\sim$10 below the black line, i.e. it shows significantly lower gas fraction and depletion time scale than those expected given the properties of its host galaxy.
}
\label{fgas}
\end{figure*}
\subsection{Gas fraction and depletion timescale}
The uncertainty in the derived gas mass from the CO data is dominated by the assumption on $\alpha_{\rm CO}$ (a factor of 4.5) rather than by the statistical uncertainties (20\%). Given that the value derived from the dust fit is in between those for the two different assumptions on $\alpha_{\rm CO}$, in the following we will refer to this value as our best estimate for the molecular gas mass, and to those from the CO data under the MS and QSO/Starburst assumptions as upper and lower limit, respectively, i.e. M$_{\rm gas}=4.5 (1.7-11.4)\times10^{10}$ M$_\odot$, where the lower and upper limits in parenthesis also take into account the statistical uncertainties of the detection, and overall also the uncertainty in the assumed dust-to-gas ratio.
We note that the total gas mass inferred from the dust continuum fit includes both the molecular and atomic components. However, the atomic mass usually constitutes a negligible fraction of the total gas mass.
The stellar mass of XID2028 is M$_{\star}\sim4.5\times10^{11}$ M$_\odot$ from the most recent SED fitting decomposition \citep{Perna2015}. This value is a result of the inclusion in the multicomponent SED fitting of a mildly obscured QSO component, given that we observe the Broad Line Region (BLR) emission in the H$\alpha$ line complex \citep{Bongiorno2014}.
In Section 2.2 we reported a dynamical mass M$_{\rm dyn}$$\sim$6$\pm2\times10^{11}$ M$_{\odot}$. Although the estimate of the dynamical mass suffers from large uncertainties, it is quite reassuring that it is consistent with the value we obtain from the sum of the stellar and molecular mass components (M$_{\rm tot}$$\sim4.6-5.6\times10^{11}$ M$_\odot$, taking into account the range of M$_{gas}$).
We can then calculate the molecular gas fraction, $\mu_{\rm mol}$, defined as the ratio of the molecular gas mass and the stellar mass ($\mu_{\rm mol}$=M$_{\rm mol}$/M$_{\star}$; see e.g. \citealt{Sargent2014,Genzel2014_gas}).
Given the molecular gas masses inferred in the previous Section, the gas fraction translates into $\sim5$\% for the QSO/Starburst and $\sim21$\% for the MS scenarios. The value from the dust mass measurement is in between these two estimates ($\sim10\%$).
Similarly, we can estimate the depletion time scale (defined as M$_{\rm gas}$/SFR, i.e. the time needed to consume the gas reservoir at the current SFR), and we infer t$_{\rm depl}$=75, 340 and 160 Myr using the QSO/Starburst, MS and dust-fit derived gas masses, respectively.
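For transparency, the numbers quoted above follow directly from the definitions; the SFR entering these ratios is not repeated here, but the quoted depletion times imply SFR$\,\sim$280 M$_\odot$ yr$^{-1}$ (this value is back-derived from the numbers above and is given only as a consistency check):
\begin{equation*}
\mu_{\rm mol}=\frac{M_{\rm mol}}{M_\star}\simeq\frac{(2.1,\,4.5,\,9.5)\times10^{10}}{4.5\times10^{11}}\simeq5\%,\ 10\%,\ 21\%,
\qquad
t_{\rm depl}=\frac{M_{\rm gas}}{\rm SFR}\simeq75,\ 160,\ 340\ {\rm Myr},
\end{equation*}
for the QSO/Starburst, dust-based and MS gas masses, respectively.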
\subsection{Evidence for QSO feedback}
Figure~\ref{fgas} (left panel) shows the gas fraction in XID2028 for the three assumptions described above, plotted against
the sSFR-excess with respect to the main sequence, i.e. sSFR/sSFR$_{\rm MS}$=$0.86$, where the mass- and redshift-dependence of the characteristic sSFR of MS galaxies follows the calibration of \citet{Sargent2014}, which is based on a large compilation of literature data.
In this plot we show the same samples used in Figure 3 (with the exception of unobscured QSOs and the MIPS selected sources
for which no stellar mass estimates are available) and we plot as a solid line the median trend
with normalized sSFR, expected for a galaxy of the same mass and redshift of XID2028
(taken from the 2-Star Formation Mode description of normal and starbursting (off-MS) galaxies in \citealt{Sargent2014}).
Taking the best M$_{\rm gas}$ estimate for our target, and even taking into account the uncertainty on the $\alpha_{\rm CO}$ assumption, XID2028 is among the objects with the lowest gas fraction for its sSFR detected so far in the high-z Universe and associated with normal star-forming galaxies (green star in Figure 4),
especially when compared to systems with similar masses (solid line).
The $\mu_{\rm mol}$ is instead more similar to that expected for `Starburst' galaxies
of a similar mass and redshift (see value of black trend line at sSFR/<sSFR>$_{\rm MS}\gsimeq4$),
but XID2028 does not share with these sources the same burst of star formation.
Instead, the gas fraction of XID2028 is similar to normal galaxies in the local universe (open triangles), despite its higher redshift.
An alternative way of visualizing the gas content and consumption is illustrated in
the right panel of Figure~\ref{fgas}, where the depletion time scale
is plotted against the MS-normalised sSFR of the host galaxy.
Assuming our best M$_{\rm gas}$ estimate,
XID2028 lies at shorter depletion time scales with respect to MS galaxies (at any redshift),
i.e. it is consuming its residual gas
more rapidly than normal star-forming galaxies.
This qualifies XID2028 as a clear outlier with respect to the average population, and a rare object, consistent with the hypothesis that it is caught in the very short transition phase in which the QSO feedback is released.
Similar conclusions can be reached examining the position of our source with respect to the Kennicutt-Schmidt relation \citep{Kennicutt1998}: assuming the physical scales inferred in Section 2.2, and that the molecular gas and the SF episodes are distributed uniformly over this region, XID2028 would lie slightly above (a factor $\sim2.5$) the correlation observed for normal and starburst galaxies. However, the SFR density measured in this way is to be considered a lower limit, given that the SF regions seem to be patchy (see Cresci et al. 2015). Therefore, XID2028 would further deviate above the K-S relation, towards regions of short depletion timescales.
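For reference, we recall the integrated Kennicutt--Schmidt relation in its commonly used form (slope and normalisation from \citealt{Kennicutt1998}; the comparison above is intended only at the order-of-magnitude level):
\begin{equation*}
\Sigma_{\rm SFR}\simeq2.5\times10^{-4}\left(\frac{\Sigma_{\rm gas}}{\rm M_\odot\,pc^{-2}}\right)^{1.4}\ {\rm M_\odot\,yr^{-1}\,kpc^{-2}}.
\end{equation*}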
It is important to note that, even when using the MS assumption, the molecular gas fraction and depletion timescale would be considerably lower than those expected for systems of the same host galaxies properties of XID2028 (blue point in Figure 4). In particular, the depletion timescale observed in XID2028 for the MS scenario is a factor of $\sim2$ lower than the expectations of \citet{Sargent2014} and a factor $\sim3$ lower than that obtained by the parameterization of MS and off-MS galaxies presented in \citet[using their {\it global fit} we expect for XID2028 $\rm{t_{dep}(G15)_{global}}\sim970$ Myr]{Genzel2014_gas}.
The discrepancy with the calibrations is more extreme if the values obtained in the QSO scenario are adopted (red circles in Figure 4).
We also note that our chain of assumptions in deriving M$_{\rm gas}$ has been very conservative. For example, we used r$_{31}$=0.7 instead of $r_{31} \gsimeq 0.9$ as suggested by the non detection of CO[2-1] emission, which would have instead provided a 20\% smaller CO[1-0] flux. This conservative assumption also compensates for a possible overestimate of the value of the CO[3-2] flux, which could result from measuring the line flux at the phase centre rather than at the slightly offset centroid. The result of the lack of molecular gas in XID2028, with M$_{\rm gas}\lsimeq10^{11}$ M$_\odot$, is therefore quite robust.
A short depletion time scale with respect to MS galaxies has also been found for SMGs in the \citet{Bothwell2013} sample, and other AGN/Starburst systems plotted in Figure 4 \citep{Aravena2008,Polletta2011,Feruglio2014}. \citet{Yan2010} also reported a short depletion timescale of $\sim$40 Myr for the sample of MIPS-selected ULIRGs.
The short depletion time scale in SMGs has been interpreted as a higher star formation efficiency in the galaxy (e.g. \citealt{Genzel2010,Daddi2010}), probably due to the higher density of the ISM in these compact systems. This may also be the case for ULASJ1534 and COSBO11, which have sSFRs comparable to SMGs, and for which we expect compact gas reservoirs.
Instead, in XID2028 a significant fraction of the gas is expected to be already expelled from the galaxy. The SF is then probably maintained only in the denser environments, less affected by the negative feedback, and possibly enhanced by positive feedback due to the outflow induced pressure (e.g. \citealt{Silk2013}).
The fact that XID2028 has a smaller gas reservoir and shorter depletion time than that measured for MS galaxies of similar sSFR therefore constitutes a new probe,
in addition to the analysis presented in \citet{Cresci2015} based on NIR data, that QSO feedback in the form of powerful outflows is able to affect star formation in the host and expel a significant fraction of gas from the host galaxy.
\section{Summary}
We presented the first molecular line luminosity measurement, via CO(3-2) observations obtained at the PdBI interferometer, in a luminous obscured QSO at z$\sim$1.5. The target is thought to be in the `blow-out' phase, and the presence of a powerful outflow with significant impact on the host galaxy has been unveiled through previous NIR observations \citep{Perna2015,Cresci2015}. We complemented the PdBI data with FIR dust fitting, and report the following results:
\begin{itemize}
\item[$\bullet$] We measure a SFE ($\simeq$110) at the lower end of those reported in the literature for a large number of QSOs and Starburst/SMG galaxies (see \citealt{Iono2009,CW2013}), and consistent with that inferred for obscured QSOs at higher redshift;
\item[$\bullet$]
We infer a molecular gas mass (M$_{\rm mol}$) in the range $(2.1\pm0.4)$--$(9.5\pm1.9)\times10^{10}$ M$_\odot$,
applying the QSO/Starburst or MS conversion factors to the measured L'(CO) line luminosity, respectively, and a total gas mass M$_{\rm gas}\sim4.5\times10^{10}$ M$_\odot$ from dust continuum fitting;
\item[$\bullet$]
A value for the molecular gas mass $<10^{11}$ M$_\odot$ is also remarkably consistent with our estimates of the dynamical mass
through spectroastrometric methods (see Section 2.2), given the high stellar mass of XID2028;
\item[$\bullet$]
We also infer a molecular gas fraction $\mu_{\rm mol}\sim5-20$\%.
This translates into a gas depletion time scale t$_{\rm depl}\sim$70-340 Myr, depending on the assumptions on $\alpha_{\rm CO}$ (see Figure 4).
\item[$\bullet$]
The value of t$_{\rm depl}$ is considerably lower ($\lsimeq30$\%) than those observed in systems hosted in similar massive (M$_{\star}>10^{11}$ M$_\odot$) MS galaxies (MS-normalised sSFR$\sim1$), and consistent with those observed for SMGs and for the other few systems proposed to be in the transition phase.
\end{itemize}
\par\noindent
We propose that in XID2028 the QSO wind, detected in the ionised gas component out to 10-kpc scales, has already removed most of the molecular gas from the host galaxy.
All the observational constraints (low molecular gas content, lowest $\mu_{\rm mol}$ at a fixed sSFR when compared to M$_\star>10^{11}$ M$_\odot$ systems, and lowest sSFR at a fixed $\mu_{\rm mol}$) are consistent with such a scenario, where the gas in the host galaxy of XID2028
is indeed already depleted/dispersed by the effects of the strong QSO feedback (see also \citealt{Coppin2008} and \citealt{Yan2010} for similar interpretation).
In dense regions (e.g. clumpy M$_{gas}$ reservoirs), possibly located at the edge of the outflow cavity \citep{Cresci2015}, the residual gas is converted into stars at a high rate similar to that observed in SMGs, where the low depletion time scale is indeed ascribed to the efficient SF triggered in dense and compact gas reservoirs.
The measure of the intensity of the CO(3-2) emission in XID2028 represents a first step towards a mapping experiment using high spatial resolution to study the morphology and the kinematics of the molecular gas reservoir and
of the clumpy structures in the distribution of SF regions seen in HST and SINFONI maps.
Sensitive ALMA and/or NOEMA observations of XID2028 will finally
give the spatial resolution to locate molecular clouds (see, e.g., \citealt{Aravena2014}) and reveal any possible molecular outflow component.
\begin{acknowledgements}
Based on observations carried out under project number X--8 with the IRAM PdBI
Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN
(Spain). We gratefully acknowledge the allocation of IRAM DDT time, and we thank the staff of the IRAM Observatory for their support of this program.
MB, MP and GL acknowledge support from the FP7 Career Integration Grant ``eEASy'' (``SMBH evolution through cosmic time: from current surveys to eROSITA-Euclid AGN Synergies", CIG 321913).
MB gratefully acknowledges fundings from the DFG cluster of excellence `Origin and Structure of the Universe' (www.universe-cluster.de).
We acknowledge financial support from INAF under the contracts PRIN-INAF-2011 (``Black Hole growth and AGN feedback through cosmic time"), PRIN-INAF-2012 (``The Lifecycle of early Black Holes'') and PRIN MIUR 2010-2011 (``The dark Universe and the cosmic evolution of baryons").
We thank Dennis Downes and Andrea Comastri for enlightening discussion. We thank the anonymous referee for his/her interest towards the results of our work, a very careful reading of the paper and useful suggestions which improved the presentation of the results.
\end{acknowledgements}
\section{Introduction}
We consider the weighted elliptic system
\begin{equation}\label{eq:1.1}
\begin{cases}
-\Delta u=(1+|x|^2)^{\frac{\alpha}{2}} v,\\
-\Delta v=(1+|x|^2)^{\frac{\alpha}{2}} u^p,
\end{cases} \quad \mbox{in}\;\ \mathbb{R}^N,
\end{equation}where $N \ge 5$, $p>1$ and $\alpha>0$. We are interested in the Liouville-type theorems---i.e., the nonexistence of the classical positive and nonnegative stable solutions (\ref{eq:1.1}) in $\mathbb{R}^N$ or the half space $\mathbb{R}^N_+$.\vskip .05in
We recall the case $\alpha=0$, the so-called Lane-Emden equation or system which has been widely studied by many authors.
For the second order Lane-Emden equation, the finite Morse index solutions of the nonlinear problem
\begin{equation}\label{eq:1.2}
\Delta u+|u|^{p-1}u=0\quad \mbox{in}\; \mathbb{R}^N,\; p>1
\end{equation}have been completely classified by Farina (see \cite{Farina}). Farina also proved that nontrivial finite Morse index solutions to (\ref{eq:1.2}) exist if and only if $p \ge p_{JL}$ and $N \ge 11$, or $p=\dfrac{N+2}{N-2}$ and $N \ge 3$. Here $p_{JL}$ is the so-called Joseph-Lundgren exponent (see \cite{Gui}). His proof relies on a delicate application of the classical Moser iteration. Many papers have utilized Farina's approach to study the second order Hardy-H\'{e}non equation; we refer to \cite{Dancer,Wang} and the references therein.\vskip .05in
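For the reader's convenience we recall the explicit expression of this exponent in the form usually quoted (see \cite{Gui}); this formula is stated here only as a reminder and is not used in our proofs:
\begin{equation*}
p_{JL}=
\begin{cases}
+\infty, & \mbox{if}\; N\le 10,\\[0.1cm]
1+\dfrac{4}{N-4-2\sqrt{N-1}}, & \mbox{if}\; N\ge 11.
\end{cases}
\end{equation*}\vskip .05in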
Unfortunately, Farina's approach may fail to yield a similarly complete
classification of stable solutions and finite Morse index solutions of the biharmonic equation
\begin{equation}\label{eq:1.3}
\Delta^2 u=u^p,\quad \mbox{in}\;\ \Omega \subset \mathbb{R}^N.
\end{equation}To obtain the complete classification, D\'{a}vila-Dupaigne-Wang-Wei \cite{Davila} derived a monotonicity formula for solutions of (\ref{eq:1.3}) and used it to reduce the nonexistence of nontrivial entire solutions of (\ref{eq:1.3}) to that of nontrivial homogeneous solutions, and they gave a complete classification of stable solutions and of finite Morse index solutions. Adopting a similar method, Hu \cite{Hu-1} obtained a complete classification of stable solutions and finite Morse index solutions of the fourth order H\'{e}non equation $\Delta^2 u=|x|^{\alpha} |u|^{p-1}u$.\vskip .05in
However, it seems that the monotonicity formula approach in \cite{Davila,Hu-1} does not work well for some weighted elliptic systems or for negative exponents. There are several new approaches dealing with such elliptic equations or systems. The first approach is the use of test functions, Souplet's inequality \cite{Souplet} and the idea of Cowan-Esposito-Ghoussoub in \cite{Cowan-Espositio-Ghoussoub}. For example, Fazly proved the following result:\vskip .1in
\noindent {\bf Theorem A}
{\it (\cite[Theorem 2.4]{Fazly}) Suppose that $(u,v) \in C^2(\mathbb{R}^N) \times C^2(\mathbb{R}^N)$ is a nonnegative entire stable solution of (1.1) in dimension
\begin{equation}\label{eq:1.4}
N<8+3\alpha+\dfrac{8+4\alpha}{p-1}.
\end{equation}Then $(u,v)$ must be the trivial solution, where $\alpha \ge 0$ and $p>1$.}
\vskip .1in
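To make the bound (\ref{eq:1.4}) concrete (the following values are simple evaluations of the right-hand side and are only illustrative), for $\alpha=0$ condition (\ref{eq:1.4}) reads
\begin{equation*}
N<8+\dfrac{8}{p-1},\qquad\mbox{e.g.}\quad N<16\ \mbox{for}\ p=2,\qquad N<12\ \mbox{for}\ p=3,
\end{equation*}
and the right-hand side decreases to $8$ as $p\to\infty$.\vskip .1in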
The second approach, which was obtained by Cowan-Ghoussoub \cite{Cowan-Ghoussoub} and Dupaigne-Ghergu-Goubet-Warnault \cite{Dupaigne} independently, is firstly to derive the following interesting intermediate second order stability criterion: for the stable positive solution to (\ref{eq:1.3}), it holds
\begin{equation*}
\sqrt{p} \int_{\mathbb{R}^N} u^{\frac{p-1}{2}} \zeta^2 dx \le \int_{\mathbb{R}^N} |\nabla \zeta|^2 dx ,\quad \forall \zeta \in C_0^1(\mathbb{R}^N).
\end{equation*}Then this will be carried out through a bootstrap argument which is reminiscent of the classical Moser iteration method. Recently, combining the first and second approaches, the fourth order elliptic equation with positive or negative exponent have been discussed in \cite{Cowan,Guo-Wei,Hajlaoui}.\vskip .05in
For the general system with $\alpha \neq 0$, the Liouville property is less understood and is more delicate to handle than in the case $\alpha=0$.
Moreover, from Theorem A, we note that if $p>3$ and $\alpha < \dfrac{4(p-3)}{3p+1}$, then the space dimension for the Liouville property of the nonnegative stable solution to (\ref{eq:1.1}) is less than $12$. But the study of radial solutions in \cite{Karageorgis} suggests the following {\bf conjecture}:\\[0.1cm]
\hspace*{18pt}{\it A smooth stable solution to (\ref{eq:1.3}) exists if and only if $p \ge p_{JL_4}$ and $N \ge 13$.}\\[0.1cm]
Consequently, the Liouville-type result for stable solutions of (\ref{eq:1.1}) should hold true for any $N \le 12$, $p>1$ and $\alpha \ge 0$. That is what we will prove here.
Inspired by the ideas in \cite{Cowan,Guo-Wei,Hajlaoui}, our purpose in this paper is to prove the following Liouville-type theorems of the weighted elliptic system (\ref{eq:1.1}).
\begin{theorem}\label{eq:t1.1}
Suppose that $(u,v)$ is a classical stable solution of the weighted elliptic system (\ref{eq:1.1}) with $u>0$. If $N \ge 5$, $\alpha>0$ and $p>1$ satisfy the following conditions for some $\ell \ge 5$:
\begin{itemize}
\item [\rm (i).] $
N <\ell+\dfrac{\alpha (\ell-2)}{2}$.
\item [\rm (ii).] $p \in (1,p_*(\ell))$, where
\begin{equation*}p_*(\ell)=
\begin{cases}
+\infty,& 5 \le \ell \le \overline{\ell},\\
\dfrac{\ell+2-\sqrt{\ell^2+4-4\sqrt{\ell^2+H^*_{\ell}}}}{\ell-6-\sqrt{\ell^2+4-4\sqrt{\ell^2+H^*_{\ell}}}}, & \ell > \overline{\ell},
\end{cases}
\end{equation*}and $\overline{\ell} \in (12,13)$ is the root of the quartic equation
\begin{equation*}
8(\ell-2)(\ell-4)=H^*_{\ell}:=\dfrac{\ell^2 (\ell-4)^2}{16}+\dfrac{(\ell-2)^2}{2}-1.
\end{equation*}
\end{itemize}Then the system (\ref{eq:1.1}) admits only the trivial solution.
\end{theorem}
\begin{remark}\label{eq:r1.1}
\begin{itemize}
\item [\rm (i).] We note that if $5 \le N\le 12+5 \alpha$, then the weighted elliptic system (\ref{eq:1.1}) has no classical positive stable solution for any $p>1$ and $\alpha\ge 0$.
\item [\rm (ii).] From (\ref{eq:1.4}) and Theorem \ref{eq:t1.1}, we find that the inequality
\begin{equation*}
8+3\alpha+\dfrac{8+4\alpha}{p-1}<\ell+\dfrac{\alpha(\ell-2)}{2}
\end{equation*}holds true, when $8\le \ell \le 12$, $p=+\infty$ or $\ell \ge 13$, $p=p_*(\ell)$.
\item [\rm (iii).] If we denote $\ell:=2+2\mu$ by Remark \ref{eq:r3.1} (iii), then the weighted elliptic system (\ref{eq:1.1}) has no classical positive stable solution in
dimension
\begin{equation*}
N<2+(2+\alpha) \mu,
\end{equation*}where $\alpha \ge 0$, $\mu$ is the largest root of the polynomial
\begin{equation}\label{eq:1.5}
H(p,\mu)=\mu^4-\dfrac{32p(p+1)}{(p-1)^2}\mu^2+\dfrac{32p(p+1)(p+3)}{(p-1)^3}\mu-\dfrac{64p(p+1)^2}{(p-1)^4},
\end{equation}for any $p>1$.
\end{itemize}
\end{remark}\vskip .06in
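To illustrate Remark \ref{eq:r1.1} (iii) with a concrete number (the root below is computed numerically and is only indicative): for $p=3$ the polynomial reduces to
\begin{equation*}
H(3,\mu)=\mu^4-96\mu^2+288\mu-192,
\end{equation*}
whose largest root is $\mu\approx7.92$, so that (\ref{eq:1.1}) has no classical positive stable solution whenever $N<2+(2+\alpha)\mu\approx17.8+7.9\,\alpha$.\vskip .06in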
\begin{theorem}\label{eq:t1.2}
Assume that $(u,v)$ is a classical nonnegative stable solution of the weighted elliptic system (\ref{eq:1.1}), and
$N\ge 5$, $\alpha>0$ and $p>1$ satisfies one of the following conditions:
\begin{itemize}
\item [\rm (i).] $N <\ell+\dfrac{\alpha (\ell-2)(p+3)}{4(p+1)}$ and $p \in (1,p_*(\ell))$ for some $\ell \ge 5$; or
\item [\rm (ii).] For any $p>1$, $N<2+2\mu +\dfrac{\alpha(p+3)}{2(p+1)} \mu$, where $\mu$ is the largest root of the polynomial
$H(p,\mu)$ in (\ref{eq:1.5}).
\end{itemize}Then $(u,v)$ must be the trivial solution.
\end{theorem}
\begin{remark}\label{eq:r1.2}
Clearly, if for any $p>1$ and $\alpha \ge 0$, $5 \le N\le 12+\dfrac{5\alpha (p+3)}{2(p+1)}$, then the system (\ref{eq:1.1}) has no classical nonnegative stable solution.
\end{remark}
\begin{itemize}
\item {\bf Notation}. Here and in the following, we use $B_r(x)$ to denote the open ball in $\mathbb{R}^N$ centered at $x$ with radius $r$. We also write $B_r=B_r(0)$. $C$ denotes a generic positive constant independent of $u$, which may change from line to line.
\end{itemize}
The organization of the paper is as follows. In section 2, we prove some decay estimates and a point-wise estimate for stable solutions of the system (\ref{eq:1.1}). In section 3 we prove the Liouville-type theorem for positive stable solutions of (\ref{eq:1.1}), that is Theorem \ref{eq:t1.1}. Adopting a similar approach, we prove Theorem \ref{eq:t1.2} in section 4.\vskip .1in
\section{Preliminaries}
Let $\Omega$ be a subset of $\mathbb{R}^N$ and $f,g \in C^1\left (\mathbb{R}^{2}\times\Omega \right )$. Following Montenegro \cite{Montenegro}, we consider a general elliptic system
\begin{equation*}
\begin{cases}
-\Delta u=f(u,v,x),\\
-\Delta v=g(u,v,x),
\end{cases}\quad x \in \Omega. \eqno (Q_{f,g})
\end{equation*}
\begin{definition}\label{eq:d1.1}
A solution $(u,v)\in C^2(\Omega)\times C^2(\Omega)$ of $(Q_{f,g})$ is said to be {\it stable}, if the eigenvalue problem
\begin{equation*}
\begin{cases}
-\Delta \phi=f_u(u,v,x)\phi+f_v(u,v,x)\psi+\eta \phi,\\
-\Delta \psi =g_u(u,v,x)\phi+g_v(u,v,x)\psi+\eta \psi,
\end{cases}\eqno (E_{f,g})
\end{equation*} has a first positive eigenvalue $\eta>0$, with corresponding positive smooth eigenfunction pair $(\phi,\psi)$.
A solution $(u,v)$ is called {\it semi-stable}, if the first eigenvalue $\eta$ is nonnegative.
\end{definition}
\begin{lemma}\label{eq:l2.1}
Let $(u,v)$ be a classical stable solution of (\ref{eq:1.1}). Then the following two inequalities hold
\begin{equation*}
p\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{p-1} \zeta^2 dx \le \int_{\mathbb{R}^N} \dfrac{|\Delta \zeta|^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx, \quad \forall \zeta \in H^2 (\mathbb{R}^N),
\end{equation*}and
\begin{equation}\label{eq:2.1}
\sqrt{p}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}} \zeta^2 dx \le \int_{\mathbb{R}^N} |\nabla \zeta|^2 dx,
\end{equation}for all $\zeta \in H^1(\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
Adopting the proofs of Lemma 3 and Lemma 7 in \cite{Cowan}, we get the desired results; therefore we omit the details.
\end{proof}
Next, we list some decay estimates and point-wise estimates for stable solutions which will be useful in the following proofs.
\begin{lemma}(\cite[Lemma 2.2]{Wei-Ye})\label{eq:l2.2}
For any $\zeta, \eta \in C^4(\mathbb{R}^N)$, the identity holds
\begin{equation*}
\Delta \zeta \Delta (\zeta\eta^2)=[\Delta (\zeta \eta)]^2-4(\nabla \zeta \cdot \nabla \eta)^2-\zeta^2(\Delta \eta)^2+2\zeta \Delta \zeta |\nabla \eta|^2-4\zeta \Delta \eta \nabla \zeta \cdot \nabla \eta.
\end{equation*}
\end{lemma}
\begin{lemma}\label{eq:l2.3}
For any $\zeta \in C^4(\mathbb{R}^N)$ and $\eta \in C^4_0 (\mathbb{R}^N)$, we obtain the two identities
\begin{align}\label{eq:2.2}
\int_{\mathbb{R}^N} \dfrac{\Delta \zeta \Delta (\zeta \eta^2)}{(1+|x|^2)^{\frac{\alpha}{2}}} dx =& \int_{\mathbb{R}^N} \dfrac{\left [\Delta (\zeta \eta) \right ]^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx +\int_{\mathbb{R}^N}
\dfrac{\left [-4(\nabla \zeta \cdot \nabla \eta)^2+2 \zeta \Delta \zeta | \nabla \eta|^2 \right ]}{(1+|x|^2)^{\frac{\alpha}{2}}}dx \nonumber \\[0.1cm]
& +\int_{\mathbb{R}^N} \dfrac{\zeta^2}{(1+|x|^2)^{\frac{\alpha}{2}}} \left [2\nabla (\Delta \eta)\cdot \nabla \eta+(\Delta \eta)^2 \right ]dx \nonumber \\[0.1cm]
& -2\alpha \int_{\mathbb{R}^N} \dfrac{\zeta^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\Delta \eta (\nabla \eta \cdot x) dx,
\end{align}and
\begin{align}\label{eq:2.3}
2 \int_{\mathbb{R}^N} \dfrac{|\nabla \zeta|^2|\nabla \eta|^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx= & 2\int_{\mathbb{R}^N} \dfrac{\zeta (-\Delta \zeta)|\nabla \eta|^2 }{(1+|x|^2)^{\frac{\alpha}{2}}}dx+\int_{\mathbb{R}^N} \dfrac{\zeta^2 \Delta (|\nabla \eta|^2) }{(1+|x|^2)^{\frac{\alpha}{2}}}dx \nonumber \\[0.1cm]
& -\alpha \int_{\mathbb{R}^N} \dfrac{\zeta^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\left [2(\nabla (|\nabla \eta|^2) \cdot x)+N|\nabla \eta|^2\right ] dx \nonumber \\[0.1cm]
&+\alpha(\alpha+2)\int_{\mathbb{R}^N} \dfrac{\zeta^2 |x|^2 |\nabla \eta|^2}{(1+|x|^2)^{\frac{\alpha}{2}+2}}dx.
\end{align}
\end{lemma}
\begin{proof}
Integrating by parts, we get
\begin{align*}
-4 \int_{\mathbb{R}^N} & \dfrac{\zeta \Delta \eta \nabla \zeta \cdot \nabla \eta }{(1+|x|^2)^{\frac{\alpha}{2}}}dx
= 2\int_{\mathbb{R}^N}\zeta^2\cdot div \left ( \dfrac{\Delta \eta \nabla \eta }{(1+|x|^2)^{\frac{\alpha}{2}}} \right )dx\\[0.1cm]
= & 2 \int_{\mathbb{R}^N} \dfrac{\zeta^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [\nabla (\Delta \eta)\cdot \nabla \eta +(\Delta \eta)^2 \right ] dx-2\alpha \int_{\mathbb{R}^N} \dfrac{\zeta^2 \Delta \eta (\nabla \eta \cdot x)}{(1+|x|^2)^{\frac{\alpha}{2}+1}}dx.
\end{align*}Combining with Lemma \ref{eq:l2.2}, it implies that the identity (\ref{eq:2.2}) holds true.
A simple computation leads to
\begin{align*}
\int_{\mathbb{R}^N} \dfrac{\Delta (\zeta^2)|\nabla \eta|^2 }{(1+|x|^2)^{\frac{\alpha}{2}}}dx = & \int_{\mathbb{R}^N} \zeta^2 \Delta \left ( \dfrac{ |\nabla \eta|^2 }{(1+|x|^2)^{\frac{\alpha}{2}}} \right )dx\\[0.1cm]
=& \int_{\mathbb{R}^N} \dfrac{\zeta^2 \Delta (|\nabla \eta|^2) }{(1+|x|^2)^{\frac{\alpha}{2}}}dx-\alpha \int_{\mathbb{R}^N} \dfrac{\zeta^2 \left [2(\nabla (|\nabla \eta|^2) \cdot x)+N|\nabla \eta|^2\right ]}{(1+|x|^2)^{\frac{\alpha}{2}+1}} dx \\[0.1cm]
&+\alpha(\alpha+2)\int_{\mathbb{R}^N} \dfrac{\zeta^2 |x|^2 |\nabla \eta|^2}{(1+|x|^2)^{\frac{\alpha}{2}+2}}dx.
\end{align*}Again it is easy to verify that
\begin{equation*}
\dfrac{1}{2}\Delta (\zeta^2)=\zeta \Delta \zeta+|\nabla \zeta|^2.
\end{equation*}Combining the above two identities, we get the identity (\ref{eq:2.3}).
\end{proof}
\begin{lemma}\label{eq:l2.4}
Let $N \ge 5$, $p>1$, $\alpha \ge 0$ and $(u,v)$ be a classical stable solution of (\ref{eq:1.1}) with $u \ge 0$. Then we have
\begin{equation*}
\int_{B_R} (1+|x|^2)^{\frac{\alpha}{2}} \left [v^2+u^{p+1} \right ] dx \le CR^{N-4-\alpha-\frac{8+4\alpha}{p-1}},
\end{equation*}for all $R>0$.
\end{lemma}
\begin{proof}
Since $(u,v)$ is a classical stable solution of (\ref{eq:1.1}), we find that for any $\zeta \in C_0^4(\mathbb{R}^N)$,
\begin{equation}\label{eq:2.4}
\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^p \zeta dx=\int_{\mathbb{R}^N} \dfrac{\Delta u}{(1+|x|^2)^{\frac{\alpha}{2}}} \Delta \zeta dx,
\end{equation}and
\begin{equation}\label{eq:2.5}
p\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{p-1}\zeta^2 dx \le \int_{\mathbb{R}^N} \dfrac{|\Delta \zeta|^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx.
\end{equation}Substituting $\zeta =u \psi^2$ into (\ref{eq:2.4}), we obtain
\begin{equation}\label{eq:2.6}
\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{p+1}\psi^2 dx=\int_{\mathbb{R}^N} \dfrac{\Delta u \Delta (u\psi^2)}{(1+|x|^2)^{\frac{\alpha}{2}}} dx.
\end{equation}Substitute $\zeta=u\psi$ into (\ref{eq:2.5}) to get
\begin{equation}\label{eq:2.7}
p\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{p+1}\psi^2 dx \le \int_{\mathbb{R}^N} \dfrac{[\Delta (u\psi)]^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx.
\end{equation}Here and in the following, we choose the cut-off function $\psi \in C^4_0(\mathbb{R}^N)$ with $0 \le \psi \le 1$,
\begin{equation*}
\psi (x)=
\begin{cases}
1, & \mbox{if}\;\ |x|<R, \\
0, & \mbox{if}\;\ |x|>2R,
\end{cases}
\end{equation*}and $|\nabla^i \psi |\le \dfrac{C}{R^i}$, for $i=1,2,3$. Now, combining (\ref{eq:2.6}) and (\ref{eq:2.7}) with (\ref{eq:2.2}) and (\ref{eq:2.3}), we have
\begin{align*}
(p-1)\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}} u^{p+1}\psi^2 dx \le \int_{\mathbb{R}^N} \dfrac{4(\nabla u\cdot \nabla \psi)^2-2u\Delta u|\nabla \psi|^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx\\[0.12cm]
& -\int_{\mathbb{R}^N} \dfrac{u^2 \left [2\nabla (\Delta \psi)\cdot \nabla \psi+|\Delta \psi|^2\right ] }{(1+|x|^2)^{\frac{\alpha}{2}}}dx +2\alpha \int_{\mathbb{R}^N} \dfrac{u^2 \Delta \psi (\nabla \psi \cdot x)}{(1+|x|^2)^{\frac{\alpha}{2}+1}} dx\\[0.12cm]
\le & C\int_{\mathbb{R}^N} |uv| \cdot |\nabla \psi|^2 dx\\[0.1cm]
& +C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [|\Delta (|\nabla \psi|^2)|+|\nabla (\Delta \psi)\cdot \nabla \psi|+|\Delta \psi|^2\right ] dx\\[0.1cm]
&+C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\left [|\nabla (|\nabla \psi|^2)\cdot x|+|\nabla \psi|^2+|\Delta \psi (\nabla \psi \cdot x)| \right ]dx.
\end{align*}Again since $\Delta (u\psi)=-(1+|x|^2)^{\frac{\alpha}{2}}v\psi+2\nabla u\cdot \nabla \psi+u\Delta \psi$, we get
\begin{equation*}
\int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}}v^2\psi^2 dx \le C\int_{\mathbb{R}^N} \dfrac{1}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [(\Delta (u\psi))^2+|\nabla u\cdot \nabla \psi|^2+u^2|\Delta \psi|^2 \right ] dx.
\end{equation*}Then, combining the above inequality with (\ref{eq:2.6}), it implies that
\begin{align*}
\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}}\left [v^2+u^{p+1}\right ]\psi^2 dx \le C \int_{\mathbb{R}^N} |uv| |\nabla \psi|^2dx \\
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [|\Delta (|\nabla \psi|^2)|+|\nabla (\Delta \psi)\cdot \nabla \psi|+|\Delta \psi|^2\right ]dx\\[0.1cm]
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\left [|\nabla (|\nabla \psi|^2)\cdot x|+|\nabla \psi|^2+|\Delta \psi (\nabla \psi \cdot x)| \right ]dx.
\end{align*}
Next, the function $\psi$ in the above inequality is replaced by $\psi^m$, where $m$ is a large integer; then
\begin{align}\label{eq:2.8}
\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}} \left [v^2+u^{p+1} \right ]\psi^{2m} dx \le C \int_{\mathbb{R}^N} |uv| \psi^{2(m-1)}|\nabla \psi|^2dx \nonumber \\[0.1cm]
& +C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [|\Delta (|\nabla \psi^m|^2)|+|\nabla (\Delta \psi^m)\cdot \nabla \psi^m|+|\Delta \psi^m|^2 \right ]dx \nonumber \\[0.1cm]
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\left [|\nabla (|\nabla \psi^m|^2)\cdot x|+|\nabla \psi^m|^2+|\Delta \psi^m (\nabla \psi^m \cdot x)| \right ]dx.
\end{align}A simple application of Young's inequality leads to
\begin{align*}
\int_{\mathbb{R}^N} |uv|\psi^{2(m-1)}|\nabla \psi|^2 dx \le \dfrac{1}{2C} \int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}}v^2 \psi^{2m}dx\\[0.1cm]
+C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}} \psi^{2(m-2)} |\nabla \psi|^4 dx,
\end{align*}and putting into (\ref{eq:2.8}) yields
\begin{align*}
\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}}\left [v^2+u^{p+1}\right ]\psi^{2m} dx \le C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}} \psi^{2(m-2)}|\nabla \psi|^4 dx \\[0.1cm]
& +C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\left [|\Delta (|\nabla \psi^m|^2)|+|\nabla (\Delta \psi^m)\cdot \nabla \psi^m|+|\Delta \psi^m|^2\right ]dx\\[0.1cm]
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\left [|\nabla (|\nabla \psi^m|^2)\cdot x|+|\nabla \psi^m|^2+|\Delta \psi^m (\nabla \psi^m \cdot x)|\right ]dx\\[0.1cm]
=&C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\psi^{2(m-2)} \mathfrak{B}(\psi^m) dx
+C\int_{\mathbb{R}^N} \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\psi^{2(m-2)} \mathfrak{F}(\psi^m) dx,
\end{align*}where
\begin{align*}
\mathfrak{B}(\psi^m) & =|\nabla \psi|^4 +\psi^{2(2-m)}\left [|\Delta (|\nabla \psi^m|^2)|+|\nabla (\Delta \psi^m)\cdot \nabla \psi^m|+|\Delta \psi^m|^2\right ],\\[0.1cm]
\mathfrak{F}(\psi^m) & =\psi^{2(2-m)} \left [|\nabla (|\nabla \psi^m|^2)\cdot x|+|\nabla \psi^m|^2+|\Delta \psi^m (\nabla \psi^m \cdot x)|\right ].
\end{align*}Choosing $m$ large enough such that $(m-2)(p+1) \ge 2m$, we apply H\"{o}lder's inequality to both terms on the right side of the above inequality and find
\begin{align*}
\int_{\mathbb{R}^N} & \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}}}\psi^{2(m-2)} \mathfrak{B}(\psi^m) dx \\[0.1cm]
= & \int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{p+1}}u^2\psi^{2(m-2)} (1+|x|^2)^{-\frac{\alpha}{p+1}-\frac{\alpha}{2}} \mathfrak{B}(\psi^m)dx \\[0.1cm]
\le & \left (\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{p+1}\psi^{2m} dx \right )^{\frac{2}{p+1}}\\[0.1cm]
& \times \left ( \int_{\mathbb{R}^N} (1+|x|^2)^{-\frac{2\alpha+\alpha(p+1)}{2(p-1)}} \mathfrak{B}(\psi^m)^{\frac{p+1}{p-1}} dx \right )^{\frac{p-1}{p+1}}
\end{align*}and
\begin{align*}
\int_{\mathbb{R}^N} & \dfrac{u^2}{(1+|x|^2)^{\frac{\alpha}{2}+1}}\psi^{2(m-2)} \mathfrak{F}(\psi^m) dx\\[0.1cm]
\le & \left (\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{p+1}\psi^{2m} dx \right )^{\frac{2}{p+1}} \\[0.1cm]
&\times \left ( \int_{\mathbb{R}^N} (1+|x|^2)^{-\frac{2\alpha+(\alpha+2)(p+1)}{2(p-1)}} \mathfrak{F}(\psi^m)^{\frac{p+1}{p-1}} dx \right )^{\frac{p-1}{p+1}}.
\end{align*}Therefore, we get
\begin{align*}
\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}}\left [ v^2+u^{p+1} \right ] \psi^{2m} dx \\[0.1cm]
\le & C\int_{\mathbb{R}^N} (1+|x|^2)^{-\frac{2\alpha+\alpha(p+1)}{2(p-1)}} \mathfrak{B}(\psi^m)^{\frac{p+1}{p-1}} dx\\[0.1cm]
&+C \int_{\mathbb{R}^N} (1+|x|^2)^{-\frac{2\alpha+(\alpha+2)(p+1)}{2(p-1)}} \mathfrak{F}(\psi^m)^{\frac{p+1}{p-1}} dx\\[0.1cm]
\le & C R^{N-4-\alpha-\frac{8+4\alpha}{p-1}},
\end{align*}for all $R>0$.
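For the reader's convenience we record the power counting behind the last bound (a sketch, stated for $R\ge1$, say; all derivatives of $\psi$ are supported in $B_{2R}\setminus B_R$, where $|x|\sim R$): one has $\mathfrak{B}(\psi^m)^{\frac{p+1}{p-1}}=O\big(R^{-\frac{4(p+1)}{p-1}}\big)$ and $\mathfrak{F}(\psi^m)^{\frac{p+1}{p-1}}=O\big(R^{-\frac{2(p+1)}{p-1}}\big)$, the two weights behave like $R^{-\frac{\alpha(p+3)}{p-1}}$ and $R^{-\frac{\alpha(p+3)+2(p+1)}{p-1}}$ respectively, and the volume of $B_{2R}$ is $O(R^N)$, so that both integrals are bounded by
\begin{equation*}
CR^{N-\frac{\alpha(p+3)+4(p+1)}{p-1}}=CR^{N-4-\alpha-\frac{8+4\alpha}{p-1}}.
\end{equation*}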
\end{proof}
Let $N\ge 5$, $p>1$ and $\alpha >0$. We consider a more general elliptic system
\begin{equation}\label{eq:2.9}
\begin{cases}
\begin{split}
&-\Delta u=(1+|x|^2)^{\frac{\alpha}{2}}v,\\
&-\Delta v=(1+|x|^2)^{\frac{\alpha}{2}}u^p,
\end{split}\quad & \mbox{in}\; \Sigma, \\
\ u=\Delta u=0, \quad & \mbox{on}\; \partial \Sigma,
\end{cases}
\end{equation}where $\Sigma=\mathbb{R}^N$ or the half space $\Sigma=\mathbb{R}^N_+$ or the exterior domain $\Sigma=\mathbb{R}^N \backslash \overline{\Omega}$, $\mathbb{R}^N_+\backslash \overline{\Omega}$, and $\Omega$ is a bounded smooth domain of $\mathbb{R}^N$. A solution $(u,v)$ of (\ref{eq:2.9}) is said to be stable if for any $\zeta \in H^2(\Sigma) \cap H_0^1(\Sigma)$, we have
\begin{equation*}
p\int_{\Sigma} (1+|x|^2)^{\frac{\alpha}{2}} u^{p-1} \zeta^2 dx \le \int_{\Sigma} \dfrac{|\Delta \zeta|^2}{(1+|x|^2)^{\frac{\alpha}{2}}}dx,
\end{equation*}or if for any $\zeta \in H^1(\Sigma)$
\begin{equation*}
\sqrt{p}\int_{\Sigma} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}} \zeta^2 dx \le \int_{\Sigma} |\nabla \zeta|^2 dx.
\end{equation*}
Motivated by the proof in \cite{Hajlaoui,Phan,Souplet}, we obtain the crucial ingredient in the proof of Theorem \ref{eq:t1.1} and Theorem \ref{eq:t1.2}.
\begin{lemma}\label{eq:l2.5}
Assume that $(u,v)$ is a classical stable solution of (\ref{eq:2.9}). If $u\ge 0$, then the inequality holds
\begin{equation*}
v \ge \sqrt{\dfrac{2}{p+1}}u^{\frac{p+1}{2}},\quad \mbox{in}\;\ \Sigma.
\end{equation*}
\end{lemma}
\begin{proof}
Set $\delta=\sqrt{\dfrac{2}{p+1}}$, $\gamma=\delta u^{\frac{p+1}{2}} -v$. Using $\Delta \big(u^{\frac{p+1}{2}}\big)=\frac{p+1}{2}u^{\frac{p-1}{2}}\Delta u+\frac{(p+1)(p-1)}{4}u^{\frac{p-3}{2}}|\nabla u|^2$, the equations in (\ref{eq:2.9}) and the identity $\delta\cdot\frac{p+1}{2}=\delta^{-1}$, a direct calculation yields
\begin{align*}
\Delta \gamma & =\frac{p-1}{2\delta}\cdot u^{\frac{p-3}{2}}|\nabla u|^2-\delta^{-1}(1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}v+(1+|x|^2)^{\frac{\alpha}{2}}u^p \\
& \ge \delta^{-1}(1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}\gamma.
\end{align*}Denote $\gamma_+:=\max \{\gamma,0\}$. Since $\gamma_+\Delta \gamma \ge 0$ in $\Sigma$ and $\gamma=0$ on $\partial \Sigma$,
it implies that for any $R>0$
\begin{align}\label{eq:2.10}
\int_{\Sigma \cap B_R} |\nabla \gamma_+|^2dx & =-\int_{\Sigma \cap B_R}\gamma_+\Delta \gamma dx+\int_{\partial (\Sigma \cap B_R)}\gamma_+\dfrac{\partial \gamma}{\partial \nu} d\sigma \nonumber \\[0.1cm]
& \le \int_{\Sigma \cap \partial B_R} \gamma_+\dfrac{\partial \gamma}{\partial \nu}d \sigma.
\end{align}For $r>0$, define $\xi (r):=\displaystyle \int_{\mathbb{S}^{N-1} \cap (r^{-1} \Sigma)}\gamma_+^2(r\sigma)d \sigma$, where $\mathbb{S}^{N-1}$ denotes the unit sphere in $\mathbb{R}^N$. We easily deduce that there exists $R_0>0$ such that
\begin{equation*}
\int_{\Sigma \cap \partial B_r} \gamma_+\dfrac{\partial \gamma}{\partial \nu} d \sigma =\dfrac{r^{N-1}}{2}\xi '(r),\quad \forall r \ge R_0.
\end{equation*}
On the other hand, for any $R \ge R_0$, we conclude that
\begin{align*}
\int_{R_0}^R r^{N-1} \xi (r)dr = & \int_{R_0}^R \int_{\mathbb{S}^{N-1}\cap (r^{-1}\Sigma)} \gamma_+^2(r\sigma) r^{N-1} dr d\sigma \\[0.1cm]
\le & C\int_{B_R\cap \Sigma}\gamma_+^2 dx \le C\int_{B_R\cap \Sigma}(1+|x|^2)^{\frac{\alpha}{2}}(v^2+u^{p+1})dx\\[0.1cm]
\le & CR^{N-4-\alpha-\frac{8+4\alpha}{p-1}}=o(R^N).
\end{align*}Therefore, $\xi (R_i) \to 0$ for some sequence $R_i \to \infty$. Thus, there exists $\tilde{R}_i \to +\infty$ such that $\xi'(\tilde{R}_i)\le 0$. Letting $i \to \infty$ in (\ref{eq:2.10}) with $R=\tilde{R}_i$, we find
\begin{equation*}
\int_{\Sigma}|\nabla \gamma_+|^2 dx =0.
\end{equation*}Again, since $\gamma=0$ on $\partial \Sigma$, we get that $\gamma_+\equiv 0$ in $\Sigma$.
\end{proof}
Throughout the paper, we let $R_k=2^k R$ with $R>0$ and integers $k \ge 1$.
\begin{lemma}(\cite[Lemma 5]{Cowan})\label{eq:l2.6}
For any integer $k\ge 1$ and $1 \le \beta <\dfrac{N}{N-2}$, there is some $C=C(k,\beta)<+\infty$ such that for any smooth $w \ge 0$, the inequality holds
\begin{equation*}
\left (\int_{B_{R_k}} w^{\beta}dx \right )^{\frac{1}{\beta}}\le CR^{2+N(\frac{1}{\beta}-1)}\int_{B_{R_{k+1}}}|\Delta w|dx +CR^{N(\frac{1}{\beta}-1)}
\int_{B_{R_{k+1}}}wdx.
\end{equation*}
\end{lemma}\vskip .3in
\section{Proof of Theorem \ref{eq:t1.1}}
The following two lemmas play an important role in dealing
with Theorem \ref{eq:t1.1}.
\begin{lemma}\label{eq:l3.1}
Let $N \ge 5$, $p>1$ and $\alpha >0$.
Assume that $(u,v)$ is a classical nonnegative stable solution of (\ref{eq:1.1}). Then we obtain that, for any $s>2$ and $R>0$
\begin{equation*}
\int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}} u^p v^{s-1} dx \le \dfrac{C}{R^2} \int_{B_{R_{k+1}}} v^s dx,
\end{equation*}provided that
\begin{equation}\label{eq:3.1}
L(p,s)=s^4-\dfrac{32p}{p+1}s^2+\dfrac{32p(p+3)}{(p+1)^2}s-\dfrac{64p}{(p+1)^2} <0.
\end{equation}
\end{lemma}
\begin{proof}
Testing (\ref{eq:2.1}) on $\zeta=u^{\frac{q+1}{2}}\phi$ with $\phi \in C^2_0(\mathbb{R}^N)$ and $ q \ge 1$,
we get
\begin{align}\label{eq:3.2}
\sqrt{p}\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx \le \int_{\mathbb{R}^N} u^{q+1}|\nabla \phi|^2 dx \nonumber \\[0.1cm]
& +\int_{\mathbb{R}^N} |\nabla u^{\frac{q+1}{2}}|^2 \phi^2 dx+(q+1)\int_{\mathbb{R}^N} u^q \phi \nabla u\cdot \nabla \phi dx.
\end{align}Integrating by parts, we have
\begin{align*}
(q+1)\int_{\mathbb{R}^N} u^q \phi & \nabla u\cdot \nabla \phi dx= \dfrac{1}{2} \int_{\mathbb{R}^N} \nabla (u^{q+1})\cdot \nabla (\phi^2) dx \\[0.1cm]
= & -\dfrac{1}{2}\int_{\mathbb{R}^N} u^{q+1}\Delta (\phi^2) dx,
\end{align*}and
\begin{align*}
\int_{\mathbb{R}^N} |\nabla u^{\frac{q+1}{2}}|^2 & \phi^2 dx = \dfrac{(q+1)^2}{4q}\int_{\mathbb{R}^N} \phi^2 \nabla (u^q)\cdot \nabla u dx \\[0.1cm]
=& -\dfrac{(q+1)^2}{4q}\int_{\mathbb{R}^N} u^q\phi^2 \Delta u dx -\dfrac{q+1}{4q}\int_{\mathbb{R}^N} \nabla (u^{q+1})\nabla (\phi^2) dx\\[0.1cm]
=& \dfrac{(q+1)^2}{4q}\int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}} u^q v \phi^2 dx+\dfrac{q+1}{4q}\int_{\mathbb{R}^N} u^{q+1} \Delta (\phi^2) dx.
\end{align*}Combining the above two identities with (\ref{eq:3.2}) leads to
\begin{align*}
a_1\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} & u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx\le \int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^qv\phi^2 dx\\[0.1cm]
& +C\int_{\mathbb{R}^N} u^{q+1}\left [|\Delta (\phi^2)| +|\nabla \phi|^2\right ]dx,
\end{align*}where $a_1=\dfrac{4q\sqrt{p}}{(q+1)^2}$.
We choose $\phi(x)=\omega \left (\dfrac{x}{R_k} \right )$ where $\omega \in C_0^2 (B_2)$ with $\omega \equiv 1$ in $B_1$, then we find
\begin{equation}\label{eq:3.3}
\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx\le \dfrac{1}{a_1}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^qv\phi^2 dx +\dfrac{C}{R^2}\int_{B_{R_{k+1}}}
u^{q+1}dx.
\end{equation}Similarly, testing (\ref{eq:2.1}) on $\zeta=v^{\frac{r+1}{2}}\phi$ with $r \ge 1$, there holds
\begin{align*}
\sqrt{p}\int_{\mathbb{R}^N} & (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}v^{r+1}\phi^2 dx \le \int_{\mathbb{R}^N} v^{r+1}|\nabla \phi|^2 dx\\[0.1cm]
& +\int_{\mathbb{R}^N} |\nabla v^{\frac{r+1}{2}}|^2 \phi^2 dx+(r+1)\int_{\mathbb{R}^N} v^r\phi \nabla v\cdot \nabla \phi dx.
\end{align*}Adopting the same computation as above (noting that the equation $-\Delta v=(1+|x|^2)^{\frac{\alpha}{2}}u^p$), we obtain
\begin{equation}\label{eq:3.4}
\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}v^{r+1}\phi^2 dx\le \dfrac{1}{a_2}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^pv^r\phi^2 dx
+\dfrac{C}{R^2}\int_{B_{R_{k+1}}}
v^{r+1}dx,
\end{equation}where $a_2=\dfrac{4r\sqrt{p}}{(r+1)^2}$. \vskip .1in
Rewriting (\ref{eq:3.3}) and (\ref{eq:3.4}) yields
\begin{align}\label{eq:3.5}
I_1+a_2^{r+1}I_2:= & \int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx +a_2^{r+1}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}v^{r+1} \phi^2 dx \nonumber \\[0.1cm]
\le & \dfrac{1}{a_1}\int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}} u^q v\phi^2 dx +a_2^r\int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}}u^p v^r \phi^2 dx \nonumber \\[0.1cm]
& +\dfrac{C}{R^2}\int_{B_{R_{k+1}}} (u^{q+1}+v^{r+1}) dx.
\end{align}Fix now
\begin{equation*}
2q=(p+1)r+p-1 \Longleftrightarrow q+1=\dfrac{(p+1)(r+1)}{2}.
\end{equation*}A direct application of Young's inequality leads to
\begin{align*}
\dfrac{1}{a_1}\int_{\mathbb{R}^N} (1+|x|^2 & )^{\frac{\alpha}{2}} u^q v\phi^2 dx = \int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}\phi^2 u^{\frac{(q+1)r}{r+1}}\dfrac{v}{a_1} dx\\[0.08cm]
\le & \dfrac{r}{r+1} \int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx \\[0.08cm]
& +\dfrac{1}{a_1^{r+1}(r+1)}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}v^{r+1}\phi^2 dx\\[0.08cm]
= & \dfrac{r}{r+1}I_1+\dfrac{1}{a_1^{r+1}(r+1)} I_2.
\end{align*}Arguing as above, we get
\begin{align*}
a_2^r\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} & u^pv^r \phi^2 dx
\le \dfrac{1}{r+1}\int_{\mathbb{R}^N}(1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}u^{q+1}\phi^2 dx \\[0.08cm]
& +\dfrac{a_2^{r+1}r}{r+1}\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}v^{r+1}\phi^2 dx\\[0.08cm]
= & \dfrac{1}{r+1}I_1+\dfrac{a_2^{r+1}r}{r+1}I_2.
\end{align*}Combining the above two inequalities with (\ref{eq:3.5}), we find
\begin{align*}
I_1+a_2^{r+1}I_2\le & \dfrac{r}{r+1}I_1+\dfrac{1}{a_1^{r+1}(r+1)}I_2+\dfrac{1}{r+1}I_1+\dfrac{a_2^{r+1}r}{r+1}I_2\\[0.08cm]
& +\dfrac{C}{R^2}\int_{B_{R_{k+1}}} (u^{q+1}+v^{r+1})dx.
\end{align*}Thus, it implies that
\begin{equation*}
\dfrac{(a_1a_2)^{r+1}-1}{r+1}I_2 \le \dfrac{Ca_1^{r+1}}{R^2}\int_{B_{R_{k+1}}} (u^{q+1}+v^{r+1})dx.
\end{equation*}If $a_1a_2>1$, then we conclude from the choice of $\phi$ that
\begin{equation*}
\int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}v^{r+1} dx \le I_2 \le \dfrac{C}{R^2} \int_{B_{R_{k+1}}} (u^{q+1}+v^{r+1})dx.
\end{equation*}Since $q+1=\dfrac{(p+1)(r+1)}{2}$ and $v \ge \sqrt{\dfrac{2}{p+1}}u^{\frac{p+1}{2}}$, we have
\begin{equation*}
\int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}v^{r+1}dx \le \dfrac{C}{R^2}\int_{B_{R_{k+1}}} v^{r+1}dx.
\end{equation*}Denote $s:=r+1$. Then, combining the above inequality with Lemma \ref{eq:l2.5}, it implies that
\begin{align*}
\int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}}u^pv^{s-1} dx =\int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}}u^{\frac{p-1}{2}}u^{\frac{p+1}{2}}v^{s-1}dx\\[0.08cm]
\le \int_{B_{R_k}} (1+|x|^2)^{\frac{\alpha}{2}} u^{\frac{p-1}{2}}v^s dx \le \dfrac{C}{R^2}\int_{B_{R_{k+1}}} v^sdx.
\end{align*}
On the other hand, since $a_1=\dfrac{4q\sqrt{p}}{(q+1)^2}$, $a_2=\dfrac{4r\sqrt{p}}{(r+1)^2}$, and $2q=(p+1)r+p-1$, $q+1=\dfrac{p+1}{2}(r+1)$, $s=r+1$, we obtain that
\begin{equation*}
a_1a_2>1
\end{equation*}is equivalent to
\begin{equation*}
8p[(p+1)(s-1)+p-1](s-1) >\dfrac{(p+1)^2}{4}s^4.
\end{equation*}Multiplying both sides by $4$, expanding the left-hand side as $32p(p+1)s^2-32p(p+3)s+64p$ and dividing by $(p+1)^2$, this becomes
\begin{equation*}
L(p,s):=s^4-\dfrac{32p}{p+1}s^2+\dfrac{32p(p+3)}{(p+1)^2}s-\dfrac{64p}{(p+1)^2}<0.
\end{equation*}This completes the proof.
\end{proof}\vskip .1in
\begin{remark}\label{eq:r3.1}
For any $p>1$, the following statements are equivalent:
\begin{itemize}
\item [\rm (i).] $L(p,s)=s^4-\dfrac{32p}{p+1}s^2+\dfrac{32p(p+3)}{(p+1)^2}s-\dfrac{64p}{(p+1)^2}<0$.
\item [\rm (ii).] $H(p,\mu) =\mu^4-\dfrac{32p(p+1)}{(p-1)^2}\mu^2+\dfrac{32p(p+1)(p+3)}{(p-1)^3}\mu-\dfrac{64p(p+1)^2}{(p-1)^4}<0$.
\item [\rm (iii).] Let $\ell=2+2\mu$, then we have $p\in (1,p_*(\ell))$.
\end{itemize}
\end{remark}
\begin{proof}
(i) $\Longleftrightarrow$ (ii). Set $\mu: =\dfrac{p+1}{p-1}s$. A simple calculation yields
\begin{align}\label{eq:3.6}
\left (\dfrac{p+1}{p-1}\right )^4 L(p,s) & =\left (\dfrac{p+1}{p-1} s\right )^4-\dfrac{32p(p+1)^3}{(p-1)^4}s^2+\dfrac{32p(p+3)(p+1)^2}{(p-1)^4}s-\dfrac{64p(p+1)^2}{(p-1)^4}\nonumber \\[0.15cm]
& =\mu^4-\dfrac{32p(p+1)}{(p-1)^2}\mu^2+\dfrac{32p(p+1)(p+3)}{(p-1)^3}\mu-\dfrac{64p(p+1)^2}{(p-1)^4}\\
& =: H(p,\mu). \nonumber
\end{align}Thus (\ref{eq:3.1}) is equivalent to
\begin{equation}\label{eq:3.7}
H(p,\mu)<0.
\end{equation}\vskip .08in
\noindent (ii) $\Longleftrightarrow$ (iii).
We recall from Theorem 1 in \cite{Karageorgis} that the radial entire solutions to the biharmonic equation $\Delta^2 u=u^p$ in $\mathbb{R}^N$ are unstable if and only if
\begin{equation}\label{eq:3.8}
\dfrac{N^2(N-4)^2}{16} <p K(\lambda),
\end{equation}where $\lambda =-\dfrac{4}{p-1}$ and $K(\lambda)=\lambda(\lambda-2)(\lambda+N-2)(\lambda+N-4)$. We note that the left hand side of (\ref{eq:3.8}) comes from the best constant of the Hardy-Rellich inequality (see \cite{Rellich}): If $N \ge 5$, then, for all $\varphi \in H^2(\mathbb{R}^N)$
\begin{equation*}
\int_{\mathbb{R}^N} |\Delta \varphi|^2 dx \ge \dfrac{N^2(N-4)^2}{16}\int_{\mathbb{R}^N} \dfrac{\varphi^2}{|x|^4} dx.
\end{equation*}Solving the corresponding quartic inequality, we find that
\begin{equation*}
(\ref{eq:3.8}) \;\ \mbox{holds}\; \mbox{true}\; \mbox{if}\; \mbox{and} \; \mbox{only}\; \mbox{if}\;\ 1<p<p(N),
\end{equation*}where $p(N)$ is the fourth-order Joseph-Lundgren exponent computed by Gazzola and Grunau (see \cite{Gazzola}):
\begin{equation*}
p(N)=
\begin{cases}
+\infty, & \mbox{if}\; N\le 12\\[0.1cm]
\dfrac{N+2-\sqrt{N^2+4-4\sqrt{N^2+H_N}}}{N-6-\sqrt{N^2+4-4\sqrt{N^2+H_N}}}, & \mbox{if}\; N \ge 13,
\end{cases}
\end{equation*}and $H_N=\dfrac{N^2(N-4)^2}{16}$.\vskip .05in
If we denote $N:=2+2\mu$ in (\ref{eq:3.8}), a direct calculation shows
\begin{equation*}
\mu^4-2\mu^2+1 <\dfrac{32p(p+1)}{(p-1)^2}\left [\mu^2-\dfrac{p+3}{p-1}\mu+\dfrac{2(p+1)}{(p-1)^2} \right ].
\end{equation*}Thus, it implies that (\ref{eq:3.8}) is equivalent to
\begin{equation}\label{eq:3.9}
H_0(p,\mu):=(\mu^2-1)^2-\dfrac{32p(p+1)}{(p-1)^2}\mu^2+\dfrac{32p(p+1)(p+3)}{(p-1)^3}\mu-\dfrac{64p(p+1)^2}{(p-1)^4}<0.
\end{equation}We find that
\begin{equation*}
H_0(p,\mu)=H(p,\mu)-2\mu^2+1.
\end{equation*}Combining the above identities with (\ref{eq:3.6})-(\ref{eq:3.9}), we obtain that (\ref{eq:3.7}) is equivalent to
\begin{equation*}
\dfrac{\ell^2(\ell-4)^2}{16}+\dfrac{(\ell-2)^2}{2}-1<pK(\lambda),
\end{equation*}where $\ell =2+2\mu$.
Now, we denote $H^*_{\ell}:=\dfrac{\ell^2(\ell-4)^2}{16}+\dfrac{(\ell-2)^2}{2}-1$, then we conclude that (\ref{eq:3.7}) holds true if and only if $1<p<p_*(\ell)$, where
\begin{equation*}p_*(\ell)=
\begin{cases}
+\infty,& 5 \le \ell \le \overline{\ell},\\[0.1cm]
\dfrac{\ell+2-\sqrt{\ell^2+4-4\sqrt{\ell^2+H^*_{\ell}}}}{\ell-6-\sqrt{\ell^2+4-4\sqrt{\ell^2+H^*_{\ell}}}}, & \ell >\overline{\ell},
\end{cases}
\end{equation*}and $\overline{\ell}$ is a unique root in the interval $(12,13)$ such that
$8(\ell-2)(\ell-4)=H^*_{\ell}$.
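As a quick numerical check of the last claim (the two evaluations below are elementary arithmetic and serve only to locate the root):
\begin{equation*}
8(\ell-2)(\ell-4)\big|_{\ell=12}=640>625=H^*_{12},
\qquad
8(\ell-2)(\ell-4)\big|_{\ell=13}=792<915.06\ldots=H^*_{13},
\end{equation*}
so that $\overline{\ell}$ indeed lies in $(12,13)$.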
\end{proof}\vskip .1in
\begin{remark}\label{eq:r3.2}
\begin{itemize}
\item [\rm (i).] By Lemma 2.2 in \cite{Hajlaoui}, we find that, for any $p>1$,
$L(p,2t_0^+)<0$. Moreover, $L$ defined by (\ref{eq:3.1}) has a unique root $s_0$ in the interval $[2t_0^+,+\infty)$ such that $L(p,s)<0$ for $s \in [2t_0^+,s_0)$. Here
\begin{equation*}
t_0^+:=\sqrt{\dfrac{2p}{p+1}}+\sqrt{\dfrac{2p}{p+1}-\sqrt{\dfrac{2p}{p+1}}}.
\end{equation*}
\item [\rm (ii).] Let $N \ge 5$, $p>1$, $\alpha > 0$ and $(u,v)$ be a classical positive stable solution of (\ref{eq:1.1}). Adopting a proof similar to that of Lemma 4 in \cite{Cowan}, we find
that Lemma \ref{eq:l3.1} holds true for $s \in (2t_0^-,2t_0^+)$, where
\begin{equation*}
t_0^-:=\sqrt{\dfrac{2p}{p+1}}-\sqrt{\dfrac{2p}{p+1}-\sqrt{\dfrac{2p}{p+1}}}.
\end{equation*}Obviously, $t_0^-$ is strictly decreasing in $p$, $\lim\limits_{p \to 1} t_0^- =1$, $\lim\limits_{p \to \infty} t_0^-=\sqrt{2}-\sqrt{2-\sqrt{2}}$. Thus, for any $p>1$, $2t_0^-<2$.
\item [\rm (iii).] If $N \ge 5$, $p>1$, $\alpha > 0$ and $(u,v)$ is a classical nonnegative stable solution of (\ref{eq:1.1}), then Lemma \ref{eq:l3.1} holds true for $s \in [1,s_0)$.
\end{itemize}
\end{remark}\vskip .1in
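For orientation, we note the numerical range of these thresholds (the limits below are elementary evaluations of the formulas above and are stated only for illustration): as $p\to1$ one has $2t_0^\pm\to2$, while as $p\to\infty$
\begin{equation*}
2t_0^+ \to 2\left(\sqrt{2}+\sqrt{2-\sqrt{2}}\,\right)\approx4.36,
\qquad
2t_0^- \to 2\left(\sqrt{2}-\sqrt{2-\sqrt{2}}\,\right)\approx1.30;
\end{equation*}
in particular $2t_0^-<2<2t_0^+$ for every $p>1$.\vskip .1in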
\begin{lemma}\label{eq:l3.2}
Let $N \ge 5$, $p>1$, $\alpha > 0$ and $(u,v)$ be a classical stable solution of (1.1) with $u>0$. Assume that $1\le \beta <\dfrac{N}{N-2}$, $2t_0^- < \tau \le s$, where $s$ is defined in Lemma \ref{eq:l3.1}. Then there exist $k \in \mathbb{N}^+$ and $C<+\infty$ such that
\begin{equation*}
\left (\int_{B_{R_k}} v^{\beta \tau} dx\right )^{\frac{1}{\beta}}
\le CR^{N(\frac{1}{\beta}-1)} \int_{B_{R_{k+3}}} v^{\tau}dx,
\end{equation*}for all $R>0$.
\end{lemma}
\begin{proof}
Denote $w:=v^{\tau}$. A simple calculation yields
\begin{align*}
|\Delta w|\le \tau(\tau-1)v^{\tau-2}|\nabla v|^2+\tau(1+|x|^2)^{\frac{\alpha}{2}}v^{\tau-1}u^p.
\end{align*}Then, it implies from Lemma \ref{eq:l2.6} that
\begin{align}\label{eq:3.10}
\Big (\int_{B_{R_k}} v^{\beta \tau}dx \Big )^{\frac{1}{\beta}} \le & C R^{2+N(\frac{1}{\beta}-1)} \int_{B_{R_{k+1}}}\left [ v^{\tau-2}|\nabla v|^2+ (1+|x|^2)^{\frac{\alpha}{2}}u^pv^{\tau-1} \right ]dx \nonumber \\[0.1cm]
& +C R^{N(\frac{1}{\beta}-1)} \int_{B_{R_{k+1}}} v^{\tau} dx.
\end{align}
Next, we estimate the term $\displaystyle \int_{B_{R_{k+1}}} v^{\tau-2}|\nabla v|^2 dx$ in (\ref{eq:3.10}). We take a cut-off function $\phi \in C_0^2 (B_{R_{k+2}})$ such that $\phi \equiv 1$ in $B_{R_{k+1}}$ and $|\nabla \phi| \le \dfrac{C}{R}$. Multiply $-\Delta v=(1+|x|^2)^{\frac{\alpha}{2}}u^p$ by $v^{\tau-1}\phi^2$ and integrate by parts to get
\begin{align}\label{eq:3.11}
\int_{\mathbb{R}^N} (1 & +|x|^2)^{\frac{\alpha}{2}} u^p v^{\tau-1}\phi^2 dx =\int_{\mathbb{R}^N} -\Delta v (v^{\tau-1}\phi^2)dx \nonumber \\[0.1cm]
= & (\tau-1)\int_{\mathbb{R}^N} v^{\tau-2}\phi^2|\nabla v|^2dx +2\int_{\mathbb{R}^N} v^{\tau-1}\phi \nabla v \cdot \nabla \phi dx.
\end{align}We use Young's inequality to yield
\begin{align*}
\int_{\mathbb{R}^N} v^{\tau-1}\phi \nabla v\cdot \nabla \phi dx \le \dfrac{1}{8} \int_{\mathbb{R}^N} v^{\tau-2}\phi^2 |\nabla v|^2 dx +C \int_{\mathbb{R}^N} v^{\tau}|\nabla \phi|^2 dx.
\end{align*}Substituting the above inequality into (\ref{eq:3.11}), it implies from the choice of the function $\phi$ that
\begin{align}\label{eq:3.12}
\int_{B_{R_{k+1}}} v^{\tau-2}|\nabla v|^2 dx \le C\int_{\mathbb{R}^N} (1+|x|^2)^{\frac{\alpha}{2}} u^p v^{\tau-1} \phi^2 dx +C\int_{\mathbb{R}^N} v^{\tau}|\nabla \phi|^2 dx \nonumber \\[0.1cm]
\le C\int_{B_{R_{k+2}}} (1+|x|^2)^{\frac{\alpha}{2}} u^pv^{\tau-1}dx +\dfrac{C}{R^2} \int_{B_{R_{k+2}}} v^{\tau} dx.
\end{align}Putting (\ref{eq:3.12}) into (\ref{eq:3.10}) gives
\begin{align}\label{eq:3.13}
\left (\int_{B_{R_k}} v^{\beta \tau} dx\right )^{\frac{1}{\beta}} \le & C R^{2+N(\frac{1}{\beta}-1)} \int_{B_{R_{k+2}}} (1+|x|^2)^{\frac{\alpha}{2}} u^pv^{\tau-1} dx \nonumber \\[0.1cm]
& +CR^{N(\frac{1}{\beta}-1)} \int_{B_{R_{k+2}}} v^{\tau} dx.
\end{align}Applying Lemma \ref{eq:l3.1} and Remark \ref{eq:r3.2} (ii) to the first term on the right hand side of (\ref{eq:3.13}), we
conclude that
\begin{equation*}
\left (\int_{B_{R_k}} v^{\beta \tau} dx\right )^{\frac{1}{\beta}}
\le CR^{N(\frac{1}{\beta}-1)} \int_{B_{R_{k+3}}} v^{\tau}dx,
\end{equation*}for all $R>0$.
\end{proof}
Now, we can follow exactly the iteration process in \cite[Corollary 2]{Cowan} to get
\begin{proposition}\label{eq:p3.1}
Suppose that $N \ge 5$, $p>1$, $\alpha>0$ and $(u,v)$ is a classical stable solution of (1.1) with $u>0$. Let $2t_0^-<\tau<2$ and $\tau \le \beta<\dfrac{N}{N-2}s$, then there is a $C<+\infty$ and integer $n \ge 1$ such that for all $R>1$
\begin{equation*}
\left (\int_{B_R}v^{\beta} dx\right )^{\frac{1}{\beta}} \le CR^{N(\frac{1}{\beta}-\frac{1}{\tau})} \left (\int_{B_{R_{3^n}}} v^{\tau}dx \right )^{\frac{1}{\tau}}.
\end{equation*}Here $s$ is defined in Lemma \ref{eq:l3.1}.
\end{proposition}
\noindent {\it Proof of Theorem \ref{eq:t1.1}.} Suppose that $(u,v)$ is a classical positive stable solution of (\ref{eq:1.1}). By Lemma \ref{eq:l3.1} and Remarks \ref{eq:r3.1}-\ref{eq:r3.2}, we take $2t_0^-<\tau<2$ and $\tau \le \beta<\dfrac{N}{N-2}s$; then, combining with Proposition \ref{eq:p3.1}, we obtain
\begin{equation}\label{eq:3.14}
\left (\int_{B_R} v^{\beta} dx \right )^{\frac{1}{\beta}} \le CR^{N\left (\frac{1}{\beta}-\frac{1}{\tau} \right )} \left (\int_{B_{R_{3^n}}} v^{\tau} dx \right )^{\frac{1}{\tau}}.
\end{equation}From Lemma \ref{eq:l2.4}, we apply H\"{o}lder's inequality to get
\begin{align*}
\int_{B_R} v^{\tau} dx \le & \left (\int_{B_R} (1+|x|^2)^{\frac{\alpha}{2}}v^2 dx \right )^{\frac{\tau}{2}} \left (\int_{B_R} (1+|x|^2)^{-\frac{\alpha\tau}{2(2-\tau)}}dx \right )^{\frac{2-\tau}{2}} \\[0.1cm]
\le & CR^{\left (N-4-\alpha-\frac{8+4\alpha}{p-1} \right )\frac{\tau}{2}}R^{\left (N-\frac{\alpha \tau}{2-\tau} \right ) \frac{2-\tau}{2}}\\[0.1cm]
= & CR^{N-(2+\alpha)\left (1+\frac{2}{p-1} \right )\tau}= CR^{N-\frac{(2+\alpha)(p+1)}{p-1}\tau}.
\end{align*}Combine the above inequality with (\ref{eq:3.14}) to yield
\begin{equation*}
\left (\int_{B_R} v^{\beta} dx \right )^{\frac{1}{\beta}} \le CR^{N\left (\frac{1}{\beta}-\frac{1}{\tau}\right )+\frac{1}{\tau}\left (N-\frac{(2+\alpha)(p+1)}{p-1}\tau \right )},
\end{equation*}for all $R>0$. We easily see that
\begin{equation*}
N\left (\frac{1}{\beta}-\dfrac{1}{\tau}\right )+\frac{1}{\tau}\left (N-\dfrac{(2+\alpha)(p+1)}{p-1}\tau \right )<0
\end{equation*}is equivalent to
\begin{equation}\label{eq:3.15}
N < \dfrac{(2+\alpha)(p+1)}{p-1} \beta < \dfrac{(2+\alpha)(p+1)}{p-1} \dfrac{N}{N-2}s.
\end{equation}Since $\mu =\dfrac{p+1}{p-1}s$ and $\ell=2+2\mu$, it follows from Remark \ref{eq:r3.1} that the inequality (\ref{eq:3.15}) is equivalent to one of the following two conditions:
\begin{itemize}
\item [\rm (i).] $\ell \ge 5$, $p \in (1,p_*(\ell))$ and
\begin{equation*}
N<\ell + \dfrac{\alpha(\ell-2)}{2}.
\end{equation*}
\item [\rm (ii).] For any $p>1$,
\begin{equation*}
N<2+(2+\alpha) \mu,
\end{equation*}where $\mu$ is the largest root of the polynomial
\begin{equation*}
H(p,\mu)=\mu^4-\dfrac{32p(p+1)}{(p-1)^2}\mu^2+\dfrac{32p(p+1)(p+3)}{(p-1)^3}\mu-\dfrac{64p(p+1)^2}{(p-1)^4}.
\end{equation*}
\end{itemize}
From the condition of Theorem \ref{eq:t1.1} and Remark \ref{eq:r1.1} (iii), it is easy to see that (\ref{eq:3.15}) holds true.
Therefore, letting $R \to +\infty$, we conclude that $\|v\|_{L^{\beta}(\mathbb{R}^N)} =0$, i.e., $v \equiv 0$ in $\mathbb{R}^N$.
This is a contradiction. Thus we get the desired result. \hspace*{161pt} $\square$\vskip .1in
Adopting a proof similar to that of Theorem \ref{eq:t1.1}, we obtain the following result.
\begin{corollary}
Let $N \ge 5$, $\alpha>0$ and $p>1$ satisfy one of the following conditions:
\begin{itemize}
\item [\rm (i).] $N <\ell +\dfrac{\alpha (\ell-2)}{2}$ and $p \in (1,p_*(\ell))$ for some $\ell \ge 5$; or
\item [\rm (ii).] For any $p>1$, $N<2+(2+\alpha)\mu$ and $\mu$ is the largest root of the polynomial $H(p,\mu)$ in (\ref{eq:1.5}).
\end{itemize}
Then there does not exist a classical positive
stable solution of (\ref{eq:2.9}) in $\Sigma =\mathbb{R}_+^N$.
\end{corollary}\vskip .2in
\section{Proof of Theorem \ref{eq:t1.2}}
From Remark \ref{eq:l3.2} (i) and (iii), we adopt the same proof as in Lemma 3.1 and Lemma 3.2 to obtain the following results.
\begin{lemma}\label{eq:l4.1}
Let $N \ge 5$, $p>1$, $\alpha>0$ and $(u,v)$ be a classical nonnegative stable solution of (1.1). Assume that $1\le \beta <\dfrac{N}{N-2}$ and $s$ is defined in Lemma \ref{eq:l3.1}. Then there exist integer $k \in \mathbb{N}^+$ and $C<+\infty$ such that
\begin{equation*}
\left (\int_{B_{R_k}} v^{\beta s} dx \right )^{\frac{1}{\beta}}
\le CR^{N(\frac{1}{\beta}-1)} \int_{B_{R_{k+3}}} v^sdx,
\end{equation*}for all $R>0$.
\end{lemma}
\begin{proposition}\label{eq:p4.1}
Suppose that $N \ge 5$, $p>1$, $\alpha>0$ and $(u,v)$ is a classical nonnegative stable solution of (1.1). Then, for any $2 \le \beta <\dfrac{N}{N-2} s$, there is some $C<+\infty$ and integer $n \ge 1$ such that for all $R>1$
\begin{equation*}
\left (\int_{B_R}v^{\beta} dx \right )^{\frac{1}{\beta}} \le CR^{N(\frac{1}{\beta}-\frac{1}{2})} \left (\int_{B_{R_{3^n}}} v^2 dx \right )^{\frac{1}{2}}.
\end{equation*}Here $s$ is defined in Lemma \ref{eq:l3.1}.
\end{proposition}
\noindent {\it Proof of Theorem \ref{eq:t1.2}.}
(1). Suppose the condition (i) holds. Let $2 \le \beta <\dfrac{N}{N-2}s$, then, combining Proposition \ref{eq:p4.1} with Lemma \ref{eq:l2.4}, we get
\begin{align*}
\left (\int_{B_R} v^{\beta} dx \right )^{\frac{1}{\beta}} & \le CR^{\frac{N}{2}(\frac{2}{\beta}-1)} \left (\int_{B_{R_{3^n}}} v^2 dx \right )^{\frac{1}{2}}\\[0.1cm]
& \le CR^{\frac{N}{2}(\frac{2}{\beta}-1)}\left (\int_{B_{R_{3^n}}}(1+|x|^2)^{\frac{\alpha}{2}} v^2 dx \right )^{\frac{1}{2}}\\[0.1cm]
& \le CR^{\frac{N}{2}(\frac{2}{\beta}-1)+\frac{N}{2}-2-\frac{\alpha}{2}-\frac{4+2\alpha}{p-1}},
\end{align*}for all $R>0$. We note that the inequality
\begin{equation*}
\dfrac{N}{2}\left (\dfrac{2}{\beta}-1\right )+\frac{N}{2}-2-\dfrac{\alpha}{2}-\dfrac{4+2\alpha}{p-1}<0
\end{equation*}is equivalent to
\begin{equation}\label{eq:4.1}
N < \left [\dfrac{2(p+1)}{p-1}+\dfrac{\alpha(p+3)}{2(p-1)} \right ]\dfrac{N}{N-2}s.
\end{equation}Since $\mu =\dfrac{p+1}{p-1}s$ and $\ell=2+2\mu$, it implies from Remark \ref{eq:r3.1} that the inequality (\ref{eq:4.1}) is equivalent to
\begin{equation*}
N<\ell + \dfrac{\alpha(\ell-2)(p+3)}{4(p+1)},
\end{equation*}where $\ell \ge 5$ and $p \in (1,p_*(\ell))$. Therefore, if $\ell \ge 5$, $p \in (1,p_*(\ell))$ and $N<\ell + \dfrac{\alpha(\ell-2)(p+3)}{4(p+1)}$, we obtain that $\|v\|_{L^{\beta}(\mathbb{R}^N)} =0$ as $R \to +\infty$, i.e., $v \equiv 0$ in $\mathbb{R}^N$.
Thus we get the desired result. \vskip .08in
(2). Take $\mu =\dfrac{p+1}{p-1}s$. We find that the condition (ii) is equivalent to (\ref{eq:4.1}). Hence, we get the desired result by adopting the same proof as the above.
\hspace*{90pt} $\square$\vskip .2in
\noindent {\bf Acknowledgments} \vskip .1in
The work was partially supported by NSFC of China (No. 11201248), K.C. Wong Fund of Ningbo University and
Ningbo Natural Science Foundation (No. 2014A610027).\vskip .2in
\section*{Introduction}
Spiking neural networks (SNNs) are among the leading candidates to solve one of the major impediments
of more widespread uses of modern AI: The energy consumption of the very large artificial neural
networks (ANNs) that are needed. These ANNs need to be large, since they need to have a sufficiently
large number of parameters in order to absorb enough information from the huge data sets on which they
are trained, such as the 1.2 million images of ImageNet2012.
Inference on these large ANNs is power hungry \citep{Garcia-Martin2019}, which impedes their deployment in mobile devices or autonomous vehicles.
Spiking neurons have been in the focus of recent work on novel computing hardware for AI with a
drastically reduced energy budget, because the giant SNN in the brain --consisting of about 100 billion
neurons-- consumes just 20W \citep{LingJ2001}. Most spiking neuron models that are considered in neuromorphic
hardware are in fact inspired by neurons in the brain. Their output is a train of stereotypical electrical
pulses --called spikes. Hence their output is very different from the analog numbers that an ANN neuron
produces as output.
But whereas large ANNs, trained with ever more sophisticated learning algorithms on
giant data sets, approach --and sometimes exceed-- human performance in several categories of intelligence,
the performance of SNNs is lagging behind. One strategy for closing this gap is to design an ANN-to-SNN
conversion that enables us to port the performance of a trained ANN into an SNN. The most common --and so
far best-performing-- conversion method was based on the idea of (firing-) rate coding, where the analog
output of an ANN unit is emulated by the firing rate of a spiking neuron \citep{Rueckauer2017}.
This method can readily be used for ANN units that are based on the ReLU (rectified linear) activation
function. It has produced impressive results for professional benchmark tasks such as ImageNet,
but a significant gap to the accuracy, latency, and throughput of ANN solutions has thwarted its
practical application. Problems with the timing and precision of resulting firing rates on higher
levels of the resulting SNNs have been cited as possible reasons for the loss in accuracy of the SNN.
In addition, the transmission of an analog value through a firing rate requires a fairly large number
of time steps --typically on the order of 100-- which degrades both latency and throughput for inference.
An additional impediment for a rate-based ANN-to-SNN conversion emerged more recently in the form of
better performing ANNs such as EfficientNet \citep{Tan2019}. They employ the Swish function as
activation function, defined by $x\cdot \text{sigmoid}(x)$,
which contains more complex nonlinearities than the ReLU function. Furthermore, the Swish function also produces negative values that
can not be readily encoded by the -- necessarily non-negative -- firing rate of a spiking neuron.
We introduce a new ANN-to-SNN conversion method that we call AMOS conversion because it requires
At-Most-One-Spike (AMOS) per neuron. This method is obviously very different from rate-based conversions,
and structurally more similar to temporal coding, where the delay of a single spike is used to encode an
analog value.
However temporal coding has turned out to be difficult to implement in a noise-robust
and efficient manner in neuromorphic hardware. This arises from the difficulty of implementing delays with
sufficiently high precision without sacrificing latency or throughput of the SNN, and the difficulty of
designing spiking neurons that can efficiently process such temporal codes \citep{maass1998},
\citep{Thorpe2001}, \citep{Rueckauer2017}, \citep{Kheradpisheh2019}. In contrast, AMOS coding
requires just on the order of $\log N$ different delays for transmitting integers between 1 and $N$.
Furthermore, these delays can be arranged in a data-independent manner that supports pipelining, so that a new image can be processed by the SNN at every time step.
We show that even
the simplest type of spiking neuron, the McCulloch-Pitts neuron or threshold gate
\citep{McCullochPitts43}
can efficiently compute with AMOS codes.
Thus no temporal integration of information is needed for the spiking neuron model in
order to efficiently emulate inference by ANNs.
This simple version of a spiking neuron had previously already been used for image classification by SNNs
in hardware \citep{Esser2016}.
We will describe in the first subsection of Results the design of an AMOS unit that replaces the gate
of an ANN --with basically any activation function-- in this new ANN-to-SNN conversion. We then show that
this conversion produces an SNN that carries out inference for classifying images from the full ImageNet2012
dataset with drastically improved accuracy, latency, and throughput. Whereas the design of the
AMOS unit for the conversion of ANN-neurons with the Swish function requires training of its parameters,
one can design an AMOS unit for the special case of the ReLU activation function explicitly: It reduces
in this special case to an analog-to-digital conversion via binary coding. This will be made explicit
in the last subsection of Results.
\section{Results}
\subsection{Architecture of the AMOS unit}
We present in Fig. 1B the architecture of an AMOS unit --consisting of K spiking neurons--
that approximates a generic ANN gate with activation function $f$ shown in Fig. 1A.
Besides fixed delays (shown in green) the AMOS unit contains weight coefficients
$c_1$, ..., $c_K$, $d_1$, ..., $d_K$, $h_{ij}\ \text{for}\ i,j \in \{1, \ldots,K\}$, and thresholds
$T_1$, ..., $T_K$ (shown in blue).
The case that the activation function $f$ of an ANN gate outputs positive and negative numbers
is of particular importance
in view of the Swish function (see Fig. \ref{amos-swish}) that was introduced in
\citep{Ramachandran2017} and used as activation function in
EfficientNet \citep{Tan2019}. It is defined by
\begin{equation}
\text{Swish}(x) = x \cdot \frac{1}{1 + e^{-x}}.
\label{swish}
\end{equation}
\begin{figure}[]
\centering
\includegraphics[scale=0.6]{figures/amos.png}
\vspace*{10mm}
\caption{\textit{Architecture of an AMOS unit, consisting of $K$ spiking neurons. \\
\textbf{A)} An ANN gate with activation function $f$ that is to be emulated by the AMOS unit.
\textbf{B)} The AMOS unit receives the same inputs $a_1, \ldots, a_L$, duplicated $K$ times.
It outputs after $K+1$ time steps an approximation of the value $wf(x)$ which the ANN gate sends
to subsequent gates.}}
\label{amos-plt}
\end{figure}
Neuron $i$ in the AMOS unit outputs
\begin{equation}
z_i = \Theta(c_i\cdot x - H_i - T_i ),
\label{spike-fn}
\end{equation}
where $\Theta$ is the Heaviside activation function defined by
\begin{equation}
\Theta(x) =
\begin{cases}
0, & \text{if}\ x < 0 \\
1, & \text{if}\ x \geq 0,
\end{cases}
\end{equation}
the coefficient $c_i$ and the threshold $T_i$ are trained parameters,
and $H_i$ is an inhibitory input defined by
\begin{equation}
H_i = \sum_{j=1}^{i-1} h_{ij} z_{j}
\end{equation}
with trained negative weights $h_{ij}$.
The output $y$ of the AMOS unit, which is fed into subsequent AMOS units
that emulate subsequent gates to which the ANN that it emulates is connected
(only one subsequent gate is shown for simplicity in Fig. \ref{amos-plt}), can be written as
\begin{equation}
y = \sum_{i=1}^{K} w d_i z_i,
\end{equation}
where the $d_i$ are additional trained weights of the AMOS unit,
and $w$ is the weight of the corresponding connection from the ANN gate
in Fig. 1A to the next ANN gate. Thus the computational function of the entire AMOS unit
can be expressed as
\begin{equation}
\text{AMOS}(x) = y =
\sum_{i=1}^{K} w \cdot d_i \cdot \Theta(c_i \cdot x - H_i - T_i).
\end{equation}
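To make the data flow concrete, the following minimal Python sketch evaluates this expression for a single scalar input; the function name and the array-based parameterization are purely illustrative and not part of a reference implementation.
\begin{verbatim}
import numpy as np

def amos_forward(x, w, c, d, h, T):
    """Evaluate one AMOS unit on a scalar input x (illustrative helper).

    c, d, T are length-K arrays of trained coefficients and thresholds,
    h is a (K, K) array of inhibitory weights (only entries with j < i are used),
    w is the outgoing ANN weight to the next gate.
    Each of the K neurons emits at most one spike (0 or 1).
    """
    K = len(c)
    z = np.zeros(K)
    for i in range(K):
        H_i = h[i, :i] @ z[:i]                    # inhibition from earlier neurons
        z[i] = float(c[i] * x - H_i - T[i] >= 0)  # Heaviside gate
    return w * float(d @ z)                       # approximates w * f(x)
\end{verbatim}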
For the case of functions $f(x,y)$ with two input values $x$ and $y$, one just needs to
replace the motif of Fig.~\ref{amos-plt}A in Fig.~\ref{amos-plt} by the motif indicated
in Fig.~\ref{one-to-many}B.
This changes equation \ref{spike-fn} to:
\begin{equation}
z_i = \Theta(c_i\cdot x + c_i' \cdot x' - H_i - T_i )
\end{equation}
\begin{figure}[]
\centering
\includegraphics[scale=0.7]{figures/one_input_to_many.png}
\vspace*{10mm}
\caption{\textit{AMOS approximation of a function $f(x,y)$ with two input values $x,y$, e.g. $x \cdot y$ }}
\label{one-to-many}
\end{figure}
\subsection{Approximation of some common ANN gates by AMOS units}
In the case of the ReLU activation function no training of the parameters of the AMOS unit from Fig. 1B
is needed. If one defines its thresholds $T_j$ by $2^{K - j}$ and weights $d_j, c_j, h_{ij}$
by $c_j = 1, d_j=2^{K-j}, h_{ij} = 2^{K-j}$, an AMOS unit with $K$ neurons
produces an approximation $\widetilde{\text{ReLU}}(x)$ of the activation function $\text{ReLU}(x)$ that deviates from it by at
most $\alpha 2^{-K}$ on the interval from $-\infty$ to $\alpha$, for any $\alpha \in \mathbb{R}$. The resulting approximation is plotted in Fig.~\ref{amos-relu}.
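A minimal sketch of this training-free construction is given below; the greedy loop and the scaling parameter $\alpha$ (with $\alpha=2^{K}$ reproducing the thresholds $T_j=2^{K-j}$ quoted above) are one way of writing it and should be read as an illustration rather than a reference implementation.
\begin{verbatim}
def amos_relu(x, K=10, alpha=2.0**10):
    """Explicit AMOS unit for ReLU via binary decomposition (sketch).

    Neuron i has threshold T_i = alpha * 2**(-i); its output weight d_i
    and its inhibitory weight onto later neurons both equal T_i.
    The result deviates from ReLU(x) by at most alpha * 2**(-K)
    on the interval (-inf, alpha].
    """
    y, residual = 0.0, x
    for i in range(1, K + 1):
        T_i = alpha * 2.0 ** (-i)
        z_i = 1.0 if residual >= T_i else 0.0   # at most one spike per neuron
        residual -= T_i * z_i                   # inhibition of later neurons
        y += T_i * z_i                          # output weight d_i = T_i
    return y
\end{verbatim}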
For the case of other gate functions $f$ we train the additional weights and thresholds of
the AMOS unit through backpropagation, using a triangle-shaped pseudo-derivative in place of
the non-existing derivative of the Heaviside function.
Results of such approximations are plotted in Fig.~\ref{amos-sigmoid} - \ref{fig6} for the case
of the sigmoid function, Swish function, and multiplication.
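As an illustration of how such training can be set up, a spike nonlinearity with a triangle-shaped pseudo-derivative can be defined in TensorFlow as follows; the unit width and height of the triangle are assumed, tunable choices rather than values prescribed by our method.
\begin{verbatim}
import tensorflow as tf

@tf.custom_gradient
def spike(v):
    """Heaviside forward pass with a triangular pseudo-derivative.

    Forward: 1 if v >= 0 else 0.
    Backward: dy * max(0, 1 - |v|), a triangle of unit width and height
    (an assumed hyperparameter choice used here for concreteness).
    """
    z = tf.cast(v >= 0.0, v.dtype)
    def grad(dy):
        return dy * tf.maximum(0.0, 1.0 - tf.abs(v))
    return z, grad
\end{verbatim}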
Note that the AMOS unit can be used in a pipelined manner, processing $K$ different
inputs $x$ in its $K$ time steps. Hence the resulting SNN can be used in a pipelined manner,
processing a new network input at each time step.
Hence its throughput is much better than that of SNNs that result from rate-based ANN-to-SNN
conversions, such as for example \citep{Rueckauer2017, Sengupta2019}.
The number of neurons in the network is increased through the AMOS conversion by some factor.
However a hardware implementation can reuse AMOS units for multiple time steps
(since all AMOS units have the same internal parameters), thereby reducing the required size
of the network at the cost of a corresponding reduction of the throughput.
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{figures/relu.png}
\caption{\textit{AMOS approximation of the ReLU function, $K=10$, \\
(red: target function, blue: AMOS approximation)}}
\label{amos-relu}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{figures/sigmoid.png}
\caption{\textit{AMOS approximation of the sigmoid function, $K=8$ \\
(red: target function, blue: AMOS approximation)}}
\label{amos-sigmoid}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{figures/swish_zoom.png}
\caption{\textit{AMOS approximation of the Swish function, $K=12$, \\
(red: target function, blue: AMOS approximation)}}
\label{amos-swish}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{figures/mult_error.png}
\caption{\textit{Approximation error of the trained AMOS unit for multiplication with $K = 40$ (MSE = 0.0102).}}
\label{fig6}
\end{figure}
\subsection{Application of the AMOS conversion to image classification with SNNs}
The ImageNet data set \citep{Russakovsky2015} has become the most popular benchmark
for image classification in machine learning (we are using here the ImageNet2012 version).
This data set consists of $1{,}281{,}167$ training images and $50{,}000$ test images
(both RGB images of different sizes), labeled with 1000 different categories.
Classifying images from ImageNet is a nontrivial task even for a human, since this data
set contains, for example, 59 categories for birds of different species and gender \citep{van2015building}.
This may explain why a relaxed performance measure, where one records whether the
target class is among the top 5 classes proposed by the neural network (``Top5''),
is commonly reported; it is typically much higher than the Top1 accuracy.
A new record in ANN performance for the classification of images from the ImageNet 2012 dataset was achieved by \citep{Tan2019}.
They introduced a new ANN family called EfficientNet, which achieved 84.4\% accuracy.
With a modified training procedure it can achieve 85\%
accuracy \citep{Cubuk2019}.
The parameters of the trained network are publicly available\footnote{https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/randaug/efficientnet-b7-randaug.tar.gz}.
This accuracy was achieved by the EfficientNet-B7
model that has 66M parameters. Besides a new scaling method for
balancing network depth, width, and image resolution, they also introduced the Swish function as
activation function, instead of ReLU.
The Swish function emerged from automatic search for better performing activation functions
\citep{Ramachandran2017}.
Other nonlinearities that are used in EfficientNet are sigmoids and multiplication.
These occur in ``Squeeze and Excitation Layers'' \citep{Hu2018} of EfficientNet, see Fig.~\ref{fig:7_new}.
The main building block of the EfficientNet architecture is the mobile inverted bottleneck \citep{Sandler2018},
which uses depthwise separable convolutions.
This type of convolutional layer uses neurons with linear activation functions.
Although it would certainly be possible to approximate linear functions using AMOS conversion,
we
simply collapsed linear layers
into
the generation of the weighted sums that form the inputs to the next layers.
We evaluated the classification performance of the SNN that results from an application of the
AMOS conversion described in the previous section to EfficientNet-B7.
The resulting SNN
achieved a classification
performance of $80.97\%$, and $95.82\%$ for Top5; see Table~\ref{tab:my-table-1}.
The values for $K$ used in the AMOS conversion are listed in Table \ref{K-param-num}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Model & \# params & ANN & SNN & \# layers & \# neurons & \# spikes \\ \hline
EfficientNet-B7 & 66M& 85\% (97.2\%) & 80.97\% (95.82\%) & 4605 & 3110M & 1799M \\ \hline
ResNet50 & 26M & 75.22\% (92.4\%) & 75.10\% (92.36\%) & 500 & 96M & 14M \\ \hline
\end{tabular}
\caption{\textit{Performance results for ANNs and the SNNs, that result from their
AMOS conversion on ImageNet. Top5 accuracy is given in parentheses.
\#layers and \#neurons
refer to the SNN version of the network. Spike numbers refer to inference for a sample test image.}}
\label{tab:my-table-1}
\end{table}
Note that the resulting SNNs have an optimal throughput, since they can classify a new image
at each time step. SNNs that result from a rate-based ANN conversion need up to 3000 time steps
for reaching maximum accuracy, hence their throughput is by a corresponding factor smaller.
The number of parameters is increased by the AMOS conversion only by a small additive
term (see Table \ref{K-param-num}), since all AMOS units of the same type use the same parameters.
The accuracy of 75.22\% for the ANN version of ResNet50 in Table \ref{tab:my-table-1} resulted from
training a variant of ResNet50 where max pooling was replaced by average pooling, using
the hyperparameters given in the TensorFlow repository.
This accuracy is close to the best published performance of 76\% for ResNet50 ANNs \citep[Table 2]{Tan2019}.
Apart from max-pooling, ResNets use neither Swish nor sigmoid or multiplication as nonlinearities, just ReLU.
This explains why the application of the AMOS conversion to ResNet yields SNNs whose Top1 and Top5
performance is almost indistinguishable from the ANN version. Note also that the resulting SNNs
are only sparsely active, an activity regime that enhances the energy efficiency of some hardware
implementations.
The best previous performance of an SNN on ImageNet was achieved by
converting an Inception-v3 model \citep{Szegedy2016} with a rate-based
conversion scheme \citep{Rueckauer2017}.
The reported test accuracy of the resulting SNN was 74.6\%, where 550 time steps were used
to simulate the model.
Hence already the application of AMOS conversion to ResNet50 improves this result with regard
to accuracy, and especially with regard to throughput. The AMOS conversion of EfficientNet-B7
yields an additional $5.87\%$ accuracy improvement.
\subsection{Results for the classification of images from the CIFAR10 data set}
The results for the ANN versions of ResNet that are given in Table \ref{tab:my-table} are the outcome of
training them with the hyperparameters given in the TensorFlow models repository.
They are very close to the best results reported in the literature.
The best ResNet on CIFAR10 is the ResNet110, where a test accuracy of 93.57\% has been reported \citep{He2016}.
Our ResNet50 achieves 92.99\%, which is very close to the performance of the ResNet56 with 93.03\%.
Spiking versions of ResNet20 have already been explored \citep{Sengupta2019}.
Using a rate-based conversion scheme a performance of 87.46\% was reported.
Compared to these results, AMOS conversion yields a higher accuracy, while also
using significantly fewer time steps, thereby reducing latency for inference.
In addition, the throughput is improved from a rather low value to the theoretically ideal
value of one image per time step.
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
Model & ANN & SNN & \# neurons & \# spikes \\ \hline
ResNet50 & 92.99\% & 92.42\% & 4,751,369 & 647,245 \\ \hline
ResNet20 & 91.58\% & 91.45\% & 1,884,160 & 261,779 \\ \hline
ResNet14 & 90.49\% & 90.39\% & 1,310,720 & 190,107 \\ \hline
ResNet8 & 87.22\% & 87.05\% & 737,280 & 103,753 \\ \hline
\end{tabular}
\caption{\textit{SNN performances on CIFAR10 that result from AMOS conversions of ResNet models of different
depths. (using $K=10$ in the AMOS units) \#neurons refers to the SNN version. }}
\label{tab:my-table}
\end{table}
\subsection{Trade-off between latency and network size of the SNNs}
It is well-known that the extraction of bits from a weighted sum, as well as multiplication
of binary numbers, can be carried out by threshold circuits --hence also by SNNs-- with a small
number of layers --typically 2 or 3-- that does not depend on the bit length of the binary numbers involved,
however at the cost of increasing the number of neurons to a low-degree polynomial of this bit length.
A recent summary of such results is provided in section 3 of \citep{parekh2018constant}.
Hence one can replace the basic architectures of the AMOS units from Fig. 1 and 2 by the more
shallow architectures that employ a larger number of spiking neurons. In order to make the
resulting circuits applicable to larger input domains, one can train their parameters, similarly
as for the basic architectures.
Hence there exists a trade-off between the number of layers (latency) and the
size of the SNN that can be achieved through an AMOS conversion, and several points
on this trade-off curve can be reached through these modified AMOS conversions.
\section{Methods}
\subsection{Squeeze and Excitation in EfficientNet}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{figures/SE.png}
\caption{Squeeze and Excitation Layer}
\label{fig:7_new}
\end{figure}
\subsection{Number of parameters of AMOS units}
\begin{table}[H]
\centering
\begin{tabular}{|l|l|}
\hline
Approximation & Number of parameters \\ \hline
Sigmoid (K=8) & 52 \\ \hline
ReLU (K=10) & 75 \\ \hline
Swish (K=12) & 102 \\ \hline
Mult (K=40) & 940 \\ \hline
\end{tabular}
\caption{Number of parameters in the AMOS units that are used in this article}
\label{K-param-num}
\end{table}
\section*{Discussion}
We have introduced a new method for converting ANNs to SNNs.
Since the resulting SNN uses at most one spike (AMOS) per neuron for inference,
AMOS conversion can be seen as dual to the familiar rate-based conversion:
it uses space -- i.e., more neurons -- rather than time to replace analog values by spikes.
This conversion significantly improves accuracy, latency,
and especially the throughput of the resulting SNN. Furthermore, since AMOS conversion can be applied
to virtually any type of ANN gate, it demonstrates for the first time that SNNs are universal computing
devices from the perspective of modern ANN-based machine learning and AI. For the classification of
natural images, it raises the accuracy of SNNs for the full ImageNet 2012 dataset
to 80.97\% -- and to 95.82\% for Top5 -- thereby bringing it into the range of the best ANN and human performance.
This
was achieved by converting EfficientNets, a recently proposed variant
of ANNs that employ the Swish function as neural activation function, for which a rate-based conversion
to SNNs is impossible. The resulting SNNs that achieve high performance
are -- like their
ANN counterparts -- very large. But one can sacrifice some of their accuracy by starting with a smaller
ANN, or by reducing their perfect throughput of one image per step and reusing gates of a smaller SNN with
online reconfigurable connections and parameters. Note that reuse of neurons
would be more problematic
for rate-based SNN computations, since each spiking neuron is occupied there during a fairly large and
somewhat unpredictable number of time steps when emulating a computation step of an ANN gate.
In contrast, AMOS conversion provides a tight bound on the number of time steps during which a spiking
neuron is occupied. Hence it can also be used for converting recurrently connected ANNs to SNNs.
Altogether the proposed method for generating highly performant SNNs
offers an opportunity to combine the computationally more efficient and
functionally more powerful training of ANNs with the superior energy-efficiency
of SNNs for inference.
Note that one can also use the AMOS-converted SNN as initialization for subsequent direct training
of the SNN for a more specific task.
Altogether, our results suggest that spike-based hardware may gain an edge in the competition for the development of drastically more energy-efficient hardware for AI applications by combining energy efficiency and competitive performance with a versatility that hardware optimized for specific ANNs --such as a particular type of convolutional neural network-- cannot offer.
\section*{Acknowledgements}
We would like to thank Franz Scherr for helpful discussions. This research was partially supported by the Human Brain Project of the European Union (Grant agreement number 785907).
\bibliographystyle{apalike}
\section{Introduction}
In this paper we develop an
expansion in fermionic spherical harmonics
in flat spacetime that serves as a nontrivial test for the technical correctness
of a subtle fermionic calculation of black hole thermodynamics.
The original context is provided by the black-hole entropy calculation
in the framework of `t Hooft's brick-wall approach~\cite{BW_t-Hooft}, which
can be extracted in a systematic manner
via a near-horizon (NH) expansion technique.
This methodology
not only shows that black-hole thermodynamics is associated with
the leading divergent order of the entropy calculation
but it also
provides a clear origin for
the Bekenstein-Hawking area law~\cite{BH_thermo_CQM,semiclassical_BH_thermo,PI-fermion_curvedST}
from
a NH conformal symmetry.
In Ref.~\cite{BH_thermo_CQM},
an expansion in spherical harmonics
was used
for the reduction to a radial one-dimensional semiclassical analysis of the density of modes
for spin-zero fields.
Even though this does not exhaust the arsenal of available techniques---NH methods without
the need for an expansion in spherical harmonics
are also known for the scalar case~\cite{semiclassical_BH_thermo,
holographic_scaling_I}---the NH origin of the thermodynamics
is most obvious with the resolution in spherical coordinates.
The appropriate generalization of Ref.~\cite{BH_thermo_CQM}
in fermionic spherical harmonics
can be derived~\cite{PI-fermion_curvedST}
in both the canonical and path-integral frameworks,
using the same techniques presented below for the flat spacetime case.
In addition to the subtleties discussed in the following sections,
in the case of a
black hole background,
the NH method in spherical coordinates
proves to be a crucial ingredient in the
extraction of the Bekenstein-Hawking area law.
In essence,
while the path-integral calculation of the free energy for a free-fermion gas
in flat spacetime is well known using Cartesian coordinates~\cite{TFT1a,TFT1b,TFT2},
its counterpart in spherical coordinates is not.
Thus, through a series of steps leading to the free-energy path integral
in spherical coordinates,
we complete this calculation
and explicitly show the equivalence
with Cartesian coordinates---also providing
a strong check on the correctness
of the corresponding fermionic path-integral in more general spherically symmetric backgrounds,
including the background leading to black hole thermodynamics.
Moreover, this calculation highlights some conceptual and technical
properties that are crucial for similar computations in curved spacetime, and lends itself to generalizations
for higher spin fields.
In its greatest generality,
the analysis of Dirac operators in spherical coordinates can be fully implemented within
the Newman-Penrose geometric
framework~\cite{NP-formalism}
and the use of spin-weighted spherical
harmonics~\cite{spin-weightedY_Newman-Penrose1, spin-weightedY_Newman-Penrose2, spin-weightedY_Newman-Penrose3}.
These techniques have led to solutions of remarkable generality with Chandrasekhar's
work~\cite{Chandrasekhar}.
There is considerable interest, however, in
the development of additional techniques for specific problems;
for example,
for problems in the background of gravitational fields~\cite{Shishkin_Dirac-gravity} as well as
in curvilinear orthogonal coordinates in flat spacetime, mainly by
algebraic methods of separation of
variables~\cite{Shishkin-Villalba_separation1,Shishkin-Villalba_separation2, Shishkin-Villalba_separation3},
to name just a few.
Our interest is in
the use and development of techniques
that simplify the path integral treatment for specific problems with Dirac fermions.
In this paper, we focus on the case of spherical polar coordinates, and adapt
some of these techniques
to our path integral problem.
Thus,
we follow a
remarkably simple constructive approach specifically tailored for the computation of determinants and
statistical properties of the Dirac operator.
This approach leads to the generalized spherical harmonics discussed in Sec.~\ref{sec:PI_spherical-coordinates}
and in App.~\ref{sec-app:spherical_reduction}.
With
these considerations in mind, we first
turn
to the setup of the problem
in both coordinate systems, and then to the
development of the relevant fermionic spherical harmonic expansion leading to their equivalence.
\section{The Fermion Problem in Flat Spacetime:
\\
Euclideanized Action and Partition Function}
Consider the Euclideanized action for free fermion fields in flat spacetime~\cite{TFT1a,TFT1b,TFT2},
\begin{equation}
S_{E} =
\int _{0}^{\beta }
d\tau
\int d^{3} x
\,
\bar{\psi }
\,
\left( \tau , \mathbf{x} \right)
\,
\left( \gamma_{E}^{\mu }
\partial _{\mu }
+ m\right)
\,
\psi \left(\tau , \mathbf{x} \right)
\, ,
\label{eq:Euclidean-action}
\end{equation}
where the Euclidean time is $\tau = i t$ and the
relevant conventions and definitions, including those of the Euclidean Dirac matrices
$\gamma_{E}^{\mu } $
are summarized in App.~\ref{sec-app:Euclidean_Dirac}.
It should be noticed that the derivatives
$\partial _{\mu }$ in Eq.~(\ref{eq:Euclidean-action}), by abuse of notation,
already involve the Euclidean time $\tau $ (this will have no further consequences
for the remainder of the paper, but the subtleties are absorbed in the conversion to
$\gamma_{E}^{\mu } $).
Correspondingly, the partition function for this system is
\begin{equation}
Z=
\int_{\scriptsize
\begin{array}{l}
{\psi \left(0, \mathbf{x} \right)
=
-\psi \left(\beta ,\mathbf{x} \right)}
\\
{\bar{\psi }
\left(0, \mathbf{x} \right)
=-\bar{\psi }\left(\beta , \mathbf{x} \right)}
\end{array}
}
{\mathcal D}
\bar{\psi }
\left(\tau , \mathbf{x} \right)
{\mathcal D}
\psi \left(\tau ,\mathbf{x} \right)
\exp \left\{-S_{E}
\right\}
\label{eq:Z_TFT}
\, .
\end{equation}
In effect~\cite{TFT1a, TFT1b,TFT2},
for the transition to the thermal field theory, the evolution of the system in
imaginary time $\tau$ is subject to appropriate
boundary conditions:
for fermionic fields, anti-periodic boundary conditions should be satisfied
with respect to $\tau$, i.e.,
\begin{equation}
\left\{
\begin{array}{l}
{\psi \left(0, \mathbf{x} \right)
=
-\psi \left(\beta ,\mathbf{x} \right)}
\\
{\bar{\psi }
\left(0, \mathbf{x} \right)
=-\bar{\psi }\left(\beta ,\mathbf{x} \right)}
\, ,
\end{array}
\right.
\end{equation}
where $\beta =1/T$ is the inverse temperature.
This implies that the frequencies associated with the Euclidean time $\tau $,
known as fermionic Matsubara frequencies, are discrete and of the form
\begin{equation}
\omega _{n} =\frac{2\pi }{\beta } \left(n+\frac{1}{2} \right)
\, ,
\;
n \in {\mathbb Z}
\, .
\label{eq:Matsubara-frequencies}
\end{equation}
The translational invariance of flat spacetime
permits the introduction
of the momentum-space Fourier representation for the spatial part of the fermion field
\begin{equation}
\psi \left(\tau , \mathbf{x} \right)
=
\displaystyle\sum _{n}
\frac{
e^{-i\omega _{n} \tau}
}{\sqrt{\beta } }
\int \frac{d^{3} p }{
\left[ \left(2\pi \right)^{3/2}
\sqrt{2
\omega _{\mathbf{p} } } \right]
}
\,
e^{i \mathbf{p}\cdot \mathbf{x} }
\,
\psi _{n} \left(\mathbf{p}\right)
\, ,
\label{eq:Qfield_Fourier}
\end{equation}
where
$\omega _{\mathbf{p} }
= \sqrt{ \mathbf{p}^{\, 2} + m^{2} }$,
the sum is $\sum_{n} = \sum_{n=-\infty}^{\infty}$,
and the normalization
corresponds to the one-particle wave function
$\left\langle 0 |
\psi \left(\tau ,\mathbf{x}\right)
| \omega_{n}, \mathbf{p}
\right\rangle
=
e^{i(\mathbf{p}\cdot \mathbf{x} - \omega_{n}\tau)}/\sqrt{ (2\pi)^{3} 2 \omega_{\mathbf{p} } }$.
Notice that the basic modes
involve the functions $e^{-i P \cdot X}$.
Then, the Euclideanized action becomes
\begin{equation}
S_{E} =
\displaystyle\sum _{n}
\int
\frac{d^{3} p }{ 2\omega _{\mathbf{p} } }
\;
\bar{\psi }_{n} \left(\mathbf{p}\right)
\,
\left(- i\gamma_{E}^{\mu } P_{\mu }
+m\right)
\,
\psi _{n}
\left(\mathbf{p}\right)
\label{eq:Euclidean-action_p-space}
\; .
\end{equation}
This defines the Euclidean Dirac operator $\mathfrak{D}_{P}=\left(- i\gamma_{E}^{\mu } P_{\mu }
+m\right)$ in the momentum representation, with the conventions of App.~\ref{sec-app:Euclidean_Dirac}.
Therefore, using the well-known result for
Berezin path integrals with
Grassmann (anti-commuting)
variables~\cite{TFT1a, TFT1b,TFT2,Grassmann},
\begin{equation}
\int \left\{\prod _{i}da_{i}^{*} da_{i}
\right\}
\exp \left(-a_{i}^{*} M_{ij} a_{j} \right)
{=}
\det \left(M\right)
\, ,
\label{fermionic-det_Grassmann}
\end{equation}
the partition function~(\ref{eq:Z_TFT})
yields
\begin{equation}
\begin{array}{rcl}
{Z} & {=}
& C
\displaystyle\displaystyle\prod_{n,\mathbf{p}}
\det
\left(- i\gamma _{E}^{\mu } P_{\mu } +m
\right)
\\ {}
& {=} &
C
\left[
\displaystyle\prod_{n,\mathbf{p}}
\det
\left(
- i\gamma_{E}^{\mu } P_{\mu }
+m
\right)
\det
\left(
i\gamma_{E}^{\mu } P_{\mu }
+
m
\right)
\right]^{1/2}
\, ,
\end{array}
\end{equation}
with $C$ a constant.
Here we used
\begin{equation}
\left|\det D\right|
=
\sqrt{\det D\det D^{\dag } }
\,
\label{eq:square-root_operator_det-relation}
\end{equation}
for each operator in the factorization above.
Now,
in the Euclidean slash notation,
$
\slashed{P}_{E}=
\gamma_{E}^{\mu } P_{\mu }
$,
and using the Euclidean Clifford algebra relations~(\ref{eq:Euclidean_Clifford-algebra}),
we have
$
(i \slashed{P}_{E} + m )
(- i \slashed{P}_{E} + m)
=
\slashed{P}_{E} \slashed{P}_{E}
+ m^{2}
=
\left( P^{2} + m^{2} \right)
\, {\mathbb {1}}_{4} $,
where
$ {\mathbb {1}}_{4} $
is the
$4 \times 4$ identity matrix.
Thus, up to a phase, we get
\begin{equation}
{Z}
=
\displaystyle\prod _{n,\mathbf{p}}
\det\,^{1/2}
\left[
\left(\omega _{n}^{2} +\mathbf{p}^{\; 2}
+m^{2} \right)
{\mathbb {1}}_{4}
\right]
\, ,
\end{equation}
as
$P^2 \equiv \delta^{\mu \nu} P_{\mu } P_{\nu}
=
\omega _{n}^{2} +\mathbf{p}^{\; 2}$.
Notice that
the calculation of the determinant involves
products with respect to
the Dirac matrices in addition to its functional
nature.
As a result of the simple factorization above as a direct product,
the matrix part
leads to an overall exponent of
4 for the functional determinant,
so that
\begin{equation}
\begin{array}{rcl} {Z}
& {=} &
\displaystyle\prod _{n,\mathbf{p}}
\left[
\det\, \! ^{1/2}
\left(\omega _{n}^{2} +\mathbf{p}^{\; 2}
+m^{2}
\right)
\right] ^{4}
\\
{} & {=} &
{\displaystyle\prod _{n,\mathbf{p}}
\det\, \! ^{2} \left(\omega _{n}^{2}
+\mathbf{p}^{\; 2} +m^{2} \right) .}
\end{array}
\label{eq:Z_momentum}
\end{equation}
This derivation highlights the fact that $Z$ in Eq.~(\ref{eq:Z_momentum})
is the momentum factorization (up to a constant) of
the square of the
$\det \left( \square _{E} + m^{2} \right)$,
where
$\square _{E} =- \left( \partial _{\tau }^{2} + \boldsymbol{\nabla }^{2} \right)$
(Euclidean Klein-Gordon operator, but with the eigenvalues to be evaluated with the fermionic
Matsubara frequencies).
Finally, from Eq.~\eqref{eq:Z_momentum},
it is straightforward to get the free energy using $F= -\ln Z/\beta$,
as discussed in the standard references~\cite{TFT1a,TFT1b,TFT2}.
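For completeness, we sketch this final step in our conventions. Using the standard fermionic frequency sum
\begin{equation}
\sum_{n} \ln \left( \omega_{n}^{2} + \omega_{\mathbf{p}}^{2} \right)
=
\beta \omega_{\mathbf{p}}
+ 2 \ln \left( 1 + e^{-\beta \omega_{\mathbf{p}}} \right)
+ {\rm const}
\, ,
\end{equation}
and replacing $\sum_{\mathbf{p}} \rightarrow V \int d^{3}p/(2\pi)^{3}$,
Eq.~(\ref{eq:Z_momentum}) gives, up to a temperature-independent constant,
\begin{equation}
F
= - \frac{1}{\beta} \ln Z
= - 2 V \int \frac{d^{3}p}{(2\pi)^{3}}
\left[ \omega_{\mathbf{p}}
+ \frac{2}{\beta} \ln \left( 1 + e^{-\beta \omega_{\mathbf{p}}} \right) \right]
\, ,
\end{equation}
i.e., the zero-point contribution plus the familiar ideal-gas free energy of fermions and antifermions.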
\section{Path Integral in Spherical Coordinates:
\\
Explicit Calculation and
Coordinate Invariance}
\label{sec:PI_spherical-coordinates}
Our construction begins with
the familiar
transformation
from Cartesian to spherical coordinates,
for which the Dirac operator in the action
turns into
\begin{equation}
\mathfrak{D}
=
\gamma_{E}^{0} \partial _{0}
+\slashed{e}_{r} \partial _{r}
+\slashed{e}_{\theta } \frac{1}{r} \partial _{\theta }
+\slashed{e}_{\phi } \frac{1}{r\sin \theta } \partial _{\phi }
+m
\, ,
\label{eq:Dirac-operator_spherical}
\end{equation}
where, in the Euclidean slash notation,
\begin{equation}
\slashed{e}_{i}
=\gamma_{E}^{\mu } e_{\mu i}
,\,\,i=r,\theta ,\phi
\, ,
\label{eq:slashed-vierbein}
\end{equation}
with
$
e_{\mu i}
$ being the spatial
transformation matrix with columns $\hat{e}_{i}$
($i=r,\theta ,\phi$)
for $\mu =1,2,3$ representing the Cartesian axes; in essence,
$\slashed{e}_{i}$ is the corresponding ``spherical component'' (i.e., rotated) of the Gamma matrices.
For our derivation of determinants below,
it should be noticed that the last term
is explicitly
proportional to the $4 \times 4$ identity matrix, i.e., of the form
$m \, {\mathbb {1}}_{4} $.
This selection of the Cartesian frame,
i.e.,
the fixed ``Cartesian gauge,'' does not
require the explicit use of the spin connection; thus, the derivatives
$\partial_{\mu}$ above are just
ordinary derivatives.
The alternative
approach with covariant derivatives is outlined at the end of App.~\ref{sec-app:Euclidean_Dirac}.
The operator $\mathfrak{D}$,
defined above in the original field representation
$\psi \left(\tau , \mathbf{x} \right)$,
is not convenient for our calculational purposes.
Instead,
a more ``friendly'' geometrical version results if we perform a unitary transformation on $\psi$
defined by~\cite{Shishkin-Villalba_similarity1, Shishkin-Villalba_similarity2}
\begin{equation}
\psi \left(\tau , \mathbf{x} \right)
=
U\tilde{\psi }
\left(\tau , \mathbf{x} \right)
\; ,
\; \; \; \; \; \;
\bar{\psi }\left(\tau , \mathbf{x} \right)
=
\bar{\tilde{\psi }}
\left(\tau , \mathbf{x} \right)
U^{\dag }
\, ,
\end{equation}
with the rotation
\begin{equation}
\begin{array}{rcl}
{U \left(
R_{z} \left(\phi \right)
R_{y} \left( \theta \right)
\right)
} & {=} &
{U
\left(R_{z}
\left(\phi \right)\right)
U \left(R_{y} \left(\theta \right)\right)
}
\\
{} & {=} &
{
\exp
\left[
- \!
\mbox{ \Large $\frac{i \phi }{2}$}
\left(\begin{array}{cc} {\sigma _{3} } & {0}
\\
{0} & {\sigma _{3} }
\end{array}
\right)
\right]
\exp {
\left[
- \!
\mbox{ \Large $\frac{i \theta }{2}$}
\left(
\begin{array}{cc}
{\sigma _{2} } & {0}
\\
{0} & {\sigma _{2} }
\end{array}
\right)
\right]
\; ,
}}
\end{array}
\label{eq:rotation}
\end{equation}
where $\sigma_{j}$ ($j = 1,2,3$) are the Pauli matrices.
Geometrically,
this amounts to a realignment of the coordinate axes with
the chosen curvilinear coordinates $(r,\theta,\phi)$.
Under this field redefinition,\footnote{This transformation leaves the path integral measure invariant.}
the Euclideanized Lagrangian becomes
\begin{equation}
\bar{\psi }
\left(\tau , \mathbf{x} \right)
\mathfrak{D}
\psi \left(\tau , \mathbf{x} \right)
=
\tilde{\bar{\psi }}
\left(\tau , \mathbf{x} \right)U^{\dag }
\mathfrak{D} U\tilde{\psi }\left(\tau , \mathbf{x} \right)\equiv \tilde{\bar{\psi }}
\left(\tau , \mathbf{x} \right)\tilde{\mathfrak{D} }\tilde{\psi }\left(\tau , \mathbf{x} \right)
\; ,
\label{eq:transformed_Dirac-Lagrangian}
\end{equation}
where, from Eq.~(\ref{eq:rotation}),
the unitarily transformed Dirac operator,
$\tilde{\mathfrak{D} }
=
U^{\dagger}
\mathfrak{D}
U
$,
takes the form
\begin{equation}
\tilde{\mathfrak{D} }
=\gamma_{E}^{0} \partial _{0}
+\gamma_{E}^{3} \left(\partial _{r} +\frac{1}{r} \right)
+\gamma_{E}^{1} \frac{1}{r} \left(\partial _{\theta }
+\frac{1}{2} \cot \theta \right)
+\gamma_{E}^{2} \frac{1}{r\sin \theta } \partial _{\phi }
+m \, {\mathbb {1}}_{4}
\, ,
\label{eq:Dirac-operator_angular-mod}
\end{equation}
and we used the fact that spatial rotations
[in our case, $U$ in Eq.~(\ref{eq:rotation})]
commute with $\gamma_{E}^{0} $.
Effectively, this choice leads to the selection of self-adjoint operators associated with the
given coordinates,
including the {\em extra terms\/}, i.e.,
${\displaystyle
\frac{1}{r}
\; {\rm and}
\;
\frac{1}{2} \cot \theta }$
for the generalized radial and polar momenta.~\footnote{This is a
modified version of the ``rotating diagonal gauge''
of Refs.~\cite{Shishkin-Villalba_similarity1, Shishkin-Villalba_similarity2}.}
It should be noticed that
these are precisely the terms generated by the spin connection in the tetrad formalism, as sketched
in App.~\ref{sec-app:Euclidean_Dirac}.
With our representation of the Dirac matrices
(see App.~\ref{sec-app:Euclidean_Dirac}),
we explicitly have
\begin{equation}
\begin{array}{l}
\tilde{\mathfrak{D} }
=
{\left(
\begin{array}{cc}
{{\mathbb {1}}_{2} } & {0}
\\ {0} & {-{\mathbb {1}}_{2} }
\end{array}
\right)
\partial _{0}
-
i
\left(\begin{array}{cc}
{0} & {\sigma _{3} }
\\
{-\sigma _{3} } & {0}
\end{array}\right)
\left(\partial _{r}
+
\mbox{\large $\frac{1}{r}$}
\right)}
\\
-
\mbox{\Large $\frac{i}{r}$ } \!
\left(\begin{array}{cc}
{0} & {\sigma _{1} }
\\
{-\sigma _{1} } & {0}
\end{array}
\right)
\left(
\partial _{\theta }
+\frac{1}{2} \cot \theta
\right)
-
\mbox{\Large $\frac{i}{r}$ } \!
\left(\begin{array}{cc}
{0} & {\sigma _{2} }
\\
{-\sigma _{2} } & {0}
\end{array}\right)
\! \mbox{\Large $\frac{1}{\sin \theta } $}
\partial _{\phi }
+
m \, {\mathbb {1}}_{4}
\, .
\end{array}
\label{eq:Dirac-operator_angular-explicit}
\end{equation}
It is now necessary to attempt a separation of variables to reduce the problem to one (radial)
dimension.
In particular, the symmetry of the problem in the angular variables suggests an expansion
of the form\footnote{For notational simplicity, we are omitting the $j$ and $m_j$
labels on the spherical functions $Y_{\left(1,2\right)}$ in
Eqs.~(\ref{eq:spherical_reduction})--(\ref{eq:eigenvalue-eq_sphericalH}).}
\begin{equation}
\tilde{\psi}
\propto
\frac{e^{-i\omega _{n} \tau } }{\sqrt{\beta } }
\left(\begin{array}{c}
{A_{n} (r)Y_{1} (\theta ,\phi )}
\\
{B_{n} (r)Y_{2} (\theta ,\phi )}
\\
{C_{n} (r)Y_{1} (\theta ,\phi )}
\\
{D_{n} (r)Y_{2} (\theta ,\phi )}
\end{array}
\right)
\, ,
\label{eq:spherical_reduction}
\end{equation}
to be used along with
Eqs.~(\ref{eq:transformed_Dirac-Lagrangian})--(\ref{eq:Dirac-operator_angular-explicit}).
Following App.~\ref{sec-app:spherical_reduction},
we introduce the
``generalized spherical harmonic'' eigenfunctions $Y_{1}$ and $Y_{2}$,
\begin{equation}
\left(
\left[
\begin{array}{cc}
{0} & {1}
\\
{1} & {0}
\end{array}
\right]
\left(\partial _{\theta }
+\frac{1}{2} \cot \theta \right)
+\left[\begin{array}{cc}
{0} & {-i}
\\
{i} & {0}
\end{array}
\right]
\frac{1}{\sin \theta }
\partial _{\phi }
\right)
\left(\begin{array}{c}
{Y_{1} } \\ {Y_{2} }
\end{array}
\right)
=
\left(
\begin{array}{c}
{\lambda _{+} Y_{1} }
\\ {\lambda _{-} Y_{2} }
\end{array}
\right)
\, ,
\label{eq:eigenvalue-eq_sphericalH}
\end{equation}
where
\begin{equation} \label{1.22)}
\lambda _{\pm } =\pm \left(j+1/2\right).
\end{equation}
Then, the effect of the Euclidean Dirac operator on the fields \eqref{eq:spherical_reduction} is
\begin{equation} \label{1.21)}
\tilde{\mathfrak{D} } \tilde{\psi}
=
\left(
\begin{array}{c}
{-i
\left[
\left(\partial _{r} +\frac{1}{r} \right)C+\frac{\lambda _{+} }{r} D
\right]
Y_{1}
+
\left(-i\omega _{n} +m\right)AY_{1} }
\\
{-
i
\left[
-\left(\partial _{r}
+\frac{1}{r} \right)D+\frac{\lambda _{-} }{r} C
\right]
Y_{2}
+
\left[
-i\omega _{n}
+m
\right]
BY_{2} }
\\
{i
\left[
\left(\partial _{r} +\frac{1}{r} \right)A
+\frac{\lambda _{+} }{r} B
\right]
Y_{1}
+\left(i\omega _{n}
+m\right)CY_{1} }
\\ {i\left(-\left(\partial _{r}
+\frac{1}{r} \right)B+\frac{\lambda _{-} }{r} A\right)Y_{2}
+\left(i\omega _{n}
+m\right)
BY_{2} }
\end{array}\right)
\frac{e^{-i\omega _{n} \tau } }{\sqrt{\beta } }
\, .
\end{equation}
We use the above harmonics to expand the spatial part of the fermion field,
leaving the Euclidean time as before,
\begin{equation}
\begin{array}{l}
{ \tilde{ \psi }
=
\displaystyle\sum _{j,m_{j} ,n}
\frac{e^{-i\omega _{n} \tau} }{\sqrt{\beta } }
\left(\begin{array}{c}
{A_{nj} (r)Y_{1jm_{j} } }
\\
{B_{nj} (r)Y_{2jm_{j} } }
\\
{C_{nj} (r)Y_{1jm_{j} } }
\\
{D_{nj} (r)Y_{2jm_{j} } }
\end{array}
\right)
}
\\
{
\tilde{\bar{\psi }}
=
\displaystyle\sum _{j,m_{j} ,n}
\frac{e^{ i\omega _{n} \tau}}{\sqrt{\beta } }
\Biggl(
\begin{array}{cccc}
{A_{nj}^{*} (r)Y_{1jm_{j} }^{*} }
\; \;
&
{B_{nj}^{*} (r)Y_{2jm_{j} }^{*} }
\; \;
&
{-C_{nj}^{*} (r)Y_{1jm_{j} }^{*} }
\; \;
&
{-D_{nj}^{*} (r)Y_{2jm_{j} }^{*} }
\end{array}
\Biggr)
\, .
}
\end{array}
\label{eq:psi_bar-psi_expansions}
\end{equation}
Using
\begin{equation}
\label{1.24)}
\begin{array}{l}
{\iint Y_{1jm_{j} }^{*} Y_{1j'm'_{j} }
\sin \theta d\theta d\phi =\delta _{jj'} \delta _{m_{j} m'_{j} }
}
\\
{
\iint Y_{2jm_{j} }^{*} Y_{2j'm'_{j} }
\sin \theta d\theta d\phi
=
\delta _{jj'} \delta _{m_{j} m'_{j} }
\; ,
}
\end{array}
\end{equation}
and
\begin{equation}
\frac{1}{\beta}
\int_{0}^{\beta} d \tau
\, e^{-i(\omega_{n} - \omega_{n'}) \tau } = \delta_{nn'}
\, ,
\end{equation}
the Euclideanized action becomes
\begin{eqnarray}
\label{1.25)}
S_{E}
=
& - &
i
\displaystyle\sum _{n,j,m_{j} }
\int dr r^{2}
\biggl\{
\omega _{n}
\biggr.
\left(
A_{nj}^{*} A_{nj}
+B_{nj}^{*} B_{nj}
+C_{nj}^{*} C_{nj}
+D_{nj}^{*} D_{nj}
\right)
\nonumber
\\
& + &
\left[
A_{nj}^{*} \left(\partial _{r}
+\frac{1}{r} \right)C_{nj} -B_{nj}^{*} \left(\partial _{r}
+\frac{1}{r} \right)D_{nj}
\right.
\nonumber
\\
& + &
\left.
C_{nj}^{*}
\left(\partial _{r} +\frac{1}{r} \right)A_{nj} -D_{nj}^{*}
\left(\partial _{r}
+\frac{1}{r}
\right)
B_{nj}
\right]
\nonumber
\\
& + &
\mbox{\Large $\frac{ \left(j+ 1 / 2 \right) }{r }$}
\left(A_{nj}^{*} D_{nj} -B_{nj}^{*} C_{nj} +C_{nj}^{*} B_{nj}
-
D_{nj}^{*} A_{nj} \right)
\nonumber
\\
\biggl.
& + &
{
im
\left(A_{nj}^{*} A_{nj} +B_{nj}^{*} B_{nj} -
C_{nj}^{*} C_{nj} -D_{nj}^{*} D_{nj}
\right)}
\biggr\}
\, .
\end{eqnarray}
This expression can be written as
\begin{equation}
S_{E}
=
\displaystyle\sum _{n,j,m_{j} }\int drr^{2}
\bar{\psi }_{njm_{j} } \Omega _{njm_{j} } \psi _{njm_{j} }
\, ,
\label{eq:action_spherical}
\end{equation}
where (omitting the tilde notation below)
\begin{equation}
\begin{array}{l}
{
{\psi} _{njm_{j} }
=
\left(\begin{array}{c}
{A_{nj} }
\\
{B_{nj} }
\\
{C_{nj} }
\\
{D_{nj} }
\end{array}\right)}
\\
{
{\bar{\psi }}_{njm_{j} }
=
\Biggl(
\begin{array}{cccc}
{A_{nj}^{*} }
\; \; \;
&
{B_{nj}^{*} }
\; \; \;
&
{-C_{nj}^{*} }
\; \; \;
&
{-D_{nj}^{*} }
\end{array}
\Biggr)
}
\\
{\Omega _{njm_{j} }
=
-
i
\left[
\gamma^{0} \omega _{n}
+\gamma^{3}
\left(\partial _{r}
+
\mbox{\Large $\frac{ 1 }{r }$}
\right)
+
i\gamma^{2}
\mbox{
\Large $
\frac{
\left(
j+
1/2
\right)
}{r}
$}
+
im
\, {\mathbb {1}}_{4}
\right]
\, .
}
\end{array}
\label{eq:Z_spherical1}
\end{equation}
In Eq.~(\ref{eq:Z_spherical1}),
the standard (Minkowskian) Dirac matrices $\gamma^{\mu}$ are restored for convenience, and will be used
for the remainder of this paper.
Thus, the partition functional integral~(\ref{eq:Z_TFT}), from Eqs.~(\ref{fermionic-det_Grassmann}),
(\ref{eq:action_spherical}) and
(\ref{eq:Z_spherical1}),
becomes
(up to an irrelevant constant)
\begin{equation}
Z
=
\displaystyle\prod_{j,n}
\det\,^{2j+1}
\left[
-i\gamma^{0} \omega_{n}
-i\gamma^{3} \left(\partial_{r}
+\frac{1}{r} \right)+\gamma^{2}
\frac{\left(j+1/2 \right)}{r}
+m
\, {\mathbb {1}}_{4}
\right]
\, ,
\label{eq:Z_spherical2}
\end{equation}
where the multiplicity $(2j+1)$ (arising from $|m_{j}| \leq j$)
is explicitly displayed.
To evaluate the partition function~(\ref{eq:Z_spherical2}),
we again use the properties of the Dirac matrices and the identity
\begin{equation}
\det \left(D\right)=\sqrt{\det \left(D\right)\det \left(D^{\dag } \right)}
\end{equation}
(up to a phase factor)
for each spherical component of the Dirac operator.
As a result,
\begin{equation}
Z=\displaystyle\prod_{j,n}
\det\,^{ j+1/2 }
\left[
\left(
m^{2}
+
\omega _{n}^{2}
+ p_{r}^{2}
+\frac{ \left(j+1/2 \right)^{2} }{r^{2} }
\right)
\, {\mathbb {1}}_{4}
+ \frac{ \left(j+ 1/2 \right) }{r^{2} }
\begin{pmatrix}
\sigma _{1}
&
0
\\
0 & \sigma _{1}
\end{pmatrix}
\right]
\; ,
\label{eq:Z_spherical3}
\end{equation}
where
\begin{equation}
p_{r}
=
-i \left(\partial_{r}
+
\frac{ 1 }{r }
\right)
\;
\label{eq:radial-momentum}
\end{equation}
is the self-adjoint radial momentum operator, with
\begin{equation}
p_{r}^{2}
=
-
\left( \partial^{2}_{r} + \frac{2}{r} \partial_{r} \right)
=
-\frac{1}{r^{2} } \frac{d}{dr} \left(r^{2} \frac{d}{dr} \right)
\; .
\end{equation}
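Note that $p_{r} = -i\left(\partial_{r} + 1/r\right) = -i \, r^{-1} \partial_{r}\, r$, which, as can be checked by integration by parts (with vanishing boundary terms), is symmetric with respect to the radial measure $r^{2} dr$; this is the sense in which it is the self-adjoint radial momentum.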
Notice that the determinant symbol
``det'' in Eqs.~(\ref{eq:Z_spherical2}) and (\ref{eq:Z_spherical3})
involves
a direct product of
the radial functional dependence (on $r$) and the Dirac matrix structure.
Now, the operator in Eq.~(\ref{eq:Z_spherical3})
can be rewritten as
\begin{equation}
\Lambda_{4}^{\left(j,r\right)}
=\left[\begin{array}{cc}
\Omega_{\left(r,j\right)}
\, {\mathbb {1}}_{2}
+ W(r) \,
\sigma_{1} & {0}
\\ {0}
& \Omega _{\left(r,j\right)}
\, {\mathbb {1}}_{2}
+
W(r) \,
\sigma_{1}
\end{array}\right]
\; ,
\label{eq:Omega-Lambda_matrix-form1}
\end{equation}
where
\begin{equation}
\Omega_{\left(r,j\right)}
=\omega_{n}^{2} -\frac{1}{r^{2} }
\frac{d}{dr} \left(r^{2} \frac{d}{dr} \right)
+\frac{\left(j+1/2\right)^{2} }{r^{2} }
+m^{2}
\,
\label{eq:Omega_diagonal-part}
\end{equation}
and
$W(r) = (j +1/2)/r^2$.
Equation~(\ref{eq:Omega-Lambda_matrix-form1})
has a block form
\begin{equation}
\Lambda_{4}^{\left(j,r\right)}
=
\left[\begin{array}{cc}
{\Lambda_{2}^{\left(j,r\right)} }
&
{0}
\\
{0}
&
{\Lambda _{2}^{\left(j,r\right)} }
\end{array}\right]
\; ,
\label{eq:Omega-Lambda_matrix-form2}
\end{equation}
with the $2 \times 2$ matrix
\begin{equation}
{\Lambda_{2}^{\left(j,r\right)} }
=
\Omega_{\left(r,j\right)}
\,
{\mathbb {1}}_{2}
+
W(r) \,
\sigma_{1}
=
{\left(
\begin{array}{cc}
\Omega_{\left(r,j\right)}
\; & \;
W (r)
\\
W (r)
\; & \;
{\Omega _{\left(r,j\right)} }
\end{array}
\right)}
\; ,
\end{equation}
which can be diagonalized with the
coordinate-independent
unitary transformation
\begin{eqnarray}
V^{\dagger}
\,
{\Lambda_{2}^{\left(j,r\right)} }
\,
V
& = &
\check{\Lambda}_{2}^{\left(j,r\right)}
\\
& = &
\left(\begin{array}{cc}
{\Omega _{-\left(r,j\right)} } & {0}
\\
{0} & {\Omega_{+\left(r,j\right)} }
\end{array}
\right)
\, ,
\end{eqnarray}
where
\begin{equation}
{\Omega _{\pm \left(r,j\right)} }
=
\Omega _{\left(r,j\right)}
\pm
\frac{ ( j+1/2) }{r^{2} }
\; ,
\end{equation}
and
\begin{equation}
V
=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1
\\
-1 & 1
\end{pmatrix}
\; .
\end{equation}
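Indeed, the columns of $V$ are the eigenvectors $(1,\mp 1)^{T}/\sqrt{2}$ of $\sigma_{1}$ with eigenvalues $\mp 1$, so that $V^{\dagger} \sigma_{1} V = {\rm diag}(-1,+1)$; and since $V$ is coordinate-independent, it commutes with $\Omega_{\left(r,j\right)}\, {\mathbb {1}}_{2}$ and simply shifts the diagonal entries by $\mp W(r)$.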
Therefore,
\begin{equation}
\begin{array}{rcl}
\det \Lambda _{4}^{\left(j,r\right)}
& {=} &
\det ^{2}
\left(
\Lambda _{2}^{\left(j,r\right)}
\right)
=
\det ^{2}
\left(
\check{\Lambda} _{2}^{\left(j,r\right)}
\right)
=
\det ^{2}
\left(\begin{array}{cc}
{\Omega _{-\left(r,j\right)} } & {0}
\\
{0} & {\Omega_{+\left(r,j\right)} }
\end{array}\right)
\, .
\end{array}
\label{eq:determinant-Lambda}
\end{equation}
Now we are ready to evaluate the partition functional integral~(\ref{eq:Z_spherical3}), including the
angular momentum degeneracy.
As $j$ takes only half-integer values
$(j = 1/2, 3/2, 5/2, \ldots)$,
let $l=j+1/2 =1,2,3, \ldots$.
With this notation,
calling $\Omega _{\pm } \equiv \Omega_{\left(l,l-1\right)} $, i.e., $\Omega _{+} \equiv \Omega_{l}$
and $\Omega _{-} \equiv \Omega_{l-1}$, we have
\begin{equation}
\Omega _{\pm } \equiv \Omega_{\left(l,l-1\right)}
=\omega _{n}^{2} -\frac{1}{r^{2} }
\frac{d}{dr} \left(r^{2} \frac{d}{dr} \right)+\frac{l\left(l\pm 1\right)}{r^{2} } + {m}^{2} .
\end{equation}
We can then write Eq.~(\ref{eq:Z_spherical3})
as
\begin{equation}
\displaystyle\prod _{n,l}\det \, \!^{2l} \Lambda _{2}^{\left(l,r\right)}
=\displaystyle\prod _{n,l=1}\left(\det \Omega _{l} \right)^{2l}
\displaystyle\prod _{n,l=1}\left(\det \Omega _{l-1} \right)^{2l} .
\label{eq:Lambda-l_to_Omega-l}
\end{equation}
In addition,
shifting the index and singling out one power of $\Omega_{l}$,
the following two relations ensue:
\begin{equation}
\displaystyle\prod _{l=1}\left(\det \Omega _{l-1} \right)^{2l}
=\displaystyle\prod _{l=0}\left(\det \Omega _{l} \right)^{2l+2}
=\displaystyle\prod _{l=0}\left(\det \Omega _{l} \right)^{2l+1}
\displaystyle\prod _{l=0}\det \Omega _{l} \,,
\label{eq:Omega-l_aux1}
\end{equation}
\begin{equation}
\displaystyle\prod _{l=1}\left(\det \Omega _{l} \right)^{2l}
=\displaystyle\prod _{l=0}\left(\det \Omega _{l} \right)^{2l} .
\label{eq:Omega-l_aux2}
\end{equation}
Therefore, from Eqs.~(\ref{eq:Lambda-l_to_Omega-l}) through (\ref{eq:Omega-l_aux2}),
the final expression for the spherical-coordinate representation of the
partition function is obtained,
\begin{equation}
Z=\left(\displaystyle\prod _{n,l=0}\left(\det \Omega _{l} \right)^{2l+1} \right)^{2} .
\label{eq:Z_ang-momentum}
\end{equation}
Now, from the standard resolution of the Laplacian operator in spherical coordinates,
$\displaystyle\prod _{n,l=0}\left(\det \Omega _{l} \right) ^{2l+1} $
is the spherical-coordinate factorization (up to a constant) of
$\det \left( \square _{E} + m^{2} \right)$,
where
$\square _{E} =- \left( \partial _{\tau }^{2} + \boldsymbol{\nabla }^{2} \right)$.
Therefore,
Eq.~\eqref{eq:Z_ang-momentum}
then becomes,
up to a constant, equal to the expression of Eq.~\eqref{eq:Z_momentum},
which was to be demonstrated.
\section{Conclusions}
We have shown,
via an explicit construction,
the equivalence of the calculation
for the partition function of a free gas of fermions
in flat spacetime in Cartesian and in spherical coordinates.
The latter involved an expansion of the fermion fields in ``generalized harmonics,'' as computed
in this paper.
While the result is not surprising, our treatment of the path integral in spherical coordinates
highlights novel features and subtleties.
It is technically remarkable, for instance, to see the emergence of the zero mode ($l=0$)
for the scalar case in the process of ``squaring'' the fermionic determinant.
Beyond its own merits, this flat-spacetime calculation lends strong support to the correctness
of the corresponding calculation in the case of a black hole background
performed recently by the authors.
In that case, Eq.~(\ref{eq:Z_spherical3}),
after following the same procedure, is to be replaced by
a more complicated expression with additional terms and factors governed by the scale factor
$ f(r)
= 1 -2M/r$
of the Schwarzschild metric
$ ds^{2}
=
f\left(r\right)\left( dx^{0} \right)^{2}
-\left[ f\left(r\right) \right]^{-1}
dr^{2}
-r^{2} \left(d\theta ^{2}
+\sin ^{2} \theta \, d\phi ^{2} \right)
$
in $D=4$ spacetime dimensions
[where the
Riemannian spacetime geometry is described by a metric with signature $\left(+,-,-,-\right)$].
As our computation shows, the determinant in Eq.~(\ref{eq:Z_spherical3})
has a tight structure that allows for a full diagonalization into products of determinants
over subspaces of the original one.
The NH expansion method is crucial to systematically
isolate the leading contribution to the divergent part
of the free energy---hence of the entropy---of the fermionic thermal atmosphere
surrounding the black hole, which leads to the Bekenstein-Hawking law within the framework
of `t Hooft's brick-wall approach.
Therefore,
while the black-hole calculation is consistent
with the expected fundamental result of black hole thermodynamics,
it is important to establish
the full validity of the technical aspects of the NH expansion,
and this paper lends further credibility to our approach.
\section{Introduction}
\label{sec:intro}
The search for $R$-parity conserving supersymmetric particles at colliding beam
experiments is plagued by the necessity to {\it pair produce} sparticles,
and by the fact that the sparticle cascade decay terminates in the
lightest SUSY particle (LSP), usually assumed to comprise at least a portion of the
missing dark matter in the universe. The first of these thus requires enough
energy to produce two rather than just one sparticle, while the second of
these means that the sparticle invariant mass can't be directly
reconstructed as a resonance.
An alternative path to SUSY discovery at collider experiments is to search
for the $R$-parity even neutral heavy Higgs bosons, the heavy scalar $H$ and
the pseudoscalar $A$. These particles can be produced singly as $s$-channel
resonances and have the advantage in that their invariant mass can,
in principle, be directly reconstructed
(as was the case in discovery of the light scalar $h$).
In this paper, we examine production and decay of the heavy neutral
scalar Higgs bosons of the MSSM in the most lucrative discovery channel
$pp\rightarrow H,\ A\rightarrow\tau\bar{\tau}$. In previous phenomenological work\cite{Carena:2013qia,Djouadi:2013uqa,Djouadi:2015jea,Bagnaschi:2018ofa,Bahl:2020kwe},
new scenarios were proposed for the $m_A$ vs. $\tan\beta$ discovery plane
which ensured that $m_h\simeq 125$ GeV while also respecting LHC
sparticle search limits, usually by assuming supersymmetry
breaking in the multi-TeV regime. These constraints can in principle
affect the regions of the heavy Higgs search planes which can be probed by
current and forthcoming hadron colliders.
In the present work, we add to these constraints the condition that the
magnitude of the weak scale also be {\it natural}.
This is because natural SUSY models are in a sense more plausible than
unnatural models\cite{Baer:2022dfc}.
For our naturalness criterion, we adopt the notion of practical naturalness\cite{Baer:2015rja}:
\begin{quotation}
An observable ${\cal O}=o_1 +\cdots +o_n$ is natural if all {\it independent} contributions to ${\cal O}$ are comparable to or less than ${\cal O}$.
\end{quotation}
Here, we adopt the measured value of the $Z$-boson mass as representative of
the magnitude of weak scale, where in the Minimal Supersymmetric Standard Model
(MSSM)\cite{Baer:2006rs}, the $Z$ mass is related to Lagrangian parameters
via the electroweak minimization condition
\be
m_Z^2/2 =\frac{m_{H_d}^2+\Sigma_d^d-(m_{H_u}^2+\Sigma_u^u )\tan^2\beta}{\tan^2\beta -1}-\mu^2
\label{eq:mzs}
\ee
where $m_{H_u}^2$ and $m_{H_d}^2$ are the Higgs soft breaking masses,
$\mu$ is the (SUSY preserving) superpotential $\mu$ parameter and
the $\Sigma_d^d$ and $\Sigma_u^u$ terms contain a large assortment of
loop corrections (see Appendices of Ref's \cite{Baer:2012cf} and \cite{Baer:2021tta}
and also \cite{Dedes:2002dy} for leading two-loop corrections).
For natural SUSY models, the naturalness measure\cite{Baer:2012up}
\be
\Delta_{EW}\equiv |maximal\ term\ on\ RHS\ of\ Eq.~\ref{eq:mzs}|/(m_Z^2/2)
\ee
is adopted here where a value
\be
\Delta_{EW}\lesssim 30
\label{eq:dew30}
\ee
fulfills the {\it comparable} condition of practical naturalness.
For most SUSY benchmark models, the superpotential $\mu$ parameter is tuned
to cancel against large contributions to the weak scale from SUSY breaking.
Since the $\mu$ parameter typically arises from very different physics
than SUSY breaking, {\it e.g.} from whatever solution to the SUSY
$\mu$ problem that is assumed,\footnote{Twenty solutions to the SUSY
$\mu$ problem are recently reviewed in Ref. \cite{Bae:2019dgg}.}
then such a ``just-so'' cancellation
seems highly implausible\cite{Baer:2022dfc}
(though not impossible) compared to the
case where all contributions to the weak scale are $\sim m_{weak}$,
so that $\mu$ (or any other parameter) need not be tuned.
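As an illustrative sketch of how $\Delta_{EW}$ is evaluated in practice (the function below is our own schematic construction rather than the Isajet implementation; the many individual loop contributions are lumped into single effective numbers $\Sigma_u^u$ and $\Sigma_d^d$, and all inputs are weak-scale quantities in GeV or GeV$^2$):
\begin{verbatim}
def delta_EW(mHu2, mHd2, mu, tanb, Sigma_uu=0.0, Sigma_dd=0.0, mZ=91.19):
    """Largest |term| on the RHS of the m_Z minimization condition,
    normalized to m_Z^2/2.  mHu2, mHd2, Sigma_* in GeV^2; mu in GeV."""
    tb2 = tanb**2
    terms = [
        mHd2/(tb2 - 1.0),            # m_Hd^2 contribution
        Sigma_dd/(tb2 - 1.0),        # Sigma_d^d contribution
        -mHu2*tb2/(tb2 - 1.0),       # m_Hu^2 contribution
        -Sigma_uu*tb2/(tb2 - 1.0),   # Sigma_u^u contribution
        -mu**2,                      # superpotential mu term
    ]
    return max(abs(t) for t in terms)/(mZ**2/2.0)

# For mu = 250 GeV the mu-term alone contributes 250**2/(91.19**2/2) ~ 15,
# comfortably below Delta_EW <~ 30; the benchmark value Delta_EW = 22
# quoted later in the paper is set by one of the other terms.
\end{verbatim}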
There are several important implications of Eq. \ref{eq:dew30} for
heavy neutral SUSY Higgs searches.
\begin{itemize}
\item The superpotential $\mu$ parameter enters $\Delta_{EW}$ directly,
leading to $|\mu |\lesssim 350$ GeV.
This implies that for heavy Higgs searches with $m_{A,H}\gtrsim 2|\mu |$, then
SUSY decay modes of $H,\ A$ should typically be open. If these additional
decay widths to SUSY particles are large, then the branching fraction to the
$\tau\bar{\tau}$ discovery mode can be substantially reduced.
\item For $m_{H_d}\gg m_{H_u}$, then $m_{H_d}$ sets the heavy Higgs mass scale
($m_{A,H}\sim m_{H_d}$) while $m_{H_u}$ sets the mass scale for $m_{W,Z,h}$.
Then naturalness requires\cite{Bae:2014fsa}
\be
m_{A,H}\lesssim m_Z\tan\beta\sqrt{\Delta_{EW}}.
\ee
For $\tan\beta\sim 10$ with $\Delta_{EW}\lesssim 30$, $m_A$ can range up
to $\sim 5$ TeV. For $\tan\beta\sim 40$, $m_A$ stays natural up to
$\sim 20$ TeV (although for large $\tan\beta\gtrsim 20$, bottom squark
contributions to $\Sigma_u^u$ become large and typically provide much
stronger limits on natural SUSY spectra). A quick numerical check of
these estimates is given just after this list.
\end{itemize}
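Evaluating the bound above directly, $m_Z\tan\beta\sqrt{\Delta_{EW}}\simeq (91.2~{\rm GeV})\times 10\times\sqrt{30}\simeq 5$ TeV for $\tan\beta =10$ and $\Delta_{EW}=30$, rising to $\simeq 20$ TeV for $\tan\beta =40$; these are the numbers quoted in the second bullet above.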
Since most $H,A\rightarrow\tau\bar{\tau}$ searches and projected reach limits
take place assuming a decoupled SUSY spectra, then such results can
overestimate the collider heavy Higgs reach since in general the presence
of $H,A\rightarrow SUSY$ decay modes will diminish the $H,\ A\rightarrow\tau\bar{\tau}$
branching fraction.
Using naturalness, in Sec. \ref{sec:natplane} we propose a new natural SUSY
benchmark scenario $m_h^{125}({\rm nat})$ which is also consistent with
expectations from the string landscape\cite{Baer:2020kwz}.
In Sec. \ref{sec:prodBF}, we discuss production and decay of heavy neutral
Higgs bosons in the $m_h^{125}({\rm nat})$ scenario.
In Sec. \ref{sec:mT} we discuss signal event generation and SM backgrounds
for the case of back-to-back (BtB) $\tau$s in the transverse plane
using the total transverse mass variable $m_T^{tot}$.
In Sec. \ref{sec:mtautau}, we discuss signal and background for
the acollinear tau pairs using the $m_{\tau\tau}$ variable.
Including this signal channel can lead to a substantial increase in
signal significance and so combined with the BtB $\tau$s can give an
increased collider reach in the $m_A$ vs. $\tan\beta$ search plane.
In Sec. \ref{sec:LHCreach}, we present our reach of present LHC
with 139 fb$^{-1}$ and also the projected reach of LHC Run3 and
HL-LHC.
Our conclusions reside in Sec. \ref{sec:conclude}.
\section{The natural SUSY Higgs search plane}
\label{sec:natplane}
The mass of the light SUSY Higgs boson is given approximately by\cite{Slavich:2020zjv}
\be
m_h^2\simeq m_Z^2\cos^22\beta +\frac{3g^2}{8\pi^2}\frac{m_t^4}{m_W^2}
\left[\ln\frac{m_{\tilde t}^2}{m_t^2}+\frac{x_t^2}{m_{\tilde t}^2}\left(1-\frac{x_t^2}{12m_{\tilde t}^2}\right)\right]
\ee
where $x_t=A_t-\mu\cot\beta$ and $m_{\tilde t}^2\simeq m_{Q_3}m_{U_3}$ defines the mean top
squark mass. For a given value of $m_{\tilde t}^2$, $m_h^2$ is maximal for
$x_t^{max}=\pm\sqrt{6}m_{\tilde t}$.
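The maximal-mixing value quoted here follows from maximizing the stop-mixing term in the bracket: writing $u\equiv x_t^2/m_{\tilde t}^2$, the function $u(1-u/12)$ is maximized at $u=6$, i.e. $x_t=\pm\sqrt{6}\,m_{\tilde t}$. A minimal numerical sketch of the approximate formula is given below; it is intended only to exhibit the parametric dependence on $m_{\tilde t}$, $x_t$ and $\tan\beta$, and is no substitute for the RG-improved spectrum calculations used later in this paper (the default values of $g$, $m_W$ and the running $m_t$ are illustrative).
\begin{verbatim}
import math

def mh_oneloop(mstop, xt, tanb, mt=163.0, mZ=91.19, mW=80.38, g=0.65):
    """Light Higgs mass (GeV) from the approximate one-loop formula above.
    mt is used as a running top-quark mass here (an assumption of this sketch)."""
    tree = mZ**2 * math.cos(2.0*math.atan(tanb))**2
    u = xt**2/mstop**2
    loop = (3.0*g**2/(8.0*math.pi**2))*(mt**4/mW**2)*(
        math.log(mstop**2/mt**2) + u*(1.0 - u/12.0))
    return math.sqrt(tree + loop)

# Example (maximal mixing): mh_oneloop(2000.0, math.sqrt(6.0)*2000.0, 10.0)
\end{verbatim}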
\subsection{Some previous SUSY Higgs benchmark studies}
In Ref. \cite{Carena:2013qia}, a variety of SUSY Higgs search benchmark
points were proposed, including 1. the $m_h^{max}$ scenario where a value of
$x_t^{max}$ was chosen along with $m_{\tilde g}=1500$ GeV and $m_{SUSY}\equiv m_{\tilde t}=1$ TeV with $\mu=M_2=0.2$ TeV as a conservative choice which maximized the
parameter space of the $m_A$ vs. $\tan\beta$ plane available for new
SUSY Higgs boson searches. Similarly, $m_h^{mod+}$ and $m_h^{mod-}$ scenarios
were proposed with similar parameters except for more moderate
$x_t=1.6 m_{SUSY}$ and $x_t=-2.2 m_{SUSY}$ values. Light stop, light stau,
$\tau$-phobic and low $m_H$ scenarios were proposed as well. Over time,
all these benchmark models have become LHC-excluded since (at least)
they all proposed $m_{\tilde g}\sim 1500$ GeV while after LHC Run 2 the ATLAS/CMS
Collaborations require $m_{\tilde g}\gtrsim 2.2$ TeV\cite{ATLAS:2020syg,CMS:2019zmd}.
In Ref. \cite{Bagnaschi:2018ofa}, an $m_h^{125}$ benchmark model was proposed
with $m_{SUSY}\sim 1.5$ TeV, $\mu =1$ TeV and $m_{\tilde g}=2.5$ TeV in accord
with LHC Run 2 gluino mass constraints.
The $x_t=2.8$ TeV value was chosen to nearly maximize the value of $m_h$
given the other parameters of the model. This model has almost all
$H,\ A\rightarrow SUSY$ decay modes kinematically closed due to the heavy SUSY spectra
so it closely resembles the type-II two-Higgs doublet model (2HDM) phenomenology\cite{Branco:2011iw}.
An $m_h^{125}(\tilde{\tau} )$ scenario
(exemplifying bino-stau coannihilation) was selected with $\mu=1$ TeV
along with a $m_h^{125}(\tilde\chi )$ scenario with $\mu =180$ GeV, $M_1=160$ GeV
and $M_2=180$ GeV so that $H,\ A$ decay to many electroweakino states is allowed. Also, an $m_h^{125}(align)$ model with specific {\it alignment without decoupling}\cite{Gunion:2002zf,Carena:2013ooa} parameters with $\mu =7.5$ TeV was chosen along with a
$m_H^{125}$ scenario where the heavy Higgs scalar was actually the 125 GeV Higgs boson. These scenarios would be hard pressed to explain why
$m_{weak}\sim 100$ GeV due to the tuning needed for such large
$\mu$ parameters. The exception is the $m_h^{125}(\tilde\chi )$ scenario,
although here the peculiar gaugino/higgsino mass choices seem at odds with
most theory expectations\footnote{Gaugino mass unification is usually expected
in models based on grand unification, but is also expected by the simple
form of the supergravity (SUGRA) gauge kinetic function which depends typically on only a single hidden sector field in many string-inspired constructs.}.
A somewhat different approach is taken in the model labelled $hMSSM$\cite{Djouadi:2013uqa,Djouadi:2015jea,Arcadi:2022hve}.
In the hMSSM, by adopting a high $m_{SUSY}$ scale and by neglecting
some small radiative corrections to the
Higgs mass matrix, then one may use $m_h$ (along with $m_A$ and $\tan\beta$)
as an input parameter with Higgs mixing angle $\alpha$, $m_H$
and $m_{H^{\pm}}$ as outputs.
This ensures that $m_h=125$ GeV is enforced throughout the remaining
Higgs search parameter space. The adoption of a high value $m_{SUSY}\gtrsim 1$ TeV
then makes this model look like the 2HDM, and sparticle mass spectra are
effectively neglected. By combining $H,\ A\rightarrow\tau\bar{\tau}$ with
$H,\ A\rightarrow t\bar{t}$ at lower $\tan\beta$, then it is claimed almost the entire
$m_A$ vs. $\tan\beta$ parameter space can be probed by HL-LHC for
$m_A\lesssim 1$ TeV\cite{Djouadi:2015jea}.
\subsection{Status of Run 2 LHC searches}
The ATLAS Collaboration has reported on a search for
$H,\ A\rightarrow\tau\bar{\tau}$ at CERN LHC Run 2 using 139 fb$^{-1}$ of
integrated luminosity at $\sqrt{s}=13$ TeV\cite{ATLAS:2020zms}.
The study focusses on
back-to-back $\tau\bar{\tau}$ states where transverse opening angles
$\Delta\phi (\tau_{had}\tau_{had})>155^\circ$ and $\Delta\phi (\tau_{lep}\tau_{had})>135^\circ$ are required. Mixed leptonic-hadronic
($\tau_{lep}\tau_{had}$) and hadronic-hadronic ($\tau_{had}\tau_{had}$) final
states are combined.
The hadronic tau-tagging efficiency for one- or three-charged-prong
$\tau$-jets varies from 60\% to 85\%.
The total transverse mass\cite{Barger:1984sm}
\be
m_T^{tot}=\sqrt{(p_T^{\tau_1}+p_T^{\tau_2}+E_T^{miss})^2-(\vec{p}_T^{\tau_1}+\vec{p}_T^{\tau_2}+\vec{E}_T^{miss})^2}
\ee
is measured and a fit to expected signal plus background is made to determine
the presence of a signal. For the signal, the $m_T^{tot}$ distribution is
bounded from above by $m_T^{tot}< m_{H,\ A}$ and near this upper bound is where
the signal-to-background significance is greatest. In this region,
the dominant background comes from Drell-Yan $\gamma^*,\ Z\rightarrow\tau\bar{\tau}$
production.
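As a concrete illustration, a minimal sketch of this variable in terms of the visible ditau transverse momenta and the missing transverse momentum is given below (momenta in GeV, azimuthal angles in radians; the function name and interface are our own and purely illustrative).
\begin{verbatim}
import math

def mT_tot(pt1, phi1, pt2, phi2, met, met_phi):
    """Total transverse mass of the (tau1, tau2, MET) system, as defined above."""
    scalar_sum = pt1 + pt2 + met
    px = pt1*math.cos(phi1) + pt2*math.cos(phi2) + met*math.cos(met_phi)
    py = pt1*math.sin(phi1) + pt2*math.sin(phi2) + met*math.sin(met_phi)
    return math.sqrt(max(scalar_sum**2 - px**2 - py**2, 0.0))
\end{verbatim}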
The signal sample is further divided by either the presence or absence of
a tagged $b$-jet but the signal significance is dominated by the $b$-jet vetoed
events. No signal is found, so the 95\% CL
exclusion limits are plotted in the $m_A$ vs. $\tan\beta$ plane in the
Bagnaschi {\it et al.} $m_h^{125}$ scenario\cite{Bagnaschi:2018ofa}.
They find that for $\tan\beta\sim 10$, then $m_A\lesssim 1.1$ TeV is already
excluded while for $\tan\beta\sim 50$, then $m_A\lesssim 2$ TeV is excluded.
The CMS collaboration has presented results of $H,\ A\rightarrow\tau\bar{\tau}$
searches using 35.9 fb$^{-1}$ of integrated luminosity\cite{CMS:2018rmh}.
The 95\% CL exclusion limits are plotted in the $m_A$ vs. $\tan\beta$
plane for the $m_h^{mod+}$ and hMSSM scenarios.
Further CMS analyses using the full Run 2 data set should be forthcoming.
\subsection{Some previous LHC upgrade SUSY Higgs reach studies}
In Ref. \cite{Cepeda:2019klc},
the ATLAS and CMS collaborations
presented expected reach plots for $H,\ A\rightarrow\tau\bar{\tau}$ for HL-LHC
with either 3 or 6 ab$^{-1}$ of integrated luminosity and $\sqrt{s}=14$ TeV.
The results were a direct extrapolation of their previous search results
from LHC Run 2. ATLAS with 3 ab$^{-1}$ expects to explore $m_A\lesssim 1500$ GeV
for $\tan\beta =10$ in the hMSSM scenario and up to $m_A\lesssim 1$ TeV in the
$m_h^{mod+}$ scenario. The plot upper limit of $m_A<2250$ GeV precludes
any limits for $\tan\beta \gtrsim 40$.
With 3 ab$^{-1}$, the CMS collaboration expects to explore at 95\% CL up to
$m_A<750$ GeV in the $m_h^{mod+}$ scenario and up to $m_A\lesssim 1400$ GeV in the
hMSSM scenario, both for $\tan\beta =10$.
The HL-LHC and ILC sensitivity for heavy SUSY Higgs bosons was also
estimated by Bahl {\it et al.}\cite{Bahl:2020kwe}. Their 95\% CL
exclusion using a combined ATLAS/CMS sensitivity (6 ab$^{-1}$) is
to explore up to $m_A\lesssim 1500$ GeV for $\tan\beta =10$ in the $m_h^{125}$
scenario (heavy SUSY) and to $m_A\lesssim 1$ TeV in the $m_h^{125} (\tilde\chi )$
scenario (light EWinos).
They also explore some $m_{h,EFT}^{125}$ scenarios\cite{Bahl:2019ago}
at low $\tan\beta\sim 1-10$ which we will not consider.
\subsection{The $m_h^{125}(nat)$ Higgs search benchmark}
In this Subsection, we introduce a more plausible SUSY Higgs search
benchmark model in that all its contributions to the weak scale
are comparable to or less than the weak scale by a conservative factor
of $\sim 4$. This would be the class of natural SUSY models characterized by
$\Delta_{EW}\lesssim 30$\cite{Baer:2012up}. These natural SUSY models can be found in several
different guises:
\begin{enumerate}
\item The 2,3,4-extra parameter non-universal Higgs models NUHM2,3,4
which characterize what might be expected from dominant gravity-mediated
SUSY breaking\cite{Baer:2012cf},
\item natural anomaly-mediated SUSY breaking\cite{Baer:2018hwa} (nAMSB) wherein non-universal bulk soft terms
allow for naturalness while maintaining $m_h\simeq 125$ GeV and
\item natural generalized
mirage-mediation (nGMM) models\cite{Baer:2016hfa} wherein soft terms are characterized by comparable
anomaly- and gravity/moduli-mediated contributions.
The nGMM model is expected to emerge\cite{Choi:2005ge} from KKLT moduli stabilization\cite{Kachru:2003aw} and the string landscape\cite{Broeckel:2020fdz}.
\end{enumerate}
For our benchmark models, it is perhaps easiest to settle on the more familiar
gravity-mediated two-extra-parameter non-universal Higgs model
NUHM2\cite{Ellis:2002wv,Baer:2005bu} which is characterized by the parameter space
\be
m_0,\ m_{1/2},\ A_0,\ \tan\beta,\ m_{H_u},\ m_{H_d}
\ee
where $m_0$ denotes the GUT scale matter scalar soft terms,
$m_{1/2}$ are the unified gaugino masses, $A_0$ are common trilinear soft terms
and $\tan\beta\equiv v_u/v_d$ is the usual ratio of Higgs field vevs.
It is reasonable to have $m_{H_u}\ne m_{H_d}\ne m_0$ in gravity-mediation
since the scalar mass soft terms in supergravity do not respect universality.
However, a remnant $SO(10)$ local GUT symmetry may enforce the matter scalars of
each generation to have a common mass $m_0(i)$, where $i=1-3$ is a
generation index.\footnote{
In the landscape context, the first two generations are pulled to common
upper bounds which yields a mixed decoupling/quasi-degeneracy solution to the SUSY flavor and CP problems\cite{Baer:2019zfl}. The third generation is pulled up much less
than the first two generations since it contributes more to the weak scale
via the large Yukawa couplings.}
The Higgs soft terms $m_{H_u}$ and $m_{H_d}$ are frequently traded for the weak scale parameters $\mu$ and $m_A$ via the scalar potential minimization conditions.
Thus, the parameter space of NUHM2
\be
m_0,\ m_{1/2},\ A_0,\ \tan\beta,\ \mu,\ m_A
\ee
is well-suited to Higgs searches since it allows for variable
$m_A$ and $\tan\beta$
as independent input parameters while also allowing the input of $\mu\lesssim 350$
GeV which is required by naturalness in Eq. \ref{eq:mzs}.
Using NUHM2, we adopt the following natural SUSY benchmark Higgs search scenario:
\be
m_h^{125}({\rm nat}):\ m_0=5\ {\rm TeV},\ m_{1/2}=1.2\ {\rm TeV},\ A_0=-1.6m_0,\ \tan\beta ,\ \mu=250\ {\rm GeV}\ {\rm and}\ m_A .
\ee
The $m_h^{125}({\rm nat})$ benchmark model spectra is shown in Table \ref{tab:bm} for $\tan\beta =10$ and $m_A=2$ TeV.
We adopt the computer code Isajet\cite{Paige:2003mg} featuring Isasugra\cite{Baer:1994nc}
for spectra generation. The SUSY Higgs boson masses are computed
using renormalization-group (RG) improved third generation fermion/sfermion
loop corrections\cite{Bisset:1995dc}.
The RG improved Yukawa couplings include full
threshold corrections\cite{Pierce:1996zz} which account for leading
two-loop effects\cite{Carena:2002es}.
From the Table, we note that $m_h=124.7$ GeV and $\Delta_{EW}=22$.
Recent versions of FeynHiggs\cite{Bahl:2018qog} predict $m_h$ values closer to Isasugra
than past versions, and for the $m_h^{125}({\rm nat})$ benchmark
point we find from FeynHiggs 2.18.1 that $m_h=125.3\pm 1.3$ GeV,
in close accord with Isasugra.
\begin{table}[h!]
\centering
\begin{tabular}{lc}
\hline
parameter & $m_h^{125}({\rm nat})$ \\
\hline
$m_0$ & 5 TeV \\
$m_{1/2}$ & 1.2 TeV \\
$A_0$ & -8 TeV \\
$\tan\beta$ & 10 \\
\hline
$\mu$ & 250 GeV \\
$m_A$ & 2 TeV \\
\hline
$m_{\tilde{g}}$ & 2830 GeV \\
$m_{\tilde{u}_L}$ & 5440 GeV \\
$m_{\tilde{u}_R}$ & 5561 GeV \\
$m_{\tilde{e}_R}$ & 4822 GeV \\
$m_{\tilde{t}_1}$& 1714 GeV \\
$m_{\tilde{t}_2}$& 3915 GeV \\
$m_{\tilde{b}_1}$ & 3949 GeV \\
$m_{\tilde{b}_2}$ & 5287 GeV \\
$m_{\tilde{\tau}_1}$ & 4746 GeV \\
$m_{\tilde{\tau}_2}$ & 5110 GeV \\
$m_{\tilde{\nu}_{\tau}}$ & 5107 GeV \\
$m_{\tilde{\chi}_1^\pm}$ & 261.7 GeV \\
$m_{\tilde{\chi}_2^\pm}$ & 1020.6 GeV \\
$m_{\tilde{\chi}_1^0}$ & 248.1 GeV \\
$m_{\tilde{\chi}_2^0}$ & 259.2 GeV \\
$m_{\tilde{\chi}_3^0}$ & 541.0 GeV \\
$m_{\tilde{\chi}_4^0}$ & 1033.9 GeV \\
$m_h$ & 124.7 GeV \\
\hline
$\Omega_{\tilde{\chi}_1^0}^{std}h^2$ & 0.016 \\
$BF(b\rightarrow s\gamma)\times 10^4$ & $3.1$ \\
$BF(B_s\rightarrow \mu^+\mu^-)\times 10^9$ & $3.8$ \\
$\sigma^{SI}(\tilde{\chi}_1^0, p)$ (pb) & $2.2\times 10^{-9}$ \\
$\sigma^{SD}(\tilde{\chi}_1^0, p)$ (pb) & $2.9\times 10^{-5}$ \\
$\langle\sigma v\rangle |_{v\rightarrow 0}$ (cm$^3$/sec) & $1.3\times 10^{-25}$ \\
$\Delta_{\rm EW}$ & 22 \\
\hline
\end{tabular}
\caption{Input parameters (TeV) and masses (GeV)
for the $m_h^{125}({\rm nat})$ SUSY benchmark point from the NUHM2 model
with $m_t=173.2$ GeV using Isajet 7.88~\cite{Paige:2003mg}.
}
\label{tab:bm}
\end{table}
In Fig. \ref{fig:mhdew}{\it a}), we show regions of light Higgs mass
$m_h$ in the $m_A$ vs. $\tan\beta$ plane for the $m_h^{125}({\rm nat})$
benchmark scenario. From the plot, we can see that the value of $m_h$
is indeed very close to 125 GeV throughout the entire plane except for
very low $\tan\beta\lesssim 6$ where $m_h$ dips below $123$ GeV.
In Fig. \ref{fig:mhdew}{\it b}), we show regions of EW naturalness
measure $\Delta_{EW}$. We see that in the region of $\tan\beta:1-15$,
then $\Delta_{EW}\lesssim 30$ even for $m_A$ extending out as high as
$5$ TeV. For larger $\tan\beta\gtrsim 20$, then $\Delta_{EW}$ moves to
$\sim 45-90$ mainly because the $b$- and $\tau$-Yukawa couplings grow
and lead to large $\Sigma_u^u(\tilde b,\tilde \tau )$ terms.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{mh.png}
\includegraphics[height=0.25\textheight]{dew.png}
\caption{{\it a}) Contours of $m_h$ in the $m_A$ vs. $\tan\beta$ plane using the
$m_h^{125}({\rm nat})$ scenario from the NUHM2 model with $m_0=5$ TeV, $m_{1/2}=1.2$ TeV,
$A_0=-8$ TeV and $\mu=250$ GeV. {\it b}) Regions of electroweak naturalness
measure $\Delta_{EW}$ in the same plane as {\it a}).
\label{fig:mhdew}}
\end{center}
\end{figure}
\section{Production and decay of $H,\ A$ in the $m_h^{125}({\rm nat})$ scenario}
\label{sec:prodBF}
\subsection{$H$ and $A$ production cross sections in the $m_h^{125}({\rm nat})$ scenario}
The $s$-channel resonance production of the $H$ and $A$ bosons takes place
mainly via $gg$ and $q\bar{q}$ (mainly $b\bar{b}$) fusion reactions at
hadron colliders. The total $H$ and $A$ production cross sections
are shown in the $m_A$ vs. $\tan\beta$ plane in Fig. \ref{fig:sigHA14}
for $\sqrt{s}=14$ TeV $pp$ collisions-- as are expected at CERN LHC Run 3
and at high-luminosity LHC (HL-LHC) where of order 300 fb$^{-1}$ (for Run 3)
and 3000 fb$^{-1}$ (for HL-LHC) of integrated luminosity is expected
to be obtained.
For the cross sections, we use the computer code
SusHi\cite{Harlander:2012pb} which contains contributions up to
NNLO in perturbative QCD. The cross sections range from over
$10^4$ fb at low $m_{H,A}\sim 400$ GeV down to $\sigma (pp\rightarrow H,A )<1$ fb for
$m_{H,A}\sim 2$ TeV, and they increase somewhat with increasing $\tan\beta$
where production via $b\bar{b}$ fusion is enhanced.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{sigH.png}
\includegraphics[height=0.25\textheight]{sigA.png}\\
\caption{The total cross section for {\it a}) $pp\rightarrow H$
and {\it b}) $pp\rightarrow A$ at $\sqrt{s}=14$ TeV using the SusHi
code\cite{Harlander:2012pb}.
\label{fig:sigHA14}}
\end{center}
\end{figure}
\subsection{$H$ and $A$ branching fractions in the $m_h^{125}({\rm nat})$ scenario}
It is sometimes claimed in the literature that the tree-level
production and decay rates for the $H$ and $A$ bosons depend only
on $m_A$ and $\tan\beta$, and indeed search limits for the heavy Higgs
bosons are typically presented in the $m_A$ vs. $\tan\beta$
plane, following the early pioneering work by Kunszt and
Zwirner\cite{Kunszt:1991qe}.
While this is true for the (non-supersymmetric) 2HDM, it is not true for the
MSSM, where the importance of tree level SUSY Higgs boson decays to SUSY
particles was first emphasized in \cite{Baer:1987eb,Gunion:1987ki,Gunion:1988yc}. In the 2HDM, decays of $H$ and $A$ to the heaviest available
fermion pairs will typically dominate, with decays to $b\bar{b}$
and $\tau\bar{\tau}$ enhanced at large $\tan\beta$. However, in SUSY models
there is a direct gauge coupling
\be
{\cal L}\ni-\sqrt{2}\sum_{i,A}{\cal S}_i^\dagger g t_A\bar{\lambda}_A\psi_i +H.c.
\ee
where ${\cal S}_i$ labels various matter and Higgs scalar fields
(labelled by $i$), $\psi_i$ is the fermionic superpartner of ${\cal S}_i$
and $\lambda_A$ is the gaugino with gauge index $A$.
Also, $g$ is the corresponding gauge coupling for the gauge group in
question and the $t_A$ are the corresponding gauge group matrices.
Letting ${\cal S}_i$ be the Higgs scalar fields,
we see there is an unsuppressed coupling of the Higgs scalars to
gaugino plus higgsino.
This coupling can lead to dominant SUSY Higgs boson decays to SUSY
particles when the gaugino-plus-higgsino decay channel is
kinematically allowed.
In Fig. \ref{fig:BFHtt1}, we plot the $H \rightarrow\tau\bar{\tau}$ branching
fractions as color-coded regions in the
$m_A$ vs. $\tan\beta$ plane for {\it a}) the hMSSM and {\it b}) for
our $m_h^{125}({\rm nat})$ BM scenario.
For the hMSSM, we use the computer code 2HDMC\cite{Eriksson:2009ws}
with $m_h=125$ GeV throughout the $m_A$ vs. $\tan\beta$ plane but with decoupled sparticles. We use the ``Physical mass input set''. With the potential parameters $\lambda_i$ set as in the tree-level MSSM, except for $\lambda_2$, which includes a correction term that brings the light CP-even Higgs mass to 125 GeV, the only free physical inputs left are just $m_A$ and $\tan\beta$.
From frame {\it a}), we see
as expected that for the hMSSM, the $BF(H\rightarrow\tau\bar{\tau} )$ increases
with $\tan\beta$. It also increases slightly as $m_A$ increases
since the $\tau$ Yukawa coupling $f_\tau$ increases slightly with scale choice.
In frame {\it b}) for the $m_h^{125}({\rm nat})$ case, we again see an increasing branching fraction as $\tan\beta$ increases, but now as $m_A$
(and hence $m_H$) increases, various SUSY decay modes to EWinos open
up, especially around $m_A\sim 1200$ GeV where decays to gaugino-plus-higgsino
become accessible. We see the $BF(H\rightarrow\tau\bar{\tau})$ can drop from 12\%
on the left-side of the plot down to just a few percent on the right-hand-side.
This is due to the fact that the decay to EWinos ultimately dominates
the heavy Higgs branching fraction\cite{Bae:2015nva,Baer:2021qxa}.
There is also a glitch apparent at around $m_A\sim 2500$ GeV in the contours.
This occurs because we include SUSY threshold corrections to the
Yukawa couplings which are implemented at the scale
$m_{SUSY}^2=m_{\tilde t_1}m_{\tilde t_2}$ and so the Yukawa couplings have a
slight discontinuity (see {\it e.g.} Fig. 6 of Ref. \cite{Baer:2012jp}).
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{Htt_hmssm_1.png}
\includegraphics[height=0.25\textheight]{Htt_rns_1.png}\\
\caption{Branching fraction of $H\rightarrow\tau\bar{\tau}$
in the {\it a}) hMSSM and {\it b}) in the $m_h^{125}({\rm nat})$ benchmark case
in the $m_A$ vs. $\tan\beta$ plane.
\label{fig:BFHtt1}}
\end{center}
\end{figure}
It is also helpful to show the explicit $BF(H\rightarrow\tau\bar{\tau})$ vs. $m_A$
for two specific choices of $\tan\beta =10$ and 40 for the {\it a})
hMSSM and {\it b}) the $m_h^{125}({\rm nat} )$ model in Fig. \ref{fig:BFHtt2}.
For the hMSSM, we again see the slight increase with increasing $m_A$,
although the BFs stay in the vicinity of 10-15\%.
For the $m_h^{125}({\rm nat})$ case, we see the sharp drop off in
$BF(H\rightarrow\tau\bar{\tau})$ as various $H\rightarrow EWinos$ thresholds are passed:
then, ultimately the branching fraction drops below 2\% for
large $m_A$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{Htt_hmssm_2.png}
\includegraphics[height=0.25\textheight]{Htt_rns_2.png}\\
\caption{Branching fraction of $H\rightarrow\tau\bar{\tau}$
in the {\it a}) hMSSM and {\it b}) in the $m_h^{125}({\rm nat})$ benchmark case
vs. $m_A$ for $\tan\beta =10$ and 40.
\label{fig:BFHtt2}}
\end{center}
\end{figure}
Similar behavior is shown in Fig. \ref{fig:BFAtt1}{\it a}) and {\it b})
for the $A\rightarrow\tau\bar{\tau}$ branching fraction: it has a slight increase with increasing $m_A$ for the hMSSM case but suffers sharp drops in the
$m_h^{125}({\rm nat})$ case due to the turn on of $A$ decay to
gaugino-plus-higgsino. This will affect the reach plots in a substantial way.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{Att_hmssm_1.png}
\includegraphics[height=0.25\textheight]{Att_rns_1.png}\\
\caption{Branching fraction of $A\rightarrow\tau\bar{\tau}$
in the {\it a}) hMSSM and {\it b}) in the $m_h^{125}({\rm nat})$ benchmark case
in the $m_A$ vs. $\tan\beta$ plane.
\label{fig:BFAtt1}}
\end{center}
\end{figure}
The corresponding plots of $BF(A\rightarrow\tau\bar{\tau})$ vs. $m_A$ for
$\tan\beta =10$ and 40 are shown in Fig. \ref{fig:BFAtt2}.
The behavior is rather similar to that already explained for the
$H$ decay.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{Att_hmssm_2.png}
\includegraphics[height=0.25\textheight]{Att_rns_2.png}\\
\caption{Branching fraction of $A\rightarrow\tau\bar{\tau}$
in the {\it a}) hMSSM and {\it b}) in the $m_h^{125}({\rm nat})$ benchmark case
vs. $m_A$ for $\tan\beta =10$ and 40.
\label{fig:BFAtt2}}
\end{center}
\end{figure}
\section{Signal from back-to-back $\tau\bar{\tau}$ via $m_T$}
\label{sec:mT}
In this Section, we present details from our event generation calculations
for the $H,\ A\rightarrow\tau\bar{\tau}$ signal with nearly back-to-back
(BtB) $\tau$s.
For signal and background event generation, we adopt the Pythia 8.07 event
generator\cite{Sjostrand:2007gs} interfaced with the Delphes toy detector simulation\cite{deFavereau:2013fsa}.
For signal, we generate $pp\rightarrow H,\ A\rightarrow\tau\bar{\tau}$
events with the total cross section adjusted to the SusHi NNLO result.
For SM backgrounds, we generate $q\bar{q}\rightarrow \gamma^*,Z\rightarrow\tau\bar{\tau}$
(Drell-Yan), $t\bar{t}$ and $VV$ production where $VV=W^+W^-,W^\pm Z$ and $ZZ$.
For jet finding, we use the Delphes FASTJET jet finder.
The FASTJET jet finder requires $p_T(jet)>25$ GeV and $\Delta R$ between jets
as $\Delta R_{jj}>0.4$. We also require $|\eta_{jet}|<2.5$.
Delphes includes a hadronic $\tau$-jet finding tool, which we also use; it
identifies one- and three-charged-prong jets as tau jets provided
the tau is within $\Delta R=0.4$ of the jet in question. The Delphes
$\tau$-jet identification efficiency is found to be in the 50\% range,
well below the quoted ATLAS $\tau$-jet identification efficiency,
which is at the 75\% level. We also use the Delphes b-tag algorithm and the Delphes isolated lepton tag, which requires
$\Delta R (l, l) > 0.3$ with $|\eta (e,\mu )|<2.5$.
The $\tau_{had}\tau_{had}$ channel is selected by a single-$\tau$ trigger with a $p_T$ cut of 160 GeV. Events must contain at least two $\tau_{had}$ candidates identified by the Delphes tau-tag algorithm, and the two $\tau_{had}$ candidates must have opposite electric charge.
The $\tau_{lep}\tau_{had}$ channel is selected using single-electron and single-muon triggers with a $p_T$ threshold of 30 GeV. These events must contain exactly one isolated lepton and at least one $\tau_{had}$ candidate, and the isolated lepton and the $\tau_{had}$ candidate must have opposite electric charge. We also reject events in which the isolated lepton and the $\tau_{had}$ candidate have an invariant mass between 80 GeV and 110 GeV, to reduce the background contribution from $Z\rightarrow ee$.
Events from either channel are further divided into a $b$-tag category (events containing at least one $b$-jet) and a $b$-veto category (events containing no $b$-jets).
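To make these channel and category definitions concrete, a schematic sketch of the selection logic is given below. The event representation, the field names, the ordering of the two channel tests and the application of the mass veto to both lepton flavors are simplifications introduced here for illustration only; they are not the actual analysis code.
\begin{verbatim}
def classify_event(ev):
    """Assign an event to (channel, b-category), or return None if it fails.
    `ev` is an illustrative dict: lists of tau/lepton candidates with pt (GeV)
    and charge, a list of b-jets, and the lepton-tau invariant mass m_ltau."""
    taus, leps = ev["taus"], ev["leptons"]
    category = "b-tag" if len(ev["bjets"]) >= 1 else "b-veto"
    # had-had: single-tau trigger (pT > 160 GeV), at least two tau candidates,
    # leading pair with opposite electric charge
    if len(taus) >= 2 and taus[0]["pt"] > 160.0 \
            and taus[0]["charge"]*taus[1]["charge"] < 0:
        return ("had-had", category)
    # lep-had: single-lepton trigger (pT > 30 GeV), exactly one isolated lepton,
    # at least one tau candidate, opposite charge, and an 80-110 GeV mass veto
    if len(leps) == 1 and len(taus) >= 1 and leps[0]["pt"] > 30.0 \
            and leps[0]["charge"]*taus[0]["charge"] < 0 \
            and not (80.0 < ev["m_ltau"] < 110.0):
        return ("lep-had", category)
    return None
\end{verbatim}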
After selecting for candidate ditau events,
we plot in Fig. \ref{fig:Dphi} the transverse opening angle
$\Delta\phi (\tau\bar{\tau} )$ from our signal and BG events for
our $m_h^{125}({\rm nat})$ benchmark point with $m_A=1$ and 2 TeV and
$\tan\beta =10$ and 40. Both the DY background and the signal events
rise to a peak at $180^\circ$ indicating that these events are mostly
back-to-back in the transverse plane, as expected. The ditau opening-angle distributions from
$t\bar{t}$ and $VV$ are rather less pronounced at $\Delta\phi\sim 180^\circ$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{Dphitautau_had-had-bveto.png}
\includegraphics[height=0.25\textheight]{Dphitautau_had-had-btag.png}\\
\includegraphics[height=0.25\textheight]{Dphitautau_lep-had-bveto.png}
\includegraphics[height=0.25\textheight]{Dphitautau_lep-had-btag.png}\\
\caption{Distribution in transverse ditau opening angle
$\Delta\phi (\tau\tau )$ for our $m_h^{125}({\rm nat})$ benchmark
scenario with $\tan\beta =10$ and $m_A=1$ and 2 TeV.
\label{fig:Dphi}}
\end{center}
\end{figure}
We next divide our signal into BtB ditau events,
where $\Delta\phi (\tau\bar{\tau} )>155^\circ$ (this Section)
or non-BtB (acollinear) ditaus where $\Delta\phi (\tau\tau )<155^\circ$
(Sec. \ref{sec:mtautau}).
Then we plot the
total transverse mass variable $m_T^{tot}$ as shown in Fig. \ref{fig:mT}. From the plot, we see that the signal
distributions rise to a peak around $m_T\sim 0.8 m_A$ and then fall off
sharply for $m_T\gtrsim m_A$ due to kinematics
(the cutoff is not completely sharp due to considerable smearing
entering into the signal distributions).
The SM backgrounds are all peaked below $m_T\sim 500$ GeV and have falling
distributions for increasing values of $m_T^{tot}$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.275\textheight]{mT_had-had-bveto.png}
\includegraphics[height=0.275\textheight]{mT_had-had-btag.png}\\
\includegraphics[height=0.275\textheight]{mT_lep-had-bveto.png}
\includegraphics[height=0.275\textheight]{mT_lep-had-btag.png}\\
\caption{Distribution in $m_T^{tot}$ for ditau events with
$\Delta\phi (\tau\tau )>155^\circ$ for our $m_h^{125}({\rm nat})$ benchmark
scenario with $\tan\beta =10$ and $m_A=1$ and 2 TeV after cuts listed
in the text.
\label{fig:mT}}
\end{center}
\end{figure}
\section{Signal from acollinear $\tau\bar{\tau}$ via $m_{\tau\tau}$}
\label{sec:mtautau}
For acollinear ditau events (non-BtB), we require
the transverse ditau opening angle $\Delta\phi(\tau\tau )<155^\circ$
so that this data set is orthogonal to the back-to-back ditau set.
For the acollinear ditau events, we also require the presence of an
additional jet in the event besides the $\tau_{had}$ jets (usually an initial-state-radiation (ISR)
jet in the case of signal events): $n_{jets}\ge 1$.
For this configuration,
we are able to use the tau-tau invariant-mass reconstruction trick:
once $\vec{E}_T^{miss}$ is known, and assuming the neutrinos from
each tau decay are collinear with the parent tau direction, the
ditau invariant mass can be solved for.
Since the taus are ultra-relativistic, the daughter visible decay
products and the associated neutrinos are all boosted in the direction of
the parent $\tau$ momentum.
In the approximation that the visibles (vis) and the neutrinos from the
decay of each tau are all exactly collimated in the tau direction,
we can write the momentum carried off by the neutrinos from the decay
$\tau_1\rightarrow vis_1\nu$ of the first tau as $\xi_1\vec{p}_T(vis_1)$ and
likewise for the second tau.
Momentum conservation in the transverse plane requires
\be
-\vec{p}_T(j)=(1 + \xi_1)\vec{p}_T(vis_1) + (1 + \xi_2)\vec{p}_T(vis_2).
\ee
Since this is really two independent equations
(recall we require $p_T(j) > 25$ GeV), it is possible
to use the measured values of the jet and visible-tau-decay momenta to
solve these to obtain $\xi_1$ and $\xi_2$, event-by-event.
It is simple to check that in the approximation of collinear tau decay, the
squared mass of the di-tau system is given by
\be
m_{\tau\tau}^2 = (1 +\xi_1)(1 +\xi_2)m_{vis_1 vis_2}^2
\ee
For ditau plus jet events from $H,\ A$-decay to taus, we expect
$\xi_i > 0$ and $m_{\tau\tau}^2$ to peak at $m_{H,\ A}^2$.
Moreover, for these events, the missing energy vector will usually point in
between the two $\tau (vis)$ momentum vectors in the transverse plane.
In contrast, for backgrounds where $E_T^{miss}$ arises from neutrinos from
decays of heavy SM particles ($t$, $W$, $Z$), the visible and $E_T^{miss}$
directions are uncorrelated and the $E_T^{miss}$-vector may point well away,
or even backwards, from one of the leptons so that one (or both)
$\xi_i < 0$.
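A minimal numerical sketch of this reconstruction is given below: the two transverse-plane equations are solved for $\xi_1$ and $\xi_2$, and the ditau mass then follows from the visible mass. The four-vector interface, the singularity guard and the clipping of negative mass-squared values are our own illustrative choices rather than the analysis implementation.
\begin{verbatim}
import numpy as np

def collinear_mass(vis1, vis2, jet_pt):
    """Collinear-approximation ditau mass.
    vis1, vis2: visible tau four-vectors (E, px, py, pz) in GeV;
    jet_pt: (px, py) of the additional jet, GeV.
    Returns (xi1, xi2, m_tautau), or None if the 2x2 system is singular."""
    vis1, vis2 = np.asarray(vis1, float), np.asarray(vis2, float)
    p1t, p2t = vis1[1:3], vis2[1:3]
    A = np.column_stack([p1t, p2t])      # columns: pT(vis1), pT(vis2)
    # xi1*pT(vis1) + xi2*pT(vis2) = -pT(j) - pT(vis1) - pT(vis2)
    b = -(np.asarray(jet_pt, float) + p1t + p2t)
    if abs(np.linalg.det(A)) < 1e-6:
        return None                      # (nearly) collinear visible taus
    xi1, xi2 = np.linalg.solve(A, b)
    p = vis1 + vis2
    m_vis2 = max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0)
    m2 = (1.0 + xi1)*(1.0 + xi2)*m_vis2
    return xi1, xi2, float(np.sqrt(max(m2, 0.0)))
\end{verbatim}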
Then we can plot the $m_{\tau\tau}$
distribution, as is shown in Fig. \ref{fig:mtautau} for
$\tan\beta =10$ and 40 and for {\it a}) $m_A=1$ TeV and {\it b}) $m_A=2$ TeV.
From the plot, the DY distribution shows a remnant peak at $m_Z=91.2$ GeV
while $t\bar{t}$ and $VV$ are peaked below $500$ GeV. In contrast,
the $A\rightarrow\tau\bar{\tau}$ signal distributions are peaked at
$m_{\tau\tau}\sim m_A$ with a width that arises from smearing effects and
non-exact-collinearity of the $\tau$ decay products.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.275\textheight]{mtautau_had-had-bveto.png}
\includegraphics[height=0.275\textheight]{mtautau_had-had-btag.png}\\
\includegraphics[height=0.275\textheight]{mtautau_lep-had-bveto.png}
\includegraphics[height=0.275\textheight]{mtautau_lep-had-btag.png}\\
\caption{Distribution in $m_{\tau\tau}$ for ditau events with
$\Delta\phi (\tau\tau )<155^\circ$ and $n_{jet}\ge 1$
for our $m_h^{125}({\rm nat})$ benchmark
scenario with $\tan\beta =10$ and {\it a}) $m_A=1$ and
{\it b}) $m_A=2$ TeV after cuts listed in the text.
\label{fig:mtautau}}
\end{center}
\end{figure}
To illustrate some numerics of our results, in Table \ref{tab:sigma}
we list the resultant signal and background cross sections (in fb)
after all cuts for the cases of
$pp\rightarrow H,\ A\rightarrow\tau\bar{\tau}$ at $\sqrt{s}=14$ TeV for
$\tan\beta =10$ and $m_A=1$ TeV, for both the hMSSM and the $m_h^{125}({\rm nat})$
scenario. From the Table, we see that, as expected, the surviving signal after
cuts from the $m_h^{125}({\rm nat})$ scenario is somewhat diminished
from the hMSSM case due to the diminished branching fractions
$BF(H,\ A\rightarrow\tau\bar{\tau})$. Also, the two signal channels
from $H$ and from $A$ production are nearly comparable.
The dominant background comes from $\gamma^*,Z\rightarrow\tau\bar{\tau}$ while
$t\bar{t}$ and $VV$ are smaller but still significant. The signal is
considerably smaller in the acollinear channel than in the BtB channel. However, this
is compensated for somewhat by the smaller backgrounds in the acollinear channel,
which give the acollinear channel a much better $S/B$ ratio than the BtB channel.
\begin{table}[h!]
\centering
\begin{tabular}{lcc}
\hline
process & back-to-back (BtB) & acollinear \\
\hline
$H\rightarrow\tau\bar{\tau}(hMSSM)$ & 0.197 & 0.024 \\
$A\rightarrow\tau\bar{\tau}(hMSSM)$ & 0.222 & 0.027 \\
$H\rightarrow\tau\bar{\tau}(SUSY)$ & 0.140 & 0.017 \\
$A\rightarrow\tau\bar{\tau}(SUSY)$ & 0.162 & 0.020 \\
\hline
$\gamma^*,Z\rightarrow\tau\bar{\tau}$ & 23.33 & 0.586 \\
$t\bar{t}$ & 19.95 & 2.112 \\
$VV$ & 0.663 & 0.069 \\
$total(BG)$ & 43.94 & 2.767 \\
\hline
\end{tabular}
\caption{Cross section (fb) after optimized cuts for the various
signal and background processes from $pp$ collisions at $\sqrt{s}=14$ TeV
and $\tan\beta =10$ and $m_A=1$ TeV.
}
\label{tab:sigma}
\end{table}
\section{Reach of LHC3 and HL-LHC for $H,\ A\rightarrow\tau\bar{\tau}$}
\label{sec:LHCreach}
After settling on cuts for the BtB and acollinear ditau signals,
it is possible to present the reach, in terms of exclusion limits or discovery sensitivity, for $pp\rightarrow H,\ A\rightarrow\tau\bar{\tau}$
in the $m_A$ vs. $\tan\beta$ plane.
For the exclusion plane, the upper limits for exclusion of a signal are set at the 95\% CL and assume that the true distribution observed in experiment corresponds to background only. They are computed using a modified frequentist $CL_s$ method\cite{Read_2002} with the profile likelihood ratio as the test statistic.
For the discovery plane, we use $5\sigma$ to define discovery and assume that the true distribution observed in experiment corresponds to signal-plus-background. We then test this against the background-only distribution to see whether the background-only hypothesis can be rejected at the $5\sigma$ level.
In both the exclusion and discovery planes, the asymptotic approximation for the median significance is used\cite{Cowan_2011}. The systematic uncertainty is assumed to be equal in size to the corresponding statistical uncertainty, which is a conservative rule-of-thumb estimate.
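In the simplest single-counting-experiment limit, and neglecting both the systematic-uncertainty term and the $CL_s$ correction, the asymptotic median significances reduce to the familiar closed forms sketched below. This is meant only to indicate how the expected exclusion and discovery sensitivities scale with the signal ($s$) and background ($b$) event counts; the results shown below use the full profile-likelihood treatment described above.
\begin{verbatim}
import math

def z_discovery(s, b):
    """Median discovery significance: Asimov data = s+b tested against b-only."""
    return math.sqrt(2.0*((s + b)*math.log(1.0 + s/b) - s))

def z_exclusion(s, b):
    """Median exclusion significance: Asimov data = b-only tested against s+b;
    a signal is excluded at 95% CL roughly when this exceeds 1.645."""
    return math.sqrt(2.0*(s - b*math.log(1.0 + s/b)))
\end{verbatim}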
\subsection{Exclusion plane}
As a first step, to compare
with the ATLAS upper limits obtained in their Run 2 search with 139 fb$^{-1}$,
we plot our corresponding exclusion limit in Fig. \ref{fig:exl139_btb_hmssm}. For this plot, we use only the BtB signal in the hMSSM where
$m_h$ is set to 125 GeV, which should compare well with the
$m_h^{125}$ scenario used by ATLAS which contains sparticles at or around 2 TeV,
{\it i.e.} presumably SUSY decay modes are closed for most $m_A$ values
shown in the plot. From Fig. \ref{fig:exl139_btb_hmssm}, we see that our expected
95\% CL exclusion extends to
$m_A\sim 0.9$ TeV for $\tan\beta =10$ which compares favorably with ATLAS. For $\tan\beta =40$, we obtain a 95\% CL exclusion of $m_A\sim 1.9$ TeV,
which is somewhat better than the ATLAS expected result of $m_A\sim 1.8$ TeV.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{exl139_BtBonly_hmssm.png}
\caption{The 95\% CL upper limits with $\sqrt{s}=13$ TeV and 139 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ using BtB signal only in
the hMSSM.
\label{fig:exl139_btb_hmssm}}
\end{center}
\end{figure}
In Fig. \ref{fig:exl139}, we plot in frame {\it a}) our expected Run 2 exclusion assuming 139 fb$^{-1}$ using the combined BtB and acollinear signal channels
in the hMSSM. The exclusion limit extends to $m_A\sim 0.95$ TeV
for $\tan\beta =10$ and to $m_A\sim 1.95$ TeV for $\tan\beta =40$.
For frame {\it b}), for the $m_h^{125}({\rm nat})$ scenario, then the
corresponding 139 fb$^{-1}$ reach extends to $m_A\sim 0.8$ TeV for
$\tan\beta =10$ and to $m_A\sim 1.8$ TeV for $\tan\beta =40$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{exl139_hmssm.png}
\includegraphics[height=0.25\textheight]{exl139_susy.png}\\
\caption{The 95\% CL upper limits with $\sqrt{s}=13$ TeV and 139 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in the
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:exl139}}
\end{center}
\end{figure}
In Fig. \ref{fig:exl300}, we present our projected future exclusion plots,
this time for LHC collisions at $\sqrt{s}=14$ TeV with 300 fb$^{-1}$
of integrated luminosity, as would be expected from LHC Run 3.
Here, we use both the BtB and acollinear signals.
For Run 3, we see in frame {\it a}) for the hMSSM with
$\tan\beta =10$, the 95\% CL exclusion extends out to
$m_A\sim 1.1$ TeV while the $\tan\beta =40$ exclusion extends to
$m_A\sim 2.3$ TeV. For the frame {\it b}) case with the
$m_h^{125}({\rm nat})$ scenario, the 95\% CL reach for $\tan\beta =10$
extends to $m_A\sim 1$ TeV whilst for $\tan\beta =40$ the Run 3
exclusion extends to $m_A\sim 2$ TeV. Thus, comparing the Run 2 139 fb$^{-1}$
exclusion to that expected from LHC Run 3, we find an extra gain in exclusion of
$m_A$ of $\sim 0.1-0.2$ TeV. The presence of (natural) SUSY decay modes
tends to reduce the LHC exclusion by $\sim 0.2$ TeV compared to the hMSSM.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{exl300_hmssm.png}
\includegraphics[height=0.25\textheight]{exl300_susy.png}\\
\caption{The 95\% CL upper limits with $\sqrt{s}=14$ TeV and 300 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:exl300}}
\end{center}
\end{figure}
In Fig. \ref{fig:exl3000}, we plot our projected exclusion limits of HL-LHC
for $H,\ A\rightarrow\tau\bar{\tau}$ at $\sqrt{s}=14$ TeV with 3000 fb$^{-1}$.
From frame {\it a}) in the hMSSM case, we find a HL-LHC 95\% CL exclusion out to
$m_A\sim 1.5$ TeV for $\tan\beta =10$ and out to $m_A\sim 2.8$ TeV for
$\tan\beta =40$. If instead we invoke the $m_h^{125}({\rm nat})$
SUSY scenario, then the corresponding HL-LHC exclusion drops to
$m_A\sim 1.3$ TeV for $\tan\beta =10$ and to $m_A\sim 2.6$ TeV
for $\tan\beta =40$, {\it i.e.} a drop in reach of about $0.2$ TeV
in moving from the hMSSM to the $m_h^{125}({\rm nat})$ scenario.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{exl3000_hmssm.png}
\includegraphics[height=0.25\textheight]{exl3000_susy.png}\\
\caption{The 95\% CL upper limits with $\sqrt{s}=14$ TeV and 3000 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:exl3000}}
\end{center}
\end{figure}
\subsection{Discovery plane}
To compare with the ATLAS reach in the discovery plane obtained in their Run 2 search with 139 fb$^{-1}$,
we show our corresponding results in Fig. \ref{fig:dis139_btb_hmssm}.
For this plot, we use only the BtB signal in the hMSSM where
$m_h$ is set to 125 GeV, which should compare well with the
$m_h^{125}$ scenario used by ATLAS which contains sparticles at or around 2 TeV,
{\it i.e.} presumably SUSY decay modes are closed for most $m_A$ values
shown in the plot. From Fig. \ref{fig:dis139_btb_hmssm}, we see that our expected
$5\sigma$ reach extends to
$m_A\sim 0.75$ TeV for $\tan\beta =10$, which compares favorably with ATLAS. For $\tan\beta =40$, we obtain a $5\sigma$ reach of $m_A\sim 1.7$ TeV,
which is somewhat better than the ATLAS expected reach of $m_A\sim 1.6$ TeV.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{dis139_BtBonly_hmssm.png}
\caption{The discovery sensitivity at $5\sigma$ level with $\sqrt{s}=13$ TeV and 139 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ using BtB signal only in
the hMSSM.
\label{fig:dis139_btb_hmssm}}
\end{center}
\end{figure}
In Fig. \ref{fig:dis139}, we plot in frame {\it a}) our expected Run 2 discovery reach
assuming 139 fb$^{-1}$ using the combined BtB and acollinear signal channels in the hMSSM. The $5\sigma$ discovery reach for $\tan\beta =10$ extends to $m_A=0.7$ TeV and for $\tan\beta =40$ to $m_A=1.7$ TeV.
For frame {\it b}), for the $m_h^{125}({\rm nat})$ scenario, then the
corresponding 139 fb$^{-1}$ $5\sigma $ discovery reach extends to $m_A\sim 0.7$ TeV for
$\tan\beta =10$ and to $m_A\sim 1.6$ TeV for $\tan\beta =40$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{dis139_hmssm.png}
\includegraphics[height=0.25\textheight]{dis139_susy.png}\\
\caption{The discovery sensitivity with $\sqrt{s}=13$ TeV and 139 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in the
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:dis139}}
\end{center}
\end{figure}
In Fig. \ref{fig:dis300}, we present our future $5\sigma$ discovery sensitivity reach,
this time for LHC collisions at $\sqrt{s}=14$ TeV with 300 fb$^{-1}$
of integrated luminosity, as would be expected from LHC Run 3.
Here, we use both the BtB and acollinear signals.
For Run 3, we see in frame {\it a}) for the hMSSM the
$\tan\beta =10$ discovery reach extends out to
$m_A\sim 0.8$ TeV while the $\tan\beta =40$ reach extends to
$m_A\sim 1.8$ TeV. For the frame {\it b}) case with the
$m_h^{125}({\rm nat})$ scenario, the discovery sensitivity reach for $\tan\beta =10$
extends to $m_A\sim 0.75$ TeV whilst for $\tan\beta =40$ the Run 3
reach extends to $m_A\sim 1.75$ TeV. Thus, comparing the Run 2 139 fb$^{-1}$
reach to that expected from LHC Run 3, we find an extra gain in reach of
$m_A$ of $\sim 0.1-0.2$ TeV. The presence of (natural) SUSY decay modes
tends to reduce the LHC reach by $\sim 0.1$ TeV compared to the hMSSM.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{dis300_hmssm.png}
\includegraphics[height=0.25\textheight]{dis300_susy.png}\\
\caption{The discovery sensitivity with $\sqrt{s}=14$ TeV and 300 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:dis300}}
\end{center}
\end{figure}
In Fig. \ref{fig:dis3000}, we plot our discovery reach of HL-LHC
for $H,\ A\rightarrow\tau\bar{\tau}$ at $\sqrt{s}=14$ TeV with 3000 fb$^{-1}$.
From frame {\it a}) in the hMSSM case, we find a HL-LHC discovery sensitivity reach out to
$m_A\sim 1.25$ TeV for $\tan\beta =10$ and out to $m_A\sim 2.45$ TeV for
$\tan\beta =40$. If instead we invoke the $m_h^{125}({\rm nat})$
SUSY scenario, then the corresponding HL-LHC reaches drop to
$m_A\sim 1.15$ TeV for $\tan\beta =10$ and to $m_A\sim 2.25$ TeV
for $\tan\beta =40$, {\it i.e.} a drop in reach of about $0.2$ TeV
in moving from the hMSSM to the $m_h^{125}({\rm nat})$ scenario.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.25\textheight]{dis3000_hmssm.png}
\includegraphics[height=0.25\textheight]{dis3000_susy.png}\\
\caption{The discovery sensitivity with $\sqrt{s}=14$ TeV and 3000 fb$^{-1}$
for $H,\ A\rightarrow\tau\bar{\tau}$ in
{\it a}) the hMSSM and {\it b}) the $m_h^{125}({\rm nat})$ scenario.
\label{fig:dis3000}}
\end{center}
\end{figure}
\subsection{Comparing reach results to expectations from the string landscape}
It is instructive to compare the various LHC upgrade reach in $m_A$ to recent
theoretical predictions for SUSY Higgs bosons from the string landscape
picture\cite{Baer:2020kwz}, which also offers a solution to the cosmological constant problem.
In a statistical scan of pocket universes within the greater multiverse as
expected from the string landscape, one expects a power-law draw to large soft
terms\cite{Douglas:2004qg}, including $m_{H_d}^2$ which tends to set the mass scale for $m_{A,H}$.
However, the draw to large soft terms is tempered by the requirement that
contributions to the weak scale should not lie outside the
Agrawal-Barr-Donoghue-Seckel (ABDS) anthropic window\cite{Agrawal:1998xa} lest the weak scale
become too large and complex nuclei, and hence atoms as we know them, fail to form (the atomic principle).
In such a setting, the expected statistical predictions in the
$m_A$ vs. $\tan\beta$ plane were plotted in Fig. 9 of Ref. \cite{Baer:2019xww}.
In that Figure, the string landscape with an $n=1$ power-law draw to
large soft terms typically has $m_A$ extending from $1-8$ TeV with
$\tan\beta \sim 10-20$. By comparing our LHC reach plots from either exclusion plane or discovery plane with the string
landscape expectation, we see that even HL-LHC will only probe a small portion
of the theory-expected region of parameter space.
\section{Conclusions}
\label{sec:conclude}
In this paper, we have re-examined the current LHC and LHC-upgrades reach for
SUSY Higgs bosons in a natural SUSY model with $m_h\simeq 125$ GeV.
This led us to propose the $m_h^{125}({\rm nat})$ scenario where a
100 GeV weak scale emerges because all contributions to the weak scale are
comparable to or less than the measured weak scale, in accord with
practical naturalness.
This scenario is a more plausible SUSY benchmark than many
others proposed in the literature in that it requires no implausible fine-tunings
of parameters in order to obtain a value for the weak scale in accord with its measured value. The price of this natural SUSY scenario is that for
$m_A\gtrsim 1-2$ TeV, as is being presently explored at LHC, the
$H,\ A$ decay modes to gaugino+higgsino are frequently open and can even dominate the heavy Higgs branching ratios, thus diluting the
value of the $H,\ A\rightarrow\tau\bar{\tau}$ branching fraction as expected in the
hMSSM, or other unnatural SUSY models with a heavy spectrum of SUSY particles.
We also revisited the $H,\ A\rightarrow\tau\bar{\tau}$ discovery channels.
Along with the channel used by ATLAS and CMS of BtB ditaus, we
advocated for inclusion of acollinear ditaus where the ditau invariant mass
can be reconstructed under the assumption that the daughter neutrinos
from $\tau$ lepton decay are collinear with the parent $\tau$ direction.
This additional signal channel can substantially increase the signal
compared to using only the BtB ditau channel.
Using the combined BtB and acollinear ditau signals along with the
$m_h^{125}({\rm nat})$ scenario (and the hMSSM for comparison), we
evaluated the present LHC and future LHC upgrades
exclusion limits and $5\sigma$ discovery reach for
$H,\ A\rightarrow\tau\bar{\tau}$ in the $m_A$ vs. $\tan\beta $ plane.
For $\tan\beta =10$, the reach for $m_A$ in the $m_h^{125}({\rm nat})$
scenario for Run 2 (Run 3) ((HL-LHC))
extends to $m_A\sim 1$ TeV (1.1 TeV) ((1.4 TeV)).
This will probe some additional chunk of parameter space, although
string landscape predictions allow $m_A$ values up to $\sim 8$ TeV,
so much higher energy hadron colliders will be needed for a complete
coverage of heavy Higgs boson parameter space.
{\it Acknowledgements:}
This material is based upon work supported by the U.S. Department of Energy,
Office of Science, Office of High Energy Physics under Award Number DE-SC-0009956 and DE-SC-001764.
\section{Introduction}
The HERA photoproduction data are the main experimental component in Forshaw's
``Theorist's Highlights''~\cite{Forshaw}, so this talk places more emphasis on
photon-photon collisions, with some mention of related HERA results. There are
five sections: $F_{2}^{\gamma}$ hadronic; Other Photon Structure Functions;
Inclusive Processes; Exclusive Processes; and Dreams - possible future
developments. Many important contributions have had to be left out for lack of
time and space.
\section{$F_{2}^{\gamma}$, hadronic}
New results on $F_{2}^{\gamma}(x,Q^{2})$ from singly tagged events have been
presented by three LEP experiments, DELPHI~\cite{Tyapkin}, ALEPH~\cite{Finch}
and OPAL~\cite{Nisius,Bechtluft}.
\begin{wrapfigure}[12]{r}{5.2cm}
\vspace{-0.2cm}
\epsfig{file=dis.eps,height=3.8cm,width=5.5cm}
\vspace{-0.7cm}
\caption{Variables in electron-photon DIS.}
\label{fig:feynman}
\end{wrapfigure}
Figure~\ref{fig:feynman} is the Feynman graph for a $\gamma \gamma$ scattering
event at an $e^{+}e^{-}$ collider. For singly tagged events one of the
scattered electrons is detected, giving a good measurement of
$Q^{2}=2E_{b}E_{tag}(1-\cos \theta_{tag})$ for the probing photon. The other
lepton is required not to be seen, which keeps the value of $P^{2}$ for the
target photon close to zero. The invariant mass of the hadronic system is
$W_{\gamma \gamma}$, which is underestimated because some of the hadronic
energy is poorly measured in the forward regions of the
detectors~\cite{Lauber,OPALZeits}. This means that
the value of $x=Q^{2}/(Q^{2}+W_{\gamma
\gamma}^{2})$ is overestimated. The experiments use unfolding
packages~\cite{Blobel,Kartvili} to correct for this. (Things are much easier
at HERA for the measurement of the proton structure function. There the target
proton has a unique high momentum instead of the soft distribution of virtual
target gammas radiated from the electron beams at LEP, the $ep$ event rate at
large values of $W$ is much higher than in $e \gamma$, and $x$ is well
determined.)
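As a small numerical illustration of these kinematic relations (the beam energy, tag energy, tag angle and $W_{\gamma\gamma}$ values below are representative numbers chosen for illustration, not measured values):
\begin{verbatim}
import math

def egamma_kinematics(E_beam, E_tag, theta_tag, W):
    """Q^2 and x for a singly tagged event, from the formulae above
    (energies in GeV, tag angle in radians, P^2 neglected)."""
    Q2 = 2.0*E_beam*E_tag*(1.0 - math.cos(theta_tag))
    x = Q2/(Q2 + W**2)
    return Q2, x

# A 45.6 GeV beam, a 35 GeV tag at 30 mrad and W_gg = 10 GeV give
# Q2 ~ 1.4 GeV^2 and x ~ 0.014; if W_gg is underestimated, x comes out larger.
\end{verbatim}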
OPAL~\cite{Bechtluft} has new $F_{2}^{\gamma}(x,Q^{2})$ data for two bins with
average $Q^{2}$ values of 1.86 and 3.76 GeV$^{2}$, the first measurements in
this low $Q^{2}$ region since TASSO~\cite{TASSO} and TPC/2$\gamma$~\cite{TPC}.
Electrons were tagged in the OPAL Silicon-Tungsten luminometer at angles down
to 27 mrad from the beam, with the LEP $e^{+}e^{-}$ energy close to the peak of
the $Z^{0}$. Since LEP has now moved on past the $WW$ threshold these may be
the last measurements in this $Q^{2}$ range for a long time.
\begin{figure}[t]
\vspace{-1.3cm}
\begin{center}
\mbox{\hspace{-1.1cm}\epsfig{file=fig-1-2.eps,height=10cm}}
\vspace{-2.1cm}\begin{minipage}[t]{0.42\linewidth}
\caption{\label{fig:Wwvis}
$W-W_{\rm vis}$ correlation for various Monte Carlo models with
and without the included simulation of the OPAL forward region
(FR) between $25<\theta <200$ mrad.}
\end{minipage}\hfill
\begin{minipage}[t]{0.52\linewidth}
\caption{\label{fig:eflow}
Hadronic energy flow per event as a function of pseudorapidity
based on the HERWIG generator,
before and after detector simulation. The tag is always at
negative $\eta$ and is not shown.}
\end{minipage}
\end{center}
\end{figure}
The unfolded $F_{2}^{\gamma}$ distributions at low $Q^{2}$ show the following
characteristics:
\begin{itemize}
\item There is no sudden change of the shape of $F_{2}^{\gamma}(x)$ when
$Q^{2}$ drops below $5 \rm ~GeV^{2}$ (compare shape in ref~\cite{Bechtluft}
with ref~\cite{Finch} and ref~\cite{Nisius}). This is
in contrast with the previous measurement from $TPC/2 \gamma$.
\item The absolute value of $F_{2}^{\gamma}$
(ref~\cite{Bechtluft} Fig.~2) is higher than either
the GRV~\cite{Vogt,GRV} or the SaS-1D~\cite{SaS-1D} predictions. The GRV-HO
curve comes closest.
\item A rise at $x<0.01$, as seen in the proton structure at
HERA~\cite{ZEUSfirst,H1first}, is allowed but not established,
largely because --
\item The systematic errors after unfolding are much larger than the
statistical errors (true for all LEP $F_{2}^{\gamma}$ measurements, see
discussion in next few paragraphs).
\end{itemize}
The values of $F_{2}^{\gamma}$ in the medium to large $Q^{2}$ range
($5<Q^{2}<120 \rm ~GeV^{2}$) from the three LEP
experiments~\cite{Tyapkin,Finch,Nisius} are in good agreement
(see Figure 3 in~\cite{Nisius}). All of them
are consistent with the expected $\ln Q^{2}$ rise from QCD~\cite{Vogt}. The
DELPHI error bars are less than those from ALEPH and OPAL, for comparable
statistics, because DELPHI has a different approach to calculating the
systematic errors from unfolding; what Lauber~\cite{Lauber} calls ``the
problem".
\begin{wrapfigure}[17]{r}{5.9cm}
\vspace{-0.1cm}
\epsfig{file=graphs.eps,height=6.cm,width=6.cm}
\vspace{-0.7cm}
\caption{Feynman graphs of direct and resolved processes.}
\label{fig:directetc}
\end{wrapfigure}
The problem was posed -- in exaggerated form, we now know -- by Forshaw
(reporting an exercise of Seymour, corroborated by L\"onnblad) at
Photon'95~\cite{Forshaw95}. Events from the HERWIG~\cite{HERWIG} Monte Carlo
program were passed through a simple detector simulation which modelled the
way the experimental analyses had previously been done by suppressing all
hadron reconstruction in the endcap regions ($\theta<200$ mrad). In this HERWIG
exercise, for generated values of $W_{\gamma \gamma}>15 \rm ~GeV$ almost all
correlation was lost between the visible reconstructed value
$W_{\rm vis}$ and the generated value -- see the open circles in
Figure~\ref{fig:Wwvis}~\cite{LauberWw}. Studies with
PYTHIA~\cite{PYTHIA} and ARIADNE~\cite{ARIADNE} showed a similar effect. If
this were representative of what is really happening in experiments it must
mean that, for large $W_{\gamma \gamma}$ and hence for small $x$, unfolding
results would be unreliable -- as experimenters already feared~\cite{DJMLund}.
An immediate partial remedy was clear to the experimenters; use the sampled
hadron energy from the forward electromagnetic calorimeters.
Figure~\ref{fig:eflow} shows
that approximately one third of this energy is actually measured by OPAL.
ALEPH and DELPHI are similar. The result is shown as the solid circles in
Figure~\ref{fig:Wwvis}(a). Some correlation is already restored.
But study of the data has led all three LEP experiments to doubt the
completeness of the modelling in HERWIG and PYTHIA. The measured hadronic
energy flows in OPAL and ALEPH, as reported here~\cite{Finch,Nisius}, show
less energy in the partially sampled forward region than predicted by these two
Monte Carlo models, and more energy goes into parts of the well-measured
central region. In OPAL the shape of the observed energy flow is closer to
that from the simple F2GEN~\cite{F2GEN} model where the outgoing hadronic
system is generated as the pointlike production of a quark-antiquark pair,
though this must be an incomplete model of the QCD process.
Figure~\ref{fig:Wwvis}(b) shows how much better the correlation is
between $W_{\rm vis}$ and the true value for events generated with this pointlike
F2GEN model, both with and without the sampled hadronic energy from the forward
region. The distribution of hadronic transverse energy $E_{\rm t,out}$,
perpendicular to the beam-tag plane, is also very different between data and
HERWIG or PYTHIA, especially at low $x$~\cite{Lauber}. And Rooke~\cite{Rooke}
has shown that the number of events with 2 high transverse energy jets is much
lower in HERWIG and PYTHIA than in the data. In both of these cases the
pointlike F2GEN sample lies on the other side of the data points from HERWIG
and PYTHIA. Butterworth (private communication) has speculated that HERWIG and
PYTHIA may be underestimating the contribution from one or more hard-parton
processes; photon-gluon fusion, for instance (Figure~\ref{fig:directetc}(b)).
\begin{wrapfigure}[17]{r}{6.9cm}
\epsfig{file=flow.eps,height=6.cm,width=6.9cm}
\vspace{-0.5cm}
\caption{$F_2^\gamma$ measurement flow chart.}
\label{fig:flow}
\end{wrapfigure}
The way the game is now being played is shown in Figure~\ref{fig:flow} as a
flowchart. Lauber~\cite{Lauber} described an exercise with Seymour and
L\"onnblad which is represented by the nearly vertical dotted arrow from
item F to item B on the
flowchart, using the energy flow and $E_{\rm t,out}$ histograms from experiment,
item F, to tune the parameters of the parton shower generators,
item B, in HERWIG and PYTHIA. Tyapkin~\cite{Tyapkin} reported a similar
exercise with the DELPHI generator TWOGAM which has an explicit singly resolved
photon component~\cite{Zimin}, including photon-gluon fusion. The nearly
horizontal dotted arrow, from INPUT to item B on the flowchart,
represents a feature of both HERWIG
and PYTHIA which use the input set of theoretical parton density functions in
their parton shower generators as well as in the cross section generator.
The large systematic errors on the OPAL and ALEPH unfoldings come from
assuming a set of Monte Carlo models which cover the whole range of variations
in the histogrammed quantities. The DELPHI errors are smaller because they
only use the tuned TWOGAM Monte Carlo for unfolding.
There is a serious dilemma here. If we tune the generators perfectly, to match
all of the observed histograms, then it will not matter what input
parametrisation of $F_{2}^{\gamma}(x,Q^{2})$ we have used; the unfolding
package, item E, will automatically give us back the input
$F_{2}^{\gamma}(x,Q^{2})$ as our measured output. What is needed is a set of
Monte Carlo models whose parameters are all tied down, either by QCD theory or
by fits to other data -- hadronic scattering, HERA photoproduction, etc. We
must use them to unfold $F_{2}^{\gamma}(x,Q^{2})$ from the visible $x$
distribution, but we must also check that they give good energy flows, jet
numbers and $E_{\rm t,out}$ distributions. If they do not there must be something
missing from them which will have to be added in a well motivated way, or we
have to look for better models. It is intriguing that the PHOJET Monte Carlo
model~\cite{Engel} fits some features of the untagged $\gamma \gamma$ data as
well as PYTHIA does~\cite{Buergin}. A version of PHOJET with off-shell photons
is eagerly awaited, as are re-engineered versions of HERWIG and PYTHIA.
The last three or four years of LEP running will double or triple the
statistics available for photon structure function analysis. If the Monte
Carlo tools can be refined to match there is every prospect of clear answers to
two questions; can we measure $\Lambda _{QCD}$ from the high $Q^{2}$ evolution,
and is there a rise of $F_{2}^{\gamma}(x,Q^{2})$ at low $x$? Of course, we
would also like to measure the gluon density in the photon -- but that is only
accessible directly through inclusive processes, see below.
\section{Other Photon Structure Functions}
There is no reason to expect surprises from measurements of the QED structure
functions of the photon. A large part of our motive for studying them is to
use them as a testbed for the techniques used to extract the hadronic structure
functions. The longitudinal hadronic structure function $F_{L}^{\gamma}$ is
particularly interesting because it should have different QCD
scaling~\cite{Witten} behaviour from $F_{2}^{\gamma}$. But it had been shown
before LEP started up~\cite{Millergreenbook} that $F_{L}^{\gamma}$ would be
hard to measure there because of poor statistics for events with the highest
sensitivity to $F_{L}^{\gamma}$, events with low tagged electron energies. The
difficulties are now known to be even greater due to background from fake-tags
by off-momentum electrons in the beam halo (e.g.~\cite{Zimin}). More recently
Field and others~\cite{Field95,LEP2wkshp} have pointed out that there are
other structure functions which are akin to $F_{L}^{\gamma}$, but which can be
measured from the main sample of tagged data.
ALEPH~\cite{Brew} and OPAL~\cite{Doucet} both reported results from singly
tagged $\gamma \gamma \rightarrow \mu^{+} \mu^{-}$ samples. The new structure
functions govern the distribution of the azimuthal angle $\chi$ between the plane
of the outgoing muons and the plane of the beam and the tagged electron in the
$\gamma \gamma$ C. of M. Both saw significant values for $F_{B}^{\gamma,
QED}$, in agreement with the QED prediction. ALEPH also presented the first
measurement of $F_{A}^{\gamma, QED}$. $F_{B}^{\gamma, QED}$ multiplies the
$\cos 2\chi$ term in the angular distribution and $F_{A}^{\gamma, QED}$
multiplies the $\cos \chi$ term. Since the two experiments used different sign
conventions for the definition of $\chi$ it may well be that OPAL
``folded away''
their sensitivity to $F_{A}^{\gamma, QED}$. Successful measurement of
$F_{B}^{\gamma, QED}$ is particularly encouraging because its hadronic form has
the same parton content as $F_{L}^{\gamma}$, in the limit of massless quarks,
though it comes from a different set of helicity amplitudes.
The task now is to try and use the outgoing jets in hadronic events in the same
way as the outgoing muons to define a $\chi$ angle. There will be problems.
Whereas the tagged $\mu^+\mu^-$ events have a constrained fit which gives a
precisely defined final state $\gamma \gamma$ energy, the hadronic events are
very poorly defined because of the incomplete sampling of hadron energies in
the forward regions. And only a sub-set of hadronic events has a clear two-jet
axis. Telnov suggested that the statistics may be increased by including
untagged events in which the electron recoil plane is implicitly defined by the
overall transverse momentum of the hadronic system, but it is not clear that
this will work. New ideas are still needed. If we are lucky Photon '99 may
see the first analyses for the hadronic $F_{B}^{\gamma}$ and its evolution with
$Q^{2}$.
\section{Inclusive processes}
H1 continues to tease the $\gamma \gamma$ community by trying to
extract photon structure from jets in photoproduction.
The latest study~\cite{Rick} uses an
appropriate set of cuts to get a differential cross section which they
say should be equal to the pointlike anomalous contribution to the photon
interaction $\alpha^{-1} x_{\gamma}(q+\frac{9}{4} g)$. When this is plotted
against the $p_{T}^{2}$ of the jets it has a logarithmic rise, as would be
expected from the scale-breaking nature of the photon-quark coupling. In fact,
the rise seems to be significantly steeper than either the GRV prediction or
the observed logarithmic rise in $F_{2}^{\gamma}(Q^{2})$~\cite{Nisius}. This
may be a hint that the gluon contribution is doing something unexpected or --
more likely I fear at this stage -- that the H1 analysis could contain
systematic effects which have not yet been understood. It is noticeable that
H1 did not present an update at Photon '97 of the Photon '95
analysis~\cite{Erdmann} that claimed to measure the gluonic structure of the
photon, presumably because of systematic difficulties in separating the primary
signal from underlying multiple parton interactions, as described at Photon '95
by ZEUS~\cite{Butterworth}.
Progress has been made in inclusive $\gamma \gamma$ analysis, thanks to two
important factors: a) LEP has moved away from the $Z^{0}$ peak; b) the HERA
experiments have developed analysis techniques which can be applied to $\gamma
\gamma$ as well as to $\gamma p$. Even though the integrated LEP luminosity
above the $Z^{0}$ is still only 10s of pb$^{-1}$ compared with over $100
\rm pb^{-1}$ on peak,
the rate for collecting untagged $\gamma \gamma \rightarrow
hadrons$ is much greater than for tagged events -- and the $Z^{0}$ background
can be kept well below 10\% of the sample with reasonable cuts.
DELPHI~\cite{Zimin} presented a preliminary empirical survey of how the
properties of events evolve with $\sqrt{s_{e^+e^-}}$. The observed cross
section, after selection cuts, rises at about 10 pb/GeV from
$\sqrt{s_{e^+e^-}}\simeq 132 \rm ~GeV$ to
$\sqrt{s_{e^{+}e^{-}}}\simeq 172 \rm ~GeV$,
and it extrapolates back plausibly to just below the points at
$\sqrt{s_{e^+e^-}}\simeq 91 \rm ~GeV$, under the background from the $Z^{0}$.
The same TWOGAM Monte Carlo model that they use for unfolding
$F_{2}^{\gamma}$~\cite{Tyapkin} gives predicted distributions of final state
quantities, including $W_{\gamma \gamma}$, energy flow as a function of
pseudorapidity, transverse momentum of jets and number of jets. Most of them
agree well; this home-made model seems to have a good combination of hard and
soft components. But they draw attention to one disagreement between data and
Monte Carlo at $\sqrt{s_{e^{+}e^{-}}}\simeq 172 \rm ~GeV$, where the energy flow in
the forward region drops below the prediction in a way which is very
reminiscent of the effect seen in the OPAL tagged
data~\cite{Nisius,Bechtluft,Lauber}.
OPAL's inclusive analysis~\cite{Buergin} goes further than DELPHI,
and may be a prototype that other
experiments could follow ($\gamma \gamma \rightarrow hadrons$ has long had as
many different analysis techniques as experiments~\cite{MillerCornell}, which
meant that no experiment could check another's results). OPAL uses a
development of the $x_{\gamma}$ variable from HERA as an estimator of
the fraction of the target photon's momentum carried by the hard parton which
produces identified jets with high $E_{T}$.
\[x^{\pm}_{\gamma}=\frac{\sum_{jets} {E_{j} \pm p_{z,j}}}{\sum_{hadrons}
{E_{i} \pm p_{z,i}}}, \]
where $p_{z,i}$ is the momentum of a hadron projected along the LEP beam
direction. The $\pm$ ambiguity arises because the initial state is
intrinsically symmetric, unlike the situation at HERA,
and either photon might be the target.
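As a limiting case (my illustration, not part of the OPAL analysis): if the two
jets account for all of the observed hadrons, the sums over jets and over
hadrons coincide and
\[x^{+}_{\gamma}=x^{-}_{\gamma}=1, \]
while a photon remnant or unclustered energy can only pull
$x^{\pm}_{\gamma}$ below unity.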
Three main categories of events with high $E_{T}$ jets are expected: direct,
singly resolved and doubly resolved (Figure~\ref{fig:directetc})~\cite{ChrLls}.
Using the PYTHIA Monte Carlo, OPAL shows that the direct sample should be very
cleanly separated from the resolved samples by requiring both $x^{+}_{\gamma}$
and $x^{-}_{\gamma}$ to be greater than 0.8. They confirm this separation in
the experimental data for two jet events with $E_{T}>3 \rm ~GeV$ by computing an
effective parton scattering angle $\theta^*$ in the dijet C. of M. and showing
that the direct ($x^{\pm}_{\gamma}>0.8$) sample has the expected rather flat
distribution, while the resolved samples ($x^{+}_{\gamma}$ or $x^{-}_{\gamma}$
less than 0.8) are much more forward-backward peaked, as predicted on a parton
level by lowest order QCD (and as seen in a very similar analysis of
photoproduction by ZEUS, quoted in Aurenche's introduction to the inclusive
session~\cite{Aurenche}).
Given the evidence, at least in the two jet sample, for approximate jet-parton
duality, OPAL has compared the $E_{T}$ distribution of jets with the parton
level NLO matrix element predictions of Kleinwort and Kramer~\cite{KandK}. The
effects of measurement errors are removed by unfolding. The match between
theory and experiment is good for $E_{T}>5 \rm ~GeV$ and is consistent with the
predicted domination by the direct matrix element for
$E_{T}>8\rm ~GeV$. Aurenche
also showed how well these NLO curves matched $\gamma \gamma$ data from AMY and
TOPAZ, as well as photoproduction from H1 and ZEUS.
Comparison of the OPAL inclusive two-jet cross sections with Monte Carlo
predictions is tantalising. For direct events ($x^{\pm}_{\gamma}>0.8$) the
PYTHIA and PHOJET predictions agree with one another and with the data,
regardless of the set of PDFs used. But for $x^{+}_{\gamma}$ or
$x^{-}_{\gamma}$ less than 0.8, i.e. for the resolved samples, there are some
disagreements between the two programs with the same PDFs, and large
disagreements between different PDFs in the same program. The LAC1~\cite{LAC}
PDFs, for instance, give much too high a cross section with both programs,
surely because of too much gluon. Better statistics and further analysis may
lead to an independent measurement of the gluon content of the photon.
The total cross section $\sigma_{\gamma \gamma}$ has been one of the worst
measured quantities in particle physics~\cite{Pancheri} (but see ``Dreams''
below). It remains so for $W_{\gamma \gamma}<5 \rm ~GeV$, but L3 has presented
first measurements from LEP~\cite{VanRossum} with
$5<W_{\gamma \gamma}<70 \rm ~GeV$
which are much more coherent than anything at lower energies. They show a
significant rise over this range, consistent with the logarithmic rise seen in
hadron-hadron and $\gamma p$ cross sections. The problem with this measurement
is an intensified version of the problem discussed above for $F_{2}^{\gamma}$,
how to correct for the lost hadronic energy in the forward region. In the
tagged events used for the structure function some transverse momentum is
required in the hadronic system to balance the tagged electron. But the bulk
of the events in the total cross section have no tag, and at high $W_{\gamma
\gamma}$ there must be a large fraction of diffractive events in which the
hadrons hardly have enough transverse momentum to enter the forward luminosity
detectors. Most of these events give no trigger and the only way of allowing
for them is to use a Monte Carlo program to correct for their loss. Rather
surprisingly the PHOJET and PYTHIA Monte Carlo models give very similar
predictions for the $W_{\rm visible}$ distribution, including the barrel region
and the forward detectors, so the total cross section values do not change much
when unfolded with either PYTHIA or PHOJET. But a plot was
shown~\cite{VanRossum} of cluster
energies in the forward luminosity detectors alone in which there was a marked
divergence at high energies between, on the one hand, the data and the PHOJET
prediction, which both levelled off and agreed with one another, and on the
other hand, the PYTHIA prediction which fell away much more sharply. This is
all we know about the region where many events must be totally unseen, so it is
hard to be completely confident in the measurement until one or more of DELPHI,
ALEPH and OPAL have done a similar analysis, hopefully with a larger selection
of Monte Carlo models.
Charm production in $\gamma \gamma$ remains intractable. The new L3 result for
the inclusive charm cross section~\cite{Andreev} agrees with the QCD
model~\cite{DKZZ}, but it is only based on 43 events at LEP1 in
$80\rm pb^{-1}$ and
29 events at LEP2 in $20\rm pb^{-1}$, both tagged with muons
from charm decay. It
is frustrating to know that there are thousands of unresolved charm events
there, boosted forward by the $\gamma \gamma$ kinematics so that they cannot be
identified in the microvertex detectors. A few more tagging channels can be
added, however, and the eventual LEP2 luminosity should give a factor of
$\simeq 20$, so a worthwhile test of the theory should come by Photon
'01.
A potentially important $\gamma^* \gamma^*$ study has been suggested by Hautmann
and others~\cite{Hautmann,deRoeck} who make predictions from the high energy
limit of QCD (using the BFKL pomeron) which give a significant doubly-tagged
rate for $e^{+}e^{-} \rightarrow e^{+}e^{-}hadrons$ (approximately 1 event per
pb$^{-1}$ at LEP2 with $Q^{2}\simeq 10\rm ~GeV^{2}$). There was some surprise that
the effect has not yet been noticed in LEP1 data, if it is there. A few dozen
doubly tagged events have been seen. They are routinely rejected from the
singly tagged samples of thousands of events which are used for structure
function studies. There may just be enough of them, after inefficiencies have
been allowed for, to accommodate the new prediction. As ever, a Monte Carlo
study of the hadronic acceptance will be needed to find out if a significant
part of the signal is being lost. This will surely be settled by Photon '99.
Come to Freiburg to see if BFKL survives!
\section{Exclusive processes}
There is no shortage of data, but there is a serious shortage of people to work
on it. Cleo II now has over 3~fb$^{-1}$ of integrated luminosity, and we can
expect even more from the specialised beauty factory experiments, Belle in
Japan and BaBar at Stanford. For higher mass $\gamma \gamma$ systems LEP is
accumulating worthwhile samples. And there is no shortage of problems to be
solved, both from QCD~\cite{Brodsky} and in resonance physics where predictions
proliferate for glueballs, hybrids, molecules, 4-quark states and recurrences.
I concentrate on two beautiful results from Cleo II, supplemented by L3, and
mention a first survey from H1.
Cleo II has sufficient integrated luminosity to do a precision study on tagged
samples of $\gamma^* \gamma \rightarrow \pi^{0}, \eta$ and $\eta
'$~\cite{Savinov}. They have recalibrated the inner edge of their tagging
detector so that they can use incompletely contained electron showers to go
down to a lower limit of $Q^{2}=1.5 \rm ~GeV^{2}$,
joining on well for the $\pi^{0}$
with lower $Q^{2}$ data from CELLO. There is a clear difference between the
$Q^{2}$ behaviour of $\eta'$ and the behaviour of $\pi^{0}$ and $\eta$. Both
$\pi^{0}$ and $\eta$ form factors appear to obey the perturbative QCD
prediction of Brodsky and Lepage~\cite{Brodsky}:
\[ \lim_{Q^{2} \rightarrow \infty}Q^{2}\,|F_{\gamma^* \gamma m}(Q^{2})|=2f_{m}, \]
where $m$ is the particular pseudoscalar meson,
and they have consistent values ($\Lambda_{\pi^{0}}\simeq 776 \pm 20\rm ~MeV$,
$\Lambda_{\eta}\simeq 774 \pm 30\rm ~MeV$) for the $\pi^{0}$ and $\eta$
mass parameters in the monopole formula:
\[F(Q^{2})=F(0) \frac{1}{1+Q^{2}/ \Lambda_{m}^{2}}.\]
But the $\eta '$ form factor rises to approximately twice the pQCD prediction
at $Q^{2}\simeq 15 \rm ~GeV^{2}$, and it has a higher monopole mass ($\Lambda_{\eta
'}\simeq 859 \pm 25$ MeV; L3 is consistent~\cite{Braccini} but with bigger
errors). Brodsky and Ruskov -- in their talks~\cite{Brodsky,Ruskov} and over
breakfast this morning -- agree that these results mean that the $\pi^{0}$ and
$\eta$ are behaving as if their wavefunctions are already close to asymptotic
whereas the $\eta '$ is a much more complicated mixed object.
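To set the scale of the $Q^{2}$ dependence (the arithmetic here is mine, not
from the experiments): the monopole form gives
\[ Q^{2}F(Q^{2})=F(0)\,\Lambda_{m}^{2}\,\frac{Q^{2}}{Q^{2}+\Lambda_{m}^{2}}
\rightarrow F(0)\,\Lambda_{m}^{2} \quad {\rm as} \quad Q^{2}\rightarrow \infty, \]
so with $\Lambda_{m}\simeq 776$ MeV the combination $Q^{2}F(Q^{2})$ is already
within about 4\% of its asymptotic value at $Q^{2}=15 \rm ~GeV^{2}$.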
Cleo II's other beautiful result was totally negative~\cite{Paar} but very
clear. This was a search for $\gamma \gamma$ production of the glueball
candidate $f_{J}(2220)$ and its decay to $K_{s}K_{s}$. Cleo II sees many other
resonances in this analysis, so there is no question about their sensitivity,
but they do not see even a hint of the $f_{J}(2220)$. They therefore put the
highest ever lower limit ($>82$ at 95\% confidence) on the
``stickiness''~\cite{Chanowitz} of a meson, the normalised ratio of its $\gamma
\gamma$ width to its radiative branching ratio from $J/\psi $. Both BES and Mk
II have clear signals for $J/\psi$ decays to
the $f_{J}(2220)$. This object must now be one of
the strongest of all glueball candidates. Two other experiments,
L3~\cite{Braccini} and ARGUS~\cite{Medin} reported $\gamma \gamma$ resonance
studies. The L3 results are promising and should soon have a physics impact.
They demonstrate a good acceptance and resolution for many states with masses
from 1200 to 1750 MeV/$c^{2}$ and the statistics will triple or quadruple
before Photon '01.
There was an encouraging first look at exclusive resonance production at HERA
from H1~\cite{Tapprogge}, making particular use of the new SPACAL calorimeter
to measure multi photon final states boosted in the backward direction. Clear
$\pi^{0}$, $\omega$ and $\eta$ signals were seen, but no $\eta '$. There was
also a suggestion of an $a_{0}(980)$ peak. As well as conventional $\gamma
\gamma$ or $\gamma $-pomeron processes, some of these channels should be
sensitive to more exotic exchanges, such as the ``odderon''. With rising HERA
luminosity this could become very interesting.
\section{Dreams; possible future developments}
A recurrent good dream seems closer to the real world after Romanov's
talk~\cite{Romanov}. This is the hope for precise measurement of the total
cross section $\sigma_{\gamma \gamma}$ in the resonance region
by using double tagging at around zero
scattering angle in an $e^{+}e^{-}$ collider. The KEDR detector at the VEPP-4M
collider in Novosibirsk has focusing spectrometers built into it which measure
the outgoing electron and positron to very high precision (we saw results from
a setting-up experiment on photon splitting using one of the two
spectrometers~\cite{Maslennikov}). The collider will run with $\sqrt{s} \simeq
1$ GeV soon, but should then go up to around 12 GeV. The resolution on the
mass of the system recoiling against the two tags will be better than 20
MeV/$c^{2}$ over a range of masses from $\simeq 0.5$ to 3.5 GeV/$c^{2}$, with a
tagging efficiency of better than 15\%. The main KEDR detector will have good
tracking and calorimetry to measure the properties of the hadronic final state,
so this experiment could make a substantial contribution to resonance studies.
A daydream which some of us indulge in is to imagine the same kind of zero
angle tagging system installed in one of the spare LEP straight sections,
together with good luminosity monitors and forward tracking, with a simple
barrel detector to trigger on hadronic systems. A well designed specialised
experiment could push the $\sigma_{\gamma \gamma}$ measurement up to
$\sqrt{s}\simeq 70$ GeV or more, could solve the big problem of measuring
$W_{\gamma \gamma}$ in the study of $F_{2}^{\gamma}$, could see the BFKL
effects predicted by Hautmann et al. and would be much more sensitive
than the present LEP experiments to such
diffractive processes as $\gamma \gamma \rightarrow \rho \rho, J/\psi \rho$
etc. But I hear there is to be a new user for the LEP tunnel after 2001.
In this morning's talks on the high energy photon linear collider Telnov
reported~\cite{Telnov} on the steady progress being made in solving the
fundamental problems of realising the full potential luminosity of such a
machine and Jikia~\cite{Jikia}, Ginzburg~\cite{Ginzburg} and
Takahashi~\cite{Takahashi} updated some of the feasibility studies on physics,
including measuring the couplings of Higgs bosons to $\gamma \gamma$. Because
this coupling could be sensitive to the existence of very heavy fermions and
bosons -- well beyond anything reachable at
planned machines -- it remains one of
the most important of all the numbers to be determined once a Higgs boson is
found. Nothing has been said here to undermine the conclusion presented at the
LCWS in Morioka~\cite{DJMMorioka} that, if a Higgs boson is found with a mass
of less than 350 GeV, then a high energy $\gamma \gamma$ collider must be built
to study it. Such a machine in $e^{-} \gamma$ mode will also give the
definitive measurement of the high $Q^{2}$ evolution of $F_{2}^{\gamma}$,
avoiding the big problem of measuring $W_{\gamma \gamma}$ by using a narrow
band beam of real photons as the target~\cite{VogtMiller}. Brodsky says that
he believes the study of $e^{-} \gamma \rightarrow W\nu $ will give the best
possible measurement of the $\gamma WW$ couplings. Telnov reminded us that if
a high energy linear $e^{+}e^{-}$ collider is built there must be provision for
a second interaction region with a finite beam crossing angle to be built at a
later date for real $\gamma \gamma$ and $\gamma e^{-}$ physics.
The idea of a lower energy photon linear collider was mentioned in passing. It
could be a superb tool for studying resonances in the 1 to 4 GeV/$c^{2}$ mass
range~\cite{Borden,MillerBerk}. If it were done as part of an upgrade of the
SLC at Stanford it might even reach the $e^{-} \gamma \rightarrow W\nu $
threshold.
\section{Summary and Conclusions}
In measuring $F_{2}^{\gamma}$ the LEP experiments agree with one another that
the shape and evolution are consistent with QCD. But the problem of modelling
the parton shower must be solved before the two important questions can be
settled: is the hadronic part of the photon so like the proton that at low $x$ it
has the same kind of rising structure function; and can a precise measurement
of the QCD scale be made from the evolution at high $Q^{2}$? The influence of
HERA photoproduction on untagged $\gamma \gamma$ studies is very important. It
will be intriguing to see whether LEP or HERA gets the best eventual
measurement of the gluon density in the photon; each has its own systematics
and intrinsic background problems. Resonance studies continue to be frustrated
by lack of effort; the work is intricate and time consuming, and it can be
unrewarding if the results are not clear cut. Here Cleo II used its large
statistics to report two convincingly clear results. L3 should be able to
follow suit with its excellent neutral particle reconstruction.
The connections between photoproduction and $\gamma \gamma$ physics grow
closer. Many of the ``dreams'' of $e \gamma$ and $\gamma \gamma$ physicists,
from the previous section, involve achieving comparable statistics and
precision to what HERA can already do in $ep$ or $\gamma p$. This may only
be possible at a linear collider.
\section*{Acknowledgements}
The organisers of the conference are to be congratulated on the
scientific organisation, on their
choice of venue and on the care they have taken of us. Completion of
the written version of this review has depended heavily upon the kind
advice and help of Dr. Jan Lauber.
\section*{References}
|
2,869,038,156,815 | arxiv | \section{Introduction}
Multicellular organisms are made of cells that can divide into many others, which specialize in controlling and maintaining the body, sensing the environment, or protecting it from external threats. Such features were acquired by evolution from the first living cell. After millions of years, colonies of unicellular organisms appeared and were essential to the development of multicellular organisms with cellular differentiation \citep{niklas2013origins}. Developmental biologists study how the growth and specialization of an organism are coordinated by its genetic code \citep{slack2021essential}.
The field of artificial life tries to create life-like computational models taking ideas from biological life, such as decentralized and local control \citep{langton2019artificial}. One of the sub-fields of artificial life, artificial development \citep{harding2009artificial,doursat2013morpho}, focuses on modeling or simulating cell division and differentiation. The techniques applied in artificial development are often based on the indirect encoding of developmental rules (i.e.\ analogous to the genome of a biological organism describing its phenotype). This type of encoding facilitates the scaling of an organism because the information in the genome is much smaller than in the resulting phenotype. This property is referred to as genomic bottleneck \citep{zador2019critique,variengien}, and it implies that the genetic code of an organism compresses the information to grow and maintain its body, and in some species even complex brains.
One of the simplest computational models of artificial life or dynamical systems is a cellular automaton (CA) \citep{wolfram2002new}. A CA can be described as a universe with discrete space and time, which is governed by local rules without any central control. Such a discrete space is divided into a regular grid of cells and can possess any number of dimensions. The most commonly studied CAs have one or two dimensions and their most well-known versions are, respectively, elementary CA \citep{wolfram2002new} and Conway’s Game of Life \citep{conway1970game}. Both have cells with binary states, but other CA can have many discrete states or continuous ones. In the 1940s, the first CA was introduced by Ulam and von Neumann \citep{topa2011network}. Von Neumann aimed to produce self-replicating machines, and Ulam worked on crystal growth. In 2002, a CA with rules defined by an artificial neural network was described \citep{li2002neural}. Nowadays, this type of approach is called neural cellular automaton (NCA). In 2017, \citet{nichele2017neat} presented an NCA that has developmental features that were learned through neuroevolution using a method called compositional pattern-producing network \citep{stanley2007compositional}. Recently, \citet{mordvintsev2020growing} introduced a differentiable NCA, which possesses growth and regeneration properties. In their work, an NCA is trained through gradient descent to grow a colored image from one active "seed" cell.
In evolutionary robotics, co-evolution of morphology and control has the inherent challenge of optimizing two different features in parallel \citep{bhatia2021evolution}. It also presents scalability issues when it deals with modular robots \citep{yim2007modular}. Our goal is to implement an approach where the optimization happens in just one dynamical substrate with local interactions. Here we introduce such a system, a \emph{Neural Cellular Robot Substrate} (NCRS), in which a single NCA grows the morphology of an agent’s body and also controls how that agent interacts with its environment. The NCA has two phases (Fig.~\ref{fig:teaser}). First is the developmental phase, in which the robot's body is grown, including where to place its sensors and actuators. In the following control phase, copies of the same NCA are running in each cell of the agent, taking into account only local information from neighboring cells to determine their next state. The optimization task thus entails figuring out how to transmit information from the robot's sensors to its actuators to perform the task at hand.
We also introduce a virtual environment with three benchmark tasks for evaluating the NCRS' capacity to design a robot and then control it. Two benchmarks consist of growing and controlling a robot to approach a light source (Fig.~\ref{fig:teaser}b and Fig.~\ref{fig:env2}).
The third task challenges the robot to carry a ball to a target area. In this benchmark, a second type of rudimentary eye is added, so the robot can differentiate between the ball and the target area (Fig.~\ref{fig:env3}).
The main contribution of this work is the introduction of a single neural cellular automaton that first grows an agent's body and then controls it during deployment. While the solved benchmark domains are relatively simple, the unified substrate for both body and brain opens up interesting future research directions, such as opportunities for open-ended evolution \citep{stanley2019open}. The source code of this project and videos of the results are available at \url{https://github.com/sidneyp/neural-cellular-robot-substrate}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{Figures/new_overview.png}
\caption{Neural Cellular Robot Substrate (NCRS). \normalfont In the developmental phase (a), the robot is grown from an initial starting seed, guided by a neural cellular automaton (c). Once grown, the same neural cellular automaton determines how the signals propagate through the robot's morphology during the control phase (b).}
\label{fig:teaser}
\end{figure}
\begin{figure}[ht]
\centering
\subcaptionbox{Light chasing with obstacle task\label{fig:env2}}{%
\includegraphics[width=0.5\linewidth,frame]{Figures/env2_description.png}
}\hspace{1.5cm}
\subcaptionbox{Carrying ball to target task\label{fig:env3}}{%
\includegraphics[width=0.35\linewidth,frame]{Figures/env3_description.png}
}
\caption{Extensions of the light chasing task. (\subref{fig:env2}) depicts the original size of the playing field, which is 60.}
\label{fig:env23}
\end{figure}
\section{Related work}
The co-design of robot bodies and brains has been an active area of research for decades \citep{medvet2021biodiversity,sims1994evolving,komosinski1999framsticks,veenstra2020different,gupta2021embodied}. Brain and body co-design refers to jointly producing a control policy and a morphology for a robotic system.
For example, in the work of \citet{lipson2000automatic} the same genome directly encodes the robot's body and the artificial neural network for control. A method that uses genetic regulatory networks to separately define a body and an artificial neural network was introduced by \citet{bongard2003evolving} and named artificial ontogeny. The evolved robots are able to locomote and push blocks in noisy environments. More recent work by \citet{bhatia2021evolution} presents several virtual environments and also an algorithm for brain and body co-design with separate description methods for the morphology and control. In contrast to these approaches, the co-design algorithm in our NCRS consists of only one neural cellular automaton.
The work on NCAs by \citet{mordvintsev2020growing} is one of the first examples of self-organizing and self-repairing systems that use differentiable models as rules for cellular automata. Before that, NCA models were typically optimized with genetic algorithms \citep{nichele2017neat}. After the work on growing NCA, other neural CAs were introduced, including methods optimized without differentiable programming. Other generative methods exist for growing 3D artifacts and functional machines \citep{sudhakaran2021growing} and for developing soft robots \citep{horibe2021regenerating}. Moreover, an NCA was used as a decentralized classifier of handwritten digits \citep{randazzo2020mnist}.
The developmental phase of our approach is similar to the generative method with NCA for level design trained with CMA-ME in the work of \citet{earle2021illuminating}. Morphology design is also present in other works \citep{hejna2021task,talamini2021criticality,kriegman2018morphological,brodbeck2015morphological}. The control phase is based on the NCA for controlling a cart-pole agent introduced by \citet{variengien}, but their NCA is trained using a reinforcement learning algorithm named deep-Q learning and the communication between NCA and environment happens in predefined cells. Our approach, NCRS, unifies these two methods by having two phases. The first phase is generative, and the second one is an agent's policy.
\section{Approach: A Unified Substrate}
The modular robots grown by the NCA consist of different types of cells such as sensors, actuators, and connecting tissue. After growth, the robot is deployed in its particular environment. Importantly, in our approach, the same NCA controls both the growth of the modular robot (Fig.~\ref{fig:teaser}a) and the robot itself (Fig.~\ref{fig:teaser}b). Therefore, it is a unified substrate for body-brain co-design and is called Neural Cellular Robot Substrate (NCRS). The architecture of NCRS is illustrated in Fig.~\ref{fig:teaser}c. When the growth process is finished, the channels responsible for defining the body modules reflect the robot's morphology, and the NCA can then observe and act in the environment through the cells assigned to the specific module types: sensors, wheels, and tissue.
The state of a cell is updated based on the cell itself and its eight surrounding neighbors, which together form a $3\times 3$ neighborhood. The values of these nine cells, over all $n$ channels, are processed by a trainable convolutional layer with 30 filters of size $3\times 3\times n$, followed by a dense layer of 30 neurons and another dense layer with $n$ neurons, one for each of the $n$ channels of the neural CA. After all cells have been computed, the result of this process is added to the previous state of the neural CA and then clipped to the range $[-5,5]$. This update is only applied to the cells that are considered ``alive'', which are the cells whose value in the body channel is greater than $0.1$, together with their neighbors. This architecture is very similar to the ones in self-classifying MNIST \citep{randazzo2020mnist} and in self-organized control of a cart-pole agent \citep{variengien}.
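For concreteness, a minimal sketch of this update network is given below; the layer sizes follow the description above, while the choice of framework, the ReLU activation, and the max-pooling implementation of the ``alive'' mask are illustrative assumptions rather than a description of the released code.
\begin{verbatim}
import tensorflow as tf

class NCRSUpdate(tf.keras.Model):
    """Per-cell update: 3x3 conv (30 filters), a per-cell dense layer of
    30 units, and an output layer with one unit per channel."""
    def __init__(self, n_channels):
        super().__init__()
        self.perceive = tf.keras.layers.Conv2D(30, 3, padding="same")
        # Per-cell dense layers expressed as 1x1 convolutions.
        self.hidden = tf.keras.layers.Conv2D(30, 1, activation="relu")
        self.out = tf.keras.layers.Conv2D(n_channels, 1)

    def call(self, state):            # state: (batch, 5, 5, n_channels)
        body = state[..., 0:1]
        # "Alive" cells have body value > 0.1; a 3x3 max-pool extends
        # the mask to their neighbors.
        alive = tf.nn.max_pool2d(tf.cast(body > 0.1, tf.float32),
                                 3, 1, "SAME")
        delta = self.out(self.hidden(self.perceive(state)))
        # Residual update applied only to alive cells, clipped to [-5, 5].
        return tf.clip_by_value(state + delta * alive, -5.0, 5.0)
\end{verbatim}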
The channels have specific roles in the neural CA, as shown in Fig.~\ref{fig:channel}. The number of channels $n$ differs between benchmark tasks because they use different numbers of sensors. The body channel indicates that there is a body part in a cell if its value is greater than $0.1$. The neighbors of a body part are allowed to update their states because they are considered ``growing''. The next channel has fixed values and works as a control flag: when the neural CA is in the developmental phase, all cells in this channel are set to zero; when it is in the control phase, they are set to one. The following channels are responsible for defining the type of the body part. The channel with the highest value specifies the body part; in the case of a tie, the first channel is selected. The order of those channels is: body/brain tissue, light/ball sensor, target area sensor (if needed), and wheel. In this way, the neural CA can define a robot as depicted in Fig.~\ref{fig:teaser}a. Then, there are the hidden channels that support the computation in the neural CA; for all benchmark types, the neural CA contains six hidden channels. Finally, there is the input/output channel, which receives the values from the sensors and gives the values to the actuators (wheels).
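This channel layout can be summarized as follows; the concrete index values are our own bookkeeping for the light chasing tasks ($n=12$), and the carrying ball to target task inserts one additional sensor channel.
\begin{verbatim}
import numpy as np

# Channel indices for the light chasing tasks (n = 12); illustrative only.
BODY, CTRL_FLAG = 0, 1
TISSUE, SENSOR, WHEEL = 2, 3, 4     # body-part type channels
HIDDEN = slice(5, 11)               # six hidden channels
IO = 11                             # input/output channel

def body_part_type(cell_state):
    """Highest-valued body-part channel defines the part; np.argmax breaks
    ties in favor of the first channel, as described above."""
    return int(np.argmax(cell_state[TISSUE:WHEEL + 1]))
\end{verbatim}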
\begin{figure}[tb]
\centering
\subcaptionbox{Initial time-step in developmental phase\label{fig:channel_1}}{%
\includegraphics[width=0.4\textwidth]{NCA/env1_es_channels_1}%
}\hspace{1cm}
\subcaptionbox{Final time-step of developmental phase\label{fig:channel_2}}{%
\includegraphics[width=0.4\textwidth]{NCA/env1_es_channels_2}%
}
\subcaptionbox{Final time-step of control phase\label{fig:channel_3}}{%
\includegraphics[width=0.4\textwidth]{NCA/env1_es_channels_3}%
}
\caption{Channels of the neural cellular automaton in different stages.}
\label{fig:channel}
\end{figure}
The initial state of the neural CA is a ``seed'': the middle cell of the grid has its body channel set to one, and everything else is zero. Fig.~\ref{fig:channel_1} illustrates this. During the developmental phase, which lasts for ten time-steps, all channels are updated except the control flag channel. The end of the developmental phase is represented in Fig.~\ref{fig:channel_2}. After development, the control phase starts. In this phase, the benchmark environment is initialized with the developed robot body. To advance one time-step in the environment, the NCA takes two time-steps to define an action after receiving observations from the sensors. The body and body-part channels become fixed, with their values defined by the robot body. This supports the neural CA by identifying the cells with body modules, such as tissues, sensors, and actuators: a cell is assigned the value one in the body channel if it contains a body part, and likewise in the channel of its specific body-part type. Fig.~\ref{fig:channel_3} shows this assignment for the identification of body parts during the control phase. The robot designed by this NCA is depicted in Fig.~\ref{fig:env1_es_body}. At the start of the control phase, the cellular activity of the hidden and input/output channels is set to zero. In the input/output channel, only the input cells are fixed, and their values come from the sensors.
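Putting the two phases together, a rollout can be sketched as below, reusing the channel indices defined earlier; the environment interface and the exact wiring of sensor readings and wheel speeds through the input/output channel are assumptions for illustration, not the released implementation.
\begin{verbatim}
import numpy as np

def rollout(nca, env, grid=5, n_channels=12, t_dev=10, t_env=100):
    """`nca` maps a (grid, grid, n_channels) state array to the next state;
    `env` is a hypothetical wrapper around one benchmark environment."""
    # Developmental phase: single-seed state, control flag left at zero.
    state = np.zeros((grid, grid, n_channels), dtype=np.float32)
    state[grid // 2, grid // 2, BODY] = 1.0
    for _ in range(t_dev):
        state = nca(state)

    # Decode the grown morphology and freeze it for the control phase.
    has_part = state[..., BODY] > 0.1
    part = np.argmax(state[..., TISSUE:WHEEL + 1], axis=-1)   # 0/1/2
    state[..., HIDDEN] = 0.0                # reset hidden channels
    state[..., IO] = 0.0                    # reset input/output channel
    state[..., CTRL_FLAG] = 1.0             # flag the control phase
    state[..., BODY] = has_part
    for k in range(3):
        state[..., TISSUE + k] = has_part & (part == k)
    frozen = state[..., :WHEEL + 1].copy()  # morphology and flag channels

    # Control phase: two NCA steps per environment step.
    sensors = has_part & (part == 1)        # sensor cells
    wheels = has_part & (part == 2)         # wheel cells
    obs, fitness, done = env.reset(has_part, part), 0.0, False
    for _ in range(t_env):
        for _ in range(2):
            state[sensors, IO] = obs[sensors]   # inputs held at sensor values
            state = nca(state)
            state[..., :WHEEL + 1] = frozen     # keep frozen channels fixed
        obs, fitness, done = env.step(state[wheels, IO])   # wheel speeds
        if done:
            break
    return fitness
\end{verbatim}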
In our neural CA, there is no noise, and all ``alive'' cells are updated at every time-step. This is because a stochastic update, or any other type of noise, would affect the development of the robot body: the same model could then produce a different type of robot body after each developmental phase.
For our experiments, the neural cellular automaton has a grid of size $5\times 5$. Therefore, it generates a body for an agent with the same size. Since this is a neural CA, the grid size does not affect the number of trainable parameters. The light chasing and light chasing with obstacle environments require just the light sensor. Therefore, the robot can have tissue, light sensor and wheel. A wheel's orientation is always vertical during the initialization of the benchmark environment. The wheel rotates upwards and downwards relative to the initial angle of the robot. The maximum speeds for each of those directions are, respectively, +1 and -1.
This takes three body part channels. With one body, one control flag, six hidden, and one input/output channels, the total number of channels is $12$. In this way, the number of trainable parameters is 4,572. In the carrying ball to target environment, the robot needs one additional sensor. Therefore, it adds one more channel. It results in a neural CA with 4,873 trainable parameters.
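These parameter counts can be checked directly from the layer sizes (weights plus biases); the short script below is only a sanity check.
\begin{verbatim}
def n_params(n_channels):
    conv = 3 * 3 * n_channels * 30 + 30   # 3x3 conv, 30 filters
    dense = 30 * 30 + 30                  # per-cell dense layer, 30 units
    out = 30 * n_channels + n_channels    # output layer, one unit per channel
    return conv + dense + out

print(n_params(12), n_params(13))         # -> 4572 4873
\end{verbatim}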
\section{Benchmark environments}
To test the capacity of controlling the developed robot, we implemented three benchmark environments: light chasing (LC), light chasing with obstacle (LCO), and carrying ball to target (CBT). They are environments where a modular robot equipped with simple light sensors and wheels can be evaluated. In those environments, the size of the playing field and the distances between the objects are scaled by the maximum size that the robot can have; thus, the larger the robot can be, the bigger the playing field. In our experiments, we use a robot and a neural cellular automaton grid of size $5\times 5$. Because the possible maximum size of the robot is 5, we chose the size of the playing field to be 60.
The fitness score is calculated as the average score of 12 runs, where the locations of the agent, light, ball, and target can differ for each run. The light or ball is initially placed in one of several predefined regions.
The benchmark environments are based on the implementation of the top-down racing environment in OpenAI Gym \citep{openaigym}. We use pybox2d, a 2D physics library for Python.
\subsection{Light chasing}
The light chasing (LC) environment is shown in Fig.~\ref{fig:teaser}b. The goal of the agent is to stay as close as possible to the light during the entire simulation. The agent starts in the middle of the playing field. One light is randomly placed in the region around one of the four corners of the playing field. The fitness score is calculated from the average distance between the center of the robot and the center of the light over all simulation time-steps, and a run is considered successful if this distance drops below 10 times the module size. The activity $s$ of the agent's light sensors is calculated as:
\begin{equation}
s=e^{-distance/playfield},
\label{eq:score}
\end{equation}
where the distance between the objects is normalized by the size of the playing field $playfield$, which is 60.
The values of the sensor activity or fitness score lie between 0 and 1, where 1 corresponds to zero distance. The values decay exponentially towards 0 as the distance increases.
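In code, the sensor activity of \eqref{eq:score} is a direct transcription:
\begin{verbatim}
import math

def sensor_activity(distance, playfield=60.0):
    """Eq. (1): activity decays exponentially with the normalized distance."""
    return math.exp(-distance / playfield)

# sensor_activity(0.0) == 1.0;  sensor_activity(60.0) is about 0.37
\end{verbatim}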
\subsection{Light chasing with obstacle}
The light chasing with obstacle (LCO) environment is a more difficult version of the light chasing one (Fig.~\ref{fig:env2}).
The robot does not have sensors to detect the obstacle; thus, its morphology plays a bigger role in this benchmark. The passage width is determined by the maximum possible size of the robot: if the robot can have up to $5\times 5$ body parts, then the passage width is the size of three body parts.
The robot is randomly initialized at the bottom of the playing field. An obstacle is procedurally generated with a target passage width and wall roughness. The obstacle has the shape of a funnel; since the robot has no sensors for it, this shape helps guide the robot towards the passage, depending on its body. The passage is randomly located on the horizontal axis and fixed on the vertical axis. The light is at the top, beyond the obstacle. The initial light location is drawn from four predefined regions on the horizontal axis: left, center-left, center-right, and right. The fitness and success definitions are the same as in the light chasing task.
\subsection{Carrying ball to target area}
Among the three benchmark environments, the task to carry a ball to a target area is the most difficult one (Fig.~\ref{fig:env3}).
For the control phase, the robot needs to move towards the ball and then move to the target area without losing the ball during transport. For the developmental phase, the body of the robot needs to be adequate to push or kick the ball to the target area, and the sensors of each type need to be properly placed so that the robot can successfully locate the ball and the target area. The agent is located at the bottom in a random horizontal location. The ball is located in the middle of the vertical axis of the playing field, in one of the same four predefined horizontal regions as in the light chasing with obstacle environment. The target is located at the top and its location on the horizontal axis is randomly defined. Besides the sensor for the ball (or light for the other two environments), there is a new sensor type that measures the distance to the center of the target area (following \eqref{eq:score}).
The fitness score of this environment is the average of two terms: the distance between the robot and the ball, and the distance between the ball and the center of the target area, both normalized using \eqref{eq:score}. Success in this task means carrying the ball to the target, i.e., bringing the ball within ten times the robot's module size of the target center.
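For concreteness, the per-time-step score described above can be computed as in the short sketch below (our reading of the description, not the released code):
\begin{verbatim}
import math

def cbt_score(d_robot_ball, d_ball_target, playfield=60.0):
    """Average of the two distances, each normalized with Eq. (1)."""
    return 0.5 * (math.exp(-d_robot_ball / playfield)
                  + math.exp(-d_ball_target / playfield))
\end{verbatim}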
\section{Training methods}
We have chosen to use derivative-free optimization methods because, owing to its variable number of inputs and outputs, the NCRS would need some adjustments to be trained with deep reinforcement learning \citep{variengien}. The chosen methods are the covariance matrix adaptation evolution strategy (CMA-ES) \citep{hansen1996cmaes} and covariance matrix adaptation MAP-Elites (CMA-ME) \citep{fontaine2020cmame}. The latter is used to add quality diversity to the former, broadening the exploration of robot designs. For both training methods, we use the library CMA-ES/pycma \citep{hansen2019cma}. With two training methods and three benchmark tasks, there are six different combinations in total. Because of the computational demands, each of these combinations was trained only once.
The training process is performed entirely on a CPU. To speed up evaluation times, robots with a design that would not work properly in the environment are not simulated. For the two light chasing environments, robots must have at least one light sensor and two actuators. For the carrying ball to target, they must have one sensor of each type and two actuators. The fitness scores of the failed designs are calculated according to the number of correct parts they have. For each correct body part, the fitness score increases by $0.01$.
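A sketch of this filtering step is given below; how exactly a ``correct body part'' is counted is not fully specified here, so the counting rule in the sketch is an illustrative assumption rather than the released implementation.
\begin{verbatim}
def evaluate_design(n_light_sensors, n_wheels, n_target_sensors=None):
    """Return None for a valid design (to be simulated); otherwise return a
    fallback fitness of 0.01 per correct body part (counting rule assumed)."""
    required = [(n_light_sensors, 1), (n_wheels, 2)]
    if n_target_sensors is not None:          # carrying ball to target task
        required.append((n_target_sensors, 1))
    if all(count >= minimum for count, minimum in required):
        return None                           # valid: run the full simulation
    correct = sum(min(count, minimum) for count, minimum in required)
    return 0.01 * correct
\end{verbatim}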
To compare the quality diversity of CMA-ES and CMA-ME, we use the percentage of cells or feature configurations filled, and QD-score. They measure quality and diversity of the elites \citep{pugh2016quality}. The QD-score is calculated by summing the fitness score of all elites and dividing it by the total number of possible feature configurations. Moreover, CMA-ES and CMA-ME have their elites stored, even though CMA-ES does not use elites during training.
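Both quantities are straightforward to compute from the stored elites; in the sketch below, the dictionary mapping a feature configuration to the fitness of its elite is an assumed data structure.
\begin{verbatim}
def qd_metrics(archive, n_possible_configs):
    """Fraction of cells filled and QD-score over the elite archive."""
    cells_filled = len(archive) / n_possible_configs
    qd_score = sum(archive.values()) / n_possible_configs
    return cells_filled, qd_score
\end{verbatim}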
\subsection{Covariance matrix adaptation evolution strategy}
CMA-ES is one of the most effective derivative-free numerical optimization methods for continuous domains \citep{fontaine2020cmame}. CMA-ES runs for 20,000 generations for all environments. The initial mean is $0.0$ for all dimensions, and the initial coordinate-wise standard deviation (step size) is $0.01$. The population size, i.e.\ the number of solutions acquired to update the covariance matrix, is 112. This number was selected according to the number of available threads in the machine used for training, which has 56 threads at 2.70GHz.
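With the CMA-ES/pycma library, this setup corresponds roughly to the loop below; the \texttt{evaluate} function standing in for a full NCRS rollout is a placeholder, not part of the library.
\begin{verbatim}
import cma
import numpy as np

def evaluate(weights):
    # Placeholder for a full NCRS rollout on the benchmark environment.
    raise NotImplementedError

N_PARAMS = 4572                     # light chasing tasks (12 channels)
es = cma.CMAEvolutionStrategy(np.zeros(N_PARAMS),  # initial mean 0.0
                              0.01,                # initial step size
                              {'popsize': 112})
for generation in range(20000):
    solutions = es.ask()
    # pycma minimizes, so the (maximized) fitness score is negated.
    es.tell(solutions, [-evaluate(w) for w in solutions])
\end{verbatim}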
\subsection{Covariance matrix adaptation MAP-Elites}
CMA-ME is a variant of CMA-ES with the added benefit of quality diversity from MAP-Elites \citep{mouret2015mapelites}. The main change with respect to CMA-ES is that several CMA-ES emitters are trained in a cycle. Additionally, a feature map stores one elite for each possible feature configuration. Invalid body designs do not produce an elite. For a valid robot design, the number of sensors, actuators, and body parts are used as features. If the feature configuration has no elite yet, or the current solution is better than the elite stored in the feature map, then the current solution is assigned to its feature configuration.
We use a slightly modified version of the CMA-ME with improvement emitters \citep{fontaine2020cmame}. We only restart an emitter when the number of elites is greater than the number of emitters and it is stuck for more than 500 generations. Being stuck means that the emitter could not find a better elite or an elite could not be placed into an empty feature configuration in the map. When an emitter restarts, the mean used to initialize the CMA-ES is a random elite in the map.
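The bookkeeping behind the feature map and the ``stuck'' test can be sketched as follows; the data structures and the feature tuple are our own shorthand, while the keep-the-best rule and the improvement test follow the description above.
\begin{verbatim}
def try_insert(archive, solution, fitness, n_sensors, n_actuators, n_parts):
    """Keep the best solution per feature configuration. Returns True when
    the emitter made progress (a new cell filled or an elite improved)."""
    key = (n_sensors, n_actuators, n_parts)
    if key not in archive or fitness > archive[key][0]:
        archive[key] = (fitness, solution)
        return True
    return False   # 500 consecutive generations without progress -> restart
\end{verbatim}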
CMA-ME is executed for 60,000 generations for all environments, except for the light chasing environment, which ran for 67,446 generations: we stopped that longer run early because its best fitness score was already better than the one obtained with CMA-ES. The initial mean and the initial coordinate-wise standard deviation are the same as for CMA-ES for all emitters. The population size is 128 because the CMA-ME training was executed on a computer with 128 threads at 2.9GHz.
\section{Results}
The training process took around 2.5 days for optimizing the NCA with CMA-ES. The evolution with CMA-ME took around 5.5 days. It is important to note that the two runs do not share the same machine configuration, population size, or maximum number of generations.
Fig.~\ref{fig:env_es_body} shows the robot designs with the best fitness scores for each training method and task. Almost all robots for the LC and CBT tasks fill the entire $5\times 5$ grid of cells. Those environments place no environmental constraints (i.e., no obstacle) on the robot size; therefore, we infer that the full grid of modules is easier to design and provides more computational resources for controlling the robot. Their fitness scores are shown in Table~\ref{tab:train_fitness}. The results indicate that CMA-ES and CMA-ME reach almost the same fitness scores after training. However, CMA-ME runs fewer generations per emitter for its 15 emitters (4,000 generations per emitter). It is possible that, with 20,000 generations per emitter, CMA-ME could reach a better final performance than CMA-ES, and with more diversity. The history of the maximum fitness score per generation is depicted in Fig.~\ref{fig:loss}.
\begin{table}
\centering
\caption{Best fitness score after training in the tasks of light chasing (LC), light chasing with obstacle (LCO) and carrying ball to target (CBT)}
\label{tab:train_fitness}
\begin{tabular}{c|c|c|}
\cline{2-3}
& CMA-ES & CMA-ME \\ \hline
\multicolumn{1}{|c|}{LC} & 0.58274 & 0.61481 \\ \hline
\multicolumn{1}{|c|}{LCO} & 0.49295 & 0.47723 \\ \hline
\multicolumn{1}{|c|}{CBT} & 0.48445 & 0.47884 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[tpb]
\centering
\subcaptionbox{CMA-ES - LC\label{fig:env1_es_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env1_es_body}%
}\hfill
\subcaptionbox{CMA-ES - LCO\label{fig:env2_es_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env2_es_body}%
}\hfill
\subcaptionbox{CMA-ES - CBT\label{fig:env3_es_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env3_es_body}%
}\hfill
\subcaptionbox{CMA-ME - LC\label{fig:env1_cmame_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env1_cmame_body}%
}\hfill
\subcaptionbox{CMA-ME - LCO\label{fig:env2_cmame_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env2_cmame_body}%
}\hfill
\subcaptionbox{CMA-ME - CBT\label{fig:env3_cmame_body}}{%
\includegraphics[width=0.14\textwidth]{Results/env3_cmame_body}%
}
\caption{Robot designs with best fitness scores for the tasks of light chasing (LC), light chasing with obstacle (LCO) and carrying ball to target (CBT).}
\label{fig:env_es_body}
\end{figure}
\begin{figure}[tpb]
\centering
\subcaptionbox{Light chasing\label{fig:loss_env1}}{%
\includegraphics[width=0.3\textwidth]{Figures/loss_env1.png}%
}\hfill
\subcaptionbox{Light chasing with obstacle\label{fig:loss_env2}}{%
\includegraphics[width=0.3\textwidth]{Figures/loss_env2.png}%
}\hfill
\subcaptionbox{Carrying ball to target\label{fig:loss_env3}}{%
\includegraphics[width=0.3\textwidth]{Figures/loss_env3.png}%
}
\caption{Maximum fitness score through generations.}
\label{fig:loss}
\end{figure}
The elites were saved for both CMA-ES and CMA-ME, so we can compare their quality diversity. In Table~\ref{tab:elites}, the percentage of cells filled and the QD-scores of all six method and task combinations are presented. It is noticeable that CMA-ME provides much more quality diversity, as shown by its larger number of filled feature configurations and its higher QD-score. This can be visualized in Fig.~\ref{fig:elites}, which shows a small part of the elites produced for the light chasing task with CMA-ES and CMA-ME. Even this subset reflects the two quality diversity measurements: more cells are filled, and more cells have high fitness scores.
\begin{table}
\centering
\caption{Elites stored during training for the light chasing (LC), light chasing with obstacle (LCO) and carrying ball to target (CBT)}
\label{tab:elites}
\begin{tabular}{c|cc|cc|}
\cline{2-5}
& \multicolumn{2}{c|}{CMA-ES} & \multicolumn{2}{c|}{CMA-ME} \\ \cline{2-5}
& \multicolumn{1}{c|}{Cells filled} & QD-score & \multicolumn{1}{c|}{Cells filled} & QD-score \\ \hline
\multicolumn{1}{|c|}{LC} & \multicolumn{1}{c|}{67.58\%} & 0.29530 & \multicolumn{1}{c|}{89.57\%} & 0.40152 \\ \hline
\multicolumn{1}{|c|}{LCO} & \multicolumn{1}{c|}{17.88\%} & 0.06069 & \multicolumn{1}{c|}{61.80\%} & 0.19841 \\ \hline
\multicolumn{1}{|c|}{CBT} & \multicolumn{1}{c|}{58.10\%} & 0.22957 & \multicolumn{1}{c|}{93.82\%} & 0.37996 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/map_elites.png}
\caption{Selected elites trained in the light chasing environment. Those modules were selected because they are the most different between CMA-ES and CMA-ME. Axes and subplots indicate the number of components.}
\label{fig:elites}
\end{figure}
To test the success of our six trained models, we ran the simulation 100 times; the percentage of successful runs is presented in Table~\ref{tab:success}. Some examples of these simulations are visualized in Fig.~\ref{fig:env_last_step}. The model trained with CMA-ES for the light chasing task achieved a 92\% success rate with a fitness score of $0.58274$, while the one trained with CMA-ME had 75\% success and a fitness score of $0.61481$. This means that a higher fitness score does not necessarily indicate a more successful model for reaching the light. This can be observed in Fig.~\ref{fig:env1_es_1}-\subref{fig:env1_es_4} for CMA-ES, and Fig.~\ref{fig:env1_cmame_1}-\subref{fig:env1_cmame_4} for CMA-ME. We can see in Fig.~\ref{fig:env1_cmame_3} that the light is at the top-right corner and the robot goes to the top-left corner. This explains the 75\% success rate of this NCRS, because the light is at the top-right corner in 25 out of the 100 simulations. This model learned to move quickly to the light in the other three corners, but it misses the one in the top-right corner. For the light chasing with obstacle task, the reason for the higher success rate of the CMA-ES robot is that it is much thinner than the CMA-ME robot, and therefore passes through the passage more easily. If we define success in LCO as passing the center of the body through the passage, then CMA-ES and CMA-ME had success rates of 77\% and 45\%, respectively. The NCRS did not learn to move towards the light after passing through the obstacle; it just moves forward. Because of the difficulty of this task, we consider the results for LCO partially successful overall and successful in terms of body design. Fig.~\ref{fig:env2_es_1}-\subref{fig:env2_es_4} and Fig.~\ref{fig:env2_cmame_1}-\subref{fig:env2_cmame_4} show this. The task of carrying a ball to a target had no successful trained model. The robots for both training methods just move forward and only by chance move the ball to the target. This can be seen in Fig.~\ref{fig:env3_es_1}-\subref{fig:env3_es_4} and Fig.~\ref{fig:env3_cmame_1}-\subref{fig:env3_cmame_4}.
Fig.~\ref{fig:channel} shows how the channels progress through time. The hidden channels behave very differently in the developmental and control phases, which we attribute mainly to the control flag channel that regulates these two phases. We can observe the distinct patterns that emerge at their final time-steps. From the initial "seed" state to the state in Fig.~\ref{fig:channel_2}, we can see how the NCA behaves during the 10 time-steps of the developmental phase. In Fig.~\ref{fig:channel_3}, we can see the end of the control phase after its 200 time-steps (100 time-steps in the environment). Its behavior remains interpretable because the hidden and input/output channels were set to zero at the beginning of the control phase, and the body, control flag, tissue, sensor, and actuator channels were fixed according to the morphology of the robot.
\begin{table}
\centering
\caption{Testing success percentage over 100 runs for the tasks of light chasing (LC), light chasing with obstacle (LCO) and carrying ball to target (CBT)}
\label{tab:success}
\begin{tabular}{c|c|c|}
\cline{2-3}
& CMA-ES & CMA-ME \\ \hline
\multicolumn{1}{|c|}{LC} & 92\% & 75\% \\ \hline
\multicolumn{1}{|c|}{LCO} & 20\% & 8\% \\ \hline
\multicolumn{1}{|c|}{CBT} & 1\% & 2\% \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\subcaptionbox{CMA-ES - LC \#1\label{fig:env1_es_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_es_1}%
}
\subcaptionbox{CMA-ES - LC \#2\label{fig:env1_es_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_es_2}%
}
\subcaptionbox{CMA-ES - LC \#3\label{fig:env1_es_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_es_3}%
}
\subcaptionbox{CMA-ES - LC \#4\label{fig:env1_es_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_es_4}%
}\\
\subcaptionbox{CMA-ME - LC \#1\label{fig:env1_cmame_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_cmame_1}%
}
\subcaptionbox{CMA-ME - LC \#2\label{fig:env1_cmame_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_cmame_2}%
}
\subcaptionbox{CMA-ME - LC \#3\label{fig:env1_cmame_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_cmame_3}%
}
\subcaptionbox{CMA-ME - LC \#4\label{fig:env1_cmame_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env1_cmame_4}%
}\\
\subcaptionbox{CMA-ES - LCO \#1\label{fig:env2_es_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_es_1}%
}
\subcaptionbox{CMA-ES - LCO \#2\label{fig:env2_es_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_es_2}%
}
\subcaptionbox{CMA-ES - LCO \#3\label{fig:env2_es_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_es_3}%
}
\subcaptionbox{CMA-ES - LCO \#4\label{fig:env2_es_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_es_4}%
}\\
\subcaptionbox{CMA-ME - LCO \#1\label{fig:env2_cmame_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_cmame_1}%
}
\subcaptionbox{CMA-ME - LCO \#2\label{fig:env2_cmame_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_cmame_2}%
}
\subcaptionbox{CMA-ME - LCO \#3\label{fig:env2_cmame_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_cmame_3}%
}
\subcaptionbox{CMA-ME - LCO \#4\label{fig:env2_cmame_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env2_cmame_4}%
}\\
\subcaptionbox{CMA-ES - CBT \#1\label{fig:env3_es_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_es_1}%
}
\subcaptionbox{CMA-ES - CBT \#2\label{fig:env3_es_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_es_2}%
}
\subcaptionbox{CMA-ES - CBT \#3\label{fig:env3_es_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_es_3}%
}
\subcaptionbox{CMA-ES - CBT \#4\label{fig:env3_es_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_es_4}%
}\\
\subcaptionbox{CMA-ME - CBT \#1\label{fig:env3_cmame_1}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_cmame_1}%
}
\subcaptionbox{CMA-ME - CBT \#2\label{fig:env3_cmame_2}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_cmame_2}%
}
\subcaptionbox{CMA-ME - CBT \#3\label{fig:env3_cmame_3}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_cmame_3}%
}
\subcaptionbox{CMA-ME - CBT \#4\label{fig:env3_cmame_4}}{%
\includegraphics[width=0.16\textwidth]{Results/env3_cmame_4}%
}
\caption{Last time-step in which the robot is still fully visible, for the best NCRS trained with CMA-ES and CMA-ME in the environments for light chasing (LC), light chasing with obstacle (LCO), and carrying ball to target (CBT).}
\label{fig:env_last_step}
\end{figure}
\section{Discussion and conclusion}
Body-brain co-evolution is a challenging task \citep{bhatia2021evolution}. In this work, we developed three benchmark tasks for robot co-design and introduced a novel method that uses a unified substrate as a genome with its own rules. This substrate is a single neural cellular automaton that both develops and controls a modular robot. This novelty opens up several possibilities in open-ended evolution \citep{stanley2019open}, especially because body and brain can co-evolve up to the limits of the capacity of the artificial neural network. Because it only defines local rules in the CA, the NCRS has the advantage of scalability. We also expect curriculum learning to be important for complexifying the evolving robot \citep{bengio2009curriculum}; for example, the number of body parts and the dimensions can increase over the generations. Evolution in multi-agent environments may also be applied, as in PolyWorld \citep{yaeger1994computational}. Another direction is to merge the two separate phases into a single one, which would let us observe how development and control emerge together and how well the resulting modular robots perform.
The presented results were successful for the LC task, but our trained models showed failures as the difficulty of the tasks increased. This may be addressed by adjusting the fitness score so that it better reflects the success conditions, as well as by applying curriculum learning \citep{bengio2009curriculum}. In future work, we plan to apply our method in Evolution Gym \citep{bhatia2021evolution}, or in a modified version of VoxCraft \citep{liu2020voxcraft} for 3D soft robots. Moreover, we aim to train and test our approach for self-repair and robustness to noise.
\section*{Acknowledgment}
This work was partially funded by the Norwegian Research Council (NFR) through their IKTPLUSS research and innovation action under the project Socrates (grant agreement 270961). We thank Henrique Galvan Debarba for his thoughtful comments about the text. We also thank Joachim Winther Pedersen, Djordje Grbic, Miguel González Duque, and Rasmus Berg Palm for the helpful discussions during the implementation of the experiments.
\FloatBarrier
\bibliographystyle{plainnat}
\section*{Acknowledgments}
\vspace{-0.1in}
We are grateful to Pedro Schwaller, Daniel Stolarski, and Andreas Weiler for providing us with the code to run the dark-sector coupling as implemented for \cite{Schwaller:2015gea}. We also thank Andrew Larkoski, Matthew Low, Duff Neill, Jesse Thaler, Scott Thomas, Tien-Tien Yu, and Daniel Whiteson for useful discussions. TC is supported by an LHC Theory Initiative Postdoctoral Fellowship, under the National Science Foundation grant PHY-0969510.
\vspace{0in}
\onecolumngrid
\vspace{0.3in}
\twocolumngrid
\def\bibsection{}
\bibliographystyle{utphys}
\section{Introduction}
Matrix factorization is an important machine learning technique
for imputing missing values and analyzing hidden structures
in matrices.
With matrix factorization, a matrix is modeled by the product of two low-rank matrices, assuming that the rank of the given matrix is low.
Matrix factorization has been used in
a wide variety of applications, such as collaborative filtering~\cite{bokde2015matrix,mnih2008probabilistic,salakhutdinov2008bayesian,koren2009matrix},
text analysis~\cite{dumais2004latent,hofmann2001unsupervised},
bioinformatics~\cite{brunet2004metagenes},
and spatio-temporal data analysis~\cite{kimura2014spatio,takeuchi2017structurally}.
However,
when the number of observations is not large enough,
existing matrix factorization methods fail to
impute the missing values.
In some applications, only a limited number of observations are available.
For example, a newly launched recommender system
only has histories for small numbers of users and items,
and spatio-temporal data are not accumulated in the beginning when a new region is analyzed.
Recently,
few-shot learning and meta-learning have attracted attention for
learning from few labeled data~\cite{schmidhuber:1987:srl,bengio1991learning,finn2017model,vinyals2016matching,snell2017prototypical}.
Meta-learning methods learn how to learn from a small amount of
labeled data in various tasks,
and use the learned knowledge in unseen tasks.
Existing meta-learning methods assume that attributes are the same across all tasks.
Therefore, they are inapplicable to matrix factorization
when the rows or columns are not shared across matrices,
or the matrix sizes are different across matrices.
In this paper, we propose a meta-learning method for matrix factorization,
which can learn from various matrices without shared rows or columns,
and use the learned knowledge for the missing value imputation of unseen matrices.
The meta-training and meta-test matrices contain the missing values,
and their sizes can be different from each other.
With the proposed method, the prior distributions
of two factorized matrices are modeled by a neural network
that takes a matrix as input.
We use exchangeable matrix layers~\cite{hartford2018deep}
and permutation invariant networks~\cite{zaheer2017deep} for the neural network,
with which we encode the information of the given matrix
into the priors.
For each matrix, its factorized matrices are adapted to the given matrix by
maximum a posteriori (MAP) estimation using the gradient descent method.
The posteriors are calculated using the neural network-based priors
and the observations on the given matrix based on Bayes' theorem.
Since the neural network is shared across all matrices,
we can learn shared hidden structure in various meta-training matrices,
and use it for unseen meta-test matrices.
We meta-learn the neural networks such that
the missing value imputation error is minimized
when the MAP estimated factorized matrices are used for imputation.
Since the gradient descent steps for the MAP estimation are differentiable,
we can backpropagate the missing value imputation error
through the MAP estimation for updating the neural network parameters in the priors.
For each meta-training epoch based on the episodic training framework~\cite{finn2017model},
training and test matrices are randomly generated from the meta-training matrices,
and the test matrix imputation error of
the factorized matrices adapted to the training matrix
is evaluated and backpropagated.
Figure~\ref{fig:framework} shows the meta-learning framework of our proposed method.
Although we explain the proposed method with matrix imputation,
it is straightforwardly extended for tensor imputation
using exchangeable tensor layers~\cite{hartford2018deep}
and tensor factorization~\cite{kuleshov2015tensor}
instead of exchangeable matrix layers and matrix factorization.
\begin{figure}[t!]
\centering
\includegraphics[width=26em]{images/framework.png}
\caption{Our meta-learning framework: We are given multiple meta-training matrices. For each meta-training epoch, first, we generate a matrix from randomly selected rows and columns of a randomly selected matrix from the meta-training matrices. Second, we split elements in the matrix into training and test matrices. Third, factorized matrices' priors are inferred by a neural network. Fourth, we adapt the factorized matrices to the training matrix by maximizing the posterior with a gradient descent method. Fifth, we impute the missing values by multiplying the adapted factorized matrices. Sixth, we calculate the missing value imputation error using the test matrix and backpropagate it to update neural network's parameters.}
\label{fig:framework}
\end{figure}
The following are our major contributions:
\begin{enumerate}
\item We propose a meta-learning method for matrix imputation that can meta-learn from matrices without shared rows or columns.
\item We design a neural network to generate prior distributions of factorized matrices with different sizes, which is meta-trained such that the test matrix imputation performance improves when the factorized matrices are adapted to the training matrix based on the MAP estimation.
\item In our experiments using real-world data sets, we demonstrate that the proposed method achieves better matrix imputation performance when meta-training data contain matrices that are related to meta-test matrices.
\end{enumerate}
\begin{comment}
The remainder of this paper is organized as follows.
In Section~\ref{sec:related},
we briefly review related work.
In Section~\ref{sec:proposed},
we define our problem formulation, propose a neural network-based model that outputs factorized matrices adapted to observations, and present its training procedure.
Section~\ref{sec:experiments} experimentally demonstrates the effectiveness
of the proposed method.
Finally, we present concluding remarks and discuss future work in Section~\ref{sec:conclusion}.
\end{comment}
\section{Related work}
\label{sec:related}
Many meta-learning or few-shot learning methods have been proposed~\cite{schmidhuber:1987:srl,bengio1991learning,ravi2016optimization,andrychowicz2016learning,vinyals2016matching,snell2017prototypical,bartunov2018few,finn2017model,li2017meta,kimbayesian,finn2018probabilistic,rusu2018meta,yao2019hierarchically,edwards2016towards,garnelo2018conditional,kim2019attentive,hewitt2018variational,bornschein2017variational,reed2017few,rezende2016one}.
These existing methods cannot learn from matrices without shared rows or columns.
With probabilistic meta-learning methods~\cite{finn2018probabilistic,kimbayesian},
the prior of model parameters is meta-trained,
where they require that the numbers of parameters are the same across tasks.
Therefore, they are inapplicable for meta-learning nonparametric models,
including matrix factorization,
where the number of parameters can grow with the sample size.
In contrast, the proposed method meta-trains a neural network
that generates the prior of a task-specific model,
which enables us to meta-learn nonparametric models.
The proposed method is related
to model-agnostic meta-learning~\cite{finn2017model} (MAML)
in the sense that both methods backpropagate a loss
through gradient descent steps.
MAML learns the initial values of model parameters
such that the performance improves when all the parameters are adapted to new tasks.
The proposed method is more efficient than MAML
since MAML requires a second-order gradient computation on the whole neural network
while the proposed method requires it only on the factorized matrices.
In our model, we explicitly incorporate matrix factorization procedures
by the gradient descent method,
which has been successfully used for a wide variety of matrix imputation applications.
The proposed method is also related to encoder-decoder based
meta-learning methods~\cite{xu2020metafun},
such as neural processes~\cite{garnelo2018conditional}.
The encoder-decoder based meta-learning methods
obtain a representation of few observations by an encoder
and use it in predictions by a decoder,
where the encoder and decoder are modeled by neural networks.
Similarly, the proposed method uses a neural network to
obtain factorized matrices for predicting the missing values.
Their differences are that
the proposed method uses a neural network designed for matrices with missing values, and
has gradient descent steps for adapting factorized matrices to observations.
Adapting parameters by directly fitting them to observations
is effective for meta-learning~\cite{lee2019meta,iwata2020few}
since it is difficult to output the adapted parameters
only by neural networks for a wide variety of observations.
The proposed method uses exchangeable matrix layers~\cite{hartford2018deep}
as its components.
The exchangeable matrix layers have not been used for meta-learning.
Heterogeneous meta-learning~\cite{iwata2020meta} can learn from multiple tasks
without shared attributes. However, it cannot handle missing values,
and is inapplicable to matrix imputation.
Collective matrix factorization~\cite{singh2008relational,bouchard2013convex,yang2015robust}
simultaneously factorizes multiple matrices,
where information
in a matrix can be transferred to other matrices for factorization.
Collective matrix factorization assumes that
some of the columns and/or rows are shared across matrices.
Transfer learning methods for matrix factorization that do not assume shared columns or rows
have been proposed~\cite{iwata2015cross}.
With such transfer learning methods,
target matrices are required in the training phase
for transferring knowledge from source to target matrices.
On the other hand,
the proposed method does not need to use target matrices for training.
Several few-shot learning methods for recommender systems
have been proposed~\cite{vartak2017meta,li2020few} to
tackle the cold start problem,
where the histories of new users or new items are insufficiently accumulated.
These methods use auxiliary information,
such as user attributes and item descriptions.
In contrast, the proposed method does not use auxiliary information.
\section{Proposed method}
\label{sec:proposed}
\subsection{Problem formulation}
Suppose that we are given $D$ matrices
$\mathcal{X}=\{\vec{X}_{d}\}_{d=1}^{D}$ in the meta-training phase,
where $\vec{X}_{d}\in\mathbb{R}^{N_{d}\times M_{d}}$
is the meta-training matrix in the $d$th task,
and $x_{dnm}$ is its $(n,m)$th element.
The sizes of the meta-training matrices can be different across tasks,
$N_{d}\neq N_{d'}$, $M_{d}\neq M_{d'}$,
and rows or columns are not shared across matrices.
The meta-training matrices can contain missing values.
In this case, we are additionally given binary indicator matrix
$\vec{B}_{d}\in\{0,1\}^{N_{d}\times M_{d}}$,
where $b_{dnm}=1$ if the $(n,m)$th element is observed,
and $b_{dnm}=0$ otherwise.
In the meta-test phase,
we are given a meta-test matrix with missing values,
$\vec{X}_{*}\in\mathbb{R}^{N_{*}\times M_{*}}$,
where the missing values are specified by indicator matrix
$\vec{B}_{*}\in\{0,1\}^{N_{*}\times M_{*}}$.
Our aim is to improve the missing value imputation performance on the meta-test matrix.
\subsection{Model}
\label{sec:model}
For each task, our model outputs
factorized matrices $\vec{U}(\vec{X},\vec{B})\in\mathbb{R}^{N\times K}$
and $\vec{V}(\vec{X},\vec{B})\in\mathbb{R}^{M\times K}$
given matrix with missing values $\vec{X}\in\mathbb{R}^{N\times M}$
and its indicator matrix $\vec{B}\in\{0,1\}^{N\times M}$.
In the meta-training phase,
$\vec{X}$ and $\vec{B}$ are generated from one of
meta-training matrices $\{\vec{X}_{d}\}_{d=1}^{D}$
as described in Section~\ref{sec:training}.
In the meta-test phase,
$\vec{X}$ and $\vec{B}$ correspond to
meta-test matrix $\vec{X}_{*}$ and its indicator matrix $\vec{B}_{*}$.
Figure~\ref{fig:model} illustrates our model.
With our model, the factorized matrices are estimated
by maximizing the posterior probability:
\begin{align}
\vec{U}(\vec{X},\vec{B}),\vec{V}(\vec{X},\vec{B})
= \arg\max_{\vec{U},\vec{V}} \left[\log p(\vec{X}|\vec{U},\vec{V})
+ \log p_{\vec{X}}(\vec{U},\vec{V}|\mathcal{X})\right],
\label{eq:map}
\end{align}
where the first term is the likelihood,
\begin{align}
\log p(\vec{X}|\vec{U},\vec{V})\propto -\sum_{n,m=1}^{N,M}b_{nm}(\vec{u}_{n}^{\top}\vec{v}_{m}-x_{nm})^{2},
\end{align}
the second term is the prior,
$\vec{U}=[\vec{u}_{1},\cdots,\vec{u}_{N}]^{\top}$,
$\vec{V}=[\vec{v}_{1},\cdots,\vec{v}_{M}]^{\top}$,
and $\top$ is the transpose.
We use subscript $\vec{X}$ in prior $p_{\vec{X}}$ to indicate that it is task-specific.
The prior is conditioned on meta-training matrices $\mathcal{X}$
since we meta-learn it from $\mathcal{X}$.
We assume a Gaussian prior with mean $\vec{u}_{n}^{(0)}$ ($\vec{v}_{m}^{(0)}$)
and variance $\lambda^{-1}$:
\begin{align}
\log p_{\vec{X}}(\vec{U},\vec{V}|\mathcal{X})\propto
-\lambda\left(
\sum_{n=1}^{N}\parallel\vec{u}_{n}-\vec{u}^{(0)}_{n}\parallel^{2}
+\sum_{m=1}^{M}\parallel\vec{v}_{m}-\vec{v}^{(0)}_{m}\parallel^{2}
\right).
\end{align}
We generate the task-specific prior mean values $\vec{u}_{n}^{(0)}, \vec{v}_{m}^{(0)}$
with different sizes using a neural network.
The neural network is shared across different tasks,
which enables us to extract knowledge in the meta-training matrices,
and use the knowledge for unseen tasks.
When $\lambda=0$, matrix factorization
is independently performed for each matrix,
and it cannot meta-learn useful knowledge for factorization.
\begin{figure*}[t!]
\centering
\includegraphics[width=35em]{images/model.png}
\caption{Our model: Matrix with missing values $\vec{X}$ and its indicator matrix $\vec{B}$ are the input. First, representations $\vec{Z}$ for each element in $\vec{X}$ are obtained by exchangeable matrix layers in Eq.~(\ref{eq:z}). Second, the mean of the prior of factorized matrices $\vec{U}^{(0)}$ and $\vec{V}^{(0)}$ is calculated by transforming the averages of representations $\vec{Z}$ over columns and rows by permutation invariant neural networks in Eq.~(\ref{eq:uv0}).
Third, factorized matrices $\vec{U}(\vec{X},\vec{B})$ and $\vec{V}(\vec{X},\vec{B})$ are estimated by maximizing the posterior probability using the gradient descent steps in Eqs.~(\ref{eq:u},\ref{eq:v}). For each gradient descent step, input $\vec{X}$ and $\vec{B}$ and prior means $\vec{U}^{(0)}$ and $\vec{V}^{(0)}$ are used as well as previous estimates $\vec{U}^{(t-1)}$ and $\vec{V}^{(t-1)}$. Our model can take matrices with different sizes as input, and output their factorized matrices.}
\label{fig:model}
\end{figure*}
For modeling the prior, first,
we obtain representations $\vec{Z}\in\mathbb{R}^{N\times M\times C}$
of given matrix $\vec{X}$ using a neural network,
where the matrix's $(n,m)$th element is represented
by vector $\vec{z}_{nm}\in\mathbb{R}^{C}$.
Our model uses exchangeable matrix layers~\cite{hartford2018deep}:
\begin{align}
z^{(\ell+1)}_{nmc}&=\sigma\Biggl(\sum_{c'=1}^{C^{(\ell)}}
\Bigl(w_{c'c1}^{(\ell)}b_{nm}z_{nmc'}^{(\ell)}
+w_{c'c2}^{(\ell)}\frac{\sum_{n'=1}^{N} b_{n'm}z_{n'mc'}^{(\ell)}}{\sum_{n'=1}^{N} b_{n'm}}
+w_{c'c3}^{(\ell)}\frac{\sum_{m'=1}^{M} b_{nm'}z_{nm'c'}^{(\ell)}}{\sum_{m'=1}^{M} b_{nm'}}
\nonumber\\
&+w_{c'c4}^{(\ell)}\frac{\sum_{n',m'=1}^{N,M} b_{n'm'}z_{n'm'c'}^{(\ell)}}{\sum_{n',m'=1}^{N,M} b_{n'm'}}
+w_{c5}^{(\ell)}\Bigr)\Biggr),
\label{eq:z}
\end{align}
where
$z^{(\ell)}_{nmc}\in\mathbb{R}$ is the $c$th channel of the representation
of the $(n,m)$ element in the $\ell$th layer,
$w_{c'ci}^{(\ell)}\in\mathbb{R}$ is a weight parameter in the $\ell$th layer
to be trained
for the influence of channel $c'$ on channel $c$ in the next layer,
$\sigma$ is an activation function,
and $C^{(\ell)}$ is the channel size of the $\ell$th layer.
In the first layer, the given matrix is used for representation
$\vec{z}^{(0)}_{nm}=x_{nm}\in\mathbb{R}$, where
$x_{nm}$ is the value of the $(n,m)$ element of given matrix $\vec{X}$.
The representation in the last layer
is used as final representation $\vec{Z}=\vec{Z}^{(L)}$,
where $L$ is the number of layers,
and $C^{(L)}=C$.
In the last layer, activation function $\sigma$ is omitted.
The first term in Eq.~(\ref{eq:z}) calculates the influence
from the same element,
the second term calculates the influence from the elements of the same column,
the third term calculates the influence from the elements of the same row,
the fourth term calculates the influence from all the elements,
and the fifth term is the bias.
The influences are averaged over the observed elements using indicator matrix $\vec{B}$.
The exchangeable matrix layer is permutation
equivariant, where the output is the same values
across all the row- and column-wise permutations of the input.
With the exchangeable matrix layers,
we can obtain representations for each element
considering the whole matrix.
The exchangeable matrix layers can handle matrices with different sizes
since their parameters $w^{(\ell)}_{c'ci}$ do not depend on the input matrix size.
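A minimal PyTorch-style sketch of one exchangeable matrix layer implementing Eq.~(\ref{eq:z}) is given below (an illustration only; tensor shapes, the handling of all-missing rows or columns, and other details are assumptions rather than the exact implementation).
\begin{verbatim}
import torch
import torch.nn as nn

class ExchangeableMatrixLayer(nn.Module):
    # Input: z of shape (N, M, C_in) and binary mask b of shape (N, M).
    def __init__(self, c_in, c_out):
        super().__init__()
        self.w_elem = nn.Linear(c_in, c_out, bias=False)  # same element
        self.w_col  = nn.Linear(c_in, c_out, bias=False)  # same column
        self.w_row  = nn.Linear(c_in, c_out, bias=False)  # same row
        self.w_all  = nn.Linear(c_in, c_out, bias=False)  # all elements
        self.bias   = nn.Parameter(torch.zeros(c_out))

    def forward(self, z, b):
        b = b.unsqueeze(-1)                                # (N, M, 1)
        zb, eps = z * b, 1e-8
        col  = zb.sum(0, keepdim=True) / (b.sum(0, keepdim=True) + eps)
        row  = zb.sum(1, keepdim=True) / (b.sum(1, keepdim=True) + eps)
        glob = zb.sum((0, 1), keepdim=True) / (b.sum((0, 1), keepdim=True) + eps)
        out = (self.w_elem(zb) + self.w_col(col) + self.w_row(row)
               + self.w_all(glob) + self.bias)             # broadcasts over N, M
        return torch.relu(out)       # the activation is omitted in the last layer
\end{verbatim}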
Second, we calculate the mean of the priors of the factorized matrices
using permutation invariant networks~\cite{zaheer2017deep}:
\begin{align}
\vec{u}^{(0)}_{n}=f_{\mathrm{U}}\left(\frac{1}{M}\sum_{m=1}^{M}\vec{z}_{nm}\right),
\quad
\vec{v}^{(0)}_{m}=f_{\mathrm{V}}\left(\frac{1}{N}\sum_{n=1}^{N}\vec{z}_{nm}\right),
\label{eq:uv0}
\end{align}
where $\vec{u}^{(0)}_{n}\in\mathbb{R}^{K}$ is the prior mean
of the $n$th row of factorized matrix $\vec{U}$,
$\vec{v}^{(0)}_{m}\in\mathbb{R}^{K}$ is the prior mean
of the $m$th row of factorized matrix $\vec{V}$,
and $f_{\mathrm{U}}$ and $f_{\mathrm{V}}$ are feed-forward neural networks.
In Eq.~(\ref{eq:uv0}),
we take the average of element representation $\vec{Z}$
over the rows (columns) and input them into the neural networks.
Eq.~(\ref{eq:uv0}) is a permutation invariant operation
that can take any number of elements $N$ and $M$.
By Eqs.~(\ref{eq:z},\ref{eq:uv0}),
we can obtain prior mean values $\vec{u}^{(0)}_{n}$ and $\vec{v}^{(0)}_{m}$
for each row and column considering the relationship with
other elements in the given matrix.
Parameters $\{w_{c'ci}^{(\ell)}\}$ and parameters in $f_{\mathrm{U}}$, $f_{\mathrm{V}}$
are common for all matrices.
Prior mean values $\vec{U}^{(0)}$, $\vec{V}^{(0)}$ are different across matrices
since they are calculated from input matrices $\vec{X}$ and $\vec{B}$.
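Continuing the sketch above, the prior means of Eq.~(\ref{eq:uv0}) can be obtained by averaging the element representations over columns and rows and passing them through the feed-forward networks (again a sketch; \texttt{f\_U} and \texttt{f\_V} stand for the feed-forward networks $f_{\mathrm{U}}$ and $f_{\mathrm{V}}$, whose exact architecture is not fixed here).
\begin{verbatim}
def prior_means(Z, f_U, f_V):
    # Z: element representations of shape (N, M, C) from the last
    # exchangeable matrix layer.
    U0 = f_U(Z.mean(dim=1))   # (N, K): average each row's representations
    V0 = f_V(Z.mean(dim=0))   # (M, K): average each column's representations
    return U0, V0
\end{verbatim}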
We estimate the factorized matrices
by the MAP estimation in Eq.~(\ref{eq:map}) using the prior mean values in Eq.~(\ref{eq:uv0}) as
the initial values.
The update rules of the MAP estimation based on the gradient descent method
are given in a closed form by taking the gradient of the objective function
with respect to factorized matrices
$\vec{u}_{n}$ and $\vec{v}_{m}$:
\begin{align}
\vec{u}^{(t+1)}_{n}=\vec{u}^{(t)}_{n}-\eta\Bigl(\sum_{m=1}^{M}b_{nm}(\vec{u}_{n}^{(t)\top}
\vec{v}^{(t)}_{m}-x_{nm})\vec{v}^{(t)}_{m}
+\lambda(\vec{u}^{(t)}_{n}-\vec{u}^{(0)}_{n})\Bigr),
\label{eq:u}
\end{align}
\begin{align}
\vec{v}^{(t+1)}_{m}=\vec{v}^{(t)}_{m}-\eta\Bigl(\sum_{n=1}^{N}b_{nm}(\vec{u}_{n}^{(t)\top}
\vec{v}^{(t)}_{m}-x_{nm})\vec{u}^{(t)}_{n}
+\lambda(\vec{v}^{(t)}_{m}-\vec{v}^{(0)}_{m})\Bigr),
\label{eq:v}
\end{align}
where $\vec{u}^{(t)}$ and $\vec{v}^{(t)}$ are estimated
factorized matrices at the $t$th iteration,
and $\eta>0$ is the learning rate.
The factorized matrices at the $T$th iteration
are used as the output of our model
$\vec{U}(\vec{X},\vec{B})=\vec{U}^{(T)}$, $\vec{V}(\vec{X},\vec{B})=\vec{V}^{(T)}$.
The missing value in $\vec{X}$ is predicted by the inner product of the factorized matrices
by
\begin{align}
\hat{x}_{nm}=\vec{u}_{n}(\vec{X},\vec{B})^{\top}\vec{v}_{m}(\vec{X},\vec{B}).
\end{align}
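The MAP estimation of Eqs.~(\ref{eq:u},\ref{eq:v}) can be written as a small differentiable loop, for instance as in the following sketch (shapes and hyperparameter values are illustrative).
\begin{verbatim}
def map_adapt(X, B, U0, V0, lam, T=10, eta=1e-2):
    # X, B: (N, M) observed matrix and indicator; U0: (N, K); V0: (M, K).
    U, V = U0, V0
    for _ in range(T):
        R = B * (U @ V.t() - X)                # masked residual, (N, M)
        gU = R @ V + lam * (U - U0)            # gradient w.r.t. U
        gV = R.t() @ U + lam * (V - V0)        # gradient w.r.t. V
        U, V = U - eta * gU, V - eta * gV      # simultaneous update
    return U, V                                # U(X, B), V(X, B)
\end{verbatim}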
We could predict the missing values using
the output of the neural networks,
$\hat{x}_{nm}=\vec{u}_{n}^{(0)\top}\vec{v}_{m}^{(0)}$,
without the gradient descent steps.
However, the prediction might be different from the true values
even with the observed elements
if the given matrix does not resemble any of the meta-training matrices
that are used for optimizing the neural networks,
and the generalization performance of the neural networks
is not high enough.
With gradient descent steps,
the factorized matrices can be adapted to the given matrix
even when the neural networks fail to adapt.
Minimizing the error between the observed and predicted values
is a standard technique for matrix factorization~\cite{koren2009matrix}.
We use it as a component in our model.
Our model can be seen as a single neural network,
that takes a matrix with missing values as input,
and outputs its factorized matrices,
where the exchangeable matrix layers in Eq.~(\ref{eq:z}),
the permutation invariant networks in Eq.~(\ref{eq:uv0}),
and the gradient descent steps in Eqs.~(\ref{eq:u},\ref{eq:v})
are used as layers (Figure~\ref{fig:model}).
Algorithm~\ref{alg:model} shows the forwarding procedures of our model.
Since our model including the gradient descent steps is differentiable,
we can backpropagate the loss through the gradient descent steps
to update the parameters in our neural networks.
\begin{algorithm}[t!]
\caption{Forwarding procedures of our model.}
\label{alg:model}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE{Observed matrix with missing values $\vec{X}$, indicator matrix $\vec{B}$, number of iterations $T$}
\ENSURE{Factorized matrices $\vec{U}(\vec{X},\vec{B})$, $\vec{V}(\vec{X},\vec{B})$}
\STATE Obtain element representations $\vec{Z}$ by Eq.~(\ref{eq:z}).
\STATE Calculate the mean of the priors of factorized matrices $\vec{U}^{(0)}$ and $\vec{V}^{(0)}$ by Eq.~(\ref{eq:uv0}).
\FOR{$t:=1$ to $T$}
\STATE Update factorized matrices $\vec{U}^{(t)}$, $\vec{V}^{(t)}$ by Eqs.~(\ref{eq:u},\ref{eq:v}) based on the MAP estimation.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Meta-training}
\label{sec:training}
We meta-train model parameters $\bm{\Theta}$,
i.e., exchangeable matrix layer parameters $\{\{\{\{w^{(\ell)}_{c'ci}\}_{i=1}^{5}\}_{c'=1}^{C^{(\ell)}}\}_{c=1}^{C^{(\ell+1)}}\}_{\ell=1}^{L-1}$,
the parameters of feed-forward neural networks $f_{\mathrm{U}}$ and $f_{\mathrm{V}}$,
and regularization parameter $\lambda$,
by minimizing the following expected test error
of the missing values with the episodic training framework:
\begin{align}
\mathbb{E}_{\vec{X},\vec{B},\vec{X}',\vec{B}'}[\parallel
\vec{B}'\odot(\vec{X}'-\vec{U}(\vec{X},\vec{B})^{\top}\vec{V}(\vec{X},\vec{B}))\parallel^{2}],
\label{eq:error}
\end{align}
where $\vec{X}$ and $\vec{X}'$ are sampled
training and test matrices,
$\vec{B}$ and $\vec{B}'$ are their indicator matrices,
$\vec{U}(\vec{X},\vec{B})\in\mathbb{R}^{N\times K}$ and
$\vec{V}(\vec{X},\vec{B})\in\mathbb{R}^{M\times K}$ are
the factorized matrices obtained by our model
from training matrix $\vec{X}$ and its indicator matrix $\vec{B}$,
$\mathbb{E}$ is the expectation,
and $\odot$ is the element-wise multiplication.
Eq.~(\ref{eq:error})
is the expectation error between the test matrix
and its imputation by our model adapted to the training matrix.
Algorithm~\ref{alg:train} shows the meta-training procedures of our model.
In Line 4,
matrix $\bar{\vec{X}}\in\mathbb{R}^{N\times M}$
is constructed from randomly selected meta-training matrix $\vec{X}_{d}$,
where $\bar{\vec{X}}$ is a submatrix of $\vec{X}_{d}$.
Instead of sampling the submatrices,
we can use the whole selected meta-training matrix $\bar{\vec{X}}=\vec{X}_{d}$,
or change the number of rows and columns,
$N$ and $M$, for each epoch.
In Line 5, the non-missing elements in matrix $\bar{\vec{X}}$
are randomly split into training matrix $\vec{X}$
and test matrix $\vec{X}'$,
where their indicator matrices are mutually exclusive,
$\vec{B}\cap\vec{B}'=\phi$.
We assume that the meta-test matrices are missing at random.
If they are missing not at random,
we can model missing patterns~\cite{marlin2007collaborative},
and use them for generating training and test matrices in the meta-training procedures.
The time complexity for each meta-training step is $O(\gamma LNM+T(\gamma NM+(N+M)K)+\gamma'NM)$,
where $\gamma$ is the rate of the observed elements in a training matrix,
and $\gamma'$ is that in a test matrix.
The first term $O(\gamma LNM)$ is for inferring the priors by the neural networks.
The second term $O(T(\gamma NM+(N+M)K))$ is for the MAP estimation of
the factorized matrices using Eqs.~(\ref{eq:u},\ref{eq:v}).
The third term $O(\gamma'NM)$ is for calculating the loss on the test matrix.
The number of model parameters $\bm{\Theta}$ depends on neither the meta-training data size
nor the numbers of rows $N$ and columns $M$ of the training and test matrices.
\begin{algorithm}[t!]
\caption{Meta-training procedure of our model.}
\label{alg:train}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE{Meta-training data $\{\vec{X}_{d}\}_{d=1}^{D}$,
number of rows $N$, number of columns $M$,
training ratio $R$, number of iterations $T$}
\ENSURE{Trained model parameters $\bm{\Theta}$}
\STATE Initialize model parameters $\bm{\Theta}$.
\WHILE{End condition is satisfied}
\STATE Randomly select task index $d$ from $\{1,\cdots,D\}$.
\STATE Randomly sample $N$ rows and $M$ columns from $\vec{X}_{d}$ and construct matrix $\bar{\vec{X}}$.
\STATE Randomly assign non-missing elements
in $\bar{\vec{X}}$ with probability $R$
as training matrix $\vec{X}$, and assign the
remaining non-missing elements as test matrix $\vec{X}'$.
\STATE Obtain factorized matrices $\vec{U}(\vec{X},\vec{B}), \vec{V}(\vec{X},\vec{B})$ of the training matrix by Algorithm~\ref{alg:model}.
\STATE Calculate loss $\parallel\vec{B}'\odot(\vec{X}'-\vec{U}(\vec{X},\vec{B})^{\top}\vec{V}(\vec{X},\vec{B}))\parallel^{2}$ on the test matrix and its gradient.
\STATE Update model parameters $\bm{\Theta}$ using the loss and gradient by a stochastic gradient method.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
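Putting the pieces together, one episodic meta-training step of Algorithm~\ref{alg:train} can be sketched as follows (illustrative only; \texttt{model} is assumed to combine the exchangeable matrix layers, the prior networks and the MAP adaptation loop above, \texttt{meta\_matrices} is a list of $(\vec{X}_d,\vec{B}_d)$ pairs, and error handling for empty masks is omitted).
\begin{verbatim}
import random
import torch

def meta_train_step(model, meta_matrices, optimizer, N=30, M=30, R=0.5):
    Xd, Bd = random.choice(meta_matrices)              # pick a task
    rows = torch.randperm(Xd.shape[0])[:N]
    cols = torch.randperm(Xd.shape[1])[:M]
    Xbar, Bbar = Xd[rows][:, cols], Bd[rows][:, cols]
    B_tr = (torch.rand_like(Xbar) < R).float() * Bbar  # training elements
    B_te = Bbar - B_tr                                 # held-out test elements
    U, V = model(Xbar * B_tr, B_tr)                    # priors + MAP adaptation
    loss = ((B_te * (Xbar - U @ V.t())) ** 2).sum() / (B_te.sum() + 1e-8)
    optimizer.zero_grad()
    loss.backward()                                    # backprop through MAP steps
    optimizer.step()
    return loss.item()
\end{verbatim}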
\section{Experiments}
\label{sec:experiments}
\subsection{Data}
We evaluated the proposed method using three rating datasets:
ML100K, ML1M~\cite{harper2015movielens}, and Jester~\cite{goldberg2001eigentaste}~\footnote{ML100K and ML1M were obtained from \url{https://grouplens.org/datasets/movielens/},
and Jester was obtained from \url{https://goldberg.berkeley.edu/jester-data/}.}.
The ML100K data contained 100,000 ratings of 1,682 movies by 943 users.
The ML1M data contained 1,000,209 ratings of 3,952 movies by 6,040 users.
The Jester data contained 1,805,072 ratings of 100 jokes from 24,983 users.
The ratings of each dataset were normalized with zero mean and unit variance.
We randomly split the users and items,
and used 70\% of them for meta-training,
10\% for meta-validation,
and the remaining for meta-test.
There were no overlaps of users and items across the meta-training,
validation, and test data.
We randomly generated ten $30 \times 30$ meta-test matrices from the meta-test data
and used half of the originally observed ratings as
missing ratings for evaluation.
For each meta-test matrix, the average numbers of observed elements were respectively 28.8, 19.0, and 218.0
in the ML100K, ML1M, and Jester data.
We did not use the meta-test matrices in the meta-training phase.
The evaluation measurement was the test mean squared error,
which was calculated by the mean squared error between the true and estimated ratings
of the unobserved elements in the meta-test matrices.
We averaged the test mean squared errors
over ten experiments with different splits of meta-training, validation, and test data.
\subsection{Proposed method setting}
We used three exchangeable matrix layers with 32 channels.
Feed-forward neural networks $f_{\mathrm{U}},f_{\mathrm{V}}$
were four-layered with 32 hidden units and $K=32$ output units.
We used a rectified linear unit, $\mathrm{ReLU}(x)=\max(0,x)$, for the activation.
For the gradient descent steps of the matrix factorization in Eqs.~(\ref{eq:u},\ref{eq:v}),
the learning rate was $\eta=10^{-2}$,
and the number of iterations was $T=10$.
The number of rows and columns of the training matrices were $N=30$ and $M=30$,
and the training ratio was $R=0.5$.
We optimized our model using Adam~\cite{kingma2014adam} with learning rate $10^{-4}$,
batch size $16$, and dropout rate $0.1$~\cite{srivastava2014dropout}.
The number of meta-training epochs was 1,000, and
the meta-validation data were used for early stopping.
We implemented the proposed method with PyTorch~\cite{paszke2017automatic}.
\subsection{Comparing methods}
We compared the proposed method with the following eight methods:
exchangeable matrix layer neural networks (EML)~\cite{hartford2018deep},
EML finetuned with the meta-test matrix (FT),
model-agnostic meta-learning of EML (MAML)~\cite{finn2017model},
item-based AutoRec (AutoRecI)~\cite{sedhain2015autorec},
user-based AutoRec (AutoRecU),
deep matrix factorization (DMF)~\cite{xue2017deep},
matrix factorization (MF),
and the mean value of the meta-test matrix (Mean).
EML, FT, MAML as well as the proposed method
were meta-learning schemes,
all of which were trained using the meta-training matrices
such that the test mean squared error was minimized.
AutoRecI, AutoRecU, DMF, MF, and Mean
were trained using the observed ratings of the meta-test matrix
by minimizing the mean squared error
without meta-training matrices.
EML used three exchangeable matrix layers,
where the number of channels with the first two layers was 32,
and the number of channels with the last layer was one to output the estimation of a rating.
EML was trained with the episodic framework like the proposed method.
The exchangeable matrix layers have not been used for meta-learning.
We newly used them for meta-learning
by employing them as the encoder and decoder
in an encoder-decoder based meta-learning method.
FT finetuned the parameters of the trained EML
using the observed ratings in the meta-test matrix
by minimizing the mean squared error.
MAML trained the initial parameters of EML
to minimize the test mean squared error when finetuned
by the episodic training framework.
The number of iterations in the inner loop was five.
AutoRec is a neural network-based matrix imputation method.
With AutoRecI (AutoRecU),
a neural network took each row (column) as input
and output its reconstruction.
DMF is a neural network-based matrix factorization method,
where the row and column representations were calculated by neural networks
that took a row or column as input,
and ratings were estimated by the inner product of the row and column representations.
For the neural networks in AutoRecI, AutoRecU, and DMF,
we used four-layered feed-forward neural networks with 32 hidden units.
With DMF and MF, the number of latent factors was 32.
With AutoRecI, AutoRecU, DMF, and MF,
the weight decay parameter was tuned from $\{10^{-4},10^{-3},10^{-2},10^{-1},1\}$,
and the learning rate was tuned from $\{10^{-3},10^{-2},10^{-1}\}$
using the validation data.
\subsection{Results}
\begin{table}[t!]
\centering
\caption{Average test mean squared errors on unobserved elements in test matrices and their standard error: Values in bold are not statistically different at 5\% level from the best performing method in each dataset by a paired t-test.}
\label{tab:mse}
\begin{tabular}{lrrr}
\hline
& ML100K & ML1M & Jester\\
\hline
Ours & {\bf 0.901 $\pm$ 0.033} &{\bf 0.883 $\pm$ 0.024} &{\bf 0.813 $\pm$ 0.009} \\
EML & 0.933 $\pm$ 0.036 & 0.907 $\pm$ 0.024 & 0.848 $\pm$ 0.009 \\
FT & 1.175 $\pm$ 0.047 & 1.149 $\pm$ 0.046 & 0.990 $\pm$ 0.008 \\
MAML & 0.941 $\pm$ 0.036 & 0.904 $\pm$ 0.025 & 0.880 $\pm$ 0.011 \\
AutoRecI & 0.985 $\pm$ 0.040 & 0.968 $\pm$ 0.028 & 0.949 $\pm$ 0.008 \\
AutoRecU & 0.987 $\pm$ 0.033 & 0.962 $\pm$ 0.024 & 0.907 $\pm$ 0.014 \\
DMF & 0.979 $\pm$ 0.034 & 0.972 $\pm$ 0.023 & 0.852 $\pm$ 0.007 \\
MF & 1.014 $\pm$ 0.037 & 0.962 $\pm$ 0.031 & 1.005 $\pm$ 0.014 \\
Mean & 1.007 $\pm$ 0.020 & 0.983 $\pm$ 0.013 & 1.004 $\pm$ 0.008\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[t!]
\centering
{\tabcolsep=0.1em
\begin{tabular}{ccc}
\includegraphics[width=13em]{images/anaresult_d0r2.png} &
\includegraphics[width=13em]{images/anaresult_d1r2.png} &
\includegraphics[width=13em]{images/anaresult_d0r3.png} \\
(a) ML100K & (b) ML1M & (c) Jester\\
\end{tabular}}
\caption{Average test mean squared errors
when meta-trained with training and test matrices of different sizes,
where the size of the meta-test matrices was the same as that of the training and test matrices.
We used square matrices, and the horizontal axis is the number of columns
and rows.
Bars show the standard error.}
\label{fig:error_matrix_size}
\end{figure*}
\begin{figure*}[t!]
\centering
{\tabcolsep=0.1em
\begin{tabular}{ccc}
\includegraphics[width=13em]{images/anaresult_d0r7.png} &
\includegraphics[width=13em]{images/anaresult_d1r7.png} &
\includegraphics[width=13em]{images/anaresult_d0r8.png} \\
(a) ML100K & (b) ML1M & (c) Jester\\
\end{tabular}}
\caption{Average test mean squared errors
with meta-test matrices with different sizes,
where the model was meta-trained with $30 \times 30$ matrices.
We used square matrices, and the horizontal axis is the number of columns
and rows.
Bars show the standard error.}
\label{fig:error_matrix_size_test}
\end{figure*}
Table~\ref{tab:mse} shows the average test mean squared error.
The proposed method achieved the lowest error on all the datasets.
EML's performance was the second best on the ML100K and Jester datasets.
This result indicates that exchangeable matrix layers
effectively extracted useful information from the matrices
with missing values.
The proposed method further improved the performance from EML
by directly adapting to the observed elements
using the gradient descent steps based on MAP estimation.
Since EML approximates the adaptation to the observed elements
only by exchangeable matrix layers,
the estimated values can be different from the observed elements.
The errors on the observed elements with EML were higher
than those with the proposed method, as shown in the supplementary material.
With FT, although the errors on the observed elements were low,
the test errors on the unobserved elements were high,
because FT overfitted to the observed values, since its model is trained by minimizing the error on the observed elements only.
On the other hand, the proposed method
trains the model by minimizing the test error on the unobserved elements
when fitted to the observed elements with MAP estimation.
Therefore, the proposed method alleviated overfitting
to the observed elements.
Since MAML trained the model by minimizing the expected test error,
the overfitting was smaller than FT.
However, MAML's performance did not surpass that of EML
and was lower than that of the proposed method.
With MAML, the whole neural network-based model is adapted
to the observed values.
In contrast, the proposed method adapts only factorized matrices
to the observed values,
although the neural networks are fixed and used for defining the priors
of the factorized matrices.
The errors of the methods that did not use meta-training matrices, i.e.,
AutoRec, DMF, MF, and Mean, were high
since they needed to estimate the missing values only from
a small number of observations.
On the other hand,
the proposed method achieved the lowest error
by meta-learning hidden structure in the meta-training matrices
that effectively improved the test matrix imputation performance
even though rows and columns were not shared across different matrices.
Figure~\ref{fig:error_matrix_size}
shows the test mean squared errors when
meta-trained with training and test matrices of different sizes,
where the size of the meta-test matrices was the same as that of the training and test matrices.
Our proposed method and EML decreased the error
as the matrix size increased
because the number of observations grows with the matrix size.
Figure~\ref{fig:error_matrix_size_test}
shows the test mean squared errors
with meta-test matrices of different sizes,
where the model was meta-trained with $30 \times 30$ matrices.
The proposed method achieved the lowest error with different sizes
of meta-test matrices.
The proposed method's performance improved
as the meta-test matrix size increased even though it
was trained with different-sized matrices.
Figure~\ref{fig:error_gd} shows the test mean squared errors with different numbers of gradient descent iterations with the proposed method. As the number of iterations increased, the error decreased especially with the ML100K and Jester data. This result indicates the effectiveness of the gradient descent steps in the proposed method.
\begin{figure*}[t!]
\centering
{\tabcolsep=0.1em
\begin{tabular}{ccc}
\includegraphics[width=12em]{images/anaresult_d0r6.png} &
\includegraphics[width=12em]{images/anaresult_d1r6.png} &
\includegraphics[width=12em]{images/anaresult_d2r6.png} \\
(a) ML100K & (b) ML1M & (c) Jester\\
\end{tabular}}
\caption{Average test mean squared errors
with different numbers of gradient descent iterations of the proposed method. Bars show the standard error.}
\label{fig:error_gd}
\end{figure*}
Table~\ref{tab:train_mse} shows the average training
mean squared errors.
The errors on the observed elements with EML were higher
than those with the proposed method.
Table~\ref{tab:mse_ablation} shows
the average test mean squared errors by the proposed method
when different datasets were used between meta-training and meta-test data.
With the ML1M and Jester meta-test data,
the proposed method achieved the best performance
when meta-trained with matrices from the same dataset.
Even when the meta-training datasets were different from the meta-test datasets,
the errors were relatively low, and
they were lower than those by the comparing methods.
This is because the datasets used in our experiments were
related to each other and shared some hidden structure, and
the proposed method learned the shared structure and used the learned structure
to improve the matrix imputation performance in the other datasets.
Table~\ref{tab:mse_K} shows
the average test mean squared errors by the proposed method
with different factorized matrix ranks $K$.
The proposed method works even when factorized matrix rank $K$ is larger
than the number of rows $N$ or columns $M$
since it uses the priors inferred by the neural networks based on the MAP estimation.
Factorized matrix rank $K$ slightly affects the performance,
but the proposed method achieved better performance than the comparing methods
with a wide range of $K$.
Table~\ref{tab:mse_autorec} shows
the average test mean squared errors by AutoRec
with different numbers of hidden units.
With any numbers of hidden units,
the performance by AutoRec was worse than the proposed method.
Tables~\ref{tab:train_time} and \ref{tab:test_time}
show the computational time in seconds for meta-training and test
on computers with 2.60GHz CPUs.
The meta-training time of the proposed method was much shorter than that of MAML
since the proposed method adapts only the factorized matrices
instead of the whole neural network,
and the adaptation steps can be written explicitly.
The meta-training time of the proposed method was longer than EML
since the proposed method additionally requires adaptation steps.
The proposed method's meta-test time was short since
it requires only a small number of adaptation steps for a few observed elements.
\begin{table}[h]
\centering
\caption{Average mean squared errors on observed elements in test matrices and their standard error with meta-learning methods.}
\label{tab:train_mse}
\begin{tabular}{lrrr}
\hline
& ML100K & ML1M & Jester\\
\hline
Ours & 0.504 $\pm$ 0.017 &0.354 $\pm$ 0.013 &0.618 $\pm$ 0.008 \\
EML & 0.596 $\pm$ 0.018 & 1.573 $\pm$ 0.071 & 0.664 $\pm$ 0.010 \\
FT & 0.328 $\pm$ 0.013 & 0.346 $\pm$ 0.017 & 0.528 $\pm$ 0.012 \\
MAML & 0.493 $\pm$ 0.012 & 0.503 $\pm$ 0.028 & 0.658 $\pm$ 0.013 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Average test mean squared errors by the proposed method with different pairs of meta-training and meta-test datasets.
Each row represents the meta-training data, and each column represents the meta-test data.}
\label{tab:mse_ablation}
\begin{tabular}{lrrr}
\hline
Meta-training data $\backslash$ Meta-test data & ML100K & ML1M & Jester\\
\hline
ML100K & 0.901 $\pm$ 0.033 & 0.906 $\pm$ 0.023 & 0.825 $\pm$ 0.008 \\
ML1M & 0.889 $\pm$ 0.028 & 0.883 $\pm$ 0.024 & 0.827 $\pm$ 0.008 \\
Jester & 0.900 $\pm$ 0.027 & 0.927 $\pm$ 0.025 & 0.813 $\pm$ 0.009 \\
ML100K, ML1M & 0.894 $\pm$ 0.036 & 0.883 $\pm$ 0.024 & 0.829 $\pm$ 0.008 \\
ML100K, Jester & 0.894 $\pm$ 0.031 & 0.893 $\pm$ 0.025 & 0.824 $\pm$ 0.008 \\
ML1M, Jester & 0.884 $\pm$ 0.035 & 0.885 $\pm$ 0.024 & 0.832 $\pm$ 0.008 \\
ML100K, ML1M, Jester & 0.892 $\pm$ 0.035 & 0.884 $\pm$ 0.023 & 0.829 $\pm$ 0.008\\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Average test mean squared errors by the proposed method with different factorized matrix ranks $K$.}
\label{tab:mse_K}
\begin{tabular}{lrrrrr}
\hline
$K$ & 8 & 16 & 32 & 64 & 128\\
\hline
ML100K & 0.908 & 0.903 & 0.901 & 0.899 & 0.898 \\
ML1M & 0.889 & 0.887 & 0.883 & 0.888 & 0.886 \\
Jester & 0.815 & 0.814 & 0.813 & 0.814 & 0.813 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Average test mean squared errors by AutoRec with different numbers of hidden units.}
\label{tab:mse_autorec}
\begin{tabular}{lrrrr}
\hline
& \multicolumn{2}{c}{AutoRecI} & \multicolumn{2}{c}{AutoRecU} \\
\hline
\#hidden units & 128 & 512 & 128 & 512 \\
\hline
ML100K & 0.980 & 0.974 & 0.998 & 0.979 \\
ML1M & 0.949 & 0.954 & 0.946 & 0.947 \\
Jester & 0.924 & 0.929 & 0.876 & 0.917 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Meta-training time in seconds.}
\label{tab:train_time}
\begin{tabular}{lrrr}
\hline
& ML100K & ML1M & Jester\\
\hline
Ours & 1039 & 3527 & 376 \\
EML & 850 & 2983 & 247 \\
MAML & 15983 & 55205 & 4189 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Meta-test time in seconds.}
\label{tab:test_time}
\begin{tabular}{lrrr}
\hline
& ML100K & ML1M & Jester\\
\hline
Ours & 0.13 & 0.13 & 0.14 \\
EML & 0.09 & 0.09 & 0.10 \\
FT & 3.88 & 3.75 & 2.77 \\
MAML & 1.72 & 3.12 & 1.60 \\
AutoRecI & 1.51 & 1.54 & 3.60 \\
AutoRecU & 1.50 & 1.49 & 3.99 \\
DMF & 1.78 & 5.20 & 4.49 \\
MF & 0.53 & 1.20 & 1.29\\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We proposed a neural network-based meta-learning method
for matrix imputation that
learns from multiple matrices without shared rows and columns,
and predicts the missing values given observations in unseen matrices.
Although we believe that our work is an important step
for learning from a wide variety of matrices,
we must extend our approach in several directions.
First, we will apply our framework to tensor data
using exchangeable tensor layers~\cite{hartford2018deep}
and tensor factorizations~\cite{hitchcock1927expression,carroll1970analysis,harshman1970foundations,tucker1966some,welling2001positive,kuleshov2015tensor}.
Second, we will use our framework for other types of matrix factorization,
such as non-negative matrix factorization (NMF)~\cite{lee1999learning}
and independent component analysis~\cite{hyvarinen2000independent}.
For example,
we can use multiplicative update steps for NMF
instead of gradient descent steps.
Third, we want to extend our proposed method to use auxiliary information,
e.g., user and item information in recommender systems,
by taking it as input of our neural network.
\bibliographystyle{abbrv}
\newcommand{\numsection}[1]{\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\renewcommand{\thefigure}{\arabic{section}.\arabic{figure}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{definition}{Definition}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{example}{Example}[section]
\newcommand{\beqn}[1]{\begin{equation}\label{#1}}
\newcommand{\eeqn}{\end{equation}}
\newcommand{\calA}{{\cal A}}
\newcommand{\calB}{{\cal B}}
\newcommand{\calC}{{\cal C}}
\newcommand{\calD}{{\cal D}}
\newcommand{\calE}{{\cal E}}
\newcommand{\calF}{{\cal F}}
\newcommand{\calG}{{\cal G}}
\newcommand{\calI}{{\cal I}}
\newcommand{\calO}{{\cal O}}
\newcommand{\calU}{{\cal U}}
\newcommand{\calV}{{\cal V}}
\newcommand{\calW}{{\cal W}}
\newcommand{\calX}{{\cal X}}
\newcommand{\calY}{{\cal Y}}
\newcommand{\barf}{\overline{f}}
\newcommand{\req}[1]{(\ref{#1})}
\newcommand{\tim}[1]{\;\; \mbox{#1} \;\;}
\renewcommand{\Re}{\hbox{I\hskip -2pt R}}
\newcommand{\smallRe}{\hbox{\footnotesize I\hskip -2pt R}}
\newcommand{\ii}[1]{\{1, \ldots, #1 \}}
\newcommand{\tr}{{\rm tr}}
\newcommand{\ms}{\;\;\;\;}
\def\E{\mathbb{E}}
\def\P{\mathbb{P}}
\def\S{\mathbb{S}}
\def\R{\mathbb{R}}
\def\T{\mathbb{T}}
\newcommand{\qed}{\hphantom{.}\hfill Q.E.D.\medbreak}
\newcommand{\ep}{{\,\Box\,}}
\newcommand{\epb}{\raisebox{-1pt}{$\Box$}}
\newcommand{\bep}{\stackrel{-}{\ep}}
\newcommand{\uep}{\,\underline{\Box}\,}
\topmargin -10truept
\pagestyle{myheadings}
\markright{Luo, Qi, Toint --- Bernstein Concentration Inequalities for Tensors}
\everymath{\displaystyle}
\title{Bernstein Concentration Inequalities for Tensors\protect\\
via Einstein Products}
\author{
Ziyan Luo\thanks{
State Key Laboratory of Rail Traffic Control and Safety,
Beijing Jiaotong University, Beijing, China.
E-mail: [email protected]},
~Liqun Qi\thanks{
Department of Applied Mathematics,
The Hong Kong Polytechnic University,
Hung Hom, Kowloon, Hong Kong.
E-mail: [email protected]},
~and~Philippe L. Toint\thanks{
Namur Center for Complex Systems (naXys),
University of Namur, 61, rue de Bruxelles, B-5000 Namur, Belgium.
Email: [email protected]}
}
\date{\documentdate}
\begin{document}
\maketitle
\begin{abstract}
A generalization of the Bernstein matrix concentration inequality to random tensors
of general order is proposed. This generalization is based on the use of
Einstein products between tensors, from which a strong link can be established
between matrices and tensors, in turn allowing exploitation of existing
results for the former.
\end{abstract}
{\bf AMS subject classifications.} 15A52, 15A72, 49J55, 60H25.
{\bf Keywords}: random tensors, concentration inequality, Einstein products,
subsampling, computational statistics.
\section{Introduction}
The theory of random matrices has a rich history starting with Hurwitz (see
\cite{Forr10}) and Wishart \cite{Wish28} in the first half of the 20th
century. While it has developed in its own right within probability theory, it has
also found applications in many diverse domains of computational
statistics, ranging from matrix approximation \cite{FrieKannVemp98} to compressed sensing
\cite{Dono06}, graph theory \cite{AhlsWint02}, sparsification \cite{AchlMcSh01}
or subsampling of data \cite{WillSeeg01}. Important tools in several of these fields
are matrix concentration theorems that give results on expectation, norm
distribution and probability of deviation from the expectation. We refer the
interested reader to the excellent book by Tropp \cite{Trop15} for further
elaboration and an extensive bibliography.
The purpose of this short paper is to extend one of the prominent matrix
concentration results, the Bernstein inequality, to the case of
tensors of general order. This extension was originally motivated by the
desire to extend the use of the Bernstein inequality in subsampling estimation
of gradients and Hessians of additive multivariate real functions
\cite{BellGuriMori18,BellGuriMoriToin18,BellKrejKrkl18,ChenJianLinZhan18,
KohlLucc17,XuRoosMaho17,XuRoosMaho18} to derivatives of
higher degree, thereby providing estimation tools for general Taylor's
expansions of such functions. It is however clear that the new tensor result
has wider potential applications, including, for instance, randomized
tensor sparsification (such as in video streaming) or randomized tensor
products for fast computations.
Our approach hinges on Einstein products of tensors and associated
``matricization'' transformations: these recast tensors in the form of large
matrices to which known results of matrix concentration inequalities \cite{Trop15}
may then be applied.
The paper is organized as follows. Section~\ref{defs-s} introduces the
Einstein product and states some of its properties that are central to our
development. We then state the Bernstein concentration inequality for
Einstein-symmetric tensors of even order in Section~\ref{sBern-s}. The more
general inequality for Einstein-symmetric tensors of arbitrary order is
derived in Section~\ref{gBern-s}, and an ``intrinsic dimension'' version of
this inequality is presented in Section~\ref{idBern-s}. Some conclusions and
perspectives are finally presented in Section~\ref{concl-s}.
\numsection{Tensors and the Einstein Product}\label{defs-s}
We start by defining the Einstein tensor product for high-order tensors, first
introduced by Lord Kelvin in 1856 \cite{Kelv56} and named after Albert
Einstein for his work in \cite{Eins07}.
\begin{definition}[Einstein Product, \cite{Eins07}]\label{Eproduct}
Let $\calA$ be a tensor in $\Re^{I_1\times\cdots\times I_m\times K_1\times\cdots\times K_m}$
and $\calB$ be a tensor in $\Re^{K_1\times\cdots\times K_m\times J_1\times\cdots\times J_p}$.
The Einstein product of $\calA$ and $\calB$, denoted by $\calA\ep\calB$,
is defined by
\beqn{EP}
\left(\calA\ep\calB\right)_{i_1\ldots i_mj_1\ldots j_p}
= \sum_{k_1\ldots k_m} a_{i_1\ldots i_m k_1\ldots k_m}b_{k_1\ldots k_mj_1\ldots j_p},
\tim{ for all } i_1,\ldots,i_m, j_1,\ldots,j_p,
\eeqn
\end{definition}
In this definition, each lowercase index varies from 1 to its uppercase
equivalent: for instance $i_2$ varies from $1$ to $I_2$, $k_3$ from $1$ to
$K_3$ and $j_1$ from $1$ to $J_1$.
The Einstein product can be regarded as a higher order generalization of the
standard matrix multiplication in which $m=p=1$. Such a contraction product
has been widely used in the areas of continuum mechanics \cite{LaiRubiKrem09}
and relativity theory \cite{Eins07}. Notice that in $\T_{m,d}$, the space of real
tensors of order $m$ and dimension $d$, that is the set of multiarrays $\calA
= (a_{i_1, \ldots, i_m})$ where $i_j$ varies from $1$ to $d$ for $j= 1,\ldots,
m$, the Einstein product satisfies the closure property
\[
\calA, \calB\in \T_{2m,d} \Longrightarrow \calA\ep\calB\in \T_{2m,d}.
\]
This nice property allows us to follow \cite{BrazLiNavaTamo13} and define
several new concepts based on the Einstein product for tensors.
\begin{definition}\label{concepts} Let
$\calA=\left(a_{i_1\ldots i_mj_1\ldots j_m}\right)\in \T_{2m,d}$.
\begin{itemize}
\item[(i)]{\bf Transpose:} The transpose of $\calA$, denoted by $\calA^\top$, is
defined by the relations
\[
\left(\calA^\top\right)_{i_1\ldots i_mj_1\ldots j_m}
=\left(\calA\right)_{j_1\ldots j_mi_1\ldots i_m}
\tim{ for all } i_1,\ldots, i_m, j_1, \ldots, j_m.
\]
\item[(ii)]{\bf Einstein-Symmetric Tensor:} $\calA$ is called Einstein-symmetric, or
\epb-symmetric, if $\calA^\top = \calA$.
The set of all \epb-symmetric tensors in $\T_{2m,d}$ is a subspace and is denoted by $\S_{2m,d}$.
\item[(iii)]{\bf Diagonal Tensor:} An \epb-symmetric tensor $\calA$ is
said to be diagonal if $a_{i_1\ldots i_mj_1\ldots j_m}=0$ whenever
$\prod_k\delta_{i_kj_k}=0$, where $\delta_{ij}$ is the Kronecker delta.
\item[(iv)]{\bf Identity Tensor:} The Einstein-identity tensor, denoted by
$\calI^\ep$, is a diagonal \epb-symmetric tensor with
$a_{i_1\ldots i_mi_1\ldots i_m}=1$ for all $i_1, \ldots, i_m$.
\item[(v)]{\bf Orthogonal Tensor:} $\calA$ is called Einstein-orthogonal,
or \epb-orthogonal, if $\calA^\top\ep\calA = \calI^\ep$.
\item[(vi)]{\bf EVD:} If $\calA \in \S_{2m,d}$, then
\begin{equation}\label{EVD}
\calA = \calU \ep \calD \ep \calU^\top
\end{equation}
is called an eigenvalue decomposition (EVD) of $\calA$, where $\calU$ is
\epb-orthogonal and $\calD$ is \epb-symmetric and diagonal.
Each $d_{i_1\ldots i_mi_1\ldots i_m}$ in $\calD$ is called an Einstein eigenvalue of
$\calA$, or \epb-eigenvalue. The \epb-eigenvalues of $\calA$ are denoted by
$\lambda^\ep_i(\calA)$ ($i\in\ii{d^m}$).
\item[(vii)]{\bf Spectral norm and trace:} The Einstein-spectral norm and trace
of $\calA \in \S_{2m,d}$ are defined by
\[
\|\calA\|^\ep = \max_{i\in \ii{d^m}} \left|\lambda^\ep_i(\calA)\right|
\tim{ and }
\tr^\ep(\calA) = \sum_{i\in \ii{d^m}}\lambda^\ep_i(\calA).
\]
\end{itemize}
\end{definition}
As in \cite{BrazLiNavaTamo13}, we introduce the important bijective
``matricization'' transformation $f$ that maps each tensor $\calA\in
\T_{2m,d}$ to a matrix $A \in \R^{d^m\times d^m}$ with
$A_{ij} = a_{i_1\ldots i_m j_1\ldots j_m}$, where
\beqn{resp}
i = i_1 + \sum_{k=2}^m \left((i_k-1)d^{k-1}\right)
\tim{ and }
j = j_1 + \sum_{k=2}^m \left((j_k-1)d^{k-1}\right).
\eeqn
Note that
\beqn{f-of-vect}
f(x^{\otimes m})
= f( \underbrace{x \otimes \cdots \otimes x}_{m~{\rm times}})
= \underbrace{x \bullet \cdots \bullet x}_{m~{\rm times}}
= x^{\bullet m}
\tim{ for } x \in \Re^d,
\eeqn
where $\otimes$ denotes the tensor external product and $\bullet$ the
Kronecker product. Importantly for our purposes, it is proved in \cite{BrazLiNavaTamo13}
that
\beqn{transform}
f(\calA\ep\calB) = f(\calA) \cdot f(\calB),
\eeqn
where $\cdot$ is the standard matrix multiplication. Thus the consistency of
the concepts introduced in Definition~\ref{concepts} results from standard
matrix analysis.
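The identity \req{transform} is also easy to verify numerically: with
$0$-based indices, the map $f$ of \req{resp} amounts to a column-major
(Fortran-order) reshape of the $2m$-way array into a $d^m\times d^m$ matrix.
The following Python/NumPy sketch is illustrative only (order and dimension
are arbitrary choices).
\begin{verbatim}
import numpy as np

d, m = 3, 2
A = np.random.rand(*([d] * (2 * m)))     # A in T_{2m,d}
B = np.random.rand(*([d] * (2 * m)))

def f(T):
    # matricization (resp): the first index varies fastest, i.e. a
    # column-major reshape once indices are taken 0-based
    return T.reshape(d**m, d**m, order='F')

# Einstein product: contract the last m modes of A with the first m modes of B
AB = np.tensordot(A, B, axes=(list(range(m, 2 * m)), list(range(m))))
assert np.allclose(f(AB), f(A) @ f(B))   # property (transform)
\end{verbatim}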
The property \req{transform} in turn implies the following useful results.
\begin{proposition}\label{observations} Let
$\calA=\left(a_{i_1\ldots i_mj_1\ldots j_m}\right)\in \T_{2m,d}$ be
an \epb-symmetric tensor with EVD given by
$\calA = \calU \ep\calD \ep \calU^\top$. We then have that
\begin{itemize}
\item[(i)] $f(\calA^\top) = f(\calA)^\top$, and hence
$f(\calA) = f(\calU)\cdot f(\calD)\cdot f(\calU)^\top$;
\item[(ii)] All eigenvalues of $f(\calA)$ are \epb-eigenvalues of
$\calA$ and vice-versa;
\item[(iii)] $\tr^\ep(\calA)= \sum_{i\in \ii{d^m}}\lambda_i(f(\calA))$;
\item[(iv)] $\calA\ep\calA = \calU\ep \left(\calD\circ \calD\right)\ep\calU^\top$,
where $\circ$ denotes the Hadamard product;
\end{itemize}
\end{proposition}
\noindent
Moreover, we may also establish a relation between
the Einstein- and the standard Z-eigenvalues. Here we simply recall that a real
scalar $\lambda$ is called a Z-eigenvalue of a symmetric real tensor
$\calA \in \T_{2m,d}$, if there exists a real unit vector $x\in \R^d$ such that
\[
\calA x^{2m-1} = \lambda x,
\tim{ where } \calA x^{2m-1}
= \left(\sum_{i_2\ldots i_{2m}} a_{ii_2\ldots i_{2m}}x_{i_2}\cdots x_{i_{2m}}\right)\in\R^d
\]
(see \cite{QiLuo17}). As pointed out in \cite{Qi05}, Z-eigenvalues of
even-order symmetric real tensors always exist.
\begin{lemma}\label{bound} For an \epb-symmetric real tensor $\calA\in
\T_{2m,d}$, we have that, whenever Z-eigenvalues of $\calA$ exist,
$\lambda^\ep_{\max}(\calA)\geq \lambda^Z_{\max}(\calA)$.
\end{lemma}
\noindent{\emph{Proof.}}
By direct calculation, we have that
\begin{eqnarray}
\lambda^\ep_{\max}(\calA)
& = & \max_{y\in\R^{d^m}\setminus \{0\}}\frac{y^\top f(\calA)y}{\|y\|_2^2}
\nonumber \\
& \geq & \max_{x\in\R^d\setminus \{0\}} \frac{\langle f(\calA),
\left(x^{\otimes m}\right)\cdot\left(x^{\otimes m}\right)^\top
\rangle}{\|x^{\otimes m}\|_2^2 }
\nonumber\\
& \geq & \max_{x\in\R^d,~x^\top x =1} \calA x^{2m}\nonumber \\
& \geq & \lambda_{\max}^Z(\calA), \nonumber
\end{eqnarray}
where the second inequality results from the observation that
$x^\top x = 1$ implies that $\|x^{\otimes m}\|_2^2 = 1$.
\qed
\numsection{The Bernstein Inequality for Even-Order Tensors}\label{sBern-s}
We now turn to random tensors, which are defined as follows. Let $(\Omega,
\calF, \P)$ be a probability space. A real $(m,d)$ random tensor $\calX$ is a
measurable map from $\Omega$ to $\T_{m,d}$. A finite sequence $\{\calX_k\}$
of random tensors is independent whenever
\[
\P(\calX_k \in \calF_k \tim{ for all } k ) = \prod_k \P(\calX_k \in \calF_k)
\]
for every collection $\{\calF_k\}$ of Borel subsets of
$\T_{m,d}$. $\E(\calX)$, the expectation of the random tensor $\calX$, is, as
is the case for matrices, taken elementwise.
We are now in position to achieve our first objective: the Bernstein
inequality for even order real \epb-symmetric tensors based on Einstein products.
\begin{theorem}\label{main} Consider a finite sequence $\{\calX_k\}$ of
independent random real \epb-symmetric tensors of order $2m$ and dimension $d$.
Assume that
\[
\E(\calX_k) = \calO
\tim{ and }
\lambda_{\max}^\ep(\calX_k)\leq L \tim{for each } k.
\]
Consider the random tensor $\calY = \sum_k \calX_k$ and let $\nu(\calY)$ be
the tensor variance statistic of $\calY$ via Einstein product, that is
\[
\nu(\calY)
=\left\|\E(\calY^{\ep 2})\right\|^\ep
= \left\|\sum_k\E(\calX_k^{\ep 2})\right\|^\ep.
\]
Then
\beqn{expect}
\E\big(\lambda_{\max}^\ep(\calY)\big)
\leq\sqrt{2\nu(\calY)m\log d}+\frac{1}{3}Lm\log d.
\eeqn
Furthermore, for all $t\geq 0$,
\beqn{prob}
\P\left(\lambda_{\max}^\ep(\calY)\geq t\right)
\leq d^m\cdot \exp\left(\frac{-t^2/2}{\nu(\calY)+Lt/3}\right).
\eeqn
\end{theorem}
\noindent{\emph{Proof}.} First observe that the following equivalences
between tensors and matrices hold:
\begin{equation}\label{equvi}
\E(\calX_k)=\calO \Longleftrightarrow \E(f(\calX_k)) = 0,
\ms \|\calX_k\|^\ep\leq L \Longleftrightarrow \|f(\calX_k)\|\leq L,
\ms \lambda_{\max}^\ep(\calY) = \lambda_{\max}(f(\calY)).
\end{equation}
Using those equivalences and applying the matrix Bernstein
inequality \cite[Theorem 6.6.1]{Trop15} to $f(\calY) = \sum_k f(\calX_k)$, we
then deduce the desired result. \qed
\noindent
Using Lemma~\ref{bound}, we then immediately deduce the following corollary
involving Z-eigenvalues.
\begin{corollary}\label{main-cor} Suppose that the assumptions of
Theorem~\ref{main} hold and that Z-eigenvalues of $\calY$ exist. Then,
\beqn{expect-cor}
\E\big(\lambda_{\max}^Z(\calY)\big)
\leq\sqrt{2\nu(\calY)m\log d}+\frac{1}{3}Lm\log d.
\eeqn
Furthermore, for all $t\geq 0$,
\beqn{prob-cor}
\P\left(\lambda_{\max}^Z(\calY)\geq t\right)
\leq d^m\cdot \exp\left(\frac{-t^2/2}{\nu(\calY)+Lt/3}\right).
\eeqn
\end{corollary}
\noindent
This result reduces to the matrix Bernstein inequality for real symmetric
matrices \cite[Theorem 6.6.1]{Trop15} when $m=1$, since for any symmetric real
matrix $A$, $\calA$ is \epb-symmetric,
\[
\lambda_{\max}^Z(A) =\lambda_{\max}^\ep(A) = \lambda_{\max}(A),
\ms
\|\E(A^{\ep 2})\|^\ep = \|\E (A^2)\|
\]
and $d^m=d$.
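Theorem~\ref{main} can also be illustrated numerically. Since $f$ is a
bijection between $\S_{2m,d}$ and the symmetric $d^m\times d^m$ matrices, one
may sample Einstein-symmetric random tensors directly through their
matricizations and compare the empirical tail of $\lambda^\ep_{\max}(\calY)$
with the bound \req{prob}. The Python/NumPy sketch below is an illustration
only: all constants are arbitrary choices, and $L$ and $\nu(\calY)$ are crude
(Monte Carlo) estimates rather than sharp values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 2                          # tensors in S_{2m,d}; D = d**m
D = d**m
n_terms, n_trials = 50, 5000

def sample_fX():
    # f(X_k) for a zero-mean Einstein-symmetric random tensor X_k:
    # a symmetric matrix with independent Uniform(-1,1) entries
    M = rng.uniform(-1.0, 1.0, size=(D, D))
    return 0.5 * (M + M.T)

L = float(D)             # crude bound: lambda_max <= Frobenius norm <= D

# variance statistic nu(Y), estimated by Monte Carlo
E_X2 = np.zeros((D, D))
for _ in range(2000):
    X = sample_fX()
    E_X2 += X @ X
E_X2 /= 2000
nu = n_terms * np.linalg.norm(E_X2, 2)

# empirical distribution of lambda^E_max(Y) versus the bound (prob)
lam = np.array([np.linalg.eigvalsh(sum(sample_fX() for _ in range(n_terms)))[-1]
                for _ in range(n_trials)])
for t in (0.5 * np.sqrt(nu), np.sqrt(nu), 2.0 * np.sqrt(nu)):
    emp = np.mean(lam >= t)
    bnd = min(1.0, D * np.exp(-t**2 / 2.0 / (nu + L * t / 3.0)))
    print(f"t = {t:8.2f}   empirical tail = {emp:.4f}   bound = {bnd:.4f}")
\end{verbatim}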
\numsection{The General Tensor Bernstein Inequality}\label{gBern-s}
As in the matrix case, extending the concentration inequality to
tensors of odd order requires additional work. The notion of Einstein
product itself must first be extended to general tensors in $\T_{N,d}$.
\begin{definition}[Generalized Einstein Products]\label{generalized-Einstein}
Let $\calA$, $\calB$ be two real tensors in $\T_{N,d}$, and
$m = \left\lceil\frac{N}{2}\right\rceil$. Two generalized
Einstein products of $\calA$ and $\calB$, denoted by $\calA \!\bep\! \calB$
and $\calA \uep\calB$, are defined by
\beqn{gEinstein1}
\left(\calA \!\bep\! \calB\right)_{i_1\ldots i_m j_1\ldots j_m}
= \sum_{k_1,\ldots, k_{N-m}}
a_{i_1\ldots i_m k_1\ldots k_{N-m}}b_{j_1\ldots j_m k_1\ldots k_{N-m}}
\in \T_{2m,d},
\eeqn
and
\beqn{gEinstein2}
\left(\calA\uep \calB\right)_{k_1\ldots k_{N-m} k'_1\ldots k'_{N-m}}
= \sum_{i_1,\ldots, i_{m}}
a_{i_1\ldots i_m k_1\ldots k_{N-m}}b_{i_1\ldots i_m k'_1\ldots k'_{N-m}}
\in \T_{2(N-m),d},
\eeqn
respectively.
\end{definition}
\noindent
We examine two special cases.
\begin{itemize}
\item[(i)] If $N=1$, the ranges between 1 and $N-m=0$
in the above definition are interpreted as empty. In this case,
$\calA$ and $\calB$ are vectors in $\Re^d$, say
$a$ and $b$. Thus, $a \!\bep\! b = ab^\top$ and
$a \uep b = a^\top b$, which are exactly the outer
and inner products of vectors.
\item[(ii)] If $N = 2m$, then $\calA$ and $\calB$ are in $\T_{2m,d}$, and
\beqn{prodsok}
\calA \!\bep\! \calB = \calA \ep \calB^\top,
\tim{ and }
\calA \uep \calB = \calA^\top \ep \calB,
\eeqn
where $\calB^\top$ is defined in Definition~\ref{concepts} and $\ep$ is the
Einstein product in Definition~\ref{Eproduct}, both for even-order tensors.
\end{itemize}
\noindent
We also need to generalize the bijective transformation $f$ which unfolds an
even-order tensor to a square matrix (as introduced in Section 2) to operate
on tensors of any order. This is done as follows.
\begin{definition}[Matricization]\label{bijective2} Let $N \geq 1$, $d\geq 1$ and
$m=\left\lceil \frac{N}{2}\right\rceil$. Define a bijective linear
transformation $\barf$ from $\T_{N,d}$ to $\Re^{d^m \times d^{N-m}}$
such that for any tensor $\calA \in \T_{N,d}$,
\[
\left(\barf\left(\calA\right) \right)_{ik}
= a_{i_1\ldots i_m k_1\ldots k_{N-m}},
\]
where
\[
i=i_1+\sum\limits_{l=2}^m \left((i_l-1)d^{l-1}\right)
\tim{ and }
k=k_{1}+\sum\limits_{l=2}^{N-m} \left((k_l-1)d^{l-1}\right).
\]
\end{definition}
\noindent
Note that $\barf(\calA)$ need not be square or (obviously) symmetric.
As above, we consider two special cases.
\begin{itemize}
\item[(i)] If $N=1$, the range between 1 and $N-m=0$
is again interpreted as empty. It results that $\barf$
is the identity transformation that maps any vector ${\bf x}\in\R^d$ to
itself.
\item[(ii)] If $N=2m$, then $\barf$ coincides with the transformation $f$.
\end{itemize}
\noindent
The all important relation \req{transform} may also be generalized as follows.
\begin{lemma}\label{equiv} Let $N\geq 1$, $d\geq 1$ and $m=\left\lceil
\frac{N}{2}\right\rceil$. Then we have that, for all $\calA\in \T_{N,d}$,
\begin{equation}\label{eq}
f\left(\calA \!\bep\! \calA\right)
= \barf\left(\calA\right)\cdot\barf\left(\calA\right)^\top
\tim{ and }
f\left(\calA \uep \calA\right)
= \barf\left(\calA\right)^\top\cdot \barf\left(\calA\right).
\end{equation}
\end{lemma}
\noindent{\emph{Proof.}} Notice that
\begin{eqnarray}
\left(\calA\!\bep\! \calA\right)_{i_1\ldots i_m j_1\ldots j_m}
&=& \sum_{k_1,\ldots, k_{N-m}} a_{i_1\ldots i_m k_1\ldots k_{N-m}}a_{j_1\ldots j_m k_1\ldots k_{N-m}}
\nonumber \\
&=& \sum_{k_1,\ldots, k_{N-m}} a_{j_1\ldots j_m k_1\ldots k_{N-m}}a_{i_1\ldots i_m k_1\ldots k_{N-m}}
\nonumber\\
&=& \left(\calA\!\bep\! \calA\right)_{j_1\ldots j_m i_1\ldots i_m}, \nonumber
\end{eqnarray}
for any $i_1, \ldots, i_m, j_1, \ldots, j_m$. Thus, $\calA\!\bep\! \calA\in \S_{2m,d}$
and hence $f\left(\calA\!\bep\! \calA\right)$ is well-defined.
Denote $B = f\left(\calA \!\bep\! \calA\right)$ and
$C = \barf\left(\calA\right)\cdot \barf\left(\calA\right)^\top$.
From the definitions of $f$ and $\barf$, we know that the matrices
$B$ and $C$ have the same size, which is $d^m \times d^m$. For any $i$ and
$j \in \ii{d^m}$, there exist two $m$-tuples of indices
$\left(i_1,\ldots, i_m\right)$ and $\left(j_1,\ldots, j_m\right)$ that
are uniquely determined by $i$ and $j$ via \req{resp}. By direct calculation, we
then obtain that
\begin{eqnarray}
C_{ij}
& = & \sum_{l=1}^{d^{N-m}}\left[\barf(\calA)\right]_{il}\left[\barf(\calA)\right]_{jl}
\nonumber \\
& = & \sum_{k_1,\ldots, k_{N-m}} a_{i_1\ldots i_m k_1\ldots k_{N-m}}a_{j_1\ldots j_m k_1\ldots k_{N-m}}
\nonumber\\
& = & \left(\calA\!\bep\! \calA\right)_{i_1\ldots i_m j_1\ldots j_m} \nonumber\\
& = & \left[f\left(\calA\!\bep\! \calA\right)\right]_{ij} \nonumber\\
& = & B_{ij}.
\end{eqnarray}
The proof for the case involving $\uep$ is similar. \qed
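A quick numerical check of \req{eq} is again possible, since $\barf$ is
simply a column-major reshape of the $N$-way array into a
$d^m\times d^{N-m}$ matrix (illustrative Python/NumPy sketch only; order and
dimension are arbitrary choices).
\begin{verbatim}
import numpy as np

N, d = 3, 2
m = (N + 1) // 2                                  # ceil(N/2)
A = np.random.rand(*([d] * N))                    # A in T_{N,d}
barf_A = A.reshape(d**m, d**(N - m), order='F')   # \bar f(A)

# generalized Einstein products of A with itself
A_down = np.tensordot(A, A, axes=(list(range(m, N)), list(range(m, N))))
A_up   = np.tensordot(A, A, axes=(list(range(m)), list(range(m))))

f = lambda T, k: T.reshape(d**k, d**k, order='F')
assert np.allclose(f(A_down, m),   barf_A @ barf_A.T)
assert np.allclose(f(A_up, N - m), barf_A.T @ barf_A)
\end{verbatim}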
\noindent
We next need to revisit the definition of the spectral norm.
\begin{definition} Let $N, d\geq 1$ and
$m=\left\lceil \frac{N}{2}\right\rceil$.
Suppose that $\calA=\left(a_{i_1\ldots i_mk_1\ldots k_{N-m}}\right)\in \T_{N,d}$.
The spectral norm of $\calA$ in the sense of generalized Einstein products,
denoted as $\|\calA\|^{\bep}$, is defined by
$\|\calA\|^{\bep} = \sqrt{\|\calA \!\bep\! \calA\|^\ep}$.
\end{definition}
\noindent
Using Proposition~\ref{observations} (iii) and \req{prodsok}, one verifies that
$\|\calA\|^{\bep}=\|\calA\|^{\ep}$ whenever $\calA \in \T_{2m,d}$.
As for the matrix case \cite{Trop15}, we now use a construct to build a
symmetric even-order object from (possibly) odd-order non-square parts. This
is achieved by using the Hermitian dilation defined, for any real matrix $B$,
by
\[
H(B) = \left(\begin{array}{cc}
O & B \\
B^\top & O \\
\end{array}\right).
\]
It is then possible to establish a link between this construct and the
spectral norm just defined: first note that
\beqn{T2.1.28}
\lambda_{\max}(H(B)) = \|H(B)\| = \|B\|.
\eeqn
We may then use this identity to establish the following result.
\begin{lemma}
Let $N, d\geq 1$ and $m=\left\lceil \frac{N}{2}\right\rceil$.
Suppose $\calA=\left(a_{i_1\ldots i_mk_1\ldots k_{N-m}}\right)\in \T_{N,d}$.
We then have that
\begin{equation}\label{sym}
\|\calA\|^{\bep}
= \sqrt{\|\calA \uep \calA\|^\ep}
= \|\barf\left(\calA\right)\|
= \|H(\barf\left(\calA\right))\|
= \lambda_{\max}\left(H\left(\barf\left(\calA\right)\right)\right).
\end{equation}
\end{lemma}
\noindent{\emph{Proof.}} By direct calculation, we have
\begin{eqnarray}
\|\calA\|^{\bep} = \sqrt{\|\calA\!\bep\! \calA\|^\ep}
&=& \sqrt{\max_{i} \left|\lambda^\ep_i\left( \calA\!\bep\! \calA\right)\right|}
\nonumber\\
&=& \sqrt{\max_{i} \left|\lambda_i\left(f\left( \calA\!\bep\! \calA\right)\right)\right|}
\nonumber \\
&=& \sqrt{ \max_{i} \left|\lambda_i\left(\barf\left(\calA\right)\cdot
\barf\left(\calA\right)^\top\right)\right| } \nonumber\\
&=& \|\barf\left(\calA\right)\| \nonumber\\
&=& \sqrt{\|\calA \uep \calA\|^\ep } \nonumber\\
\end{eqnarray}
where the second equality follows from Definition~\ref{concepts} (vii), the
third one by applying Proposition~\ref{observations} (ii), the fourth and the
sixth resulting from \eqref{eq}. Now, using \req{T2.1.28},
\[
\|\barf\left(\calA\right)\|
= \|H(\barf\left(\calA\right))\|
= \lambda_{\max}\left(H\left(\barf\left(\calA\right)\right)\right),
\]
completing the proof. \qed
We are now in a position to state the general tensor
Bernstein inequality for random tensors of any order.
\begin{theorem}\label{main2}
Consider a finite sequence $\left\{\calX_k\right\}$ of independent
random tensors in $\T_{N,d}$ and let $m = \left\lceil \frac{N}{2}\right\rceil$.
Assume that, for some constant $L \geq 0$,
\[
\E(\calX_k) = \calO
\tim{ and }
\|\calX_k\|^{\bep} \leq L
\tim{ for all } k.
\]
Consider now the random tensor $\calY = \sum_k \calX_k$ and let $\nu(\calY)$ be the
generalized tensor variance statistic of the sum given by
\begin{eqnarray}
\nu(\calY)
& = & \max\left\{\left\|\E\left(\calY\!\bep\! \calY\right)\right\|^\ep,
\left\|\E\left(\calY\uep \calY\right)\right\|^\ep\right\}\nonumber\\
& = & \max\left\{\Big\|\sum_k\E\Big(\calX_k\!\bep\! \calX_k\Big)\Big\|^\ep,
\Big\|\sum_k\E\Big(\calX_k\uep \calX_k\Big)\Big\|^\ep\right\}\nonumber
\end{eqnarray}
Then
\begin{equation}\label{expect2}
\E(\|\calY\|^{\bep})
\leq\sqrt{2\nu(\calY)\log\left(d^m+d^{N-m}\right)}+\frac{1}{3}L\log\left(d^m+d^{N-m}\right).
\end{equation}
Furthermore, for all $t\geq 0$,
\begin{equation}\label{prob2}
\P\left(\|\calY\|^{\bep}\geq t\right)
\leq \left(d^m+d^{N-m}\right)\cdot \exp\left(\frac{-t^2/2}{\nu(\calY)+Lt/3}\right).
\end{equation}
\end{theorem}
\noindent{\emph{Proof.}} The desired result follows from applying
\cite[Theorem~6.1.1]{Trop15} to the random matrix
$\barf(\calY) = \sum_k \barf(\calX_k)$ and using the facts that
\[
\|\calX_k\|^{\bep} = \|\barf(\calX_k)\|,
\ms
\|\calY\|^{\bep} = \|\barf(\calY)\|,
\]
and that
\begin{eqnarray}
\nu(\calY)
& = & \max\left\{\left\|\E\left(\barf(\calY)\cdot\barf(\calY)^\top\right)\right\|,
\left\|\E\left(\barf(\calY)^\top\cdot \barf(\calY)\right)\right\|\right\}
\nonumber\\
& = & \max\left\{\Big\|\E\Big(\sum_k\barf(\calX_k)\cdot\barf(\calX_k)^\top\Big)\Big\|,
\Big\|\E\Big(\sum_k\barf(\calX_k)^\top\cdot \barf(\calX_k)\Big)\Big\|\right\}.
\end{eqnarray}
\qed
\noindent
Observe that the dimension-dependent factor on the right-hand side of
\req{prob2} is $d^m+d^{N-m}$, which is larger than $md$, the factor one might
naively expect as a generalization of the matrix case, where this factor is
$2d$. This larger bound somewhat limits the applicability of the results to
moderate values of $d$ and $m$. It is however worthwhile to note that we
have merely assumed the \epb-symmetry of the random tensors under
consideration, which is weaker than true symmetry.
\numsection{The Tensor Bernstein Inequality in Intrinsic Dimension}\label{idBern-s}
The above discussion about the dimension-dependent factor of \req{prob2}
prompts the question of the extension of a version of the Bernstein
inequality where this factor can be improved. This is the case of ``intrinsic
dimension'' version of this result, which we now consider.
Our approach first introduces Einstein-positive-(semi)definite tensors.
The positive semi-definiteness of real tensors has been discussed in
\cite{Qi05} and shown to have applications such as in biomedical imaging
\cite{QiYuWu10}. Recall that a real tensor $\calA\in \T_{2m,d}$ is called positive
semi-definite (PSD) if
\[
\calA x^{2m} =
\sum_{i_1\ldots i_mj_1\ldots j_m}a_{i_1\ldots i_m j_1\ldots j_m}
x_{i_1}\cdots x_{i_m}x_{j_1}\cdots x_{j_m}
\geq 0, \tim{ for all } x\in\R^d.
\]
(see \cite{QiLuo17}).
Moreover, it has been shown in \cite{Qi05} that an even-order symmetric real
tensor is PSD if and only if all its Z-eigenvalues are nonnegative.
Similarly, we can define such a nonnegativity in the sense of Einstein
products as follows.
\begin{definition}
An Einstein-symmetric tensor $\calA\in\T_{2m,d}$ is called Einstein-positive
semi-definite (\epb-PSD) (\epb-positive-definite (\epb-PD), respectively) if
and only if all its Einstein-eigenvalues are nonnegative (positive, respectively).
\end{definition}
We adopt the notation $\calA\succeq^\ep (\succ^\ep) \calO$ to represent that
$\calA$ is \epb-PSD (\epb-PD), and similarly $\calA\succeq^\ep (\succ^\ep) \calB$
if $\calA-\calB$ is \epb-PSD (\epb-PD). Such an \epb-PSD (\epb-PD) property is
actually stronger than the original PSD (PD) property, as stated in the
following lemma.
\begin{lemma}\label{psd}
Suppose that $\calA\in\S_{2m,d}$. If $\calA$ is \epb-PSD (\epb-PD), then $\calA$ is PSD (PD).
\end{lemma}
\noindent{\emph{Proof.}} Because of \req{transform}, $\calA$ is \epb-PSD if and only
if $f(\calA)$ is a PSD matrix. Then, for any $x\in\R^d$, it follows that
\[
\calA x^{2m}
= \langle f(\calA), \left(x^{\otimes m}\right)\left(x^{\otimes m}\right)^\top\rangle
= \left(x^{\otimes m}\right)^\top f(\calA) \left(x^{\otimes m}\right)
\geq 0.
\]
The proof for the \epb-PD case is similar. \qed
\noindent
Note that the PSD property does not, in general, imply the \epb-PSD property.
The following counterexample is taken from \cite[Example 4.5]{LuoQiYe15}.
\begin{example}\label{countereg}
Let $\calA\in \S_{4,3}$ with
$a_{1122}=a_{1212}=a_{1221}=a_{2112}=a_{2121}=a_{2211}=1$ and other entries
$0$. It is easy to verify that
$\calA x^{4} = 6x_1^2x_2^2\geq 0$ for any $x \in \R^3$,
whereas $y^\top f(\calA)y = 2y_1y_5<0$ for $y= (1,0,0,0,-1,0,0,0,0)^\top$.
\end{example}
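The claim in Example~\ref{countereg} is easy to verify numerically
(illustrative Python/NumPy sketch, using $0$-based indices and the
column-major matricization of Section~\ref{defs-s}).
\begin{verbatim}
import numpy as np

d, m = 3, 2
A = np.zeros((d, d, d, d))
# the six nonzero entries of the example (0-based indices)
for idx in [(0,0,1,1), (0,1,0,1), (0,1,1,0), (1,0,0,1), (1,0,1,0), (1,1,0,0)]:
    A[idx] = 1.0

fA = A.reshape(d**m, d**m, order='F')             # f(A)
print(np.linalg.eigvalsh(fA).min())               # -1.0, so A is not E-PSD

x = np.random.randn(d)
Ax4 = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)
print(np.isclose(Ax4, 6.0 * x[0]**2 * x[1]**2))   # True: A x^4 >= 0, so A is PSD
\end{verbatim}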
\begin{proposition}\label{AA-l}
Let $\calA \in \T_{N,d}$ and $m=\left\lceil\frac{N}{2}\right\rceil$. Then
$\calA \!\bep\! \calA \in \T_{2m,d}$ and
$\calA \uep \calA \in \T_{2(N-m),d}$
are both \epb-PSD and PSD.
\end{proposition}
\noindent{\emph{Proof.}}
\[
\begin{array}{lcl}
(\calA \!\bep\! \calA) x^{2m}
& = & \langle f(\calA \!\bep\! \calA),
\left(x^{\otimes m}\right)\left(x^{\otimes m}\right)^\top\rangle\\*[1.5ex]
& = & \left(x^{\otimes m}\right)^\top \barf(\calA)\cdot
\barf(\calA)^\top \left(x^{\otimes m}\right)\\*[1.5ex]
& = & \| \barf(\calA)^\top \left(x^{\otimes m}\right)\|^2\\*[1.5ex]
& \geq & 0.
\end{array}
\]
The proof is similar for $\calA \uep \calA$.
\qed
Armed with these extended notions and the fundamental relation \req{transform}
applied to the Einstein EVD, we finally state an intrinsic-dimension version
of the Bernstein concentration inequality for tensors.
\begin{theorem}\label{main3}
Consider a finite sequence $\left\{\calX_k\right\}$ of independent
random tensors in $\T_{N,d}$ and let $m = \left\lceil \frac{N}{2}\right\rceil$.
Assume that, for some constant $L \geq 0$,
\[
\E(\calX_k) = \calO
\tim{ and }
\|\calX_k\|^{\bep} \leq L
\tim{ for all } k.
\]
Consider now the random tensor $\calY = \sum_k \calX_k$ and let $\calV_1$ and
$\calV_2$ be upper bounds for the tensor-valued variance statistics of $\calY$
introduced in Theorem~\ref{main2}, that is
\beqn{Vbounds}
\calV_1
\succeq^\ep \E\left(\calY\!\bep\! \calY\right)
= \sum_k\E\left(\calX_k\!\bep\! \calX_k\right) \tim{ and }
\calV_2
\succeq^\ep \E\left(\calY\uep \calY\right)
= \sum_k\E\left(\calX_k\uep \calX_k\right).
\eeqn
Let
\[
\nu(\calY) = \max\left\{\|\calV_1\|^\ep,\|\calV_2\|^\ep\right\}
\tim{ and }
d_{\calV}(\calY)
= \frac{1}{\nu(\calY)}
\Big(\tr^\ep(\calV_1)+\tr^\ep(\calV_2)\Big).
\]
Then, for $t \geq \sqrt{\nu(\calY)}+L/3$,
\begin{equation}\label{prob3}
\P\left(\|\calY\|^{\bep}\geq t\right)
\leq 4d_{\calV}(\calY)\cdot \exp\left(\frac{-t^2/2}{\nu(\calY)+Lt/3}\right).
\end{equation}
\end{theorem}
\noindent{\emph{Proof.}} We first observe that, because of
Proposition~\ref{AA-l}, $\E\left(\calY \!\bep\! \calY\right)$
and $\E\left(\calY \uep \calY\right)$ are
\epb-positive-semidefinite, which make the \epb-PSD ordering in \req{Vbounds}
well-defined. We also note that $d_{\calV}(\calY)$ is identical to the
intrinsic dimension of the matrix
\beqn{V-def}
V = \left(\begin{array}{cc}
\barf(\calV_1) & 0 \\
0 & \barf(\calV_2)
\end{array}
\right),
\eeqn
where the standard (matrix) intrinsic dimension of a
positive-semidefinite matrix $M$ is the ratio $\tr(M)/\|M\|$.
The desired result then again follows from applying
an existing result for matrices (here \cite[Theorem~7.3.1]{Trop15})
to the random matrix $\barf(\calY) = \sum_k \barf(\calX_k)$.
\qed
\noindent
The main difference between this theorem and Theorem~\ref{main2} is the
replacement of \req{prob2} by \req{prob3}: one has to relax the range of $t$ for
which the inequality is valid, but often gains in the ``dimension-dependent''
factor, since $d_{\calV}(\calY)$ never exceeds $d^m+d^{N-m}$ and can be much
smaller if $V$ in \req{V-def} is close to being of low rank.
\numsection{Conclusion}\label{concl-s}
We have considered the Einstein tensor products and reviewed the strong link
this concept establishes between standard matrix theory and tensor analysis.
This link has allowed us to restate the powerful Bernstein matrix
concentration inequality in the case of general tensors of arbitrary order.
Other concentration inequalities do exist for matrices (see \cite{Trop15} for
an overview). Whether they can be extended to tensors using a similar
approach, although likely, remains open at this stage.
It is interesting (and challenging) to examine if a better ``dimension
factor'' (closer to $md$) could be achieved by an approach where one does not
merely unfold tensors to matrices and use existing concentration results for these, but
where a true analysis of the tensor case is conducted. The main difficulty
is to find an eigenvalue decomposition of (random) tensors with a number of
``eigenvalues'' smaller than $d^m$ (this is for instance not necessarily the
case of Z-eigenvalues \cite{CartStur13}).
If one is to judge by the vast diversity of applications where matrix
concentration inequalities have been useful, our new result
potentially opens several research paths in high-dimensional
computational statistics and numerical optimization. In particular, its
application to sub-sampling methods for the estimation of
derivative tensors beyond the Hessian may now be considered, as it makes algorithms
based on high-order Taylor's expansions and models practical. The complexity
of optimization methods of this type has been analyzed in
\cite{BellGuriMoriToin18}, but the necessary probabilistic estimation properties were
so far limited to quadratic models. The new tensor concentration inequality
thus allows further developments in a framework which is central to computational
deep learning.
{\footnotesize
\section*{Acknowledgment}
This research was partially supported by National Natural Science Foundation
of China (11771038, 11431002), the State Key Laboratory of Rail Traffic
Control and Safety, Beijing Jiaotong University (RCS2017ZJ001), and the Hong
Kong Research Grant Council (Grant No. PolyU 15300715, 15301716 and
15300717). The third author gratefully acknowledges the support of the
Hong Kong Polytechnic for the visit during which this research was initiated.
|
2,869,038,156,819 | arxiv | \section{Introduction}
The dynamic solar magnetic field is responsible for producing energetic events like solar flares and coronal mass ejections.
These events drive space weather which sometimes has hazardous impacts on
our space-based society.
In a strong cycle, we observe more such events and thus larger impacts on space weather. Hence, predicting the solar cycle strength is of utmost importance.
As the solar cycle is irregular, the prediction is challenging.
Several methods have been applied to predict the amplitudes of the past few cycles, and Cycle 25 is no exception \citep{Petrovay20, Nandy21}. Among these, the precursor method, in which information from the previous cycle is used to predict the strength of the cycle, is the most widely used; see \citet{Hat02, CS07, Kane10, Haz15} and Section 2 of \citet{Petrovay20}.
One important feature of the solar cycle is the Waldmeier effect\ \citep{W35}, which says that strong cycles take less time to rise, and vice versa. While this correlation is somewhat poor and even difficult to establish \citep[][hereafter KC11]{Dik08, KC11}, there exists a robust correlation between
the rise rate (slope) and the amplitude of the cycle \citep{CS08}.
This correlation exists strongly in all the observed proxies of the solar cycle. KC11 called these two correlations, i.e., the correlations of the rise time and the rise rate of the cycle with the amplitude, WE1 and WE2, respectively. We mention that the Waldmeier effect\ is not limited to our Sun; some other Sun-like stars also show this feature \citep{garg19}.
As the rise rate can be computed when the solar cycle has just passed the minimum by a few years, we can apply WE2 to predict the amplitude of the solar cycle when the cycle is still growing and has not reached its peak. The current Cycle 25 has passed about 2~years and thus we can predict the amplitude of Cycle 25. This is one of the motivations of the present Letter.
While WE2 is derived based on the observed correlation, there is a strong physical basis for this. KC11 showed that WE2 was robustly reproduced in the Babcock--Leighton\ type flux transport dynamo models with stochastic fluctuations in the poloidal source. Observations, as well as the dynamo models, suggest that if the polar field at the solar minimum is strong, then the amplitude of the next cycle will be strong \citep{MMS89, JCC07, WS09, KO11, Muno13, Priy14, KM17, KMB18, KKV21}.
On the other hand, if a cycle is strong, then it rises fast (WE2). Hence, there is a link between the polar field at the solar minimum and the rise rate of the next cycle. We shall explicitly demonstrate this link in the present study.
The most interesting feature that we have found is that the rise rate of the polar field build-up (after its reversal) also determines the rise rate of the next sunspot cycle and thus the amplitude of the cycle.
Hence, we do not even need to wait for the time of the solar cycle minimum or the time of the peak of the polar field \citep[which is the usual time for the prediction;][]{Sch78, CCJ07} to get an idea of the next cycle's strength; the rate at which the polar field develops carries this information.
In this Letter, we shall present this link both from the observed and the dynamo model data and discuss the physical reason based on the Babcock--Leighton\ dynamo.
Finally, we shall predict the amplitude of the ongoing solar Cycle 25, separately using the rise rate of the current solar cycle and the rise rate of the previous cycle's polar field. We shall show that the predictions made from these two methods are very close to each other because the physics behind them is linked.
\section{Data and Methods}
\label{sec:method}
For our analysis, we have used the monthly sunspot number (SSN) and sunspot area (SSA) data. The SSA data however are not available for Cycle 25.
For the observational measures of solar polar field, we have included the polar field strength data (monthly binned) collected from Wilcox Solar Observatory (WSO).
To remove the high fluctuations in the SSA and SSN data, we have used a Gaussian smoothing filter with FWHM = 13 and 7 months, respectively \citep{Hat02}.
As the current Solar Cycle 25 has only undergone 2 years from its minimum, we can calculate its rise rate based on these two years of data only. Hence, to make it uniform for all 13 cycles (Cycles 12--24),
we have computed the rise rate for the first 2 years of the rise phase only. As the data are not very smooth,
and there is some overlap between two consecutive cycles during the first few months of a cycle \citep{CS08}, we excluded the first six months' data from our analysis to avoid any contamination of the rise rates for these reasons. Further, as the rise rate of a cycle varies throughout its evolution, we computed the rates over different time intervals
(6 to 18 months, 12 to 24 months and 6 to 24 months) and finally averaged these values to get one rise rate for each cycle.
The rise rate of the polar field is computed within the first three years after the reversal, as there is no overlap between two consecutive cycles in the polar field data.
Here also, we compute the rates over different intervals (0 to 36 months and 12 to 36 months for the north; 0 to 36 months and 24 to 36 months for the south) and average those to get one rise rate per cycle.
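The rise-rate computation described above can be summarised by the following
illustrative Python sketch (a minimal outline only; the smoothing
implementation, column layout and file name are placeholders rather than the
exact pipeline used here).
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth(series, fwhm_months):
    # Gaussian smoothing with the given FWHM (in months)
    sigma = fwhm_months / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter1d(series, sigma)

def rise_rate(t, ssn, t_min, intervals=((6, 18), (12, 24), (6, 24))):
    """Average slope of the smoothed SSN over the chosen windows
    (in months) after the cycle minimum at time t_min."""
    rates = []
    for start, stop in intervals:
        sel = (t >= t_min + start) & (t <= t_min + stop)
        slope, _ = np.polyfit(t[sel], ssn[sel], 1)   # linear fit
        rates.append(slope)
    return np.mean(rates)

# hypothetical usage with monthly times (in months) and raw SSN values:
# t, ssn_raw = np.loadtxt('ssn_monthly.txt', unpack=True)
# rate = rise_rate(t, smooth(ssn_raw, fwhm_months=7), t_min=...)
\end{verbatim}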
\section{Results and Discussion}
\subsection{WE2 and the prediction of Cycle 25}
\label{sec:pred_we2}
\Fig{fig:corrplot} shows the scatter plots of the rise rates with the amplitudes of the SSN (a) and SSA (b).
We find a strong correlation between these two quantities for both datasets, with linear (Pearson) correlation coefficients of 0.87 for the SSN and 0.89 for the SSA data. These results reproduce WE2 \citep{W35, KC11}. The straight lines in \Fig{fig:corrplot} are obtained from linear
regression based on a Bayesian probabilistic approach (using Python's Pymc3 routine); see the figure caption for the fitted parameters.
The strong correlation between these quantities in \Fig{fig:corrplot} implies that if the rise rate of a cycle is known even for some part of its rising phase, then the amplitude of the cycle can be predicted well in advance.
To test the reliability of this prediction method, we predict the amplitudes of the last few observed cycles and compare them with observed values.
We note that when we predict the amplitude of a given cycle, we exclude the data for that cycle while computing the regression relation. In \Tab{table1}, we list our predicted peak values along with their errors for the previous 6 cycles (Cycles 19--24) and the actual observed values.
We can see that the predicted values are not too far from the actual ones. For some cycles, like Cycle 22, the predicted value is quite different from the observed one, but considering the error in the regression, it is not far outside the allowed range.
We do the same exercise using the SSA data, and the predicted SSA amplitudes are given in \Tab{table1}.
However, in this case, we see a somewhat larger deviation in predicted values from the actual observations, although the correlation between the rise rate and the observed amplitude is better than that in the SSN data.
To compare these predicted peak areas
with the observed SSN, we
convert the predicted SSA into SSN by employing the regression relation
(${\rm SSN} = 0.076 {\rm SSA} + 39.717$) between
SSN and SSA; the converted values are
listed in the last column of \Tab{table1}.
Overall, prediction based on the rise rate of both sunspot number and area supports our idea.
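A minimal sketch of the Bayesian linear fit behind \Fig{fig:corrplot} is
given below (PyMC3-based illustration only; the priors, sampler settings and
variable names are our illustrative choices and not necessarily those used for
the published fit).
\begin{verbatim}
import numpy as np
import pymc3 as pm

def fit_we2(x, y, draws=2000, tune=1000):
    # x: rise rates, y: cycle amplitudes (e.g. Cycles 12-24), 1-D arrays
    with pm.Model():
        m = pm.Normal('m', mu=0.0, sigma=10.0)
        c = pm.Normal('c', mu=0.0, sigma=100.0)
        sigma = pm.HalfNormal('sigma', sigma=50.0)
        pm.Normal('obs', mu=m * x + c, sigma=sigma, observed=y)
        trace = pm.sample(draws, tune=tune, return_inferencedata=False)
    return trace['m'], trace['c']

# amplitude of a new cycle from its measured rise rate r_new:
# m_s, c_s = fit_we2(x, y);  amp = m_s.mean() * r_new + c_s.mean()
\end{verbatim}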
\begin{figure}
\centering
\includegraphics[scale=0.3]{scatter1.eps}
\caption{Scatter plots between the rise rates and the amplitudes of the cycles for SSN (a) and SSA (b).
Lines are the linear regressions: $ y = m x + c$, where
$m = 2.001 \pm 0.308$ and $c = 77.305 \pm 16.545$ for SSN and $m = 2.423 \pm 0.114$ and $c = 596.789 \pm 62.028$ for SSA.
}
\label{fig:corrplot}
\end{figure}
\begin{table}
\centering
\caption{Predictions of the solar cycle {\it amplitudes} using the rise rates of SSN and SSA (in $\mu$Hem) for the last few known cycles and the ongoing Cycle~25.
}
\begin{tabular}{lllcclcccl}
\cline{1-6}
Cyc. & Obs. & Predicted & Obs. & Predicted & SSN from \\
No & SSN & peak SSN & SSA & SSA & Col. 5 \\
\cline{1-6}
25 & $---$ & $138\pm 26$ & $---$ & $---$ & $---$ \\
\cline{1-6}
24 & 116 & $125\pm 26$ & 1054.0 & $1058.2\pm 84.6$ & $121\pm 17$ \\
\cline{1-6}
23 & 181 & $177\pm 33$ & 1746.1 & $1536.3\pm 83.1$ & $157\pm 17$ \\
\cline{1-6}
22 & 213 & $170\pm 30$ & 2354.5 & $2026.2\pm 129.3$ & $195\pm 20$ \\
\cline{1-6}
21 & 232 & $216\pm 40$ & 2363.1 & $1831.5\pm 122.3$ & $180\pm 19$ \\
\cline{1-6}
20 & 158 & $183\pm 32$ & 1561.4 & $1978.8\pm 128.8$ & $191\pm 20$ \\
\cline{1-6}
19 & 287 & $290\pm 72$ & 3285.2 & $3008.6\pm 177.2$ & $270\pm 24$ \\
\cline{1-6}
\end{tabular}
\label{table1}
\end{table}
\begin{figure*}
\includegraphics[scale=0.3]{SSN1.eps}
\caption{
{
Comparison of our prediction with observations.
Temporal variation of the observed SSN is shown by the blue curve.
The predicted amplitudes are shown by black squares and their errors by vertical lines. The time of the peak of the predicted Cycle 25 is shown by the vertical dotted line with the error by a horizontal arrow.
The prediction for Cycle~25 using {\it the rise rate} of the previous cycle's polar field is shown by a (dark red) filled circle.
}
}
\label{fig:cycles}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.3]{polar_rise.eps}
\caption{Scatter plots between the polar field at solar cycle minimum and the rise rate of next cycle. (a) Open circles and red crosses represent the rise rates calculated from SSN and SSA, respectively. (b) Same as (a) but from a Babcock--Leighton\ type dynamo model.}
\label{fig:pol}
\end{figure}
Finally we predict the peak of the ongoing Cycle 25 based on its available SSN data.
We find the predicted amplitude of Cycle 25 to be $138\pm26$; see \Tab{table1}.
As we do not have SSA data for this cycle, we cannot predict the peak of SSA for Cycle 25 directly from the rise rate of SSA data.
The WE2 relation also gives how much time the cycle will take to reach its peak from its minimum.
For Cycle 25, this value comes out to be $4.5\pm 0.8$ years, which is quite close to the
average time of $4.58\pm 0.81$ years between the cycle minimum and maximum
as reported in \citet[][see their Table~1]{pawan21}.
Therefore, we predict that Cycle~25 will attain its peak at $2024.5\pm 0.8$.
For better visualization, our predicted peak SSNs with their error ranges are shown
in \Fig{fig:cycles}.
\subsection{Connecting WE2 with the previous cycle polar field}
We would like to mention that our prediction in \Sec{sec:pred_we2} is based on an empirical relation which holds only `statistically', and hence the prediction for certain cycles (like Cycle 22) may not perfectly agree with the observation. However, we still make a prediction because our method is based on strong physical grounds.
It was shown that the WE2 relation (on which our prediction is based) is a robust feature of the solar dynamo \citep{KC11, pipin11, pipin11b}.
Particularly, KC11 found a strong correlation for WE2 in all the simulations they performed. They explained that fluctuations in the generation of the poloidal field (the Babcock--Leighton\ process) make the polar field at the solar minimum unequal for different cycles. As the polar field gives rise to the toroidal field and the sunspots of the next cycle, a strong polar field makes the next cycle strong. This is also established in the observational data \citep{MMS89,CCJ07, JCC07, WS09, KO11, Muno13, Priy14}.
Finally, if the cycle is strong, it has to rise fast.
Based on the above discussion, we expect that the rise rate of a cycle should be directly linked with the poloidal field at the cycle minimum. To check this link, we compute the linear correlation between these two, based on the observed polar field data for the last four cycles; see \Fig{fig:pol}(a). We observe a reasonably good correlation between these two quantities from the SSN and SSA
data. However, the SSN data for Cycle 22 show some deviation from the linear trend. This
may be due to the fact that we are calculating the rise rate based on only the first two years of data (to make it consistent with the available data for Cycle 25).
Furthermore, the polar field data are not always perfect due to limited observations in the polar regions \citep{Bertello14, Mord22}. Hence, the reliability of this relation cannot be established with such limited data.
Therefore, we try to explore this link between the polar field and the rise rate of the next cycle using Babcock--Leighton\ type dynamo theory.
To do so, we have taken the
data from the dynamo model Run 2DR2 as presented in \citet{pawan21}, which is produced using the {\it Surya} code \citep{CNC04}.
We compute the correlation between the polar field at a cycle minimum and the rise rate of the following cycle from the data of 100 cycles, in the same manner as done for the observed data. We find a high correlation, as seen in
\Fig{fig:pol}(b). This supports the fact that a strong poloidal field indeed makes the following cycle rise faster and hence the cycle becomes stronger, obeying WE2.
So, we believe that our prediction for the ongoing Cycle 25 should be comparable
with the predictions made by other groups using the observed polar field data,
but not the same, because there is one more physical process involved that we shall discuss in \Sec{sec:pol_rr}.
To facilitate the comparison, we list our predicted values of the amplitude and time of the peak for Cycle 25 together with those of various other groups in \Tab{table2}.
We find that our value is slightly larger
than most of the predictions,
but not by much considering the error range.
\subsection{Correlation with the rise rate of the polar field and the prediction of Cycle 25}
\label{sec:pol_rr}
Finally, going one more step backwards in the evolution of solar cycles, we find a very interesting relation: the rise rate of the polar field build-up (after reversal) is correlated with the amplitude of the next cycle; see \Fig{fig:polr}(a).
We note that here we have used the hemispheric data for the correlation.
We find a similarly strong correlation ($r = 0.99$, $p = 0.01$)
for both SSN and SSA data.
We could also compute this correlation
using proxies of the polar field, namely the $A(t)$ index and the polar faculae count.
However, the timing of the polarity reversal is not determined in these data. Importantly, these data are very noisy,
and computing the rise rate from them sometimes leads to a poor correlation; see Table~2 of \citet{pawan21}.
We note that we had computed the average rise rate during the first three years after the polar field reversal.
If we go beyond 3 years, then the polar field tends to saturate and the rise rate poorly correlates with the amplitude of the next cycle.
Unfortunately, again the reliability of this relation cannot be proven based on only three data points. However, we find a strong relation between these two quantities in the dynamo model
(again from Run 2DR2); see \Fig{fig:polr}(b).
We note that this relation is also strongly reproduced in other dynamo models; see Table~4 of \citet{pawan21}.
As this relation holds good, we obviously expect a strong correlation between the rise rate of the polar field build-up and the rise rate of the next cycle, which is indeed seen in \Fig{fig:polr}(c).
The physics behind this correlation is not difficult to understand. In the Babcock--Leighton\ process, the decay and dispersal of tilted BMRs produces the polar field in the Sun. When a sunspot cycle reaches its maximum, the polar field is usually reversed, and then, as new BMRs emerge, the polar field increases
(due to the continuous supply of trailing-polarity flux from low latitudes) while the sunspot cycle declines.
Hence, if the polar field in a cycle rises rapidly, then the toroidal field for the next cycle will also be amplified rapidly. This causes the next sunspot cycle to rise fast and also makes it strong.
One follow-up question is why the rise rate of the polar field build-up is not the same for all cycles. It is because the generation of the poloidal field involves some randomness, particularly due to scatter in the BMR tilts \citep{JCS14, HCM17, KM17, Jha20} and the latitudinal positions of BMRs \citep{MKB17, KM18, Kar20}. In fact, there is an indication that the decline phase of the cycle (during which the polar field is built up after reversal) is more irregular, having many anti-Hale and non-Joy BMRs \citep{Zhukova22, Mord22}, which can disturb the growth of the polar field considerably.
Temporal variation in the meridional flow can also lead to a change in the polar field build-up \citep{Kar10}. Due to this inherent randomness in the Babcock--Leighton\ process, even if two cycles decay identically, their corresponding polar field build-ups can be different.
In conclusion, if the correlation between the rise rate and the amplitude of the next cycle, as seen in the observed data (\Fig{fig:polr}(a)) and in the dynamo model (\Fig{fig:polr}(b)) really holds good in the Sun, then we can make prediction of the solar cycle a few years before the time of the previous polar field peak or the solar minimum. This considerably increases the temporal scope of the predictability of the solar cycle.
Using the observed regression relation between the polar field rise rate and the amplitude of the next solar cycle (\Fig{fig:polr}(a)), we find the peak of the ongoing Cycle 25
to be $137\pm 23$.
If, instead of the hemispheric SSN data, we use the SSA data and then convert the predicted value into SSN (using the regression relation between SSN and SSA), we get the peak value to be $144 \pm 3$.
So we clearly see that these two values are quite close to the one that
we have obtained using the WE2 relation ($138\pm26$) in \Sec{sec:pred_we2}.
We note that in \citet{pawan21}, our earlier prediction for Cycle~25 was $120\pm25$, which is
lower than the current prediction.
This is because there we have used the polar field value at 4 years after the reversal, when the field tends to saturate. In contrast, in the present work, we have used the average rise rate from the first three years' data.
Furthermore, in the previous work, the regression relation based on the polar field data at 4 years after the reversal was not very tight.
\begin{table}
\centering
\caption{Comparison of our predictions for Solar Cycle 25 (P1: using the rise rate of the SSN, P2: using the rise rate of the previous cycle's polar field) with predictions by other groups who used observed polar precursor.}
\begin{tabular}{lllllcl}
\cline{1-5}
Authors && Predicted SSN && Time \\
\cline{1-5}
This work: P1 && $138\pm 26$ && $2024.5\pm 0.8$ \\
~~~~~~~~~~~~~~~~: P2 && $137\pm 23$ && \\
\cline{1-5}
\citet{pawan21} && $120\pm 25 $ && $---$ \\
\cline{1-5}
\citet{wg} && $126$ && $---$ \\
\cline{1-5}
\citet{HC19} && $140.5\pm 2.5$ && $---$ \\
\cline{1-5}
\citet{Pesnell18} && $135\pm 25$ && $2025.2\pm 1.5$ \\
\cline{1-5}
\citet{Petrovay18} && 130 && Late 2024 \\
\cline{1-5}
\citet{Gopalswamy18} && 148 && $---$ \\
\cline{1-5}
\citet{Bhowmik+Nandy} && 118 && $2024\pm 1$ \\
\cline{1-5}
\citet{Jiang_2018} && $125\pm 32$ && $---$ \\
\cline{1-5}
\citet{UH18} && 110 && $---$ \\
\cline{1-5}
\end{tabular}
\label{table2}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.3]{obs_pol_rr3.eps}
\caption{(a) Scatter plot between rise rate of observed polar field and the amplitude of the next cycle SSN. Regression lines: $y=mx+c$, where m = $3.677\pm 0.876$ and c = $29.093\pm 13.517$ for the northern (asterisks) and m = $3.689\pm 0.912$ and c = $33.405\pm 15.311$ for southern hemispheres (filled circles).
(b) Same as (a) but from the dynamo model.
(c) Same as (b) but for the rise rate of the next sunspot cycle.
}
\label{fig:polr}
\end{figure}
\section{Conclusions}
We have utilised a robust feature of the Waldmeier effect, namely, that the rise rate of a cycle is strongly correlated with its amplitude (WE2),
and we have shown that a reliable
prediction of a solar cycle can be made when the cycle has just passed a few years from its minimum.
The ongoing solar cycle has passed about two years, and using this data, we predict that the amplitude of Cycle 25 will be
$138\pm26$ and that it will attain its peak around mid to late 2024.
Hence, the ongoing cycle will be slightly stronger than the previous Cycle 24.
Our predicted strength is also significantly
larger than that of the NOAA/NASA Prediction Panel\footnote{\url{https://www.swpc.noaa.gov/news/solar-cycle-25-forecast-update}}, which predicted the peak SSN of Cycle 25 to be $115\pm10$.
We have shown that this prediction method (WE2) is based on strong physical grounds. If the polar field of a cycle is strong, then the next cycle has to be strong and it will also rise fast. Hence, our prediction based on the rise rate should be comparable to the one based on the polar field of the previous cycle; see \Tab{table2}. But this is not the
complete story.
If the polar field builds up (after its reversal) rapidly, then the next cycle will be strong and vice versa.
Therefore, we find a strong correlation between the rise rate of the polar field and the amplitude of the next cycle, both in observations and in dynamo models \citep[also see Appendix of][]{pawan21}.
Based on the observed regression relation between the rise rate
of the previous cycle's polar field and the amplitude of the sunspot cycle, we predict the amplitude of the ongoing Cycle 25 to be
$137\pm 23$.
Hence, our predictions made from both methods, namely using the rise rate of the sunspot cycle and the rise rate of the previous cycle's polar field, match quite well. This agreement is due to the fact that they are linked. The polar field of the previous cycle gives rise to the toroidal field and the sunspots of the current cycle; thus, how rapidly the Sun builds up its polar field determines the rise rate of the next cycle. This link between the rate of build-up of the polar field and the amplitude of the next cycle suggests that we can predict the amplitude of the next solar cycle just about 2--3 years after the reversal of its polar field (or up to about 9 years before the peak of a cycle).
Earlier, \citet{pawan21} showed that the prediction of the cycle can be made just 4 years after the reversal of the polar field.
Hence, that study, along with our present work, extends the scope of solar cycle prediction by a considerable amount of time.
\section*{Acknowledgements}
The authors thank the anonymous referee for a suggestion that made the title more accurate.
The authors also acknowledge financial support provided by ISRO/RESPOND (project No.
ISRO/RES/2/430/19-20).
\section*{Data Availability}
\label{sec:data}
We have used the SSN data available at SILSO\footnote{\url{http://sidc.oma.be/silso/DATA/SN_ms_tot_V2.0.txt}} and SSA from the Royal Greenwich Observatory (RGO)\footnote{\url{https://solarscience.msfc.nasa.gov/greenwch.shtml}}. The hemispheric SSN data has been collected from \citet{hem21}\footnote{\url{https://wwwbis.sidc.be/silso/extheminum}}.
Polar field data is taken from Wilcox Solar Observatory (WSO)\footnote{\url{http://wso.stanford.edu/Polar.html}}.
Data from our dynamo models and the analyses codes can be shared upon a reasonable request.
\bibliographystyle{mnras}
|
2,869,038,156,820 | arxiv |
\section{Introduction}
The original promise of computing was to solve information overload in science. In his 1945 essay "As We May Think", Vannevar Bush observed how "publication has been extended far beyond our present ability to make real use of the record"~\citep{bush1945}. He proposed computers as a solution to manage the growing mountain of information. Licklider expanded on this with the vision of a symbiotic relationship between humans and machines. Computers would take care of routine tasks such as storage and retrieval, "preparing the way for insights and decisions in scientific thinking"~\citep{licklider1960}.
Computing has indeed revolutionized how research is conducted, but information overload remains an overwhelming problem~\citep{GrowthRateScience}. In May 2022, an average of 516 papers per day were submitted to arXiv~\citep{arxivpapers}. Beyond papers, scientific data is also growing much more quickly than our ability to process it~\citep{BigChallengesBigData}. As of August 2022, the NCBI GenBank contained $1.49 \times 10^{12}$ nucleotide bases~\citep{genbank}. Given the volume of information, it is impossible for a single person to read all the papers in a given field; and it is likewise challenging to organize data on the underlying scientific phenomena.
Search engines are the current interface for accessing scientific knowledge following the Licklider paradigm. But they do not organize knowledge directly, and instead point to secondary layers such as Wikipedia, UniProt and PubChem Compound which organize literature and data. These resources require costly human contributions, for example writing a review of literature, an encyclopedia article or annotating a protein. Given this bottleneck, researchers continue to feel overwhelmed even with powerful search tools to hand.
In this paper, we argue for a better way through large language models. Unlike search engines, language models can potentially store, combine and reason about scientific knowledge. For example, a model trained on the literature could potentially find hidden connections between different research, find hidden gems, and bring these insights to the surface. It could synthesize knowledge by generating secondary content automatically: such as literature reviews, encyclopedia articles, lecture notes and more. And lastly, it could organize different modalities: linking papers with code, protein sequences with compounds, theories with LaTeX, and more. Our ultimate vision is a single neural network for powering scientific tasks. We believe this will be the next interface for how humans access scientific knowledge, and we get started in this paper.
\subsection{Our Contribution}
We introduce a new large language model called Galactica (GAL) for automatically organizing science. Galactica is trained on a large and curated corpus of humanity's scientific knowledge. This includes over 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias and more. Unlike existing language models, which rely on an uncurated crawl-based paradigm, our corpus is high-quality and highly curated. We are able to train on it for multiple epochs without overfitting, where upstream and downstream performance improves with use of repeated tokens.
Dataset design is critical to our approach, which includes curating a high-quality dataset and engineering an interface to interact with the body of knowledge. All data is processed in a common markdown format to blend knowledge between sources. We also include task-specific datasets in pre-training to facilitate composition of this knowledge into new task contexts. For the interface, we use task-specific tokens to support different types of knowledge. We process citations with a special token that allows a researcher to predict a citation given any input context. We wrap step-by-step reasoning in a special token that mimics an internal working memory. And lastly, we wrap modalities such as SMILES and protein sequences in special tokens, which allow a researcher to interface with them using natural language. With this interface and the body of scientific knowledge in the model, we achieve state-of-the-art results across many scientific tasks.
On reasoning tasks, Galactica beats existing language models on benchmarks such as MMLU and MATH~\citep{MMMLU, MATH}. With our reasoning token approach, we outperform Chinchilla on mathematical MMLU with an average score of 41.3\% versus 35.7\%~\citep{Chinchilla}. Our 120B model achieves a score of 20.4\% versus PaLM 540B's 8.8\% on MATH~\citep{PaLM, Minerva}. The 30B model also beats PaLM 540B on this task with 18 times less parameters. We believe this adds another reasoning method to the deep learning toolkit, alongside the existing chain-of-thought approach that has been well explored recently~\citep{ChainOfThought, CoTBigBENCH}.
We also find Galactica performs strongly in knowledge-intensive scientific tasks. We conduct detailed knowledge probes of Galactica's knowledge of equations, chemical reactions and other scientific knowledge. Galactica significantly exceeds the performance of general language models such as the latest GPT-3 in these tasks; on LaTeX equations, it achieves a score of 68.2\% versus the latest GPT-3's 49.0\%~\citep{GPT3}. Galactica also performs well in downstream scientific tasks, and we set a new state-of-the-art on several downstream tasks such as PubMedQA (77.6\%) and MedMCQA dev (52.9\%)~\citep{PubMedQA, MedMCQA}.
We also demonstrate new capabilities with Galactica's interface. First, the capability of predicting citations improves smoothly with scale, and we also find the model becomes better at modelling the underlying distribution of citations: the empirical distribution function approaches the reference distribution with scale. Importantly, we find this approach outperforms tuned sparse and dense retrieval approaches for citation prediction. This, along with other results, demonstrates the potential for language models to replace the Licklider paradigm, document storage and retrieval, with their context-associative power in weight memory.
In addition, Galactica can perform multi-modal tasks involving SMILES chemical formulas and protein sequences. We formulate drug discovery tasks as text prompts and show performance scales in a weakly supervised setup. We also demonstrate Galactica learns tasks such as IUPAC name prediction in a self-supervised way, and does so by attending to interpretable properties such as functional groups. Lastly, Galactica can annotate protein sequences with natural language, including predicting functional keywords.
Galactica was used to help write this paper, including recommending missing citations, topics to discuss in the introduction and related work, recommending further work, and helping write the abstract and conclusion.
\section{Related Work}
\paragraph{Large Language Models (LLMs)}LLMs have achieved breakthrough performance on NLP tasks in recent years. Models are trained with self-supervision on large, general corpuses and they perform well on hundreds of tasks~\citep{GPT3, Gopher, Chinchilla, GPTNeox, OPT, PaLM}. This includes scientific knowledge tasks such as MMLU~\citep{MMMLU}. They have the capability to learn in-context through few-shot learning~\citep{GPT3}. The capability set increases with scale, and recent work has highlighted reasoning capabilities at larger scales with a suitable prompting strategy~\citep{ChainOfThought,PaLM,StepByStep,Minerva}.
One downside of self-supervision has been the move towards uncurated data. Models may mirror misinformation, stereotypes and bias in the corpus~\citep{WomanBabysitter,MeasuringBias,MeasureMitigate,LanguageIsPower,SocietalBiases}. This is undesirable for scientific tasks which value truth. Uncurated data also means more tokens with limited transfer value for the target use-case, wasting compute budget. For example, the PaLM corpus is 50\% social media conversations, which may have limited transfer towards scientific tasks~\citep{PaLM}. The properties of scientific text also differ from general text - e.g. scientific terms and mathematics - meaning a general corpus and tokenizer may be inefficient. We explore whether a normative approach to dataset selection can work with the large model paradigm in this work.
\paragraph{Scientific Language Models}Works such as SciBERT, BioLM and others have shown the benefit of a curated, scientific corpus~\citep{SciBERT, BioLM, PubMedBERT, S2ORCBERT, BioMegatron, ScholarBERT}. The datasets and models were typically small in scale and scope, much smaller than the corpora for general models\footnote{One of the larger corpora S2ORC has \(<20\)bn tokens, whereas corpora for GPT-3 and PaLM have \(\geq 300\)bn tokens. ScholarBERT has a very large corpus at >200bn tokens, but the model is small at 770M capacity.}. Beyond scientific text, Transformers for protein sequences and SMILES have shown potential for learning natural representations~\citep{BioEmerge, honda2019smiles, Chemformer, ProGen2, ESMFold}. However, sequences like SMILES have descriptive limitations for representing chemical structure. We explore in this work whether a large, multi-modal scientific corpus can aid representation learning, where sequences occur alongside footprints and text in a signal-dense context.
\paragraph{Scaling Laws}The idea of "scaling laws" was put forward by \citet{ScalingLaws}, who demonstrated evidence that loss scales as a power-law with model size, dataset size, and the amount of training compute. The focus was on upstream perplexity, and work by \citet{ScalingLawsModelArch} showed that this does not always correlate with downstream performance. \citet{Chinchilla} presented new analysis taking into account the optimal amount of data, and suggested that existing language models were undertrained: "Chinchilla scaling laws". This work did not take into account the distinction between fresh and repeated tokens. In this work, we show that we can improve upstream and downstream performance by training on repeated tokens.
\paragraph{Language Models as Knowledge Bases}Storing information in weights is less reliable, in the sense that models may blend information together (\textit{hallucination}), but it is more "pliable", in the sense that models can associate information through the representation space (\textit{association}). Despite hallucination risks, there is evidence large language models can act as implicit knowledge bases with sufficient capacity~\citep{petroni2019language}. They perform well on knowledge-intensive tasks such as general knowledge (TriviaQA) and specialist knowledge (MMLU) without an external retrieval mechanism~\citep{GPT3, MMMLU}.
The question of how to update network knowledge remains an active research question~\citep{ContinualT0, MBMES}. Likewise, the question of how to improve the reliability of generation is an active question~\citep{PostHocResearch}. Despite these limitations, today's large models will become cheaper with experience~\citep{WrightsLaw}, and so a growing proportion of scientific knowledge will enter weight memory as training and re-training costs fall. In this work we perform probes to investigate Galactica's depth of knowledge, and show that the ability to absorb scientific knowledge improves smoothly with scale.
\paragraph{Retrieval-Augmented Models}Retrieval-augmented models aim to alleviate the shortcomings of weight memory. Examples of such models include RAG, RETRO and Atlas~\citep{RAG, RETRO, izacard2022fewshot}. These models have the advantage of requiring less capacity but the disadvantage of needing supporting retrieval infrastructure. Since knowledge is often fine-grained, e.g. the sequence of a particular protein, or the characteristics of a particular exoplanet, retrieval will likely be needed in future even for larger models. In this work we focus on how far we can go with model weights alone, but we note the strong case for using retrieval augmentation for future research on this topic.
\section{Dataset}
\begin{table}[t!]
\begin{center}
\begin{tabular}{lllc}
\toprule
Modality & Entity & Sequence & \\
\midrule
\begin{tabular}{c}
Text
\end{tabular} &
\begin{tabular}{c}
Abell 370
\end{tabular} &
\begin{tabular}{c}
\verb|Abell 370 is a cluster...|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm]{abell_new_render.png}
\end{tabular} \\
\begin{tabular}{c}
\LaTeX
\end{tabular} &
\begin{tabular}{c}
Schwarzschild radius
\end{tabular} &
\begin{tabular}{c}
\verb|r_{s} = \frac{2GM}{c^2}|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm]{text_new_render.png}
\end{tabular} \\
\begin{tabular}{c}
Code
\end{tabular} &
\begin{tabular}{c}
Transformer
\end{tabular} &
\begin{tabular}{c}
\verb|class Transformer(nn.Module)|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm]{transformer_new_render.png}
\end{tabular} \\
\begin{tabular}{c}
SMILES
\end{tabular} &
\begin{tabular}{c}
Glycine
\end{tabular} &
\begin{tabular}{c}
\verb|C(C(=O)O)N|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm]{glycine_new_render.png}
\end{tabular} \\
\begin{tabular}{c}
AA Sequence
\end{tabular} &
\begin{tabular}{c}
Collagen $\alpha$-1(II) chain
\end{tabular} &
\begin{tabular}{c}
\verb|MIRLGAPQTL..|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm, width=2.6cm]{protein_ps.png}
\end{tabular} \\
\begin{tabular}{c}
DNA Sequence
\end{tabular} &
\begin{tabular}{c}
Human genome
\end{tabular} &
\begin{tabular}{c}
\verb|CGGTACCCTC..|
\end{tabular} &
\begin{tabular}{c}
\includegraphics[height=1.1cm]{dna_color.png}
\end{tabular} \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Tokenizing Nature}. Galactica trains on text sequences that represent scientific phenomena.}
\label{table:naturebook-modalities}
\end{table}
\begin{table}[t!]
\begin{center}
\begin{tabular}{ lrrc }
\toprule
\multicolumn{4}{c}{Total dataset size = 106 billion tokens} \\
\midrule
Data source & Documents & Tokens & Token \% \\
\midrule
Papers & 48 million & 88 billion & 83.0\% \\
Code & 2 million & 7 billion & 6.9\% \\
Reference Material & 8 million & 7 billion & 6.5\% \\
Knowledge Bases & 2 million & 2 billion & 2.0\% \\
Filtered CommonCrawl & 0.9 million & 1 billion & 1.0\% \\
Prompts & 1.3 million & 0.4 billion & 0.3\% \\
Other & 0.02 million & 0.2 billion & 0.2\% \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{The Galactica Corpus}. A full breakdown of these sources is contained in the Appendix.}
\label{table:naturebook-corpus}
\end{table}
\begin{displayquote}
“Nature is written in that great book which ever is before our eyes -- I mean the universe -- but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written." \\ \\
\textit{Galileo Galilei, The Assayer}
\end{displayquote}
The idea that Nature can be understood in terms of an underlying language has a long history~\citep{Assayer, Wigner, Wheeler}. In recent years, deep learning has been used to represent Nature, such as proteins and molecules~\citep{AlphaFold2021, MoLformer}. Amino acids are an alphabet in which the language of protein structure is written, while atoms and bonds are the language of molecules. At a higher level, we organize knowledge through natural language, and many works have trained on scientific text~\citep{SciBERT, BioLM, PubMedBERT, S2ORCBERT}. With Galactica, we train a single neural network on a large scientific corpus to learn the different languages of science.
Our corpus consists of \(106\) billion tokens from papers, reference material, encyclopedias and other scientific sources. We combine natural language sources, such as papers and textbooks, and natural sequences, such as protein sequences and chemical formulae. We process \LaTeX\ where we can capture it, and also include academic code to capture computational science. We highlight the corpus details in Table~\ref{table:naturebook-modalities} and~\ref{table:naturebook-corpus}. Full details, including dataset components and filtering logic, are contained in the Appendix.
Notably the dataset is small and curated compared to other LLM corpuses, which are larger and uncurated. This is a key question of this work: can we make a working LLM based on a curated, normative paradigm? If true, we could make more purposefully-designed LLMs by having a clear understanding of what enters the corpus, similar to expert systems which had normative standards~\citep{Jackson}.
\subsection{Tokenization}
\begin{figure}[t!]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\verb|[START_AMINO]MIRLGAPQTLVLLTLLVAAVLRCQGQDVQEAGSCVQDGQRYNDKDVWKPEPCRICVCDTG...[END_AMINO]| \\
\textbf{Summary} \\
Protein: Collagen alpha-1(II) chain \\
Gene: COL2A1 \\
Organism: Homo sapiens (Human) \\
Status: evidence at protein level \\
\textbf{Function} \\
Type II collagen is specific for cartilaginous tissues. It is essential for the normal embryonic development of the skeleton, for linear growth and for the ability of cartilage to resist compressive forces. \verb|[START_REF]|Nucleotide sequence of the full length cDNA encoding for human type II procollage, Lee\verb|[END_REF]|... \\
\textbf{Features} \\
- Domain, 32-90, Cleavage; by procollagen N-endopeptidase \\
- Site Cleavage, 181-182, Cleavage; by procollagen N-endopeptidase \\
- Binding site, 1301, Ca2+ \\
... \\
\end{small}
\end{tcolorbox}
\caption{\textbf{Multi-Modal Data}.
A protein sequence occurs in a document context along with annotations, text and citations from UniProt. Full contents of the document are cut for clarity of exposition.
}
\label{fig:example_data}
\end{figure}
Tokenization is an important part of dataset design given the different modalities present. For example, protein sequences are written in terms of amino acid residues, where character-based tokenization is appropriate. We therefore apply specialized tokenization, using dedicated tokens and rules for the different modalities (a minimal pre-processing sketch follows the list below):
\begin{enumerate}
\item \textbf{Citations}: we wrap citations with special reference tokens \verb|[START_REF]| and \verb|[END_REF]|.
\item \textbf{Step-by-Step Reasoning}: we wrap step-by-step reasoning with a working memory token \verb|<work>|, mimicking an internal working memory context.
\item \textbf{Mathematics}: for mathematical content, with or without LaTeX, we split ASCII operations into individual characters. Parentheses are treated like digits. The rest of the operations allow for unsplit repetitions. Operation characters include \verb|!"#| and the other ASCII operator symbols.
\item \textbf{Numbers}: we split digits into individual tokens. For example \verb|737612.62| \(\rightarrow\) \verb|7,3,7,6,1,2,.,6,2|.
\item \textbf{SMILES formula}: we wrap sequences with \verb|[START_SMILES]| and \verb|[END_SMILES]| and apply character-based tokenization. Similarly we use \verb|[START_I_SMILES]| and \verb|[END_I_SMILES]| where isomeric SMILES is denoted. For example, \verb|C(C(=O)O)N| \(\rightarrow\) \verb|C,(,C,(,=,O,),O,),N|.
\item \textbf{Amino acid sequences}: we wrap sequences with \verb|[START_AMINO]| and \verb|[END_AMINO]| and apply character-based tokenization, treating each amino acid character as a single token. For example, \verb|MIRLGAPQTL| \(\rightarrow\) \verb|M,I,R,L,G,A,P,Q,T,L|.
\item \textbf{DNA sequences}: we also apply a character-based tokenization, treating each nucleotide base as a token, where the start tokens are \verb|[START_DNA]| and \verb|[END_DNA]|. For example, \verb|CGGTACCCTC| \(\rightarrow\) \verb|C,G,G,T,A,C,C,C,T,C|.
\end{enumerate}
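As a rough illustration of these rules, the following sketch shows how a pre-processing step could wrap sequences and split numbers. The function names and the reduced character-level handling are our own simplification, not the exact Galactica pipeline.
\begin{verbatim}
def wrap_smiles(smiles, isomeric=False):
    """Wrap a SMILES string in its modality tokens, e.g. glycine C(C(=O)O)N."""
    tag = "I_SMILES" if isomeric else "SMILES"
    return "[START_" + tag + "]" + smiles + "[END_" + tag + "]"

def wrap_amino(sequence):
    """Wrap an amino acid sequence in its modality tokens."""
    return "[START_AMINO]" + sequence + "[END_AMINO]"

def char_tokenize(sequence):
    """Character-level tokenization used for SMILES, amino acid and DNA sequences."""
    return list(sequence)

def split_number(number):
    """Split digits into individual tokens,
    e.g. '737612.62' -> ['7','3','7','6','1','2','.','6','2']."""
    return list(number)

print(wrap_smiles("C(C(=O)O)N"))    # [START_SMILES]C(C(=O)O)N[END_SMILES]
print(char_tokenize("MIRLGAPQTL"))  # ['M','I','R','L','G','A','P','Q','T','L']
print(split_number("737612.62"))
\end{verbatim}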
We cover a few of the specialized token approaches below that do not have clear parallels in the literature, in particular the working memory and citation tokens.
\subsubsection{Working Memory Token, <work>}
Transformer-based architectures lack an explicit working memory capability, which means a single forward pass has limited efficacy. This is problematic for tasks that require multiple steps of computation. A current workaround is using a Transformer's output context as an external working memory to read from and write to. This is seen in recent work on chain-of-thought prompting~\citep{ChainOfThought, CoTBigBENCH}. In one sense this is intuitive, as humans also augment their limited working memory with scratchpads. In another sense, we would like models to refine their representations internally like humans; e.g. mental arithmetic.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{reasoning_new.png}
\caption{Given a task like "What is the average of 43, 29, 51, 13?" a human can use internal or external working memory. In practice, they will use both symbiotically; meaning that the working-out written down in text is usually "missing" some steps that were performed internally.}
\end{figure}
There are two limitations with chain-of-thought. First, it relies on prompt discovery to find a prompt that elicits robust step-by-step reasoning; i.e. minimizes mistakes from doing too much in a single forward pass. Not only does this require finding a robust prompt that works in all cases, but it also often relies on few-shot examples which take up context space. What is worse, much of the step-by-step reasoning on the internet misses intermediate steps that a human has performed using internal memory. Humans do not write down every step they perform because it would lead to long and tedious answers. They write down the principal steps of reasoning, and do lower-level steps via internal working memory. This means there is "missing data" in written text, i.e. between written steps there are internal memory steps that are not explicitly stated.
Secondly, chain-of-thought prompting uses the neural network to perform tasks that it is arguably not best suited to doing; for example, arithmetic. Prior work has shown that accuracy on tasks like multiplication is proportional to term frequency~\citep{PretrainingFrequency}. Given that classical computers are specialized for tasks like arithmetic, one strategy is to offload these tasks from the neural network to external modules. For example, prior work has looked at the possibilities of external tool augmentation, such as calculators~\citep{LaMBDA}. However, this requires a strategy to identify where the neural network should offload; and it may not be straightforward when combined with a discovered zero-shot prompt, especially where lower-level computation steps are not explicitly stated in writing.
Our solution is a working memory token we call \verb|<work>|. We construct a few prompt datasets, see Table \ref{table:reasoning-datasets}, that wrap step-by-step reasoning within \verb|<work>| \verb|</work>|. Some of these datasets were generated programmatically (\textit{OneSmallStep}), by creating a problem template and sampling the variables, others were sourced online (\textit{Workout}, \textit{Khan Problems}), and others used existing datasets and transformed them into a \verb|<work>| based context (\textit{GSM8k train}). Where a computation is performed that a human could not do internally, we offload by writing and executing a Python script. An example is shown in Figure~\ref{fig:example_MATH}. Importantly, we do not have to turn this on, and the model can also predict the output from running a program. For our experiments, we did not find the need to turn Python offloading on, and leave this aspect to future work.
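A minimal sketch of how such offloading could work at inference time is shown below; the parsing convention and file names follow the example in Figure~\ref{fig:example_MATH}, but the loop itself is our own illustration rather than the exact mechanism used.
\begin{verbatim}
import re
import subprocess

def offload_work(generated_text):
    """If the generated <work> block contains a fenced Python program, run it
    and append the contents of output.txt so generation can continue from the
    computed result. Assumes the convention from the example figure."""
    match = re.search(r"```(.*?)```", generated_text, re.DOTALL)
    if match is None:
        return generated_text  # nothing to offload; the model predicts the output itself
    with open("calculate.py", "w") as f:
        f.write(match.group(1))
    subprocess.run(["python", "calculate.py"], check=True)
    with open("output.txt") as f:
        result = f.read().strip()
    return generated_text + "\n" + result
\end{verbatim}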
\begin{figure}
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\textbf{Question:} A needle $35 \mathrm{~mm}$ long rests on a water surface at \(20^{\circ} \mathrm{C}\). What force over and above the needle's weight is required to lift the needle from contact with the water surface? \(\sigma = 0.0728 \mathrm{m}\).
\vspace{3mm}
\verb|<work>| \\
\[
\begin{aligned}
\sigma &= 0.0728 \mathrm{~N} / \mathrm{m} \\
\sigma &= F/L \\
0.0728 &= F / (2 \times 0.035) \\
F &= 0.0728(2 \times 0.035)
\end{aligned}
\]
\verb|calculate.py| \\
\verb|```| \\
\verb|f = 0.0728*(2*0.035)| \\ \\
\verb|with open("output.txt", "w") as file:| \\
\verb| file.write(str(round(f, 5)))|\\
\verb|```| \\
<<run: "calculate.py">> \\
<<read: "output.txt">> \\
0.0051 \\
\verb|</work>| \\
\textbf{Answer:} \(F = 0.0051 \mathrm{~N}\)
\end{small}
\end{tcolorbox}
\caption{
\textbf{Model-Machine Symbiosis.} We show an example answer with the <work> working memory token. It performs exact steps for rearranging the equation, and when it reaches a calculation that it cannot solve reliably in a forward-pass, it writes a program, which can then be offloaded to a classical computer.
}
\label{fig:example_MATH}
\end{figure}
\begin{table}[h]
\vspace{20px}
\begin{center}
\begin{tabular}{ lrrr }
\toprule
Data source & Split & Prompts & Tokens \\
\midrule
GSM8k~\citep{GSM8k} & \textit{train} & 7,473 & 3,518,467 \\
OneSmallStep & \textit{n/a} & 9,314 & 3,392,252 \\
Khan Problems~\citep{MATH} & \textit{n/a} & 3,835 & 1,502,644 \\
Workout & \textit{n/a} & 921 & 470,921 \\
\midrule
\textbf{Total} & & 21,543 & 9 million \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Reasoning Datasets} To train the model to use <work> we include several datasets in pre-training that incorporate this token. Full details are contained in the Appendix.}
\label{table:reasoning-datasets}
\end{table}
Longer term, an architecture change may be needed to support adaptive computation, so machines can have internal working memory on the lines of work such as adaptive computation time and PonderNet~\citep{ACT, PonderNet}. In this paper, we explore the \verb|<work>| external working memory approach as a bridge to the next step. Notably our \verb|<work>| prompt datasets are not very large or diverse, so there are likely large further gains to be made with this approach.
\clearpage
\subsubsection{Citation Token}
A distinctive property of academic text is citations. In order to represent the implicit citation graph within the text, we process citations with global identifiers and special tokens \texttt{[START\_REF]} and \texttt{[END\_REF]} signifying when a citation is made. Figure~\ref{fig:citation-example} shows an example of citation processed text from a paper.
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
Recurrent neural networks, long short-term memory \verb|[START_REF]|Long Short-Term Memory, Hochreiter\verb|[END_REF]| and gated recurrent \verb|[START_REF]|Empirical Evaluation
of Gated Recurrent Neural Networks on Sequence Modeling, Chung\verb|[END_REF]| neural networks
in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \verb|[START_REF]|Sequence to Sequence Learning with Neural
Networks, Sutskever\verb|[END_REF]|\verb|[START_REF]|Neural Machine Translation by Jointly
Learning to Align and Translate, Bahdanau\verb|[END_REF]|\verb|[START_REF]|Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation, Cho\verb|[END_REF]|.
\end{small}
\end{tcolorbox}
\caption{\textbf{Citation Processed Text}.
Example of citation processed text from \textit{Attention Is All You Need}~\citep{VaswaniSPUJGKP17}. For title-processed citations, the title can be associated with the previous context.
}
\label{fig:citation-example}
\end{figure}
We considered two types of citation identifier: (a) paper titles and (b) alphanumeric IDs. Based on ablations, we found that title-based identifiers have greater citation prediction accuracy than IDs. However, we also found that paper titles are more prone to hallucination error at lower scales given the text-based nature of the identifier. We use title processing for this paper, but we note the trade-offs between both approaches. Experiments for these ablations are contained in the Appendix.
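As a minimal sketch of this processing step (our own simplification; the full pipeline also resolves references against a global paper index), an in-text citation marker can be replaced by its wrapped title-based identifier:
\begin{verbatim}
def process_citation(context, marker, title, author):
    """Replace a raw citation marker (e.g. '[1]') with the wrapped
    title-based identifier used as the global citation ID."""
    identifier = "[START_REF]" + title + ", " + author + "[END_REF]"
    return context.replace(marker, identifier)

text = "long short-term memory [1] neural networks"
print(process_citation(text, "[1]", "Long Short-Term Memory", "Hochreiter"))
# long short-term memory [START_REF]Long Short-Term Memory, Hochreiter[END_REF] ...
\end{verbatim}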
\subsection{Prompt Pre-Training}
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figs/prompt_pretraining_new.png}
\caption{\textbf{Prompt Pre-training}. Pre-training weighs all tokens equally as part of the self-supervised loss. This leads to a weak relative signal for tasks of interest, meaning model scale has to be large to work. Instruction tuning boosts performance \textit{post hoc}, and can generalize to unseen tasks of interest, but it risks performance in tasks that are distant from instruction set tasks. Prompt pre-training has a weaker task of interest bias than instruction tuning but less risk of degrading overall task generality.}
\end{figure}
We deviate from existing language model research in one important direction, which is our decision to include prompts in pre-training \textit{alongside} the general corpora. This is motivated by a number of observations.
First, existing work has shown the importance of training token count on performance. The Chinchilla paper derived scaling "laws" taking into account number of tokens, training a 70bn model for 1.4 trillion tokens~\citep{Chinchilla}. They obtained state-of-the-art performance on MMLU, beating much larger models such as Gopher~\citep{Gopher}.
Separately, research such as FLAN and T0 showed prompt tuning can boost downstream performance~\citep{FLAN, T0, FLANPALM}. Their strategy involved converting tasks to text prompts, using prompt diversity in how the tasks are posed, and then fine-tuning on these prompt datasets. For FLAN and T0, this approach boosts performance, beating larger models such as GPT-3 on many tasks.
And additionally there is the UnifiedQA approach~\citep{UnifiedQA}. In this approach, a T5 model is fine-tuned on question answering datasets, and is shown to boost performance on out-of-domain question answering datasets~\citep{2020t5}. The model outperforms GPT-3, a model 16 times larger, on MMLU.
The first stream of research above focuses on total training tokens as a way to boost performance; i.e. it is \textit{token agnostic}. The second stream of research focuses on task-context tokens as a way to boost performance; i.e. it is \textit{token selective}. Since fine-tuned smaller models beat larger few-shot models on tasks like MMLU, this suggests world knowledge may be present in smaller models, but task-context knowledge may be poor given the relative number of task-context tokens seen in the general corpus.
For this paper, we opt to augment pre-training data with more task prompts to boost performance at lower scales. This is advantageous if it obviates the need for more data scale, e.g. a >\(1\) trillion token corpus, or more model scale. The largest 120B model we train runs on a single NVIDIA A100 node. Additionally, given that fine-tuning requires expertise, making the model work out-of-the-box for popular tasks like question answering and summarization is more useful for users of the model. Lastly, by including prompts alongside general data, we maximize the generality of the model while boosting performance on some tasks of interest.
The closest analog to this approach for large language models is ExT5~\citep{ExT5}. We take a similar approach by taking many machine learning training datasets, converting them to a text format, with prompt diversity, and then including them alongside general corpora in our pre-training set. A summary of prompt types is given in Table~\ref{table:prompt-breakdown}; the full details of datasets and prompts used are covered in the Appendix.
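As an illustration of this conversion step, a single multiple-choice QA example could be rendered into a pre-training document roughly as follows; the template wording below is our own, and in practice multiple diverse templates are used per task.
\begin{verbatim}
def render_multiple_choice(question, choices, answer_index):
    """Render one multiple-choice QA example as plain text for pre-training."""
    letters = "ABCDEFGH"
    options = "\n".join("(%s) %s" % (letters[i], c) for i, c in enumerate(choices))
    return ("Question: %s\n%s\nAnswer: (%s) %s"
            % (question, options, letters[answer_index], choices[answer_index]))

print(render_multiple_choice(
    "Which amino acid has the SMILES formula C(C(=O)O)N?",
    ["Alanine", "Glycine", "Serine"],
    answer_index=1,
))
\end{verbatim}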
\begin{table}[h!]
\vspace{20px}
\begin{center}
\begin{tabular}{ lrr }
\toprule
Task & Prompts & Tokens \\
\midrule
Chemical Properties & 782,599 & 275 million \\
Multiple-Choice QA & 256,886 & 31 million \\
Extractive QA & 30,935 & 13 million \\
Summarization & 6,339 & 11 million \\
Entity Extraction & 156,007 & 9 million \\
Reasoning & 21,543 & 9 million \\
Dialog & 18,930 & 5 million \\
Binary QA & 36,334 & 4 million \\
Other & 3,559 & 1 million \\
\midrule
\textbf{Total} & 1,313,132 & 358 million \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Pre-training Prompts}. We include zero-shot prompts in pre-training to boost the task signal.}
\label{table:prompt-breakdown}
\end{table}
Because of prompt inclusion, it is important to distinguish between in-domain performance, where the training dataset is included in pre-training, and out-of-domain performance, where the training dataset is not included in pre-training. We mark these results clearly in the Results section of this paper. Importantly, we do not advocate for prompt pre-training as an alternative to instruction tuning. In fact, instruction tuning on Galactica is likely useful follow-up work given its potential to boost performance on several tasks of interest.
\clearpage
\section{Method}
\subsection{Architecture}
Galactica uses a Transformer architecture in a decoder-only setup~\citep{VaswaniSPUJGKP17}, with the following modifications (a configuration sketch follows the list):
\begin{itemize}
\item \textbf{GeLU Activation} - we use GeLU activations for all model sizes~\citep{GeLU}.
\item \textbf{Context Window} - we use a 2048 length context window for all model sizes.
\item \textbf{No Biases} - following PaLM, we do not use biases in any of the dense kernels or layer norms~\citep{PaLM}.
\item \textbf{Learned Positional Embeddings} - we use learned positional embeddings for the model. We experimented with ALiBi at smaller scales but did not observe large gains, so we did not use it~\citep{ALiBi}.
\item \textbf{Vocabulary} - we construct a vocabulary of 50k tokens using BPE~\citep{BPE}. The vocabulary was generated from a randomly selected 2\% subset of the training data.
\end{itemize}
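For concreteness, the listed choices could be collected into a configuration along the following lines; this is an illustrative sketch rather than the actual training configuration, with model sizes taken from Table~\ref{table:models-trained}.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class GalacticaConfig:
    """Illustrative hyperparameters for the decoder-only setup described above."""
    n_layers: int
    d_model: int
    n_heads: int
    vocab_size: int = 50000        # BPE vocabulary
    context_window: int = 2048     # same for all model sizes
    activation: str = "gelu"       # GeLU activations
    use_bias: bool = False         # no biases in dense kernels or layer norms
    positional_embeddings: str = "learned"

# e.g. the 1.3B configuration from the models table
gal_1_3b = GalacticaConfig(n_layers=24, d_model=2048, n_heads=32)
\end{verbatim}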
\subsection{Models}
The different model sizes we trained, along with training hyperparameters are outlined in Table~\ref{table:models-trained}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lcccccccc }
\toprule
Model & $n_{params}$ & $n_{layers}$ & $d_{model}$ & $n_{heads}$ & $d_{heads}$ & Batch Size & Max LR & Warmup \\
\midrule
GAL 125M & 125M & 12 & 768 & 12 & 64 & 0.5M & $6 \times 10^{-4}$ & 375M \\
GAL 1.3B & 1.3B & 24 & 2,048 & 32 & 64 & 1.0M & $2 \times 10^{-4}$ & 375M \\
GAL 6.7B & 6.7B & 32 & 4,096 & 32 & 128 & 2.0M & $1.2 \times 10^{-4}$ & 375M \\
GAL 30B & 30.0B & 48 & 7,168 & 56 & 128 & 2.0M & $1 \times 10^{-4}$ & 375M \\
GAL 120B & 120.0B & 96 & 10,240 & 80 & 128 & 2.0M & $0.7 \times 10^{-5}$ & 1.125B \\
\bottomrule
\end{tabular}
\end{center}
\caption{Details of the models trained}
\label{table:models-trained}
\end{table}
We train using AdamW with $\beta_{1}= 0.9$, $\beta_{2} = 0.95$ and weight decay of $0.1$~\citep{AdamW}. We clip the global norm of the gradient at 1.0, and we use linear decay for the learning rate down to 10\% of its value. We use dropout and attention dropout of $p=0.1$. We do not use embedding dropout. We found longer warmup was important for the largest model in the early stages of training to protect against the effects of bad initialization, which can have long-memory effects on the optimizer variance state and slow down learning. This may be specific to our model and training setup, and it is not clear whether this advice generalizes.
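A minimal PyTorch-style sketch of this optimization setup follows; the warmup and total step counts are placeholders (the table above specifies warmup in tokens), and a generic \texttt{model} is assumed.
\begin{verbatim}
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_optimizer(model, max_lr=2e-4, warmup_steps=1000, total_steps=100000):
    """AdamW with linear warmup, then linear decay to 10% of the max LR."""
    optimizer = AdamW(model.parameters(), lr=max_lr,
                      betas=(0.9, 0.95), weight_decay=0.1)

    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)             # linear warmup
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 1.0 - 0.9 * min(1.0, progress)              # decay to 10% of max

    return optimizer, LambdaLR(optimizer, lr_lambda)

# per-step gradient clipping at a global norm of 1.0:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
\end{verbatim}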
\subsection{Libraries and Infrastructure}
We use the metaseq library\footnote{\href{https://github.com/facebookresearch/metaseq/}{https://github.com/facebookresearch/metaseq/}} for training the models, built by the NextSys team at Meta AI.
For training the largest 120B model, we use 128 NVIDIA A100 80GB nodes. For inference Galactica 120B requires a single A100 node. We choose the maximum model size to obey this constraint for downstream accessibility, and we will work to improve its accessibility for the research community in coming months.
\clearpage
\section{Results}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/scaling/final_curves.png}
\caption{\textbf{Repeated Tokens and Validation Loss}. With four epochs of training, we continue to see validation loss fall for all model sizes. For the 120B model we see the first signs of overfitting at the beginning of the fifth epoch, and we early stop at this point.}
\label{fig:validation_loss}
\end{figure}
\subsection{Repeated Tokens Considered Not Harmful}
We train the models for 450 billion tokens, or approximately 4.25 epochs. We find that performance continues to improve on the validation set, in-domain and out-of-domain benchmarks with multiple repeats of the corpus.
First, from Figure~\ref{fig:validation_loss}, validation loss continues to fall with four epochs of training. The largest 120B model only begins to overfit at the start of the fifth epoch. This is unexpected as existing research suggests repeated tokens can be harmful to performance~\citep{HarmfulRepeats}. We also find the 30B and 120B exhibit an epoch-wise double descent effect of plateauing (or rising) validation loss followed by a decline. This effect becomes stronger with each epoch, and is most visible above with the 120B model towards the end of training.
To investigate further, we examine the per-source breakdown of validation loss to see if there is heterogeneity in loss behaviour. We plot example curves in Figure~\ref{fig:source_validation_loss} overleaf for the 30B model. We see no signs of loss heterogeneity: loss falls for all sources. The 120B exhibits the same relative trend of declining validation loss for all sources until the beginning of the fifth epoch, where all sources spike (see Appendix).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figs/scaling/per_dataset_val.png}
\caption{\textbf{Validation Loss Per Source}. Validation loss falls through training for all dataset categories. Results are shown for the 30B model above. The 120B exhibits the same relative trend of declining validation loss for all sources until the beginning of the fifth epoch, where all sources spike (see Appendix).}
\label{fig:source_validation_loss}
\end{figure}
The next question to answer is whether this trend extends to downstream performance and out-of-domain generalization. For this we use a 57-task subset of \textit{BIG-bench}, a general benchmark with principally non-scientific tasks and prompt types not included in pre-training~\citep{BIGBenchakasomanyauthorsitdoesntfitinthecontextwindow}. We plot results in Figure~\ref{fig:downstream_scale}. We see no signs of overfitting, suggesting that use of repeated tokens is improving downstream performance as well as upstream performance.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/scaling/bigbench120extra.png}
\caption{\textbf{BIG-bench Performance During Training}. The 57 task selection from BIG-bench contains principally non-scientific tasks. We use it as a proxy for \textit{out-of-domain} performance. For the 120B model above, we see no signs of overfitting after four repeats of the corpus.}
\label{fig:downstream_scale}
\end{figure}
We suspect that two factors could be at play: a \textit{quality factor}, where the curated nature of the corpus enables more value to be extracted per token, or a \textit{modality factor}, where the nature of scientific data enables more value to be extracted per token. The missing step of causation is what leads specifically from either factor towards less overfitting, and we leave this question to further work. We note the implication that the "$\text{tokens} \rightarrow \infty$" focus of current LLM projects may be overemphasised versus the importance of filtering the corpus for quality.
In the following sections, we turn to evaluating Galactica's scientific capabilities. Specifically, we focus on the high-level design goals of building an LLM that can store, combine and reason about scientific knowledge - as these are needed for building a new interface for science.
\clearpage
\subsection{Knowledge Probes}
First, we examine how well Galactica absorbs scientific knowledge. We set up several knowledge probe benchmarks, building off the LAMA approach of~\citet{petroni2019language}. These were critical metrics during model development for identifying knowledge gaps within the corpus, and informing how to iterate the corpus. They also provide insight into the relative knowledge strengths of Galactica versus general language models, and we cover these results in this section before turning to the downstream tasks.
\subsubsection{LaTeX Equations}
We construct a dataset of popular LaTeX equations from the fields of chemistry, physics, mathematics, statistics and economics. Memorisation of equations is useful to measure as it is necessary for many downstream tasks; for example, recalling an equation to use as part of an answer to a problem. Unless stated explicitly, Galactica results are reported as zero-shot. In total there are 434 equations we test for the knowledge probe.
We prompt with an equation name and generate LaTeX. An example is shown in Figure~\ref{fig:latex_ex}.
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\textbf{Prompt} \\
The formula for Bessel's differential equation is: \\
\textbf{Generated Answer}
$$ x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0 $$
\end{small}
\end{tcolorbox}
\caption{\textbf{LaTeX Equations Probe}.
We prompt for the name of an equation and evaluate whether the generated LaTeX is correct. We manually evaluate given the possibility of multiple correct answers.
}
\label{fig:latex_ex}
\end{figure}
We summarize the results in Table~\ref{table:latex-perf}. Equation knowledge increases smoothly with scale. Galactica outperforms larger language models trained on general corpuses, indicating the value of a curated dataset.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrrrrr }
\toprule
Model & Params (bn) & Chemistry & Maths & Physics & Stats & Econ & Overall \\
\midrule
OPT & 175 & 34.1\% & 4.5\% & 22.9\% & 1.0\% & 2.3\% & 8.9\% \\
BLOOM & 176 & 36.3\% & 36.1\% & 6.6\% & 14.1\% & 13.6\% & 21.4\% \\
GPT-3 (\verb|text-davinci-002|) & ? & 61.4\% & 65.4\% & 41.9\% & 25.3\% & 31.8\% & 49.0\% \\
\midrule
GAL 125M & 0.1 & 0.0\% & 0.8\% & 0.0\% & 1.0\% & 0.0\% & 0.5\% \\
GAL 1.3B & 1.3 & 31.8\% & 26.3\% & 23.8\% & 11.1\% & 4.6\% & 20.5\% \\
GAL 6.7B & 6.7 & 43.2\% & 59.4\% & 36.2\% & 29.3\% & 27.3\% & 41.7\% \\
GAL 30B & 30 & 63.6\% & 74.4\% & 35.2\% & 40.4\% & 34.1\% & 51.5\% \\
GAL 120B & 120 & \textbf{79.6\%} & \textbf{83.5\%} & \textbf{72.4\%} & \textbf{52.5\%} & \textbf{36.4\%} & \textbf{68.2\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on LaTeX equations}. Results are evaluated zero-shot.}
\label{table:latex-perf}
\end{table}
\subsubsection{Domain Probes}
We also set up domain probes to track specialized knowledge for certain fields. We detail these below:
\begin{itemize}
\item \textbf{AminoProbe}: a dataset of names, structures and properties of the 20 common amino acids.
\item \textbf{BioLAMA}: a dataset of biomedical factual knowledge triples.
\item \textbf{Chemical Reactions}: a dataset of chemical reactions.
\item \textbf{Galaxy Clusters}: a dataset of galaxy clusters with their constellation classifications.
\item \textbf{Mineral Groups}: a dataset of minerals and their mineral group classifications.
\end{itemize}
In each case, we construct a prompt to test the knowledge. For example, for \textbf{Chemical Reactions}, we ask Galactica to predict the products of the reaction in the chemical equation LaTeX. We mask out products in the description so the model is inferring based on the reactants only. An example is shown in Figure~\ref{fig:chemical_reactions}.
\begin{figure}[h!]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\textbf{Prompt} \\
Sulfuric acid reacts with sodium chloride, and gives \verb|_____| and \verb|_____|: \\
\verb|\[ \ce{ NaCl + H2SO4 ->| \\
\textbf{Generated Answer}
$$ \ce{NaCl + H2SO4 -> NaHSO4 + HCl} $$
\end{small}
\end{tcolorbox}
\caption{\textbf{Chemical Reactions}.
We prompt based on a description and reactants, and evaluate whether the generated products are correct.
}
\label{fig:chemical_reactions}
\end{figure}
We report results for these knowledge probes in Table~\ref{table:domain_probes}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrrrr }
\toprule
Model & Params (bn) & Amino & BioLAMA & Reactions & Clusters & Minerals \\
\midrule
OPT & 175 & 12.0\% & 7.1\% & 12.7\% & 21.7\% & 1.6\% \\
BLOOM & 176 & 14.0\% & \textbf{9.7\%} & 22.4\% & 15.0\% & 10.3\% \\
GPT-3 (\verb|text-davinci-002|) & ? & 14.0\% & 8.4\% & 35.1\% & 20.8\% & 18.3\% \\
\midrule
GAL 125M & 0.1 & 12.0\% & 3.1\% & 0.3\% & 6.7\% & 0.0\% \\
GAL 1.3B & 1.3 & 16.0\% & 7.2\% & 14.4\% & 14.2\% & 10.3\% \\
GAL 6.7B & 6.7 & 17.0\% & 7.9\% & 26.4\% & 17.5\% & 8.7\% \\
GAL 30B & 30 & 21.0\% & 6.9\% & 36.5\% & 20.0\% & 17.5\% \\
GAL 120B & 120 & \textbf{21.0\%} & 8.0\% & \textbf{43.1\%} & \textbf{24.2\%} & \textbf{29.4\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on Domain Probes}. Results are evaluated zero-shot.}
\label{table:domain_probes}
\end{table}
We also observe steady scaling behaviour in these knowledge probes, with the exception of BioLAMA, which we suspect reflects zero-shot prompt difficulty for all LLMs. Notably, fine-grained factual knowledge, such as "\texttt{ConstellationOf(GalaxyCluster)}"-type queries, seems to scale smoothly with the size of the model.
\clearpage
\subsubsection{Reasoning}
We now turn to reasoning capabilities with the \verb|<work>| token. We start by evaluating on the \textbf{MMLU} mathematics benchmarks, which we report in Table~\ref{table:mmlu-maths-perf}~\citep{MMMLU}. Galactica performs strongly compared to larger base models, and use of the \verb|<work>| token appears to boost performance over Chinchilla, even for the smaller 30B Galactica model.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrrrrr }
\toprule
\multicolumn{8}{c}{Mathematics MMLU} \\
\midrule
Model & Params (bn) & A.Algebra & Elem & HS & College & F. Logic & Average \\
\midrule
BLOOM (5-shot) & 176 & 25.0\% & 26.7\% & 27.0\% & 25.0\% & 26.2\% & 26.4\% \\
OPT (5-shot) & 175 & 21.0\% & 25.7\% & 24.4\% & 33.0\% & 29.4\% & 26.7\% \\
Gopher (5-shot) & 280 & 25.0\% & 33.6\% & 23.7\% & 37.0\% & 35.7\% & 30.6\% \\
Chinchilla (5-shot) & 70 & 31.0\% & 41.5\% & 31.9\% & 32.0\% & 33.3\% & 35.7\% \\
\midrule
GAL 1.3B & 1.3 & 28.0\% & 27.2\% & 26.7\% & 30.0\% & 24.6\% & 27.1\% \\
GAL 6.7B & 6.7 & 28.0\% & 28.9\% & 26.7\% & 36.0\% & 31.0\% & 29.2\% \\
GAL 30B & 30 & 30.0\% & 30.2\% & 26.3\% & 36.0\% & 31.7\% & 29.9\% \\
GAL 120B & 120 & 33.0\% & 38.1\% & 32.6\% & 43.0\% & 32.5\% & 35.8\% \\
\midrule
GAL 1.3B \verb|<work>| & 1.3 & 22.0\% & 24.6\% & 18.9\% & 25.0\% & 31.0\% & 24.6\% \\
GAL 6.7B \verb|<work>| & 6.7 & \textbf{33.3\%} & 30.7\% & 25.2\% & 26.0\% & 33.3\% & 28.0\% \\
GAL 30B \verb|<work>| & 30 & 33.0\% & 41.5\% & 33.3\% & 39.0\% & 37.3\% & 37.1\%\\
GAL 120B \verb|<work>| & 120 & 27.0\% & \textbf{54.2\%} & \textbf{37.0\%} & \textbf{44.0\%} & \textbf{40.5\%} & \textbf{41.3\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on Mathematics MMLU}. Galactica is evaluated without few-shot examples. With the <work> token we see large gains in performance. Results are on MMLU test.}
\label{table:mmlu-maths-perf}
\end{table}
We also evaluate on the MATH dataset to further probe the reasoning capabilities of Galactica~\citep{MATH}. We compare the \verb|<work>| token prompt directly with the Minerva 5-shot chain-of-thought prompt \verb|mCoT| for comparability. We report results in Table~\ref{table:math-benchmark-results}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrrrrrr }
\toprule
\multicolumn{9}{c}{MATH Results} \\
\midrule
Model & Alg & CProb & Geom & I.Alg & N.Theory & Prealg & Precalc & Average \\
\midrule
\multicolumn{9}{c}{Base Models} \\
\midrule
GPT-3 175B (8-shot) & 6.0\% & 4.7\% & 3.1\% & 4.4\% & 4.4\% & 7.7\% & 4.0\% & 5.2\% \\
PaLM 540B (5-shot) \verb|mCoT| & 9.7\% & 8.4\% & 7.3\% & 3.5\% & 6.0\% & 19.2\% & 4.4\% & 8.8\% \\
GAL 30B \verb|<work>| & 15.8\% & 6.3\% & 5.8\% & 4.9\% & 2.4\% & 19.4\% & 8.2\% & 11.4\%\\
GAL 30B (5-shot) \verb|mCoT| & 17.9\% & 6.8\% & 7.9\% & 7.0\% & 5.7\% & 17.9\% & 7.9\% & 12.7\% \\
GAL 120B \verb|<work>| & 23.1\% & 10.1\% & 9.8\% & 8.6\% & 6.5\% & 23.8\% & 11.7\% & 16.6\% \\
GAL 120B (5-shot) \verb|mCoT| & 29.0\% & 13.9\% & 12.3\% & 9.6\% & 11.7\% & 27.2\% & 12.8\% & 20.4\% \\
\midrule
\multicolumn{9}{c}{Fine-tuned LaTeX Models} \\
\midrule
Minerva 540B (5-shot) \verb|mCoT| & 51.3\% & 28.0\% & 26.8\% & 13.7\% & 21.2\% & 55.0\% & 18.0\% & 33.6\% \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on MATH}. With both the chain-of-thought and \texttt{<work>} token prompts, Galactica exceeds PaLM's performance with 18 times less capacity.}
\label{table:math-benchmark-results}
\end{table}
We see that Galactica outperforms the base PaLM model by a significant margin, with both chain-of-thought and \verb|<work>| prompts. Galactica 30B outperforms PaLM 540B on both prompts: an 18 times smaller model. This suggests Galactica may be a better base model for fine-tuning towards mathematical tasks.
For completeness, we also report results for Minerva, a 540B PaLM model fine-tuned specifically towards LaTeX. Minerva outperforms base Galactica, but the performance differences are non-uniform, which points towards different mathematical data biases. For a direct comparison with Minerva, Galactica is freely available for those who want to fine-tune it towards LaTeX specifically as follow-up work.
\clearpage
\subsection{Downstream Scientific NLP}
We now evaluate on downstream scientific tasks to see how well Galactica can compose its knowledge in different task contexts. We focus on knowledge-intensive scientific tasks and report full results in Table~\ref{table:full-qa}. For this we use the MMLU benchmark as well as some other popular scientific QA benchmarks. We include the MMLU results from earlier, without \verb|<work>|, to test for knowledge association specifically. Full MMLU results, including social sciences and other fields, are reported in the Appendix. We also perform data leakage analysis on these benchmarks for more confidence; results are in the Appendix.
From Table~\ref{table:full-qa}, Galactica can compose its knowledge into the question-answering task, and performance is strong; it significantly outperforms the other open language models, and outperforms a larger model (Gopher 280B) in the majority of tasks. Performance against Chinchilla is more variable, and Chinchilla appears to be stronger in a subset of tasks: in particular, high-school subjects and less mathematical, more memorization-intensive tasks. In contrast, Galactica tends to perform better in mathematical and graduate-level tasks.
Our working hypothesis is that the Galactica corpus is biased towards graduate scientific knowledge, given it consists mostly of papers, which explains lagging performance in high-school subjects. While we do pick up some high-school level content through encyclopedias, textbooks and the filtered CommonCrawl, this amounts to a small quantity of tokens (a few billion). We leave the question of how to capture more of this base scientific knowledge in a curated way to future work.
On remaining tasks, we achieve state-of-the-art results over fine-tuned models at the time of writing. On PubMedQA, we achieve a score of 77.6\%, which outperforms the state-of-the-art of 72.2\%~\citep{BioLinkBERT}. On MedMCQA dev we achieve a score of 52.9\% versus the state-of-the-art of 41.0\%~\citep{PubMedBERT}. For BioASQ and MedQA-USMLE, performance is close to the state-of-the-art performance of fine-tuned models (94.8\% and 44.6\%)~\citep{BioLinkBERT}.
\begin{table}[h]
\begin{center}
\begin{tabular}{ llc|ccccc }
\toprule
Dataset & Domain & GAL & OPT & BLOOM & GPT-3 & Gopher & Chinchilla \\
\midrule
Abstract Algebra & \textit{out-of-domain} & \textbf{33.3\%} & 21.0\% & 25.0\% & - & 25.0\% & 31.0\% \\
ARC Challenge & \textit{in-domain} & \textbf{67.9\%} & 31.1\% & 32.9\% & 51.4\% & - & - \\
ARC Easy & \textit{in-domain} & \textbf{83.8\%} & 37.4\% & 40.7\% & 68.8\% & - & - \\
Astronomy & \textit{out-of-domain} & 65.1\% & 23.0\% & 25.7\% & - & 65.8\% & \textbf{73.0\%} \\
BioASQ & \textit{in-domain} & \textbf{94.3\%} & 81.4\% & 91.4\% & - & - & - \\
Biology (College) & \textit{out-of-domain} & 68.8\% & 30.6\% & 28.5\% & - & 70.8\% & \textbf{79.9\%} \\
Biology (High-School) & \textit{out-of-domain} & 69.4\% & 27.7\% & 29.4\% & - & 71.3\% & \textbf{80.3\%} \\
Chemistry (College) & \textit{out-of-domain} & 46.0\% & 30.0\% & 19.0\% & - & 45.0\% & \textbf{51.0\%} \\
Chemistry (High-School) & \textit{out-of-domain} & 47.8\% & 21.7\% & 23.2\% & - & 47.8\% & \textbf{58.1\%} \\
Comp. Science (College) & \textit{out-of-domain} & 49.0\% & 17.0\% & 6.0\% & - & 49.0\% & \textbf{51.0\%} \\
Comp. Science (High-School) & \textit{out-of-domain} & \textbf{70.0\%} & 30.0\% & 25.0\% & - & 54.0\% & 58.0\% \\
Econometrics & \textit{out-of-domain} & 42.1\% & 21.0\% & 23.7\% & - & \textbf{43.0\%} & 38.6\% \\
Electrical Engineering & \textit{out-of-domain} & \textbf{62.8\%} & 36.6\% & 32.4\% & - & 60.0\% & 62.1\% \\
Elementary Mathematics & \textit{out-of-domain} & 38.1\% & 25.7\% & 27.6\% & - & 33.6\% & \textbf{41.5\%} \\
Formal Logic & \textit{out-of-domain} & 32.5\% & 29.4\% & 26.2\% & - & \textbf{35.7\%} & 33.3\% \\
Machine Learning & \textit{out-of-domain} & 38.4\% & 28.6\% & 25.0\% & - & 41.1\% & 41.1\% \\
Mathematics (College) & \textit{out-of-domain} & \textbf{43.0\%} & 33.0\% & 25.0\% & - & 37.0\% & 32.0\% \\
Mathematics (High-School) & \textit{out-of-domain} & \textbf{32.6\%} & 24.4\% & 27.0\% & - & 23.7\% & 31.9\% \\
Medical Genetics & \textit{out-of-domain} & \textbf{70.0\%} & 35.0\% & 36.0\% & - & 69.0\% & 69.0\% \\
Physics (College) & \textit{out-of-domain} & 42.2\% & 21.6\% & 18.6\% & - & 34.3\% & \textbf{46.1\%} \\
Physics (High-School) & \textit{out-of-domain} & 33.8\% & 29.8\% & 25.2\% & - & 33.8\% & \textbf{36.4\%} \\
MedQA-USMLE & \textit{out-of-domain} & 44.4\% & 22.8\% & 23.3\% & - & - & - \\
MedMCQA Dev & \textit{in-domain} & \textbf{52.9\%} & 29.6\% & 32.5\% & - & - & - \\
PubMedQA & \textit{in-domain} & \textbf{77.6\%} & 70.2\% & 73.6\% & - & - & - \\
Statistics (High-School) & \textit{out-of-domain} & 41.2\% & 43.5\% & 19.4\% & - & 50.0\% & \textbf{58.8\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Question Answering Results}. Galactica is evaluated without few-shot examples. Other LLMs are evaluated 5-shot, except for 0-shot results for GPT-3 on ARC results and OPT and BLOOM on PubMedQA and BioASQ. For abstract algebra and medical genetics, we obtained best results with 30B, so we report these scores; the 120B scores for these were 27.0\% and 68.0\% respectively. Rest of results are for 120B.}
\label{table:full-qa}
\end{table}
\clearpage
\subsection{Citation Prediction}
In this section we evaluate Galactica's capability to predict citations given an input context, which is an important test of Galactica's capability to organize the scientific literature. We find that both accuracy and the quality of distributional approximation improves with scale.
\subsubsection{Citation Accuracy}
We construct three datasets to evaluate the model's capability to cite:
\begin{itemize}
\item \textbf{PWC Citations}: a dataset with 644 pairs of machine learning concepts and papers that introduced them. Concepts consist of methods (e.g. \textit{ResNet}) and datasets (e.g. \textit{ImageNet}) from \textit{Papers with Code}\footnote{\href{https://paperswithcode.com}{https://paperswithcode.com}}.
\item \textbf{Extended Citations}: a dataset with 110 pairs of non-machine learning concepts and papers that introduced them. Examples of concepts include the \textit{Kozak sequence} and the \textit{Breit-Wigner distribution}.
\item \textbf{Contextual Citations}: a dataset with 1,869 pairs of references and contexts from our arXiv validation set. The dataset is constructed by sampling 1,000 random references and collecting their contexts.
\end{itemize}
For the \textbf{PWC Citations} and \textbf{Extended Citations} datasets, the citation prediction task is framed as a text generation task. The model is given a prompt like "In this paper we use ResNet method \texttt{[START\_REF]}" in order to generate a prediction for the \textit{ResNet} concept. For \textbf{Contextual Citations}, we prompt after the input context for the citation, where the context ends with \verb|[START_REF]|.
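A sketch of how this generation-based evaluation could be scored is shown below, assuming a \texttt{generate} function that returns the model's predicted paper title; exact matching on normalized titles is our own simplification of the scoring.
\begin{verbatim}
def normalize(title):
    return " ".join(title.lower().split())

def citation_accuracy(examples, generate):
    """examples: list of (prompt, target_title) pairs;
    generate: maps a prompt ending in [START_REF] to a predicted paper title."""
    correct = 0
    for prompt, target_title in examples:
        prediction = generate(prompt + " [START_REF]")
        if normalize(prediction) == normalize(target_title):
            correct += 1
    return correct / len(examples)
\end{verbatim}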
We compare Galactica to sparse and dense retrieval-based approaches on this task.
For the sparse baseline, we use ElasticSearch to create an index of all the references, including their titles, abstracts, and short snippets of text with the contexts they appear in. Then, given a text query, we retrieve the top references ordered by the sum of matching scores across all selected fields.
For dense retriever baselines, we evaluate two different Contriever models \citep{Contriever}. The first is the pre-trained model released by \citet{Contriever}. The second model we use is fine-tuned on a random subset of 10 million context/paper pairs from our corpus, trained to retrieve the right paper given a context before a citation. The setup for dense retrieval is: (1) each reference is encoded by the model using its title and abstract, (2) a text query is encoded by the same model, (3) the references that match the query are returned. Retrieval is performed using a FAISS index~\citep{FAISS}.
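A minimal sketch of the dense retrieval setup is given below, assuming an \texttt{encode} function that maps text (a title plus abstract, or a query) to a fixed-size vector, e.g. a Contriever encoder; index construction details are simplified.
\begin{verbatim}
import numpy as np
import faiss

def build_index(reference_texts, encode):
    """Encode each reference (title + abstract) and add the vectors to a FAISS index."""
    vectors = np.stack([encode(t) for t in reference_texts]).astype("float32")
    index = faiss.IndexFlatIP(vectors.shape[1])   # inner-product similarity
    index.add(vectors)
    return index

def retrieve(query, encode, index, k=1):
    """Encode the query and return the ids and scores of the top-k references."""
    q = encode(query).astype("float32").reshape(1, -1)
    scores, ids = index.search(q, k)
    return ids[0], scores[0]
\end{verbatim}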
The results can be seen in Table \ref{table:citation-pred}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrr }
\toprule
Model & Params (bn) & PWC Citations & Extended Citations & Contextual Citations \\
\midrule
GAL 125M & 0.1 & 7.0\% & 6.4\% & 7.1\% \\
GAL 1.3B & 1.3 & 18.5\% & 45.5\% & 15.9\% \\
GAL 6.7B & 6.7 & 32.0\% & 60.0\% & 23.0\% \\
GAL 30B & 30 & 44.7\% & 66.4\% & 31.5\% \\
GAL 120B & 120 & \textbf{51.9\%} & \textbf{69.1\%} & \textbf{36.6\%} \\
\midrule
Sparse Retriever & n/a & 30.9\% & 17.3\% & 5.3\% \\
Dense Retriever (base) & n/a & 16.4\% & 8.8\% & 1.6\% \\
Dense Retriever (fine-tuned) & n/a & 27.6\% & 11.8\% & 8.2\% \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Citation Prediction Accuracy}. Performance of different model sizes on citation prediction.}
\label{table:citation-pred}
\end{table}
The performance on all evaluation sets increases smoothly with scale. At larger scales, Galactica outperforms the retrieval-based approaches as its context-associative power improves. This is an important result as current approaches for navigating the literature use these existing retrieval approaches. As the power of language models improves, we suspect they will become a valuable new tool for exploring the literature.
\subsubsection{Citation Distributional Analysis}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/cits/ks_stat.png}
\caption{\textbf{Kolmogorov-Smirnov Distance}}
\label{fig:citation-ks}
\end{subfigure}%
~ \hspace{0.01\textwidth}
\begin{subfigure}[t]{0.64\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/cits/dist_comparison.png}
\caption{\textbf{Histogram Overlap}}
\label{fig:citation-dists}
\end{subfigure}
\caption{\textbf{Distributional Comparison of Citations}. Galactica's citation distribution approaches the ground truth with scale. This is seen through a declining KS distance with scale, and increasing histogram overlap.}
\label{fig:dists-comparison}
\end{figure*}
We now turn to look at how well Galactica can model the empirical citation distribution. For this analysis we use the \textbf{Contextual Citations} dataset, where prompts are extracted from a paper by taking the context before a citation as the prompt. An example prompt with a model prediction is shown overleaf in Figure \ref{fig:in_context_pred}.
We use the in-context citation data to analyse the distributional difference between predicted and ground truth paper counts. This allows us to assess the model bias towards predicting more popular papers. Specifically, for each context there is a ground truth and predicted reference. We count the number of times each reference appears in our corpus. We then compare the distribution of reference counts between the ground truth references and the predicted references using the Kolmogorov-Smirnov distance~\citep{Massey_1951}.
The comparison between the citation count distributions for different model sizes can be seen in Figure \ref{fig:dists-comparison}. Figure \ref{fig:citation-ks} shows the decrease in the Kolmogorov-Smirnov distance between the distribution of ground truth paper citations and the distribution of predicted papers citations. Figure \ref{fig:citation-dists} shows how the distribution of paper counts for the predicted papers gets closer to the ground truth as the model size grows. At smaller scales the model is more prone to predicting more popular papers. As the model grows in size this bias towards predicting popular papers diminishes.
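As a sketch, the distributional comparison can be computed as a two-sample Kolmogorov-Smirnov statistic over the per-reference corpus counts; the counts below are illustrative placeholders, not real data.
\begin{verbatim}
from scipy.stats import ks_2samp

# For each evaluation context: how often the ground-truth (resp. predicted)
# reference appears in the training corpus. Placeholder values for illustration.
ground_truth_counts = [12, 305, 7, 41, 3, 88]
predicted_counts = [15, 280, 9, 52, 120, 90]

ks_statistic, p_value = ks_2samp(ground_truth_counts, predicted_counts)
print(ks_statistic)
\end{verbatim}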
\begin{figure}[t!]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\textbf{Prompt} \\
in the BQ literature as, when \(p\) is a mixture of Gaussians, the mean element \(\mu_{p}\) is analytically tractable (see Appendix C). Some other \((p,k)\) pairs that produce analytic mean elements are discussed in [\texttt{[START\_REF]} On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions, Bach\texttt{[END\_REF]}].
For this simulation study, we took \(p(x)\) to be a 20-component mixture of 2D-Gaussian distributions.
Monte Carlo (MC) is often used for such distributions but has a slow convergence rate in \(\mathcal{O}_{P}(n^{-1/2})\).
FW and FWLS are known to converge more quickly and are in this sense preferable to MC [\texttt{[START\_REF]} \\
\textbf{Prediction} \\
On the Equivalence between Herding and Conditional Gradient Algorithms, Bach
\end{small}
\end{tcolorbox}
\caption{\textbf{Citation Prompt}.
An example prompt predicting a citation in-context; from \citet{briol2015frank}.
}
\label{fig:in_context_pred}
\end{figure}
\subsection{General Capabilities}
We have studied Galactica's scientific capabilities. It is perhaps not surprising that a specialist scientific model outperforms general models on scientific tasks, but it would be more surprising if it outperformed general models on general NLP tasks. In this section, we show surprising evidence that it does just that.
We evaluate on 57 BIG-bench tasks in Table~\ref{table:bigbench-summary}~\citep{BIGBenchakasomanyauthorsitdoesntfitinthecontextwindow}. The tasks are primarily non-scientific and test general language capability, for example anachronisms, figure of speech and metaphor boolean. We always evaluate 5-shot, and we use the default prompt style from BIG-bench. Importantly, we do not include this prompt style in pre-training, so the evaluation between Galactica and the other models is a comparable 5-shot setup. Full details and results are in the Appendix. We summarize average scores in Table~\ref{table:bigbench-summary}:
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrr }
\toprule
Model & Params (bn) & Accuracy & Accuracy \\
& & \textit{weighted} & \textit{unweighted} \\
\midrule
OPT 30B & 30 & 39.6\% & 38.0\% \\
BLOOM 176B & 176 & 42.6\% & 42.2\% \\
OPT 175B & 175 & 43.4\% & 42.6\% \\
GAL 30B & 30 & 46.6\% & 42.7\% \\
GAL 120B & 120 & \textbf{48.7\%} & \textbf{45.3\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{BIG-bench 57 Task Results}. Galactica outperforms general open models at smaller scales.}
\label{table:bigbench-summary}
\end{table}
Both the 30B and 120B Galactica models outperform the larger OPT and BLOOM general models. This is a surprising result given we designed Galactica to trade-off generality for performance in scientific tasks.
We suspect this result reflects the higher quality of the Galactica corpus, stemming from the fact it is curated and also primarily academic text. Previous open LLM efforts likely overfocused on scale goals and underfocused on data filtering. Another implication is that the focus on tokens $\rightarrow \infty$ from Chinchilla needs to be complemented with strong data quality procedures~\citep{Chinchilla}. With this paper, we took an opposite approach by focusing on high-quality tokens and repeated epochs of training. However, the Chinchilla insight stands: there is much more scientific text that we have not exploited in this work.
\subsection{Chemical Understanding}
We now turn to Galactica's capability to interface with different scientific modalities. We start by looking at Galactica's chemical capabilities. Chemical properties exhibit complex correlations which means the chemical space is very large. Better organization of chemical information through language models could aid chemical design and discovery. We explore how Galactica can provide a new interface for these tasks in this section.
For this work, we only include a small subset of available compounds from PubChem Compound in pre-training. Specifically, we take a random subset ($2$ million) of total compounds ($110$ million). This is to ensure the model is not overly biased towards learning natural sequences over natural language. This is a constraint we can relax in future work, enabling a much larger corpus. Here we focus on the first step of investigating whether a single model can learn effectively in the multi-modal setting.
We find that a language model can learn chemical tasks such as IUPAC naming in a self-supervised way, and in addition, we can pose drug discovery tasks as natural language prompts and achieve reasonable results.
\subsubsection{IUPAC Name Prediction}
SMILES is a line notation which represents chemical structure as a sequence of characters~\citep{SMILES}. In the Galactica corpus, the SMILES formula occurs alongside information in the document, such as IUPAC names, molecular weight and XLogP. In the context of self-supervised learning, this means a language model is performing implicit multi-task learning: the model is predicting the next SMILES token, but can also use SMILES to predict other entities in the document.
As an initial test, we set up an \textbf{IUPAC Name Prediction} task, where the task is to name a compound according to the IUPAC nomenclature given a SMILES formula input. The IUPAC nomenclature is a method of naming organic compounds that has a ruleset based on naming the longest chain of carbons connected by single bonds~\citep{IUPACNaming}. There is a large set of rules and the procedure is algorithmically complex, meaning it is hard to automate. As a result, it is missing from standard cheminformatics toolkits.
Previous works such as STOUT and Struct2IUPAC have explored the possibility of using RNNs and Transformers for this task~\citep{STOUT,Struct2IUPAC}. We explore in this section whether Galactica can translate a SMILES specification to its IUPAC name in the self-supervised setting. We design a prompt based on the PubChem structure, with the SMILES as the only input, and the output to predict the IUPAC name.
To evaluate, we use our compound validation set of 17,052 compounds, and prompt with the SMILES formula and predict the IUPAC name. To calculate accuracy, we use OPSIN to convert the generated IUPAC name to SMILES, canonicalize it and compare with the canonicalized SMILES target~\citep{OPSIN}.
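A minimal sketch of this evaluation loop is shown below; it assumes OPSIN is called through the \texttt{py2opsin} wrapper and that RDKit is used for canonicalization, neither of which is specified in the text above.
\begin{verbatim}
from rdkit import Chem
from py2opsin import py2opsin  # assumed wrapper around OPSIN

def iupac_prediction_correct(predicted_name, target_smiles):
    generated = py2opsin(predicted_name)  # empty result if OPSIN cannot parse the name
    if not generated:
        return False  # counted as an invalid name
    pred_mol = Chem.MolFromSmiles(generated)
    target_mol = Chem.MolFromSmiles(target_smiles)
    if pred_mol is None or target_mol is None:
        return False
    # MolToSmiles produces canonical SMILES by default
    return Chem.MolToSmiles(pred_mol) == Chem.MolToSmiles(target_mol)
\end{verbatim}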
Results are shown in Table~\ref{table:iupac-naming}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrr }
\toprule
Model & Params (bn) & Accuracy & Invalid Names \\
\midrule
GAL 125M & 0.1 & 0.0\% & 32.8\% \\
GAL 1.3B & 1.3 & 2.5\% & 12.0\% \\
GAL 6.7B & 6.7 & 10.7\% & 12.3\% \\
GAL 30B & 30 & 15.4\% & 9.7\% \\
GAL 120B & 120 & \textbf{39.2\%} & \textbf{9.2\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on IUPAC Naming}. Performance improves smoothly with scale.}
\label{table:iupac-naming}
\end{table}
Accuracy increases smoothly with scale. Given we restricted the corpus to 2 million molecules, it is likely much better performance is achievable through training or fine-tuning on more molecules. The model is freely available for those who want to perform this follow-up work.
The more immediate question is what is actually being learnt: is Galactica inferring names from the fundamental molecular structure? To answer this, we visualize the average atomic attention at each stage of a prediction in Figure~\ref{fig:iupac-naming-viz} overleaf. Encouragingly, the results are interpretable in terms of the underlying chemistry, and Galactica attends to the correct group when predicting a name, e.g. for "amino" it attends primarily to the $-\ce{NH_{2}}$ substituent.
\clearpage
\begin{figure}[h]
\centering
\textbf{Task: Convert the SMILES to IUPAC Name} \\
\vspace{8pt}
Example: \verb|CC(C)(C)C(=O)N(CC1=NC(=CS1)C(=O)OC)C2CCCCC2| \\
\vspace{16pt}
\begin{tabular}{ccc}
Atomic Attention & Predicted So Far & Token Predicted \\
\midrule
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_0.png}
\end{tabular}
& \begin{tabular}{c}
\small
-
\end{tabular} & \begin{tabular}{c}
\verb|methyl|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_1.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl
\end{tabular} & \begin{tabular}{c}
\verb|cyclohexyl|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_2.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-
\end{tabular} & \begin{tabular}{c}
\verb|dimethyl|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_3.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethyl
\end{tabular} & \begin{tabular}{c}
\verb|prop|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_4.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethylprop
\end{tabular} & \begin{tabular}{c}
\verb|anoyl|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_5.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethylpropanoyl)
\end{tabular} & \begin{tabular}{c}
\verb|amino|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_6.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethylpropanoyl)]amino] \\
\small methyl]
\end{tabular} & \begin{tabular}{c}
\verb|th|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_7.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethylpropanoyl)]amino] \\
\small methyl]th
\end{tabular} & \begin{tabular}{c}
\verb|iazole|
\end{tabular} \\
\begin{tabular}{c}
\includegraphics[width=0.25\textwidth]{figs/iupac_viz/iupac_viz_8.png}
\end{tabular}
& \begin{tabular}{c}
\small
methyl 2-[[cyclohexyl-(2,2-dimethylpropanoyl)]amino] \\ \small methyl]thiazole-4-
\end{tabular} & \begin{tabular}{c}
\verb|carboxylate|
\end{tabular} \\
\hline
\end{tabular}
\caption{\textbf{Attending to Functional Groups}. Galactica uses its knowledge of chemistry to help with the IUPAC Naming task. At each stage of prediction, it attends to the part of the molecular graph associated with the group name, e.g. for "amino" it attends to the nitrogen atom; for thiazole, the sulphur atom.}
\label{fig:iupac-naming-viz}
\end{figure}
\clearpage
\subsubsection{MoleculeNet}
We now explore whether we can pose traditional drug discovery tasks in a natural language format, combining the different modalities involved. Humans organize knowledge via natural language, and so learning an interface between natural language and scientific modalities like SMILES could be a new tool for navigating the chemical space. We use MoleculeNet classification benchmarks to answer this question, which are summarized in Table~\ref{table:moleculenet}~\citep{MoleculeNet}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ llll }
\toprule
Category & Dataset & Type & Other modalities \\
\midrule
\multirow{2}{*}{Biophysics} & HIV & Classification & n/a \\
& BACE C & Classification & n/a \\
\midrule
\multirow{ 4}{*}{Physiology} & BBBP & Classification & n/a \\
& Tox21 & Classification & protein sequences \\
& SIDER & Classification & n/a \\
& ClinTox & Classification & n/a \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{MoleculeNet datasets used for evaluation}. We convert training sets to text format and include in pre-training. We evaluate using the splits suggested by the DeepChem library~\citep{Ramsundaretal}.}
\label{table:moleculenet}
\end{table}
To evaluate, we include the training sets in pre-training by converting them to a text format. We use prompt randomization (varying how the question is posed). For example, for BBBP the training prompt has forms like in Figure~\ref{fig:bbbp_prompt} below. These examples occur alongside the other corpora in training, and each example is seen just over $4$ times. This is not comparable to \textit{direct} fine-tuning or supervision due to the presence of other data in pre-training, so it might be considered a form of weak supervision instead.
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
Here is a SMILES formula: \\
\verb|[START_I_SMILES]O=C(O)CCCC1=CC=C(N(CCCl)CCCl)C=C1[END_I_SMILES]| \\
\textbf{Question:} Will the chemical compound penetrate the blood-brain barrier? \\
\textbf{Answer:} No
\end{small}
\end{tcolorbox}
\caption{\textbf{BBBP Prompt}.
We include the SMILES and pose the classification problem in natural language.
}
\label{fig:bbbp_prompt}
\end{figure}
For some MoleculeNet datasets, other modalities are implicitly present. For example, in the Tox21 dataset, bioassays concern particular receptors such as the androgen receptor (AR). As an experiment, we decided to frame the task in a text format with the protein sequence and the SMILES as part of the prompt. We show an example for Tox21 in Figure~\ref{fig:tox21_prompt}.
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
Here is a sequence for a protein: \\
\verb|[START_AMINO]MEEPQSDPSVEPPLSQETFSDLWKLLPE...[END_AMINO]| \\
And here is an isomeric SMILES for a compound: \\
\verb|[START_I_SMILES]CC(O)(P(=O)(O)O)P(=O)(O)O[END_I_SMILES]| \\
\textbf{Question:} Will the chemical compound be active against this protein? \\
\textbf{Answer:} No
\end{small}
\end{tcolorbox}
\caption{\textbf{Tox21 Prompt}.
We include the protein sequence and the SMILES formula and pose the classification problem in natural language.
}
\label{fig:tox21_prompt}
\end{figure}
We make sure to Kekulize the SMILES to be consistent with PubChem representations. For evaluation, we use the recommended splits from the DeepChem library~\citep{Ramsundaretal}.
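The sketch below illustrates this preparation step under some assumptions of our own: that the DeepChem loader's \texttt{ids} field holds the SMILES strings, and that RDKit is used for Kekulization. The actual prompt templates were randomized as described above; only one template is shown.
\begin{verbatim}
import deepchem as dc
from rdkit import Chem

tasks, (train, valid, test), _ = dc.molnet.load_bbbp(splitter="scaffold")

def to_prompt(smiles, label):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None                                    # skip unparseable SMILES
    Chem.Kekulize(mol, clearAromaticFlags=True)        # match PubChem-style SMILES
    kekulized = Chem.MolToSmiles(mol, kekuleSmiles=True)
    answer = "Yes" if label == 1 else "No"
    return ("Here is a SMILES formula:\n"
            f"[START_I_SMILES]{kekulized}[END_I_SMILES]\n"
            "Question: Will the chemical compound penetrate the blood-brain barrier?\n"
            f"Answer: {answer}")

prompts = [to_prompt(s, int(y[0])) for s, y in zip(train.ids, train.y)]
prompts = [p for p in prompts if p is not None]
\end{verbatim}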
We present results in Table~\ref{table:moleculenet-classification}. Performance scales with model size. The scaling is slower than for tasks like QA, and the base model lags a specialist model with explicit 3D information and 10 times more molecules~\citep{UniMol}. We suspect the weak supervision setup is harder for this task, and fine-tuning and/or more molecule data is required to get sufficient task signal. The model is available for follow-up work on this.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lccrrrrrrr }
\toprule
\multicolumn{10}{c}{MoleculeNet Classification} \\
\midrule
Model & Modality & Molecules & BACE & BBBP & ClinTox & HIV & SIDER & Tox21 & Av. \\
\midrule
GAL 125M & SMILES & 2M & 0.561 & 0.393 & 0.518 & 0.702 & 0.559 & 0.543 & 0.581 \\
GAL 1.3B & SMILES & 2M & 0.576 & 0.604 & 0.589 & 0.724 & 0.540 & 0.606 & 0.619 \\
GAL 6.7B & SMILES & 2M & 0.584 & 0.535 & 0.784 & 0.722 & 0.559 & 0.639 & 0.640 \\
GAL 30B & SMILES & 2M & 0.727 & 0.596 & 0.822 & 0.759 & 0.613 & 0.685 & 0.687 \\
GAL 120B & SMILES & 2M & 0.617 & 0.661 & 0.826 & 0.745 & 0.632 & 0.689 & 0.690 \\
\midrule
\textit{Uni-Mol} & 3D & 20M & 0.857 & 0.729 & 0.919 & 0.808 & 0.659 & 0.796 & 0.770 \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Results on MoleculeNet Classification}. Results are scored by ROC-AUC.}
\label{table:moleculenet-classification}
\end{table}
For our purposes, the implication for future work is that we can learn drug discovery tasks via natural language prompts. If we can learn these relationships automatically in a signal-dense document context (e.g. online chemical databases), this might reduce the reliance on supervised datasets to perform these tasks.
As a final check, we can average Galactica's attention heads across layers, and visualize where the model looks in the SMILES sequence to make a prediction (atomic attention). We show an example in Figure~\ref{fig:moleculenet-viz} for some Tox21 predictions.
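A simplified version of this averaging (assuming a Hugging Face-style interface that exposes attention weights; the exact visualization code for the figures is not reproduced here) looks as follows:
\begin{verbatim}
import torch

def average_attention_to_prompt(model, tokenizer, prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_attentions=True)
    # outputs.attentions: one (batch, heads, seq, seq) tensor per layer
    attn = torch.stack(outputs.attentions).mean(dim=(0, 2))  # average layers and heads
    # attention paid by the final position to every prompt token;
    # tokens are then mapped back to atoms in the SMILES string for plotting
    return attn[0, -1, :]
\end{verbatim}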
\begin{figure}
\centering
\textbf{Positive Examples} \\
\begin{subfigure}[b]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_p2.png}
\caption{Danazol (28417) on NR-AR}
\label{fig:tox21-danazol}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_p1.png}
\tiny
\caption{Gestodene (3033968) on NR-AR}
\label{fig:tox21-gestodene}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_p3.png}
\caption{Mometasone f. (441336) on NR-AR}
\label{fig:tox21-mometasone}
\end{subfigure}
\par\bigskip
\textbf{Negative Examples}
\par\bigskip
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_negative_1.png}
\tiny
\caption{$\gamma$-Terpinene (7461) on NR-PPAR-$\gamma$}
\label{fig:tox21-terpinene}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_negative_2.png}
\caption{Bemegride (2310) on NR-AR}
\label{fig:tox21-bemegride}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/tox21/tox21_negative_3.png}
\caption{Arecoline (2230) on NR-PPAR-$\gamma$}
\label{fig:tox21-arecoline}
\end{subfigure}
\par\bigskip
\caption{\textbf{Attention Visualization on Tox21}. The top three molecules are highest confidence positive examples for the 30B model; the bottom three are the highest confidence negatives. We match attention weights from the SMILES with the canonical atom ordering. Danazol and gestodene are known to possess high affinities for the androgen receptor (AR)~\citep{andrology}.}
\label{fig:moleculenet-viz}
\end{figure}
\clearpage
\subsection{Biological Understanding}
In this section we examine Galactica's capability to interface with biological modalities. Language models could potentially play a role in automatic organisation of this data, for example annotating newly sequenced proteins with functional information. We explore the potential of this interface in this section.
For protein sequences from UniProt, we include a small subset of available sequences in pre-training. Specifically, we take reviewed Swiss-Prot proteins; a high-quality subset ($0.5$ million) of total ($227$ million). This is to ensure the model is not overly biased towards learning natural sequences over natural language. As with molecule data, this is a constraint we can relax in future work, enabling a much larger corpus. Here we focus on the first step of investigating whether a single model can learn effectively in the multi-modal setting.
We find that a language model can learn an implicit measure of sequence similarity that it can use for tasks such as functional annotation and descriptions.
\subsubsection{Sequence Validation Perplexity}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/bioseq/final_primarys.png}
\caption{\textbf{Primary Structure Prediction}. For three of the validation sets we observe smooth scaling, reflecting the potential for high sequence similarity with sequences in the training set; for example, orthologs in the case of the Paen validation set. The CASP set with sequence similarity constraints levels off, suggesting the gains from the 550k proteins in training quickly saturate for more out-of-domain sequences.}
\label{fig:protein_val_graph}
\end{figure}
While Galactica does not explicitly model the 3D structure of a protein, the information needed for a specific conformation is contained in the linear amino acid sequence, which in turn determines function. As a first step, we test upstream performance by evaluating protein sequence perplexity. Constructing a good validation set is important, and data leakage is a problem for works in this field. We construct four holdout sets to obtain more confidence about what is being learnt and what generalizes.
First, we conduct BLAST on the sequences in the training set and remove all sequences with a sequence identity \(\geq 50\%\) with 51 CASP14 target sequences. These are the same test sequences used in ESMFold~\citep{ESMFold}. In total we remove 167 sequences from the training set using this approach. We call this holdout set \textbf{CASPSimilarSeq}. We call the 51 CASP14 target sequences \textbf{CASPSeq}.
Secondly, we conduct an organism-level holdout and remove all sequences from the Paenungulata clade of organisms, including elephants, elephant shrews, manatees and aardvarks. This allows us to test whether Galactica can annotate sequences for organisms it has never seen before. In total we remove 109 sequences from the training set using this approach. We call this holdout set \textbf{PaenSeq}. Note that this does not enforce any sequence similarity constraints, and there may be very similar sequences in the training set.
Lastly, we conduct a randomized test split, consisting of 5456 sequences. There is no sequence identity constraint applied, so memorization may be more at play, but it still provides a signal about the breadth of sequence knowledge absorbed by the model. We call this holdout set \textbf{UniProtSeq}.
We evaluate perplexity for all holdout sets in Table~\ref{table:protein-val} and plot them in Figure~\ref{fig:protein_val_graph}. For three of the validation sets we observe smooth scaling, reflecting the potential for high sequence similarity with sequences in the training set; for example, orthologs in the case of the Paen validation set. Interestingly, the CASP set with sequence similarity constraints levels off, suggesting the gains from the 550k proteins in training quickly saturate.
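For reference, a minimal sketch of the per-sequence perplexity computation is given below (our own reading: the exponentiated mean per-token negative log-likelihood, assuming a Hugging Face-style causal LM interface; the exact evaluation code is not reproduced here).
\begin{verbatim}
import math
import torch

def sequence_perplexity(model, tokenizer, protein_sequence):
    text = f"[START_AMINO]{protein_sequence}[END_AMINO]"
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over the predicted tokens
    return math.exp(loss.item())
\end{verbatim}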
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrrr }
\toprule
\multicolumn{6}{c}{Protein Sequence Validation Perplexity} \\
\midrule
Model & Param (bn) & CASPSeq & CASPSimSeq & PaenSeq & UniProtSeq \\
\midrule
GAL 125M & 0.1 & 20.62 & 19.18 & 16.35 & 19.05 \\
GAL 1.3B & 1.3 & 17.58 & 17.04 & 12.53 & 15.82 \\
GAL 6.7B & 6.7 & 17.29 & 16.35 & 7.76 & 11.58 \\
GAL 30B & 30 & 17.27 & 15.42 & 4.28 & 8.23 \\
GAL 120B & 120 & \textbf{17.26} & \textbf{12.77} & \textbf{3.14} & \textbf{5.54} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Protein Validation Perplexity}. Validation sets with higher potential sequence similarity with the training set have lower perplexity than the restricted sets (CASP validation sets).}
\label{table:protein-val}
\end{table}
To investigate further, we examine validation perplexity on the \textbf{CASPSeq} set during training of the 120B model, and we plot results in Figure~\ref{fig:casp_intermed} below.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/bioseq/caspseq_inter.png}
\caption{\textbf{CASPSeq Validation During Training}. Overfitting occurs before the end of training, but the effect is not drastic, and repeating the protein sequences three times does not damage performance on this task. The final 120B model is the second-last point, reflecting the early stopping we applied (see earlier sections).}
\label{fig:casp_intermed}
\end{figure}
We observe falling validation perplexity up until the start of the fourth epoch, at which point the model overfits for this particular dataset. This may suggest Galactica is getting worse at more "out-of-domain" proteins that differ significantly from the training set. For future work, less repetition is probably desirable, and more generally, increasing the diversity of proteins in the training dataset is likely to be beneficial.
\clearpage
\subsubsection{Functional Keyword Prediction}
We now look at specific translation capabilities from protein sequence toward natural language, which may be useful for tasks such as protein annotation. As a first test, we look at UniProt keywords that Galactica can infer from the sequence. An example of these is shown in Figure~\ref{fig:protein_keywords_example} overleaf.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{figs/bioseq/final_protein_keyword.png}
\caption{\textbf{Protein Keyword Prediction}. This tests Galactica's capability to predict protein keywords, e.g. "cytoplasm", from the sequence alone. For the Paen and General datasets, this capability improves smoothly with scale. It scales more slowly and begins to saturate for the CASPSimSeq set, reflecting the lower sequence similarity with sequences in the training set.}
\end{figure}
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
\textbf{\#\# Sequence} \\
Here is the sequence: \\
\verb|[START_AMINO]MQKSPLERASVISKLFFSWPGPILRKGYRQHLKLSDIYQIPSVDSADNLSEKLERE...[END_AMINO]| \\
\textbf{\#\#\# Ground-Truth Keywords} \\
ATP-binding, Cell membrane, Chloride, Chloride channel, Endoplasmic reticulum, Endosome, Glycoprotein, Ion channel, Ion transport, Isomerase, Isopeptide bond, Lipoprotein, Membrane, Nucleotide-binding, Nucleus, Palmitate, Phosphoprotein, Reference proteome, Repeat, Transmembrane, Transmembrane helix, Transport, Ubl conjugation \\
\textbf{\#\#\# Galactica 30B Predicted Keywords} \\
ATP-binding, Cell membrane, Chloride, Chloride channel, Endoplasmic reticulum, Endosome, Glycoprotein, Ion channel, Ion transport, Isomerase, Isopeptide bond, Lipoprotein, Membrane, Nucleotide-binding, Nucleus, Palmitate, Phosphoprotein, Reference proteome, Repeat, Transmembrane, Transmembrane helix, Transport, Ubl conjugation
\end{small}
\end{tcolorbox}
\caption{\textbf{Protein Keyword Prediction}. Example shown is Q108U0 from the PaenSeq holdout, a cystic fibrosis transmembrane conductance regulator from the African elephant. The closest protein by sequence similarity in the training set is the Q2QLA3 protein, a cystic fibrosis transmembrane conductance regulator from a horse, with 91.8\% sequence similarity.}
\label{fig:protein_keywords_example}
\end{figure}
We report results in Table~\ref{table:protein-keyword}. $F_{1}$ score increases across the holdout sets with scale, suggesting that Galactica can learn keywords by inferring from the sequence. However, we see saturation for the CASPSimSeq set, suggesting this capability depends on how similar the sequences are to those in the training set. This is reflected in the example in Figure~\ref{fig:protein_keywords_example}, where Galactica uses its knowledge of similar proteins from different organisms, with a maximum sequence similarity of 91.8\% in the training set, to help annotate.
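One straightforward reading of the $F_{1}$ metric used here is a set-based score over predicted and ground-truth keywords; a sketch of this reading (our own, with keyword normalization details omitted) is:
\begin{verbatim}
def keyword_f1(predicted, ground_truth):
    pred, gt = set(predicted), set(ground_truth)
    if not pred or not gt:
        return 0.0
    tp = len(pred & gt)                       # keywords predicted correctly
    precision, recall = tp / len(pred), tp / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
\end{verbatim}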
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrr }
\toprule
\multicolumn{5}{c}{Protein Keyword Prediction} \\
\midrule
Model & Param (bn) & CASPSimSeq & PaenSeq & UniProtSeq \\
\midrule
GAL 125M & 0.1 & 10.5\% & 9.3\% & 15.2\% \\
GAL 1.3B & 1.3 & 17.4\% & 26.0\% & 21.9\% \\
GAL 6.7B & 6.7 & 18.4\% & 33.3\% & 25.1\% \\
GAL 30B & 30 & \textbf{22.0\%} & 42.6\% & 40.8\% \\
GAL 120B & 120 & 21.9\% & \textbf{54.5\%} & \textbf{48.7\%} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Protein Keyword Prediction}. Metric shown is $F_{1}$ score. Performance increases with scale across the holdout sets. Note we do not include CASPSeq as these do not have UniProt keywords we can test against.}
\label{table:protein-keyword}
\end{table}
We attempted to visualize attention in the protein sequence, but we did not observe anything with biological interpretation (e.g. attention to domains). Our working hypothesis is that Galactica has learnt an implicit measure of sequence similarity that it uses to associate predicted keywords, but that this is not directly interpretable from where it attends to. This differs from our chemistry analysis where results were interpretable in terms of attention to the underlying atomic structure.
\subsubsection{Protein Function Description}
As the next test, we look at generating free-form descriptions of protein function from the sequence. We look at the UniProt function descriptions and compare to Galactica generated descriptions.
We report results in Table~\ref{table:protein-function}. ROUGE-L score increases smoothly across all the holdout sets. We show an example overleaf in Figure~\ref{fig:protein_function_example} from PaenSeq. The protein is a Cytochrome b protein from a rock hyrax (Q7Y8J5). The closest sequence by similarity in the training set is a Cytochrome b protein from a pygmy hippopotamus (O03363) with 83\% sequence similarity. In this case we get a perfect prediction from the description.
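Scoring a generated description against the UniProt reference can be sketched as below; the \texttt{rouge\_score} package is our choice for the sketch, as the text does not name a specific implementation.
\begin{verbatim}
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def description_rouge_l(prediction, reference):
    # score(target, prediction) returns precision/recall/F-measure per metric
    return scorer.score(reference, prediction)["rougeL"].fmeasure
\end{verbatim}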
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrrr }
\toprule
\multicolumn{5}{c}{Protein Function Prediction} \\
\midrule
Model & Param (bn) & CASPSimSeq & PaenSeq & UniProtSeq \\
\midrule
GAL 125M & 0.1 & 0.062 & 0.073 & 0.061 \\
GAL 1.3B & 1.3 & 0.069 & 0.084 & 0.079 \\
GAL 6.7B & 6.7 & 0.109 & 0.137 & 0.111\\
GAL 30B & 30 & 0.137 & 0.196 & 0.186 \\
GAL 120B & 120 & \textbf{0.252} & \textbf{0.272} & \textbf{0.252} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Protein Function Prediction}. Metric shown is ROUGE-L. Performance increases with scale.}
\label{table:protein-function}
\end{table}
\begin{figure}[h]
\begin{tcolorbox}[colback=galwhite,colframe=galpurple2]
\begin{small}
This is the sequence: \\
\verb|[START_AMINO]MTNIRKNHPLLKTINDAFIDLPTPSNISTWWNFGSLLGACLIIQVLTGLFLAMHYTSDT...[END_AMINO]| \\
\textbf{\#\#\# Ground-Truth Description} \\
Component of the ubiquinol-cytochrome c reductase complex (complex III or cytochrome b-c1 complex) that is part of the mitochondrial respiratory chain. The b-c1 complex mediates electron transfer from ubiquinol to cytochrome c. Contributes to the generation of a proton gradient across the mitochondrial membrane that is then used for ATP synthesis. \\
\textbf{\#\#\# Galactica 120B Predicted Description} \\
Component of the ubiquinol-cytochrome c reductase complex (complex III or cytochrome b-c1 complex) that is part of the mitochondrial respiratory chain. The b-c1 complex mediates electron transfer from ubiquinol to cytochrome c. Contributes to the generation of a proton gradient across the mitochondrial membrane that is then used for ATP synthesis.
\end{small}
\end{tcolorbox}
\caption{\textbf{Protein Description Prediction}. Example shown is Q7Y8J5 from the PaenSeq holdout, a Cytochrome b protein from a rock hyrax. The closest protein by sequence similarity in the training set is the O03363 protein, a Cytochrome b protein from a pygmy hippopotamus, with 83\% sequence similarity.}
\label{fig:protein_function_example}
\end{figure}
As with the keyword prediction task, Galactica appears to be learning based on matching sequences with similar ones it has seen in training, and using this to form a description. This suggests language models for protein sequences could serve as useful alternatives to existing search methods such as BLAST and MMseqs2~\citep{BLAST, MMseq2}.
\section{Toxicity and Bias}
In this section we study the toxicity and bias of the Galactica model. We evaluate on benchmarks related to stereotypes, toxicity, and misinformation. We compare results to other language models. We find Galactica is significantly less biased and toxic than existing language models.
\subsection{Bias and Stereotypes}
For the following evaluations, we investigate Galactica’s ability to detect (and generate) harmful stereotypes and hate speech, using four widely used benchmarks.
\subsubsection{CrowS-Pairs}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lrrr }
\toprule
\multicolumn{4}{c}{CrowS-Pairs} \\
\midrule
Bias type & \verb|text-davinci-002| & OPT 175B & Galactica 120B \\
\midrule
Race & 64.7 & 68.6 & \textbf{59.9} \\
Socioeconomic & 73.8 & 76.2 & \textbf{65.7} \\
Gender & 62.6 & 65.7 & \textbf{51.9} \\
Disability & 76.7 & 76.7 & \textbf{66.7} \\
Nationality & 61.6 & 62.9 & \textbf{51.6} \\
Sexual-orientation & \textbf{76.2} & 78.6 & 77.4 \\
Physical-appearance & 74.6 & 76.2 & \textbf{58.7} \\
Religion & 73.3 & 68.6 & \textbf{67.6} \\
Age & \textbf{64.4} & 67.8 & 69.0 \\
Overall & 67.2 & 69.5 & \textbf{60.5} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{CrowS-Pairs Results}. Galactica demonstrates significantly lower stereotypical bias in all categories with the exception of sexual orientation and age.}
\label{table:crows}
\end{table}
CrowS-Pairs is a collection of 1,508 crowd-sourced pairs of sentences, one which is "more" stereotyping and one which is "less" stereotyping, and covers nine characteristics ~\citep{crows}. These characteristics are race, religion, socioeconomic status, age, disability, nationality, sexual orientation, physical appearance, and gender. A language model’s preference for stereotypical content is measured by computing the proportion of examples in which the "more" stereotypical sentence is preferred (as determined by log likelihood). Higher scores indicate a more harmfully biased model, whereas an ideal model with no bias would score 50\%.
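A minimal sketch of this preference test for a causal language model is given below (assuming a Hugging Face-style interface; the official CrowS-Pairs harness uses a more careful pseudo-likelihood over shared tokens):
\begin{verbatim}
import torch

def sentence_loglik(model, tokenizer, sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # mean NLL over the shifted targets
    return -loss.item() * (ids.shape[1] - 1)  # total log-likelihood of the sentence

def prefers_stereotype(model, tokenizer, more_stereo, less_stereo):
    return (sentence_loglik(model, tokenizer, more_stereo)
            > sentence_loglik(model, tokenizer, less_stereo))
\end{verbatim}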
We report results for Galactica and other language models in Table \ref{table:crows}. Galactica exhibits significantly lower stereotypical biases in most categories, with the exception of sexual orientation and age, when compared to the latest GPT-3 (\verb|text-davinci-002|) and OPT 175B. Galactica attains a better overall score of 60.5\% compared to the other models. Language models such as OPT use the Pushshift.io Reddit corpus as a primary data source, which likely leads the model to learn more discriminatory associations~\citep{OPT}. Galactica is trained on a scientific corpus where the incidence rate for stereotypes and discriminatory text is likely to be lower.
\subsubsection{StereoSet}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ lcrrr }
\toprule
\multicolumn{5}{c}{StereoSet} \\
\midrule
Category & & \verb|text-davinci-002| & OPT 175B & Galactica 120B \\
\midrule
& LMS (\(\uparrow\)) & 78.4 & 74.1 & 75.2 \\
Prof. & SS (\(\downarrow\)) & 63.4 & 62.6 & 57.2 \\
& ICAT (\(\uparrow\)) & 57.5 & 55.4 & \textbf{64.3} \\
\hline
& LMS (\(\uparrow\)) & 75.6 & 74.0 & 74.6 \\
Gend. & SS (\(\downarrow\)) & 66.5 & 63.6 & 59.1 \\
& ICAT (\(\uparrow\)) & 50.6 & 53.8 & \textbf{61.0} \\
\hline
& LMS (\(\uparrow\)) & 80.8 & 84.0 & 81.4 \\
Reli. & SS (\(\downarrow\)) & 59.0 & 59.0 & 55.1 \\
& ICAT (\(\uparrow\)) & 66.3 & 68.9 & \textbf{73.1} \\
\hline
& LMS (\(\uparrow\)) & 77.0 & 74.9 & 74.5 \\
Race & SS (\(\downarrow\)) & 57.4 & 56.8 & 54.8 \\
& ICAT (\(\uparrow\)) & 65.7 & 64.8 & \textbf{67.3} \\
\hline
& LMS (\(\uparrow\)) & 77.6 & 74.8 & 75.0 \\
Overall & SS (\(\downarrow\)) & 60.8 & 59.9 & 56.2 \\
& ICAT (\(\uparrow\)) & 60.8 & 60.0 & \textbf{65.6} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{StereoSet Results}. Galactica outperforms all models across all categories on the ICAT score.}
\label{table:stereoset}
\end{table}
StereoSet aims to measure stereotypical biases across profession, religion, gender, and race~\citep{stereoset}. The benchmark contains two tasks: an intrasentence task and an intersentence task, with around 2,100 examples each in the development set.
\begin{itemize}
\item \textbf{Intrasentence Task}: the stereotype and associated context are in the same sentence.
\item \textbf{Intersentence Task}: the context and stereotype are in different (consecutive) sentences.
\end{itemize}
Alongside stereo- and anti-stereotypical variants of sentences, each example in StereoSet contains an unrelated sentence. This sentence is included for measuring a Language Modelling Score (LMS) and a Stereotype Score (SS). These two metrics are combined to form the Idealized Context Association Test score (ICAT), which is a balanced measure of bias detection and language modeling. An ideal, unbiased language model would score an LMS of 100, an SS of 50, and an ICAT of 100.
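Concretely, following \citet{stereoset}, the combined score is
\[
\mathrm{ICAT} = \mathrm{LMS}\cdot\frac{\min\left(\mathrm{SS},\,100-\mathrm{SS}\right)}{50},
\]
so a model with $\mathrm{SS}=50$ retains its full LMS, while any stereotypical (or anti-stereotypical) preference scales the score down.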
We report results in Table~\ref{table:stereoset}. Galactica outperforms other models on all categories for the overall ICAT score.
\subsubsection{Toxicity}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{figs/toxicity/toxicity_working_final.png}
\caption{\textbf{Toxicity rate on RealToxicityPrompts}. Galactica exhibits much lower toxicity continuation rates, even as we increase the original prompt toxicity.}
\label{fig:tox}
\end{figure}
To measure toxicity we use the RealToxicityPrompts (RTP) benchmark introduced in \citet{RealToxicityPrompts}. We follow the same setup as \cite{OPT} and sample 25 generations of 20 tokens using nucleus sampling \textit{(p=0.9)} for each of 5000 randomly sampled prompts from RTP. We use the prompts to produce sequences (i.e., continuations) which are then scored by a toxicity classifier provided by Perspective API\footnote{\url{https://github.com/conversationai/perspectiveapi}}.
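The sampling step can be sketched as below (a Hugging Face-style \texttt{generate} call is assumed; the Perspective API scoring of each continuation is omitted as it requires an API key):
\begin{verbatim}
import torch

def sample_continuations(model, tokenizer, prompt, n=25):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_p=0.9,               # nucleus sampling, p = 0.9
            max_new_tokens=20,       # 20-token continuations
            num_return_sequences=n,  # 25 samples per prompt
        )
    start = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[start:], skip_special_tokens=True) for o in outputs]
\end{verbatim}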
Figure \ref{fig:tox} plots the results. The chart shows the mean toxicity probability of continuations (y-axis), stratified across bucketed toxicities of the original prompts (x-axis). Galactica exhibits substantially lower toxicity rates than the other models.
\subsection{TruthfulQA}
TruthfulQA is a benchmark that measures answer truthfulness of language model generations~\citep{TruthfulQA}. It comprises 817 questions that span health, law, finance and other categories. We compare to other published language models. We report results in Table~\ref{table:truthqa}. Galactica exceeds the performance of other language models on this benchmark. However, absolute performance is still low. Given the curated nature of our corpus, this suggests that data alone does not cause language models to struggle at this task.
\begin{table}[h]
\begin{center}
\begin{tabular}{lrr}
\toprule
\multicolumn{3}{c}{TruthfulQA} \\
\midrule
Model & MC1 (Acc) & MC1 (Std) \\
\midrule
OPT 175B & 21\% & 0.13 \\
BLOOM 176B & 19\% & 0.07 \\
GAL 125M & 19\% & 0.11 \\
GAL 1.3B & 19\% & 0.15 \\
GAL 6.7B & 19\% & 0.03 \\
GAL 30B & 24\% & 0.05 \\
GAL 120B & 26\% & 0.02 \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{TruthfulQA Results}. Galactica exhibits superior performance to other language models, and performance increases with scale, but slowly and at low levels.}
\label{table:truthqa}
\end{table}
\section{Limitations and Future Work}
\subsection{Limitations}
We cover some of the limitations of this work in this section.
\paragraph{Corpus Limitations} Our corpus has several limitations, both external and internally imposed. The main external constraint is our restriction to open-access resources, and much scientific knowledge, such as papers and textbooks, is not open access. With access to these closed sources of knowledge, performance is likely to be considerably higher. We also use self-imposed constraints, like restricting the number of molecules and proteins for this work; without these constraints, we are likely to see considerable performance gains due to much larger corpora for these modalities.
\paragraph{Corpus Effects vs Prompt Effects} In several benchmarks, we show performance gains over existing language models, but we do not specifically disentangle the effects of the prompts we included in pre-training versus the core scientific corpus. In future work, we likely need to disentangle these effects in order to see whether general language capabilities are possible with a scientific corpus alone without prompt boosting.
\paragraph{Citation Bias} While we demonstrate that the model approaches the true citation distribution with scale, some bias towards popular papers still remains with the 120B scale model, so the model likely requires augmentation before being used in a production environment.
\paragraph{Prompt Pre-Training vs Instruction Tuning} We opted for the former in this paper, but ideally we would need to explore what the latter could achieve, along the lines of the recent work of \citet{FLANPALM}. A limitation of this work is that we do not perform this direct comparison through ablations, making clear the trade-offs between approaches.
\paragraph{General Knowledge} While Galactica absorbs broad societal knowledge through sources such as Wikipedia - e.g. 120B knows Kota Kinabalu is the capital of Malaysia’s Sabah state - we would not advise using it for tasks that require this type of knowledge as this is not the intended use-case.
\paragraph{Text as a Modality} While we have shown text-based Transformers are surprisingly powerful with text representations of scientific phenomena, we caution against the interpretation that text is all you need. For example, in chemistry, geometry is a fundamental language that determines meaning, yet Galactica has no notion of geometry; e.g. 3D co-ordinates of atoms.
\subsection{Future Work}
For development of the base model, we highlight several directions that may be worth pursuing.
\paragraph{New Objective Function} It is likely further gains can be obtained with mixture-of-denoising training as U-PaLM has recently shown ~\citep{UPALM, FLANPALM}. We suspect this might be beneficial for the scientific modalities such as protein sequences, where the left-to-right LM objective is quite limiting.
\paragraph{Larger Context Window} We use a maximum context window length of $2048$ tokens in this work. Extending this is likely to be beneficial for understanding in long-form scientific documents, such as textbooks and also documents with longer modality sequences (e.g. long protein sequences).
\paragraph{Extending to Images} We cannot capture scientific knowledge adequately without capturing images. This is a natural follow-up project, although it likely requires some architectural modification to make it work well. Existing work such as \citet{Flamingo} has shown how to extend LLMs with this modality.
\paragraph{More <work> examples} We feel \verb|<work>| could be a general-purpose reasoning token and we would like to invest more in this direction, including increasing prompt diversity and exploring performance on more benchmarks.
\paragraph{Verification} Even as language models become more accurate with scale, we need assurances that their generations are correct and factual. Developing this layer is critical for production applications of language models in general beyond scientific applications.
\paragraph{Continual Learning} Should we re-train from scratch to incorporate new scientific knowledge or train from older checkpoints? This is an open question, and further research is needed to find the best procedure for incorporating new knowledge into the model.
\paragraph{Retrieval Augmentation} While we have shown how large language models can absorb large bodies of scientific knowledge, retrieval has a place for fine-grained types of knowledge, and we believe this is a strong direction to pursue to complement the flexible weight memory of the Transformer.
\section{Discussion and Conclusion}
For over half a century, the dominant way of accessing scientific knowledge has been through a store-and-retrieve paradigm. The limitation of this approach is that the reasoning, combining and organization of information still relies on human effort. This has led to a significant knowledge throughput bottleneck. In this work we explored how language models might disrupt this paradigm and provide a new interface for humanity to access knowledge.
We showed that language models are surprisingly strong absorbers of technical knowledge, such as LaTeX equations and chemical reactions, and these capabilities tend to scale smoothly with model size. The context-associative power of language models likely confers significant advantages over search engines in the long-run. We demonstrated this for citation prediction, where a language model outperforms tuned sparse and dense retrieval pipelines for this task. Language models will likely provide a valuable new tool for exploring the literature and the body of scientific knowledge in coming years.
We also demonstrated that language models can compose a curated knowledge base to perform well in knowledge-intensive question answering tasks. This includes composing knowledge in a step-by-step reasoning manner. We showed that with a working memory token approach, we can achieve strong performance over existing methods on mathematical MMLU and MATH benchmarks. We suspect tasks like MATH are in principle solvable with language model approaches. The current bottleneck is the availability of high quality step-by-step datasets. However, language models will not perform these tasks like humans until they have an architectural change that supports adaptive computation.
We also performed initial investigations on the potential of LLMs to act as a bridge between scientific modalities and natural language. We showed Galactica could learn tasks like IUPAC naming through self-supervision. We also showed that it is possible to formulate drug discovery tasks like MoleculeNet in a natural language prompt and achieve strong results without direct fine-tuning. Lastly, we showed the potential for tasks such as automatic protein annotation. In all, increasing the number (and size) of datasets that bridge between natural language and natural sequences is likely to boost performance further.
Taken together, we feel there is a strong potential for language models to take on knowledge tasks that are currently human specialisms. We open source the models so others can build on our work, and we look forward to seeing how the open machine learning community will extend it.
\section*{Acknowledgments}
Thanks to Susan Zhang, Stephen Roller, Naman Goyal and others for their support in using metaseq. We build on the open LLM training foundation they made possible with the OPT project~\citep{OPT}.
Thanks to Iliyan Zarov, Lukas Blecher, Jian Xiang Kuan and Mikhail Pershin for their contributions to the project.
Thanks to Faisal Azhar and Joe Spisak for their valuable support in delivering this project.
Thanks to Antoine Bordes, Laurens van der Maaten and Joelle Pineau for leadership support and belief in this project. Additional thanks to Laurens for his valuable feedback on the paper.
Thanks to Geeta Chauhan, Hamid Shojanazeri and Eric Han for help with faster inference.
Thanks to numerous others for comments and advice over the past year: Patrick Lewis, Pontus Stenetorp, Timo Schick, Sebastian Riedel, Soumith Chintala.
Thanks to the open source creators whose libraries, datasets and other tools we utilized. Your efforts accelerated our efforts; and we open source our model to accelerate yours.
Thanks to the GPU nodes that didn't die on us when training the 120B model.
\bibliographystyle{plainnat}
|
2,869,038,156,821 | arxiv | \section{Introduction}
Recently, a new variable, called \emph{the contact}, was defined by Tan
\cite{Tan08,Bra12} for a two-component Fermi gas interacting via
short-range forces.
The contact measures the probability to find
two unlike fermions close to each other.
In a series of theorems, called Tan's relations,
many other properties of the system, such as its energy, pressure, and momentum
distribution, were connected to the contact.
Tan assumes that the range of the interaction is much smaller than
the scattering length and the averaged distance between the fermions.
Following these theoretical predictions, several experiments
were conducted, verifying Tan's relations in ultracold atomic systems
consisting of $^{40}$K \cite{SteGaeDra10,SagDraPau12}
and $^6$Li \cite{ParStrKam05,WerTarCas09,KuhHuLiu10} atoms.
Nuclear systems differ from these ultracold atomic systems
in many aspects. First, the nucleons are not two-component
fermions. Second, while in the atomic systems the strength
of the interaction between the atoms and the density can
be changed easily, such that Tan's assumptions are satisfied,
in nuclear physics it cannot be done. In nuclear systems,
the $s$-wave spin-singlet and spin-triplet scattering lengths are about
$-20$ fm and $5.38$ fm, respectively, and the average distance between
two adjacent nucleons is about $2.4$ fm. The interaction
range of the long range part of the nuclear potential, which is
governed by the pion exchange Yukawa force, is about
$\mu^{-1}=\hbar/m_\pi c \approx 1.4$ fm. Thus, in nuclear physics
the interaction range is only slightly smaller than the average distance
between two particles and the scattering length.
Consequently, some changes are to be done in order to generalize Tan's
relations to nuclear systems.
Considering a two-component Fermi gas that obeys Tan's assumptions,
the high momentum tail of the momentum distribution is connected to the contact
through the relation, $n_{\sigma}(k)\rightarrow C/k^4$ as $k\rightarrow\infty$,
where $n_\sigma (k)$ is the momentum distribution of fermions with spin
$\sigma$, and $C$ is the contact.
In nuclear physics, the high-momentum part of the nucleon's
momentum distribution is one of the main tools for studying short range correlations (SRCs)
between nucleons. The main focus in current studies
of two-body SRCs (see e.g.
\cite{WirSchPie14,Fmoin12,HenSci14,AlvCioMor08,ArrHigRos12})
is around the momentum range
$1.5\: \mathrm{fm^{-1}} < k < 3 \: \mathrm{fm^{-1}}$.
In a few of these studies it is claimed that
higher momenta are also affected by
3-body correlations \cite{Egiyan06}. In this momentum range,
a dominance of neutron-proton ($np$) correlated pairs was observed
in electron scattering experiments \cite{HenSci14}.
This $np$ dominance is usually explained by
the contribution of the tensor force, which affects
only spin-triplet $np$ pairs.
Another
observation is that the correlated pairs usually have
high relative momentum and low center of mass momentum, i.e. they
move approximately back-to-back.
Generalizing Tan's relation between the high momentum tail
and the contact to nuclear systems, should help in understanding
more properties of SRCs in nuclei.
In a previous paper \cite{WeiBazBar14}, we have suggested that it might be
fruitful to use the contact formalism in nuclear systems. There we have defined
the neutron-proton $s$-wave nuclear contacts and evaluated their average value
by relating them to the Levinger constant of the photoabsorption process.
In this work we
generalize the definition of the nuclear contacts from $s$-wave
to all partial waves. We also consider finite-range interactions instead
of zero-range. The result is the matrices of nuclear contacts.
We discuss the properties of these matrices, and
use our generalized contact formalism to relate
the nuclear contacts to the one-nucleon and two-nucleon momentum distributions
in nuclei. Doing so, we find an asymptotic relation between these two
distributions which is relevant to the study of SRCs in nuclei.
This relation is verified by available numerical data.
Further analysis of the numerical data
and its implications to the contact formalism are also presented.
In this paper we focus on the two-body contacts
and on two-body correlations, postponing the discussion on
three-body effects to future publications.
\section{The matrices of nuclear contacts}
Consider a two-component Fermi gas that obeys Tan's assumptions.
In such a gas, when a spin-up particle $i$ gets close to a spin-down
particle $j$, the many-body wave function can be factorized into a
product of an asymptotic pair wave function $\varphi(\bs r_{ij})$,
$\bs r_{ij}=\bs{r}_i-\bs{r}_j$,
and a function $A$, also called the regular part of $\Psi$,
describing the residual $A-2$ particle system
and the pair's center of mass $\bs{R}_{ij}=(\bs{r}_i+\bs{r}_j)/2$
motion \cite{Tan08,WerCas12},
\begin{equation} \label{wf}
\Psi \xrightarrow[r_{ij}\rightarrow 0]{}\varphi(\bs{r}_{ij})
A(\bs{R}_{ij},\{\bs r_k\}_{k\neq i,j})\;.
\end{equation}
Due to the suppression of higher partial waves in these systems,
the asymptotic pair wave function will be predominantly an $s$-wave.
In particular, in the zero-range model \cite{zerorange} the
pair wave function is given by
$\varphi=\left(1/r_{ij}-1/a\right)$, where $a$ is
the scattering length.
The contact $C$ is then defined by \cite{Tan08,WerCas12}
\begin{equation}\label{contact_generic}
C=16\pi^2 N_{\uparrow\downarrow} \langle A| A\rangle,
\end{equation}
where
\begin{align}
\langle A| A\rangle & =
\int \prod_{k\neq i,j} d\bs{r}_{k} \,d\bs{R}_{ij} \,
\\ \nonumber & \times
A^{\dagger}\left(\bs{R}_{ij},\{\bs{r}_{k}\}_{k\neq i,j}\right)
\cdot
A\left(\bs{R}_{ij},\{\bs{r}_{k}\}_{k\neq i,j}\right)\;
\end{align}
and $N_{\uparrow\downarrow}$ is the number of possible spin up - spin down pairs.
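As a brief aside (a standard zero-range estimate, included here only to connect this definition with the $1/k^4$ tail quoted above), the Fourier transform of the asymptotic pair wave function behaves as
\begin{equation*}
\tilde{\varphi}(\bs{k})=\int d^3r\, e^{i\bs{k}\cdot\bs{r}}\,\varphi(\bs{r})
\xrightarrow[k\rightarrow\infty]{}\frac{4\pi}{k^2},
\end{equation*}
so that $n_{\sigma}(k)\rightarrow 16\pi^2 N_{\uparrow\downarrow}\langle A|A\rangle/k^4=C/k^4$, which is the origin of the $16\pi^2$ prefactor in Eq. (\ref{contact_generic}).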
In nuclear physics, we have four-component fermions,
which are the protons and neutrons with their spin being either up or down.
Moreover, the assumption of a
zero-range $s$-wave interaction is not accurate for nuclei.
As a result, few changes must be made in order to generalize the contact
formalism to study nuclear systems.
A nucleus can be described by a wave function $\Psi$ with
total angular momentum $J$ and projection $M$. We will assume that when
particle $i$ gets close to particle $j$, the wave function is
still factorized but the pair wave function depends on the total
spin of the pair $s_2$, and its angular momentum quantum number $\ell_2$
(with
respect to the relative coordinate $\bs{r}_{ij}$) which
are coupled to create the total pair angular momentum $j_2$
and projection $m_2$.
The asymptotic form of the wave function is then given by
\begin{equation}\label{full_asymp}
\Psi\xrightarrow[r_{ij}\rightarrow 0]{}\sum_\alpha\varphi_{ij}^\alpha\left(\bs{r}_{ij}\right)A_{ij}^\alpha\left(\bs{R}_{ij},\{\bs{r}_k\}_{k\not=i,j}\right).
\end{equation}
Here the index $ij$ corresponds to one of
the three particle pairs: proton-proton ($pp$),
neutron-neutron ($nn$) or neutron-proton ($np$). We note that
due to symmetry the asymptotic functions are invariant under same particle
permutations.
The sum over $\alpha$ denotes a sum over
the four quantum numbers $\left(s_2,\ell_2,j_2,m_2\right)$.
\begin{equation}
A_{ij}^\alpha=\sum_{J_{A-2},M_{A-2}}\langle j_2m_2J_{A-2}M_{A-2}|JM\rangle A_{ij}^{\{s_2,\ell_2,j_2\}J_{A-2},M_{A-2}}\;.
\end{equation}
Here, $J_{A-2}$ and $M_{A-2}$ are the angular momentum
quantum numbers with respect to
$\bs{J}_{A-2}+\bs{L}_{2,CM}$, where $\bs{J}_{A-2}$ is the total angular
momentum of the residual $(A-2)$ particles
and $\bs{L}_{2,CM}$ is the spatial angular momentum with respect to $\bs{R}_{ij}$.
$A_{ij}^{\{s_2,\ell_2,j_2\}J_{A-2},M_{A-2}}$ is a set of functions
with angular momentum quantum numbers $J_{A-2}$ and $M_{A-2}$,
which depends also on the numbers $s_2,\ell_2,j_2$.
\begin{equation}
\varphi_{ij}^{\alpha}\equiv\varphi_{ij}^{(\ell_2s_2)j_2m_2}=
[\varphi_{ij}^{\{s_2,j_2\}\ell_2}\otimes\chi_{s_2}]^{j_2m_2}
\;,
\end{equation}
where $\chi_{s_2\mu_s}$ is the two-body spin function, and
$\varphi_{ij}^{\{s_2,j_2\}\ell_2\mu_\ell}(\bs{r}_{ij})
={\phi}_{ij}^{\{\ell_2,s_2,j_2\}} (r_{ij}) Y_{\ell_2 \mu_\ell}(\hat{r}_{ij})$.
For clarity, when angular momentum indices are written without any
brackets they denote the relevant angular momentum quantum numbers of the
function.
When the indices are in curly brackets, it means that the function depends
on these numbers but they do not denote the angular momentum of the function.
When two indices are inside round brackets, it means that the angular momentum
of the function is created by a coupling of these two indices.
The only assumption we make regarding the set of functions
$\{\varphi_{ij}^\alpha\}$
is that they do not depend on the specific nucleus or its total angular momentum
$J$ and $M$.
This is a reasonable assumption, because when two particles are very close they
interact with
each other regardless of the background of the $A-2$ particle system. In this way,
we no longer use the $s$-wave or the zero-range assumptions.
Since the $A_{ij}^\alpha$ functions are not generally orthogonal for different $\alpha$, we are led
to define matrices of nuclear contacts in the following way:
\begin{equation}\label{JM_contacts_def}
C_{ij}^{\alpha \beta}(JM)=16{\pi}^2N_{ij}\langle A_{ij}^\alpha | A_{ij}^\beta \rangle.
\end{equation}
As before, $ij$ stands for one of the pairs: $pp$, $nn$ or $np$,
$N_{ij}$ is the number of $ij$ pairs, and $\alpha$ and $\beta$ are the matrix
indices.
We also denote
$\alpha=(s_\alpha,\ell_\alpha,j_\alpha,m_\alpha)$
and $\beta=(s_\beta,\ell_\beta,j_\beta,m_\beta)$. One can see that if
$m_\alpha\neq m_\beta$, then $C_{ij}^{\alpha \beta}(JM)=0$, but it is not generally true
for $j_2$, $s_2$ or $\ell_2$. For spherical nuclei $(J=0)$ we do get
$C_{ij}^{\alpha \beta}(JM)=0$ if $j_\alpha \neq j_\beta$. For $pp$ and $nn$ pairs,
Pauli's exclusion principle tells us that unless $s_\alpha+\ell_\alpha$ is even,
we have $A_{pp}^\alpha=A_{nn}^\alpha=0$, so $C_{pp}^{\alpha\beta}=C_{nn}^{\alpha\beta}=0$
if $s_\alpha+\ell_\alpha$ or $s_\beta+\ell_\beta$ are odd.
Moreover, if $\Psi$ is the ground state of the nucleus, or any eigenstate of the nuclear Hamiltonian,
then $\Psi$ has a defined parity. $\varphi_{ij}^\alpha$ has a parity of $(-1)^{\ell_\alpha}$, so
it dictates the parity of $A_{ij}^\alpha$. Thus, $C_{ij}^{\alpha\beta}(JM)=0$ for $\alpha$
and $\beta$ such that $\ell_\alpha$ and $\ell_\beta$ have different parities.
Since the projection $M$ is usually unknown in experiments, it is useful
to define the averaged nuclear contacts:
\begin{equation}\label{ave_contacts_def}
C_{ij}^{\alpha \beta}=\frac{1}{2J+1}\sum_M C_{ij}^{\alpha\beta}(JM).
\end{equation}
According to this definition, we have three matrices of averaged contacts,
one for each kind of nucleon-nucleon pair.
We note that the averaged contacts still depend on
$J$, but we will not write it explicitly.
Using Clebsch Gordan identities
one can prove that if $m_\alpha\neq m_\beta$
or $j_\alpha \neq j_\beta$, then $C_{ij}^{\alpha \beta}=0$, and
also that the averaged contacts are independent of $m_\alpha$ and $m_\beta$.
The averaged contacts inherit the properties of the non-averaged contacts
$C_{ij}^{\alpha\beta}(JM)$
regarding parity and Pauli's principle.
Concluding, for a given $\alpha$, the relevant $\beta$'s such that
$C_{ij}^{\alpha\beta}$
can be different from zero must obey $j_\beta=j_\alpha$ and $m_\beta=m_\alpha$.
Since $s_2=0,1$, there are four $(s_2,\ell_2)$ pairs that can create a given $j_\alpha\neq 0$:
$(0,j_\alpha)$, $(1,j_\alpha)$, $(1,j_\alpha-1)$ and $(1,j_\alpha+1)$. The first two
options have the same parity of $\ell_2$ and the last two have the opposite
parity. For $j_\alpha=0$ we have only two possible $(s_2,\ell_2)$ pairs:
$(0,0)$ and $(1,1)$, which have different parity of $\ell_2$.
Thus, in general the matrices $C_{ij}^{\alpha\beta}$ are built from $2\times2$ blocks,
except for the two $1\times 1$ blocks associated with the $j_2=0$ case.
Each block has a well defined $j_2,m_2$ values. For any $j_2\neq 0$
there are two blocks, one with $(s_2,\ell_2)=(0,j_2),(1,j_2)$
and the other with $(s_2,\ell_2)=(1,j_2-1),(1,j_2+1)$.
For $pp$ and $nn$ pairs, Pauli's principle dictates
that any matrix element with an odd
$s_2+\ell_2$ is zero, so some of the $2 \times 2$ blocks
are reduced into two $1 \times 1$ blocks.
In a previous paper \cite{WeiBazBar14}
we have defined the $s$-wave nuclear contacts, $C_{ij}^{s_2}(JM)$, for $s_2=0,1$.
The definition there was slightly different from the current one, as the two-body
spin functions were included into the regular $(A-2)$ particle function
$A^{\alpha}_{ij}$. In our current definition,
we have four diagonal $s$-wave contacts $C_{ij}^{\alpha_{00}\alpha_{00}}$
and
$C_{ij}^{\alpha_{1\mu}\alpha_{1\mu}}$, where
$\alpha_{00}=(s_2=0,\ell_2=0,j_2=0,m_2=0)$,
$\alpha_{1\mu}=(s_2=1,\ell_2=0,j_2=1,m_2=\mu)$, and $\mu=-1,0,1$.
The relations between the two definitions are
\begin{equation}
C_{ij}^{s_2=0}(JM)=C_{ij}^{\alpha_{00}\alpha_{00}}(JM)
\end{equation}
\begin{equation}
C_{ij}^{s_2=1}(JM)=\sum_{\mu=-1}^{1}C_{ij}^{\alpha_{1\mu}\alpha_{1\mu}}(JM)
\end{equation}
Averaging over $M$ and using the fact
that the averaged contacts are independent of $m_2$
we get
\begin{equation} \label{rel_singlet}
C_{ij}^{s_2=0}=C_{ij}^{\alpha_{00}\alpha_{00}}
\end{equation}
\begin{equation} \label{rel_triplet}
C_{ij}^{s_2=1}=\sum_{\mu=-1}^{1}C_{ij}^{\alpha_{1\mu}\alpha_{1\mu}}=3C_{ij}^{\alpha_{1\mu}\alpha_{1\mu}}
\end{equation}
We also note that the previously defined
$s$-wave contacts, $C_{ij}^{s_2}(JM)$, are actually independent of $M$.
Thus, also $C_{ij}^{\alpha_{00}\alpha_{00}}(JM)$
and $\sum_{\mu=-1}^{1}C_{ij}^{\alpha_{1\mu}\alpha_{1\mu}}(JM)$
are independent of $M$.
\section{Momentum distributions}
\subsection{The two-nucleon momentum distribution}
In the following we will utilize the above generalized contact formalism
to find a relation between the two-nucleon momentum distribution
and the nuclear contacts.
Let's denote by $f_{ij}^{JM}(\bs{k}+\bs{K}/2, -\bs{k}+\bs{K}/2)$ the
probability density to find a pair of nucleons, $ij\in\{pp,nn,pn\}$, with
any particle of type $i$
with momentum $\bs{k}+\bs{K}/2$ and any particle of type $j$
with momentum $-\bs{k}+\bs{K}/2$. $J$ and $M$
are the angular momentum quantum numbers of the nuclear wave function
$\Psi$.
Working in the momentum space
\begin{equation}
\tilde{\Psi}(\bs{k}_1,...,\bs{k}_A)=\int \prod_{n=1}^A d^3r_n \Psi e^{\sum_{n}i\bs{k}_n\cdot\bs{r}_n},
\end{equation}
we can write
\begin{align}\label{fij}
& f_{ij}^{JM}(\bs{k}+\bs{K}/2, -\bs{k}+\bs{K}/2)=
N_{ij} \int \prod_{m\neq i,j} \frac{d^3k_m}{(2\pi)^3} \nonumber \\
& \times
\left| \tilde{\Psi}(\bs{k}_1,...,\bs{k}_i=\bs{k}+\bs{K}/2,...
,\bs{k}_j=-\bs{k}+\bs{K}/2,...,\bs{k}_A)\right| ^2
\end{align}
where $A$ is the number of nucleons, $N_{ij}$ is the number of $ij$ pairs, and we notice that $f_{ij}^{JM}$
is normalized in such a way that
$\int f_{ij}^{JM} \frac{d^3k}{(2\pi)^3} \frac{d^3K}{(2\pi)^3}=N_{ij}$.
In the limit
$k\rightarrow \infty$ the main contribution to $f_{ij}^{JM}$
comes from the asymptotic $\bs{r}_{ij}\rightarrow 0$ part of the wave function,
given in Eq. (\ref{full_asymp}). All other terms will cancel each other
due to the
fast oscillating $\exp(i\bs{k}\cdot \bs{r}_{ij})$ factor.
Substituting $\tilde{\Psi}$ into Eq. (\ref{fij}), and using Eq. (\ref{full_asymp})
we get
\begin{align}\label{fij_asymp}
& f_{ij}^{JM}(\bs{k}+\bs{K}/2, -\bs{k}+\bs{K}/2)=
N_{ij} \int \prod_{m\neq i,j} \frac{d^3k_m}{(2\pi)^3} \nonumber \\
& \times
| \int \prod_{n\neq i,j}^A d^3r_nd^3r_{ij}d^3R_{ij} \sum_\alpha \varphi_{ij}^\alpha(\bs{r}_{ij})A_{ij}^\alpha
\nonumber \\
& \times
\exp({i\bs{k} \cdot \bs{r}_{ij}}+{i\bs{K}\cdot \bs{R}_{ij}}+{\sum_{n\neq i,j}i\bs{k}_n\cdot\bs{r}_n})
| ^2.
\end{align}
We will define now
\begin{equation}
F_{ij}^{JM}(\bs{k})=\int \frac{d^3K}{(2\pi)^3} f_{ij}^{JM}(\bs{k}+\bs{K}/2, -\bs{k}+\bs{K}/2).
\end{equation}
$F_{ij}^{JM}$ is the probability density to find an $ij$ pair with relative momentum $\bs{k}$, and
it obeys the normalization condition $\int F_{ij}^{JM}(\bs{k})\frac{d^3k}{(2\pi)^3}=N_{ij}$.
We can now substitute the asymptotic form of $f_{ij}^{JM}$, Eq. (\ref{fij_asymp}), into the definition
of $F_{ij}^{JM}$. In the resulting expression we can separate the integration over
$\bs{r}_{ij}$ from the rest of the coordinates.
Using the notation
\begin{equation}
\tilde{\varphi}_{ij}^\alpha(\bs{k})=\int d^3r \varphi_{ij}^\alpha(\bs{r})\exp(i\bs{k}\cdot\bs{r})
\end{equation}
and
\begin{equation}
\tilde{A}_{ij}^\alpha=\int \prod_{n \neq i,j}d^3r_nd^3R_{ij}A_{ij}^\alpha
\exp(i\bs{K}\cdot \bs{R}_{ij}+\sum_{n \neq i,j} i\bs{k}_n\cdot \bs{r}_n)\;,
\end{equation}
we get
\begin{align}
&F_{ij}^{JM}(\bs{k})= \nonumber\\
&
N_{ij}\sum_{\alpha,\beta}\tilde{\varphi}_{ij}^{\alpha\dagger} (\bs{k}) \tilde{\varphi}_{ij}^\beta (\bs{k})
\int \prod_{m \neq i,j} \frac{d^3k_m}{(2\pi)^3}\frac{d^3K}{(2\pi)^3}
\tilde{A}_{ij}^{\alpha\dagger}\tilde{A}_{ij}^\beta.
\end{align}
Noting the equality
\begin{equation}
\int \prod_{m \neq i,j} \frac{d^3k_m}{(2\pi)^3}\frac{d^3K}{(2\pi)^3}
\tilde{A}_{ij}^{\alpha\dagger}\tilde{A}_{ij}^\beta
=\int \prod_{n \neq i,j}d^3r_n d^3R_{ij}A_{ij}^{\alpha\dagger}A_{ij}^\beta \;,
\end{equation}
we obtain the following asymptotic $k\rightarrow \infty$ expression for the two-nucleon
momentum distribution,
\begin{equation}
F_{ij}^{JM}(\bs{k})=\sum_{\alpha,\beta}
\tilde{\varphi}_{ij}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{ij}^\beta (\bs{k})
\frac{C_{ij}^{\alpha\beta}(JM)}{16\pi^2}.
\end{equation}
Here we have used the definition of the
contacts from Eq. (\ref{JM_contacts_def}). Averaging over $M$, we get
the asymptotic relation
\begin{equation} \label{2nuc}
F_{ij}(\bs{k})=\sum_{\alpha,\beta}
\tilde{\varphi}_{ij}^{\alpha\dagger}(\bs{k})\tilde{\varphi}_{ij}^\beta(\bs{k})
\frac{C_{ij}^{\alpha\beta}}{16\pi^2},
\end{equation}
where $F_{ij}=(2J+1)^{-1}\sum_M F_{ij}^{JM}$, and
$C_{ij}^{\alpha\beta}$ are the averaged contacts defined in Eq. (\ref{ave_contacts_def}).
Like $C_{ij}^{\alpha\beta}$, also $F_{ij}$ depends implicitly on $J$.
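As a purely numerical illustration of Eq. (\ref{2nuc}) (the pair functions
and the contact values below are placeholder assumptions of ours, not
results of this work), the asymptotic two-nucleon momentum distribution can
be evaluated once the functions $\tilde{\varphi}_{ij}^\alpha(\bs{k})$ and
the matrix $C_{ij}^{\alpha\beta}$ are supplied:
\begin{verbatim}
import numpy as np

# Placeholder asymptotic pair functions in momentum space (assumed shapes):
def phi_s(k):           # a zero-range-like s-wave channel
    return 1.0 / k**2

def phi_p(k):           # a faster-decaying toy channel
    return 1.0 / k**3

phis = [phi_s, phi_p]

# Placeholder Hermitian contact matrix C[alpha, beta] (arbitrary units).
C = np.array([[2.0, 0.1],
              [0.1, 0.3]])

def F_pair(k, phis, C):
    """Asymptotic F_ij(k) of Eq. (2nuc) for the supplied channels."""
    phi_vals = np.array([phi(k) for phi in phis])
    return np.real(np.conj(phi_vals) @ C @ phi_vals) / (16 * np.pi**2)

for k in (4.0, 4.5, 5.0):        # fm^-1, the range studied below
    print(k, F_pair(k, phis, C))
\end{verbatim}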
\subsection{The one-nucleon momentum distribution}
We would like now to connect the nuclear contacts also to the
one-nucleon momentum distributions. The following derivation is
based on Tan's derivation for the two-body case in atomic systems \cite{Tan08}.
We will first focus on the proton's momentum distribution $n_p^{JM}(\bs{k})$.
Normalized to the number of protons in the system $Z$,
$\int\frac{d^3k}{(2\pi)^3}n_p^{JM}(\bs{k})=Z$,
$n_p^{JM}$ is given by
\begin{align}\label{n_p_jm}
&n_p^{JM}(\bs{k})=Z \int \prod_{l \neq p} \frac{d^3k_l}{(2\pi)^3}
\left| \tilde{\Psi}(\bs{k}_{1},...,\bs{k}_{p}=\bs{k},...,\bs{k}_{A})\right|^2,
\end{align}
where $p$ is any proton.
In the $k\longrightarrow \infty$ limit the main contribution to $n_p^{JM}$
emerges from the asymptotic parts of the wave function, i.e.
from $r_{p s}=|\bs{r}_{p}-\bs{r}_s|\rightarrow 0$, for any particle $s \neq p$, being proton or neutron.
In this limit
\begin{align}
\tilde{\Psi}(\bs{k}_{1},...,&\bs{k}_{p}=\bs{k},...,\bs{k}_{A})=
\sum_{s \neq p} \sum_\alpha\tilde{\varphi}_{p s}^\alpha\left((\bs{k}-\bs{k_s})/2\right)
\nonumber \\ & \times
\tilde{A}_{p s}^\alpha\left(\bs{K}_{p s}=\bs{k}+\bs{k}_s,\{\bs{k}_j\}_{j \neq p,s}\right),
\end{align}
where $\bs{K}_{p s}$ is the center of mass momentum of the $p s$ pair.
Substituting into $n_p^{JM}(\bs{k})$, we see that
since $A_{p s}^\alpha$ is regular, $\tilde{A}_{p s}^\alpha$
will be significant only if $\left| \bs{k}+\bs{k}_s \right| \ll k$.
It means that $\bs{k}_s\approx -\bs{k}$ so $\bs{k}-\bs{k}_s\approx 2\bs{k}$.
Substituting $\tilde\Psi^\dagger\tilde\Psi$ into Eq. (\ref{n_p_jm}), we get summations over
$s,s' \neq p$.
The contribution of the $s, s'$ element, for $s \neq s'$,
will be significant only for $\bs{k}_s\approx \bs{k}_{s'} \approx -\bs{k}$.
In this case $k,k_s,k_{s'}\rightarrow \infty$ together, which
is clearly a three-body effect and we expect it to be less important \cite{BraKanPla11}.
Therefore we are left only with the diagonal elements and get
\begin{align}
n_p^{JM}(\bs{k})&= Z \sum_{s \neq p} \sum_{\alpha,\beta}
\int \prod_{l \neq p, s} \frac{d^3k_l}{(2\pi)^3}\frac{d^3K_{p s}}{(2\pi)^3}
\tilde{\varphi}_{p s}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{p s}^\beta(\bs{k}) \nonumber \\
& \times
\tilde{A}_{p s}^{\alpha\dagger}(\bs{K}_{p s},\{\bs{k}_j\}_{j \neq p,s})
\tilde{A}_{p s}^\beta(\bs{K}_{p s},\{\bs{k}_j\}_{j \neq p,s}).
\end{align}
We will now divide the sum
$\sum_{s \neq p}$ into a sum over protons and a sum
over neutrons $\sum_{p' \neq p}+\sum_n$. Since the asymptotic
functions $A_{pp'}^\alpha$ and $\varphi_{pp'}^\alpha$ are the same for all $pp'$
pairs we can take them out of the sum.
The same holds for the $np$ pairs. As a result we get
\begin{align}
n_p^{JM}(\bs{k})&=\sum_{\alpha,\beta} \tilde{\varphi}_{pp}^{\alpha\dagger}(\bs{k})
\tilde{\varphi}_{pp}^\beta(\bs{k})
Z(Z-1) \langle A_{pp}^\alpha | A_{pp}^\beta \rangle \nonumber \\
& +
\sum_{\alpha,\beta} \tilde{\varphi}_{pn}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pn}^\beta(\bs{k})
NZ \langle A_{pn}^\alpha | A_{pn}^\beta \rangle.
\end{align}
Here $N$ is the number of neutrons in the system.
Using the definition of the contacts, Eq. (\ref{JM_contacts_def}),
we see that for $k\rightarrow \infty$
\begin{align}
n_p^{JM}(\bs{k})&=\sum_{\alpha,\beta}
\tilde{\varphi}_{pp}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pp}^\beta(\bs{k})
\frac{2C_{pp}^{\alpha\beta}(JM)}{16\pi^2}\nonumber \\
&
+\sum_{\alpha,\beta}
\tilde{\varphi}_{pn}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pn}^\beta(\bs{k})
\frac{C_{pn}^{\alpha\beta}(JM)}{16\pi^2}.
\end{align}
Averaging over $M$ we further obtain the relation
between the averaged contacts and the averaged protons' momentum
distribution $n_p(\bs{k})=(2J+1)^{-1}\sum_M n_p^{JM}(\bs{k})$ for $k\rightarrow\infty$:
\begin{align} \label{1p}
n_p(\bs{k})&=\sum_{\alpha,\beta}
\tilde{\varphi}_{pp}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pp}^\beta(\bs{k})
\frac{2C_{pp}^{\alpha\beta}}{16\pi^2}
\nonumber \\
&
+\sum_{\alpha,\beta}
\tilde{\varphi}_{pn}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pn}^\beta(\bs{k})
\frac{C_{pn}^{\alpha\beta}}{16\pi^2}.
\end{align}
We note that $n_p(\bs{k})$ still depends on $J$.
Similarly, for the neutrons:
\begin{align} \label{1n}
n_n(\bs{k})&=\sum_{\alpha,\beta} \tilde{\varphi}_{nn}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{nn}^\beta(\bs{k}) \frac{2C_{nn}^{\alpha\beta}}{16\pi^2} \nonumber \\
&
+\sum_{\alpha,\beta} \tilde{\varphi}_{pn}^{\alpha\dagger}(\bs{k}) \tilde{\varphi}_{pn}^\beta(\bs{k})\frac{C_{pn}^{\alpha\beta}}{16\pi^2}.
\end{align}
Comparing Eqs. (\ref{1p}) and (\ref{1n}) to Eq. (\ref{2nuc}),
we can see that for $k\longrightarrow \infty$ there is a simple
relation between the one-nucleon and
the two-nucleon momentum distributions:
\begin{equation} \label{1pto2}
n_p(\bs{k})=2F_{pp}(\bs{k})+F_{pn}(\bs{k})
\end{equation}
\begin{equation} \label{1nto2}
n_n(\bs{k})=2F_{nn}(\bs{k})+F_{pn}(\bs{k}).
\end{equation}
These connections seem intuitive if we assume
that a nucleon will have high momentum $\bs{k}$ only if
there is another nucleon close to it with opposite momentum
$-\bs{k}$. In this case, if we find a proton with high momentum
$\bs{k}$ we know that we will find close to it a neutron or
a proton, that is a correlated $pp$ or $np$
pair with relative momentum $\bs{k}$. Notice that the
factor of $2$ before $F_{pp}$ and $F_{nn}$ in Eqs.
(\ref{1pto2}), (\ref{1nto2}), can be also explained
in this picture by the fact that
for example a $pp$ pair with momenta $(-\bs{k},\bs{k})$ has
a relative momentum $-\bs{k}$ even though there is a proton
with momentum $\bs{k}$ in this pair. It means that such a pair
will be counted for $n_p(\bs{k})$ but not for $F_{pp}(\bs{k})$ and the
factor of 2 takes it into consideration.
These relations emphasize
the importance of the two-body correlations to the high momentum
one-nucleon distribution. As mentioned before,
the picture of short-range correlated pairs
of nucleons with back-to-back momentum is one of the main features
of SRCs in nuclei, and the above relations between
the one-nucleon and two-nucleon momentum distributions give a
theoretical support to this picture.
We also note here that similar derivations can be done
easily for atomic systems consisting of two-component fermions,
denoted by $\uparrow$ and $\downarrow$.
The one-body high momentum distribution is already known
and given by $n_{\uparrow}(\bs{k})=n_{\downarrow}(\bs{k})=C/k^4$.
Adjusting the above derivation for the two-nucleon momentum
distribution to atomic systems will produce an identical relation between
the two-body momentum distribution,
$F_{\uparrow\downarrow}(\bs{k})$, describing the
probability to find an $\uparrow\downarrow$
pair with high relative momentum, and the atomic contact. Explicitly,
$F_{\uparrow\downarrow}(\bs{k})=C/k^4$. As a result we find that
$n_\uparrow(\bs{k})=n_\downarrow(\bs{k})=F_{\uparrow\downarrow}(\bs{k})$
for high momentum $\bs{k}$.
This relation tells us that also in the ultracold atomic systems
the correlated $\uparrow\downarrow$ pairs have
back-to-back momentum, like in nuclear systems.
\section{Analysis of numerical data}
\subsection{Momentum distributions}
In order to check the validity of our results in actual nuclear systems,
we turn now to compare our theoretical predictions to
available numerical data.
To this end, we will use numerical data of one-nucleon and
two-nucleon momentum distributions calculated by Wiringa et al. \cite{WirSchPie14},
using the Variational Monte Carlo method (VMC), for nuclei with $A\le 12$.
In these VMC results, the calculations of both the one-nucleon and two-nucleon momentum
distributions were done for nuclei in their ground state.
Consequently, the following analysis is limited to the nuclear ground
state.
\begin{figure}
\includegraphics[width=8.6 cm]{ratio_2nuc_over_1nuc_protons.pdf}
\caption{\label{2VS1_protons} (Color online)
The ratio $(2F_{pp}+F_{pn})/n_p$ for different nuclei.
The numerical data is taken from Ref. \cite{WirSchPie14}. Red
line - $^4$He, green line - $^6$He,
cyan line - $^8$He, black line - $^6$Li,
blue line - $^8$Be, and pink line - $^{10}$B.
The dashed red line is the reference $y=1$.}
\end{figure}
\begin{figure}
\includegraphics[width=8.6 cm]{ratio_2nuc_over_1nuc_neutrons.pdf}
\caption{\label{2VS1_neutrons} (Color online)
The ratio $(2F_{nn}+F_{pn})/n_n$ for the non-symmetric nuclei
in the numerical data of Ref. \cite{WirSchPie14}. Blue
line - $^6$He, and green line - $^8$He.
The dashed red line is the reference $y=1$.}
\end{figure}
First we check the relation between the one-nucleon and the two-nucleon
momentum distributions, Eqs. (\ref{1pto2}), (\ref{1nto2}).
In Fig. \ref{2VS1_protons} the ratio between $2F_{pp}+F_{pn}$ and $n_p$ is
presented for various nuclei.
We can see that for $k\longrightarrow\infty$ the two quantities
coincide and our prediction (\ref{1pto2})
is indeed satisfied.
In Fig. \ref{2VS1_neutrons} we present the ratio
between $2F_{nn}+F_{pn}$ and $n_n$. We show
only the results for non-symmetric nuclei, because for
symmetric nuclei
there is no difference between protons
and neutrons in the numerical VMC data.
We can see that also here, the ratio $(2F_{nn}+F_{pn})/n_n \longrightarrow 1$
as $k\longrightarrow\infty$ and our prediction (\ref{1nto2})
is satisfied.
This result is obtained for all available nuclei: $^4$He, $^6$He, $^8$He,
$^6$Li, $^8$Be, and $^{10}$B, for both protons and neutrons.
For all these nuclei the momentum relations hold for
$4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$.
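A minimal sketch of this check (ours; the file name and column layout are
assumptions about how the tabulated VMC distributions might be stored,
interpolated onto a common momentum grid) is:
\begin{verbatim}
import numpy as np

# Assumed columns: k [fm^-1], n_p(k), F_pp(k), F_pn(k)
k, n_p, F_pp, F_pn = np.loadtxt("momentum_distributions_4He.txt").T

ratio = (2 * F_pp + F_pn) / n_p       # should tend to 1 at large k
window = (k > 4.0) & (k < 5.0)
print("mean ratio for 4 < k < 5 fm^-1:", ratio[window].mean())
\end{verbatim}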
The correspondence between our predictions, derived using the
contact formalism, and the numerical data is a good indication for the
relevance of the contact formalism to nuclear systems. We also learn here
that the approximations made in the above theoretical
derivations for $k\longrightarrow \infty$ are valid for
$4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$. This is the first indication for the
momentum range which is relevant to the contact formalism in nuclear systems.
Moreover, as we mentioned before, in current studies of
SRCs in nuclei this momentum range of $k>4\:\mathrm{fm^{-1}}$
is believed to be affected by three-body correlations.
As explained, Eqs. (\ref{1pto2}) and (\ref{1nto2}) are
supposed to be satisfied when the two-body correlations
are the only significant correlations and every high momentum
nucleon has a single nucleon near it with back-to-back momentum.
It means that according to this numerical data the momentum range of
$4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$ is affected almost exclusively
by two-body SRCs while three-body SRCs are negligible, and that in this
momentum range the picture of back-to-back short-range correlated
pairs is accurate.
We note that this momentum range of $4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$
might be model dependent, and it should be
verified using other numerical methods, and different nuclear potentials.
It should also be mentioned
that the VMC method utilizes
two and three-body Jastrow correlations in
the nuclear wave function.
Hen et al. \cite{HenArx14} also discuss the possibility that the contact
formalism is relevant in nuclear physics. In their work,
they present an experimental measurement
of a $k^{-4}$ behavior in the proton momentum
distribution in the deuteron for $1.6\:\mathrm{fm^{-1}}< k<3.2\:\mathrm{fm^{-1}}$.
They also claim that the $k^{-4}$ behavior exists in heavier nuclei
in the same momentum range.
As mentioned before, one of the results of the contact formalism in atomic systems
is the $k^{-4}$ tail in the momentum distribution, but this
behavior is a direct consequence of the zero-range model.
In nuclear systems this model is not accurate, so
we can only expect a high momentum tail universal to all nuclei,
but not a $k^{-4}$ behavior. We also note that in the numerical
VMC data there is no $k^{-4}$ tail for nuclei
heavier than the deuteron. Moreover,
we have found here that the relevant momentum range for the
contact formalism in nuclei is
$4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$,
which is higher than the momentum range discussed by Hen et al.
\subsection{The $pp$ and $nn$ contacts along the nuclear chart}
We continue now by examining the
ratio between $F_{pp}(\bs{k})$ and $F_{nn}(\bs{k})$ in
the same nuclei. In the VMC results, this ratio equals 1
for all $k$ for symmetric nuclei $(N=Z)$. Therefore, we are
left with the available non-symmetric nuclei $^6$He and
$^8$He. The relevant results are shown in Fig. \ref{F_pp_F_nn}.
\begin{figure}
\includegraphics[width=8.6 cm]{F_ppX_over_F_nnX.pdf}
\caption{\label{F_pp_F_nn} (Color online)
The ratio between $F_{pp}$ and $F_{nn}$ in the same nuclei
for the available non-symmetric nuclei in \cite{WirSchPie14}.
Blue line - $^6$He, and red line - $^8$He.
The dashed blue and red lines indicate the
value of $Z/N$ in $^6$He, and $^8$He, respectively.}
\end{figure}
We can see that for $4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$ the ratio is
approximately constant. Inspecting Eq. (\ref{2nuc}), we see that
the only way for this ratio to be constant is that {\it (i)} only
pairs in $\alpha,\beta$ states with the same $k$-dependence of
$\tilde{\varphi}_{ij}^{\alpha\dagger}\tilde{\varphi}_{ij}^{\beta}$
contribute significantly to
both $F_{pp}$ and $F_{nn}$, and {\it(ii)}
both $pp$ and $nn$ pairs have the same $k$-dependence.
It is reasonable to
assume that the $s$-wave
contacts are the most significant contacts.
For $pp$ and $nn$ pairs the only possible non-zero $s$-wave contact
is $C_{ij}^{\alpha_{00}\alpha_{00}}$ where $\alpha_{00}\equiv(s_2=0,\ell_2=0,j_2=0,m_2=0)$.
This point can be verified numerically through analysis of the angular dependence of the
momentum distributions. If the $s$-wave contact is indeed dominant
we expect to see no angular-dependence. If we further assume that
$\tilde{\varphi}_{pp}^{\alpha_{00}\dagger}\tilde{\varphi}_{pp}^{\alpha_{00}}
=\tilde{\varphi}_{nn}^{\alpha_{00}\dagger}\tilde{\varphi}_{nn}^{\alpha_{00}}$,
which seems reasonable from isospin symmetry,
then the ratio between $F_{pp}$ and $F_{nn}$ for large momentum
equals to the ratio between $C_{pp}^{\alpha_{00}\alpha_{00}}$
and $C_{nn}^{\alpha_{00}\alpha_{00}}$.
We can also see in Fig. \ref{F_pp_F_nn}, that for the two relevant nuclei the
ratio $F_{pp}/F_{nn}$ is close to the ratio $Z/N$ between the number of
protons and neutrons in the nucleus.
If this relation turns out to be true in general along the nuclear chart,
it means that for a nucleus $X$
in its ground state, the most significant
$pp$ and $nn$ contacts are
$C_{pp}^{\alpha_{00}\alpha_{00}}$ and $C_{nn}^{\alpha_{00}\alpha_{00}}$
and their ratio is given by
\begin{equation} \label{cpp2cnn}
\frac{C_{pp}^{\alpha_{00}\alpha_{00}}(X)}{C_{nn}^{\alpha_{00}\alpha_{00}}(X)}
\approx\frac{Z(X)}{N(X)},
\end{equation}
and
\begin{equation}
\varphi^{\alpha_{00}\alpha_{00}}_{pp}(r)=\varphi^{\alpha_{00}\alpha_{00}}_{nn}(r).
\end{equation}
Here $Z(X)$ ($N(X)$) is the number of protons (neutrons) in the nucleus $X$.
This result is surprising because one might think that the ratio (\ref{cpp2cnn})
should scale as the ratio between the total number of $pp$
pairs and the number of $nn$ pairs in the nucleus, i.e. $Z^2/N^2$. The above result
tells us that the number of correlated $pp$ and $nn$ pairs in nuclei goes
like $Z$ and $N$, respectively.
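The plateau value in Fig. \ref{F_pp_F_nn} can be extracted with a few lines
of the same kind (again, the data file and its column layout are assumed):
\begin{verbatim}
import numpy as np

Z, N = 2, 4                                          # e.g. 6He
k, F_pp, F_nn = np.loadtxt("F_pp_F_nn_6He.txt").T    # assumed columns

window = (k > 4.0) & (k < 5.0)
plateau = (F_pp[window] / F_nn[window]).mean()
print("F_pp/F_nn plateau:", plateau, "   Z/N:", Z / N)
\end{verbatim}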
If we check the ratio between $F_{pp}$ or $F_{nn}$ and $F_{pn}$,
no plateau is observed.
We can also examine the ratio between
$F_{pp}$ of nucleus $X$ and $F_{pp}$ of another nucleus $Y$.
The results are presented in Fig. \ref{F_pp_F_pp}, where all
the available nuclei are compared to $^4$He.
\begin{figure}
\includegraphics[width=8.6 cm]{mana_F_pp_X_F_pp_He4_all_nuclei.pdf}
\caption{\label{F_pp_F_pp} (Color online)
The ratio between $F_{pp}(X)$ and $F_{pp}(^4\mathrm{He})$
for the available nuclei X in the numerical data of Ref. \cite{WirSchPie14}.
Blue line - $^6$He, red line - $^8$He,
green line - $^6$Li, black line - $^8$Be,
and pink line - $^{10}$B.}
\end{figure}
Here again we see flattening
for $4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$. This behavior supports
the claim that only one contact contributes
significantly to $F_{pp}$, and so the value of this ratio is just
the value of the ratio of this $pp$ contact in the two different nuclei
(see Eq. (\ref{2nuc})). The constant behavior also
supports the assumption that the pair wave functions
$\varphi_{pp}^{\alpha\beta}$ are universal along the
nuclear chart, because that way the $k$-dependence indeed vanishes.
The average values of this ratio for $4\:\mathrm{fm^{-1}}\le k \le 5 \:\mathrm{fm^{-1}}$
are presented in table \ref{table_F_ppX_F_pp_4He} and compared to the ratio
between the number of protons in the relevant nuclei and the
number of protons in $^4$He. We can see that the two ratios
are approximately equal for the different nuclei.
If the most significant $pp$ contact is the $s$-wave
contact $C_{pp}^{\alpha_{00}\alpha_{00}}$, then
we can deduce that for nuclei $X$ and $Y$ in their ground state:
\begin{equation}
\frac{C_{pp}^{\alpha_{00}\alpha_{00}}(X)}{C_{pp}^{\alpha_{00}\alpha_{00}}(Y)}\approx\frac{Z(X)}{Z(Y)}.
\end{equation}
For $F_{nn}$ similar results are observed, therefore we can also deduce that
\begin{equation}
\frac{C_{nn}^{\alpha_{00}\alpha_{00}}(X)}{C_{nn}^{\alpha_{00}\alpha_{00}}(Y)}\approx\frac{N(X)}{N(Y)}.
\end{equation}
These relations support the claim that the number of
correlated $pp$ and $nn$ pairs in nuclei is proportional to
$Z$ and $N$, respectively.
\begin{table}
\begin{tabular}{ c c c }
\hline
\hline
$X$ & $\langle F_{pp}(X)/F_{pp}(^4\mathrm{He})\rangle$ & $Z(X)/Z(^4\mathrm{He})$ \\
\hline
$^6$He & $0.94\pm 0.01 (1\sigma)$ & 1 \\
$^8$He & $0.90\pm 0.01 (1\sigma)$ & 1 \\
$^6$Li & $1.09\pm 0.02 (1\sigma)$ & 1.5 \\
$^8$Be & $2.31\pm 0.07 (1\sigma)$ & 2 \\
$^{10}$B & $2.91\pm 0.06 (1\sigma)$ & 2.5 \\
\hline
\hline
\end{tabular}
\caption{\label{table_F_ppX_F_pp_4He}The averaged value of the ratio
between $F_{pp}(X)$ and $F_{pp}(^4\mathrm{He})$ for
$4\:\mathrm{fm^{-1}}\le k \le 5\:\mathrm{fm^{-1}}$ for all the available
nuclei in the numerical data of Ref. \cite{WirSchPie14}.}
\end{table}
\subsection{The $pn$ contacts and the Levinger constant}
So far we have studied the properties of the $pp$ and $nn$
contacts, now we turn to study the $pn$
contacts. The $pn$ contacts might be the most interesting ones
because of the dominance of correlated $pn$ pairs in nuclear SRCs \cite{HenSci14}.
In order to study the properties of the $pn$ contacts we
examine the variation in $F_{pn}$ between different nuclei.
As in the $pp$ and $nn$ cases, also in this case we shall
assume that the $s$-wave is the most
dominant partial wave. For a $pn$
pair in an $s$-wave there are two possible spin configurations,
spin-singlet and spin-triplet.
For the deuteron $^2$H,
only the spin triplet is relevant as it is a $J=1$ state.
In Fig. \ref{F_pn_n_p_2H}
we present the ratio between $F_{pn}(X)$ and $n_p(^2\mathrm{H})$
for the available nuclei in the VMC results.
\begin{figure}
\includegraphics[width=8.6 cm]{F_pn_X_over_n_p_2H_all_nuclei.pdf}
\caption{\label{F_pn_n_p_2H} (Color online)
The ratio between $F_{pn}(X)$ and $n_p(^2\mathrm{H})$
for the available nuclei X in the numerical data of Ref. \cite{WirSchPie14}.
Blue line - $^4$He, red line - $^6$He,
green line - $^8$He, black line - $^6$Li,
pink line - $^{8}$Be, and cyan line - $^{10}$B.}
\end{figure}
Once again, a constant behavior
is seen for $4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$.
As mentioned before, we have three equal spin triplet $s$-wave $np$
contacts, $C_{pn}^{\alpha_{1\mu}\alpha_{1\mu}}$, and one spin-singlet
$s$-wave $np$ contact $C_{pn}^{\alpha_{00}\alpha_{00}}$. Moreover,
$|\tilde{\varphi}_{pn}^{\alpha_{1\mu}}|^2$
is independent of $\mu$.
Consequently, we would expect to see a plateau in the
ratio $F_{pn}(X)/n_p(^2\mathrm{H})$,
if either the asymptotic pair wave functions obey the relation
$|\tilde{\varphi}_{pn}^{\alpha_{1\mu}}|^2=
|\tilde{\varphi}_{pn}^{\alpha_{00}}|^2$
or alternatively if the spin-triplet $s$-wave contacts are dominant.
In the first case we can deduce from the relations between
the contacts and the one-nucleon and two-nucleon momentum
distributions that asymptotically
\begin{align}
\frac{F_{pn}(X)}{n_p(^2\mathrm{H})} &\approx
\frac{3 C_{pn}^{\alpha_{10}\alpha_{10}}(X)
+C_{pn}^{\alpha_{00}\alpha_{00}}(X)}
{3 C_{pn}^{\alpha_{10}\alpha_{10}}(^2\mathrm{H})} \nonumber \\
& =
\frac{C_{pn}^{s_2=0}(X)+C_{pn}^{s_2=1}(X)}{C_{pn}^{s_2=1}(^2\mathrm{H})},
\end{align}
where here we have also used the notation
of Eqs. (\ref{rel_singlet}) and (\ref{rel_triplet}).
In the second case we get
\begin{equation}
\frac{F_{pn}(X)}{n_p(^2H)}
\approx
\frac{C_{pn}^{\alpha_{10}\alpha_{10}}(X)}
{C_{pn}^{\alpha_{10}\alpha_{10}}(^2\mathrm{H})}
=
\frac{C_{pn}^{s_2=1}(X)}{C_{pn}^{s_2=1}(^2H)}.
\end{equation}
In a previous paper \cite{WeiBazBar14}, we have predicted
that the ratio between the sum of the two $s$-wave $np$ contacts
of a nucleus $X$ in its ground state
and the deuteron's $s$-wave $np$ contact is given by
\begin{equation}
\frac{C_{pn}^{s_2=0}(X)+C_{pn}^{s_2=1}(X) }{C_{pn}^{s_2=1}(^2\mathrm{H})}=L\frac{NZ}{A},
\end{equation}
where $L$ is Levinger's constant that
relates, at the high energy hand, the photoabsorption
cross section of a nucleus to the photoabsorption cross section
of the deuteron \cite{Lev51}.
Analysis of the experimental results \cite{TavTer92}
suggests that $L$ is approximately constant
along the nuclear chart, $L\approx5.50\pm 0.21$ \cite{WeiBazBar14}.
In \cite{WeiBazBar14}, we have assumed that the two $s$-wave
states have the same asymptotic pair wave function in small distances,
which corresponds to the first case above.
If we were to assume that only the spin-triplet $np$ $s$-wave
is significant, then our result would have been
\begin{equation}
\frac{C_{pn}^{s_2=1}(X)} {C_{pn}^{s_2=1}(^2\mathrm{H})}=L\frac{NZ}{A}.
\end{equation}
In any of the two cases, we get the relation
\begin{equation}
\frac{F_{pn}(X)}{n_p(^2\mathrm{H})}\approx
L\frac{NZ}{A},
\end{equation}
that should hold in the high momentum range.
For this range of high momentum the ratio between
$F_{pn}$ and $n_p(^2\mathrm{H})$ is the number of
quasideuteron (qd) pairs with high relative momentum
in the nucleus.
In table \ref{table_F_pnX_n_p_2H} we present the
averaged value of this ratio for $4\:\mathrm{fm^{-1}}\le k \le 5\:\mathrm{fm^{-1}}$
and its multiplication by $A/NZ$ for each nucleus $X$,
which should be equal to $L$ according to the above
prediction.
One can see that the values of the multiplied ratio
are close to the above value of $L$ for all the nuclei
and their average value is $5.7\pm 0.7(1\sigma)$.
This value is in a very good agreement
with the above mentioned value of $L$.
Evaluation of Levinger's constant from the number
of qd pairs was done by Benhar et al. \cite{BenFabFan03}.
In their work, they calculate numerically the number of qd
pairs in the nucleus and extract Levinger's constant.
In our evaluation we consider only the qd pairs
with high relative momentum, which corresponds to
small relative distance. Only such qd pairs can be
emitted in the photoabsorption process, and therefore
only they should be considered.
We have compared here two independent
relations between the $np$ contacts and different
properties of nuclei (momentum distribution and
photoabsorption cross section) and obtained a good agreement
between the two. In doing so, we have also obtained here a
reliable estimate of the leading
$s$-wave $np$ contact(s) along the nuclear chart
for nuclei in their ground state
(in units of the deuteron's $s$-wave $np$ contact).
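The estimate of Table \ref{table_F_pnX_n_p_2H} amounts to the following
short calculation (a sketch of ours; the input numbers simply reproduce the
averaged ratios quoted in the table):
\begin{verbatim}
import numpy as np

# <F_pn(X)/n_p(2H)> averaged over 4 <= k <= 5 fm^-1, from the table
nuclei = {            # name: (Z, N, averaged ratio)
    "4He": (2, 2, 6.10),
    "6He": (2, 4, 6.5),
    "8He": (2, 6, 7.82),
    "6Li": (3, 3, 7.63),
    "8Be": (4, 4, 13.25),
    "10B": (5, 5, 15.3),
}

L_values = []
for name, (Z, N, ratio) in nuclei.items():
    A = Z + N
    L = ratio * A / (N * Z)
    L_values.append(L)
    print(name, "L =", round(L, 2))

print("mean L =", round(np.mean(L_values), 2),
      "+/-", round(np.std(L_values, ddof=1), 2))
\end{verbatim}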
\begin{table}
\begin{tabular}{c c c }
\hline
\hline
$X$ & $\langle F_{pn}(X)/n_{p}(^2H)\rangle$ & $A/NZ\langle F_{pn}(X)/n_{p}(^2H)\rangle$ \\
\hline
$^4$He & $6.10\pm0.06 (1\sigma)$ &$6.10\pm0.06$ \\
$^6$He & $6.5\pm 0.1 (1\sigma)$ & $4.88\pm 0.08$ \\
$^8$He & $7.82\pm 0.03 (1\sigma)$ & $5.21\pm 0.02$ \\
$^6$Li & $7.63\pm 0.04(1\sigma)$ &$5.09\pm 0.03$ \\
$^8$Be & $13.25\pm0.08(1\sigma)$ &$6.63\pm0.04$ \\
$^{10}$B & $15.3\pm 0.3(1\sigma)$ &$6.1\pm 0.1$ \\
\hline
\hline
\end{tabular}
\caption{\label{table_F_pnX_n_p_2H}The
averaged value of the ratio
between $F_{pn}(X)$ and $n_p(^2\mathrm{H})$ for
$4\:\mathrm{fm^{-1}}\le k \le 5\:\mathrm{fm^{-1}}$ and its
multiplication by $A/NZ$ for all the available
nuclei in the numerical VMC results of \cite{WirSchPie14}.}
\end{table}
\section{Summary}
Summing up,
we have generalized the contact formalism to nuclear systems and defined
a matrix of contacts for each particle pair: $pp$, $nn$ and $pn$. With this
generalization we have taken into consideration both different
partial waves, and finite-range interaction.
We have discussed the simple properties of the nuclear contacts and demonstrated
the use of the generalized formalism by relating the contacts to the one-nucleon
and two-nucleon momentum distributions. As a result we have obtained a relation
between these two momentum distributions, which emphasizes the significant
contribution of SRCs to the high one-nucleon momentum tail.
Using available VMC numerical data \cite{WirSchPie14}, we have verified the above
relation and found further relations
between the different nuclear contacts.
Using a few of these new relations and a previous prediction
connecting the $pn$ contacts to the Levinger constant, we
have calculated Levinger's constant for the available nuclei
and got a good agreement with its experimental value.
This is an important indication
for the relevance of the contact formalism to nuclear systems, and might
open the path to revealing many more interesting relations.
We have also learned from the numerical data that the relevant
momentum range for the contact's approximations in nuclear systems
is $4\:\mathrm{fm^{-1}} < k < 5\:\mathrm{fm^{-1}}$. However, we note that this result might be model
dependent.
The fact that the relations between the one-nucleon
and two-nucleon momentum distribution were satisfied
in this momentum range teaches us that for such momenta
the two-body SRCs, rather than three-body SRCs, are dominant.
Additional numerical or experimental data for both
one-nucleon and two-nucleon momentum distributions in more nuclei,
also in excited states, including angular-dependence
is needed in order to improve our understanding regarding the
properties of the nuclear contacts.
\begin{acknowledgments}
This work was supported by the Pazy foundation.
\end{acknowledgments}
|
2,869,038,156,822 | arxiv | \section{Introduction}\label{Sec1}
Controlling quantum phenomena lies at the heart of quantum technology and quantum
control theory is drawing wide interests from scientists and engineers
\cite{Dong and Petersen 2010IET}-\cite{Brif et al 2010}. In recent years, robust control of quantum systems has been recognized as a key requirement
for practical quantum technology since the existence of uncertainties is unavoidable in the modeling and control process for real quantum
systems \cite{Pravia et al 2003}-\cite{James 2004}. Several methods have been
proposed for robust control design of quantum systems. For example,
James \emph{et al}. \cite{James et al 2007} have formulated and
solved an $H^{\infty}$ controller synthesis problem for a class of
quantum linear stochastic systems in the Heisenberg picture. Dong
and Petersen \cite{Dong and Petersen 2009NJP}-\cite{Dong and Petersen 2011IFAC} have proposed a sliding mode control
approach to deal with Hamiltonian uncertainties in two-level quantum systems. Chen
\emph{et al}. \cite{Chen et al 2012} have proposed a fuzzy estimator based approach
for robust control design in quantum systems.
In this paper, we present a systematic numerical methodology for control
design of quantum systems with Hamiltonian uncertainties.
The proposed method includes two steps: ``training" and ``testing
and evaluation", and we call it sampling-based learning control
(SLC). In the training step, we sample the uncertainties according to
possible distributions of uncertainty parameters and construct an
augmented system using these samples. Then we develop a gradient
flow based learning and optimization algorithm to find the control
with desired performance for the augmented system. In the
process of testing and evaluation, we test a number of
samples of the uncertainties to evaluate the control performance. Numerical
results show that the SLC method is useful for
control design of quantum systems with Hamiltonian uncertainties.
This paper is organized as follows. Section \ref{Sec2} formulates
the control problem. Section \ref{Sec3} presents the
approach of sampling-based learning control and introduces a gradient
flow based learning and optimization algorithm. A result on
control design in three-level quantum systems is
presented in Section \ref{Sec4}. Concluding remarks are
presented in Section \ref{Sec5}.
\section{Model and problem formulation}\label{Sec2}
We focus on finite-dimensional closed quantum systems. For a finite-dimensional closed quantum system, the
evolution of its state $|\psi(t)\rangle$ can be described by the
following Schr\"{o}dinger equation (setting $\hbar=1$):
\begin{equation} \label{systemmodel}
\left\{ \begin{array}{l}
\frac{d}{dt}|{\psi}(t)\rangle=-iH(t)|\psi(t)\rangle \\
t\in [0, T], \ |\psi(0)\rangle=|\psi_{0}\rangle.\\
\end{array}
\right.
\end{equation}
The dynamics of the system are governed by a
time-dependent Hamiltonian of the form
\begin{equation}\label{Hamiltonian}
H(t)=H_{0}+H_{c}(t)=H_{0}+\sum_{m=1}^{M}u_{m}(t)H_{m},
\end{equation}
where $H_{0}$ is the free Hamiltonian of the system,
$H_{c}(t)=\sum_{m=1}^{M}u_{m}(t)H_{m}$ is the time-dependent control
Hamiltonian that represents the interaction of the system with the
external fields $u_{m}(t)$, and the $H_{m}$ are Hermitian operators through which the
controls couple to the system.
The solution of (\ref{systemmodel}) is given
by $\displaystyle |\psi(t)\rangle=U(t)|\psi_{0}\rangle$, where the
propagator $U(t)$ satisfies
\begin{equation}
\left\{ \begin{array}{c}
\frac{d}{dt}U(t)=-iH(t)U(t),\\
t\in [0, T], \ U(0)=\textrm{Id}.\\
\end{array}
\right.
\end{equation}
For an ideal model, there exist no uncertainties in (\ref{Hamiltonian}). However, for a practical quantum system, the existence of uncertainties is unavoidable. In this paper, we consider that the system Hamiltonian has the following form
\begin{equation}
H_{\omega, \theta}(t)=g(\omega(t))H_{0}+\sum_{m=1}^{M}f(\theta(t))u_{m}(t)H_{m}.
\end{equation}
The functions $g(\omega(t))$ and $f(\theta(t))$ characterize possible Hamiltonian uncertainties. We assume that the parameters
$\omega(t)$ and $\theta(t)$ are time-dependent, $\omega(t)\in [-\Omega, \Omega]$ and $\theta(t)\in [-\Theta, \Theta]$. The constants
$\Omega \in [0,1]$ and $\Theta \in [0,1]$ represent the bounds of
the uncertainty parameters. Now the objective is to design the controls
$\{u_{m}(t), m=1,2,\ldots , M\}$ to steer the
quantum system with Hamiltonian uncertainties from an initial state $|\psi_{0}\rangle$ to a target
state $|\psi_{\text{target}}\rangle$ with high fidelity. The control
performance is described by a \emph{performance function} $J(u)$ for
each control strategy $u=\{u_{m}(t), m=1,2,\ldots , M\}$. The
control problem can then be formulated as a maximization problem as
follows:
\begin{equation}\label{ensemble control}
\begin{split}
\displaystyle \max_u \ \ \ & J(u):=\vert
\langle\psi(T)|\psi_{\text{target}}\rangle\vert^{2}\\
\text{s.t.} \ \ \ & \frac{d}{dt}|\psi(t)\rangle=-iH_{\omega,\theta}(t)|\psi(t)\rangle, \ |\psi(0)\rangle=|\psi_{0}\rangle \\
& H_{\omega,\theta}(t)=g(\omega(t))H_{0}+\sum_{m=1}^{M}f(\theta(t))u_{m}(t)H_{m},\\
& \textrm{ with } \omega(t) \in [-\Omega,\Omega], \ \ \theta(t) \in [-\Theta,\Theta],~ t \in [0, T].
\end{split}
\end{equation}
Note that $J(u)$ depends implicitly on the control
$u$ through the Schr\"odinger equation.
\section{Sampling-based learning control of quantum systems}\label{Sec3}
Gradient-based methods \cite{Brif et al 2010}, \cite{Long and Rabitz
2011}, \cite{Roslund and Rabitz 2009}
have been successfully applied to search for optimal solutions to a
variety of quantum control problems, including theoretical and
laboratory applications. In this paper, a gradient-based learning
method is employed to optimize the controls for quantum systems with uncertainties. However, it is impossible to directly calculate
the derivative of $J(u)$ since there exist Hamiltonian uncertainties. Hence
we present a systematic numerical methodology for control design
using some samples obtained through sampling the uncertainties.
These samples are artificial quantum systems whose Hamiltonians are determined
according to the distribution of the uncertainty parameters. Then the
designed control law is applied to additional samples to test and
evaluate the control performance. A similar idea has been used to design robust control pulses for electron shuttling \cite{Zhang et al 2012} and to design a control law for inhomogeneous quantum ensembles \cite{Chen et al 2013arXiv}. In this paper, a systematic sampling-based learning control method is
presented to design control laws for quantum systems with Hamiltonian uncertainties. This method includes two steps of
``training" and ``testing and evaluation".
\subsection{Sampling-based learning control}
\subsubsection{Training step}\label{sec:training}
In the training step, we obtain $N$ samples through sampling uncertainties according to the distribution (e.g., uniform
distribution) of the uncertainty parameters and then construct an
augmented system as follows
\begin{equation}\label{augmented-system}
\frac{d}{dt}\left(
\begin{array}{c}
|{\psi}_{\omega_1,\theta_1}(t)\rangle \\
|{\psi}_{\omega_2,\theta_2}(t)\rangle \\
\vdots \\
|{\psi}_{\omega_N,\theta_N}(t)\rangle \\
\end{array}
\right)
=-i\left(
\begin{array}{c}
H_{\omega_1,\theta_1}(t)|\psi_{\omega_1,\theta_1}(t)\rangle \\
H_{\omega_2,\theta_2}(t)|\psi_{\omega_2,\theta_2}(t)\rangle \\
\vdots \\
H_{\omega_N,\theta_N}(t)|\psi_{\omega_N,\theta_N}(t)\rangle \\
\end{array}
\right),
\end{equation}
where
$H_{\omega_n,\theta_n}=g(\omega_{n})H_{0}+\sum_{m}f(\theta_{n})u_{m}(t)H_{m}$
with $n=1,2,\dots,N$. The performance function for the augmented
system is defined by
\begin{equation}\label{eq:cost}
J_N(u):=\frac{1}{N}\sum_{n=1}^N J_{\omega_n,\theta_n}(u)=\frac{1}{N}\sum_{n=1}^{N}\vert \langle\psi_{\omega_n,\theta_n}(T)|\psi_{\text{target}}\rangle\vert^{2}.
\end{equation} The task in the training step is to find a control
strategy $u^*$
that maximizes the performance function defined in Eq.
\eqref{eq:cost}.
Assume that the performance function is $J_N(u^{0})$ with an initial
control strategy $u^{0}=\{u^{0}_{m}(t)\}$. We can apply the
gradient flow method to approximate an optimal control strategy
$u^{*}=\{u^{*}_{m}(t)\}$. The detailed gradient flow algorithm
will be presented in Subsection \ref{sec2.3}.
As for the issue of choosing $N$ samples,
we generally choose them according to possible distributions of the
uncertain parameters $\omega(t) \in [-\Omega,\Omega]$ and
$\theta(t) \in [-\Theta, \Theta]$. It is clear that the basic
motivation of the proposed sampling-based approach is to design the
control law using only a few samples instead of unknown uncertainties. Therefore, it is
necessary to choose the set of samples that are representative for
these uncertainties.
For example, if the distributions of both $\omega(t)$ and $\theta(t)$
are uniform, we may choose some equally spaced samples in the
$\omega-\theta$ space. Specifically, the intervals $[-\Omega,
\Omega]$ and $[-\Theta, \Theta]$ are divided into
$N_{\Omega}+1$ and $N_{\Theta}+1$ subintervals, respectively,
where $N_{\Omega}$ and $N_{\Theta}$ are usually positive odd
numbers. Then the number of samples $N=N_{\Omega}N_{\Theta}$,
where $\omega_{n}$ and $\theta_{n}$ can be chosen from the
combination of $(\omega_{n_{\Omega}}, \theta_{n_{\Theta}})$ as
follows
\begin{equation}\label{discrete}
\left\{ \begin{array}{c} \omega_{n} \in
\{\omega_{n_{\Omega}}=1-\Omega+\frac{(2n_{\Omega}-1)\Omega}{N_{\Omega}},
\ n_{\Omega}=1,2,\ldots, N_{\Omega}\},\\
\theta_{n} \in
\{\theta_{n_{\Theta}}=1-\Theta+\frac{(2n_{\Theta}-1)\Theta}{N_{\Theta}},\
\
n_{\Theta}=1,2,\ldots, N_{\Theta}\}. \\
\end{array}
\right.
\end{equation}
In practical applications, the numbers of $N_{\Omega}$ and
$N_{\Theta}$ can be chosen by experience or tried through
numerical computation. As long as the augmented system can model
the quantum system with uncertainties and is effective to find the optimal control
strategy, we prefer smaller numbers of $N_{\Omega}$ and
$N_{\Theta}$ to speed up the training process and simplify the
augmented system.
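As a small illustration, the equally spaced sampling of Eq. (\ref{discrete})
can be generated as follows (a sketch of ours; in the numerical example of
Section \ref{Sec4} these values enter directly as the sampled factors
$g(\omega_{n})$ and $f(\theta_{n})$):
\begin{verbatim}
import numpy as np

Omega, Theta = 0.28, 0.28
N_Omega, N_Theta = 7, 7

# equally spaced values following Eq. (discrete) literally
omega = np.array([1 - Omega + (2 * n - 1) * Omega / N_Omega
                  for n in range(1, N_Omega + 1)])
theta = np.array([1 - Theta + (2 * n - 1) * Theta / N_Theta
                  for n in range(1, N_Theta + 1)])

# all N = N_Omega * N_Theta training samples (omega_n, theta_n)
samples = [(w, t) for w in omega for t in theta]
print(len(samples), "training samples")
\end{verbatim}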
\subsubsection{Evaluation step}
In the step of testing and evaluation, we apply the optimized
control $u^{*}$ obtained in the training step to a large number of
samples through randomly sampling the uncertainties
and evaluate for each sample the control performance in terms of
the fidelity $F(|\psi(T)\rangle,|\psi_{\text{target}}\rangle)$ between the final state
$|\psi(T)\rangle$ and the target state
$|\psi_{\text{target}}\rangle$ defined as follows \cite{Nielsen
and Chuang 2000}
\begin{equation}\label{fidelity}
F(|\psi(T)\rangle,|\psi_{\text{target}}\rangle)=|\langle \psi(T)|\psi_{\text{target}}\rangle| .
\end{equation}
If the average fidelity for all the
tested samples is satisfactory, we accept the designed control
law and end the control design process. Otherwise, we should go
back to the training step and generate another optimized control
strategy (e.g., restarting the training step with a new initial
control strategy or a new set of samples).
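A compact sketch of this evaluation step is given below (ours;
\texttt{simulate} stands for any numerical integrator of the
Schr\"{o}dinger equation with the sampled uncertainty values and is an
assumed helper, not code from this work):
\begin{verbatim}
import numpy as np

def evaluate(u_star, simulate, psi_target, Omega, Theta,
             n_test=200, seed=0):
    """Average fidelity of u_star over randomly sampled uncertainties."""
    rng = np.random.default_rng(seed)
    fidelities = []
    for _ in range(n_test):
        omega = rng.uniform(-Omega, Omega)
        theta = rng.uniform(-Theta, Theta)
        psi_T = simulate(u_star, omega, theta)   # final state |psi(T)>
        fidelities.append(abs(np.vdot(psi_T, psi_target)))
    return np.mean(fidelities)
\end{verbatim}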
\subsection{Gradient flow based learning and optimization algorithm}
\label{sec2.3}
To get an optimal control strategy $u^{*}=\{u^{*}_{m}(t), (t \in
[0,T]), m=1,2,\ldots, M\}$ for the augmented system
(\ref{augmented-system}), a good choice is to follow the
direction of the gradient of $J_N(u)$ as an ascent direction. For
ease of notation, we present the method for $M=1$. We introduce a
time-like variable $s$ to characterize different control
strategies $u^{(s)}(t)$. Then a gradient flow in the control space
can be defined as
\begin{equation}\label{gradientflowequation}
\frac{du^{(s)}}{ds} =\nabla J_N(u^{(s)}),
\end{equation}
where $\nabla J_N(u)$ denotes the gradient of $J_N$ with respect
to the control $u$. It is easy to show that if $u^{(s)}$ is the
solution of \eqref{gradientflowequation} starting from an
arbitrary initial condition $u^{(0)}$, then the value of $J_N$ is
increasing along $u^{(s)}$, i.e., $\frac{d}{ds}J_N(u^{(s)})\geq
0$. In other words, starting from an initial guess $u^{0}$, we
solve the following initial value problem
\begin{equation}\label{gradientflowequation2}
\left\{
\begin{split}
& \frac{du^{(s)}}{ds} = \nabla J_N(u^{(s)}) \\
& u^{(0)}=u^{0} \\
\end{split}
\right.
\end{equation}
in order to find a control strategy which maximizes $J_N$. This
initial value problem can be solved numerically by using a forward
Euler method over
the $s$-domain, i.e.,
\begin{equation}\label{iteration1}
u(s+\triangle s, t)=u(s,t)+\triangle s\nabla J_N(u^{(s)}).
\end{equation}
As for practical applications, we present its iterative
approximation version to find the optimal controls $u^*(t)$ in an
iterative learning way, where we use $k$ as an index of iterations
instead of the variable $s$ and denote the controls at iteration
step $k$ as $u^{k}(t)$.
Equation \eqref{iteration1} can be rewritten as
\begin{equation}\label{iteration2}
u^{k+1}(t)=u^{k}(t)+ \eta^{k}\nabla J_N(u^{k}),
\end{equation}
where $\eta^{k}$ is the updating step (learning rate in computer
science) for the $kth$ iteration. By \eqref{eq:cost}, we also obtain that
\begin{equation}
\nabla J_N(u)=\frac{1}{N}\sum_{n=1}^{N}\nabla J_{\omega_n,\theta_n}(u).
\end{equation}
Recall that $J_{\omega,\theta}(u)=\vert \langle\psi_{\omega,\theta}(T)\vert\psi_{\textrm{target}}\rangle\vert^2$ and $\vert\psi_{\omega,\theta}(\cdot)\rangle$ satisfies
\begin{equation}\label{app-eq:sch}
\frac{d}{dt}\vert\psi_{\omega,\theta}\rangle=-iH_{\omega,\theta}(t)\vert\psi_{\omega,\theta}\rangle,\quad \vert\psi_{\omega,\theta}(0)\rangle=\vert\psi_{0}\rangle.
\end{equation} For ease of notation, we consider the case where only one control is involved, i.e., $H_{\omega,\theta}(t)=g(\omega)H_0+u(t)f(\theta)H_1$. We now derive an expression for the gradient of $J_{\omega,\theta}(u)$ with respect to the control $u$ by using a first order perturbation. Let $\delta\psi(t)$ be the modification of $\vert \psi(t)\rangle$ induced by a perturbation of the control from $u(t)$ to $u(t)+\delta u(t)$.
By keeping only the first order terms, we obtain the equation satisfied by $\delta\psi$:
\begin{eqnarray*}
\frac{d}{dt}\delta\psi=-i\left(g(\omega)H_0+u(t)f(\theta)H_1\right)\delta\psi\\
\ \ \ -i\delta u(t)f(\theta)H_1\vert\psi_{\omega,\theta}(t)\rangle,\ \ \ \ \ \ \ \\
\delta\psi(0)=0. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\end{eqnarray*} Let $U_{\omega,\theta}(t)$ be the propagator corresponding to \eqref{app-eq:sch}. Then, $U_{\omega,\theta}(t)$ satisfies
$$\frac{d}{dt}U_{\omega,\theta}(t)=-iH_{\omega,\theta}(t)U_{\omega,\theta}(t),\quad U(0)=\textrm{Id}.$$
Therefore,
\begin{eqnarray}
\delta\psi(T)=-iU_{\omega,\theta}(T)\int_0^T\delta u(t)U_{\omega,\theta}^\dagger(t)f(\theta)H_1\vert\psi_{\omega,\theta}(t)\rangle dt\nonumber \\
\ =-iU_{\omega,\theta}(T)\int_0^TU_{\omega,\theta}^\dagger(t)f(\theta)H_1U_{\omega,\theta}(t) \delta u(t)dt~ \vert\psi_0\rangle.\label{app-eq:deltapsi}
\end{eqnarray}
Using \eqref{app-eq:deltapsi}, we compute $J_{\omega,\theta}(u+\delta u)$ as follows
\begin{eqnarray}
&J_{\omega,\theta}(u+\delta u)-J_{\omega,\theta}(u)\nonumber\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\%~=~\vert\langle\psi_{\omega,\theta}(T)+\delta\psi(T)\vert\psi_{\textrm{target}}\rangle\vert^2\nonumber\\
\approx&2\Re\left(\langle\psi_{\omega,\theta}(T)\vert\psi_{\textrm{target}}\rangle\langle\psi_{\textrm{target}}\vert\delta\psi(T)\right)\nonumber \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
=&2\Re\left(-i\langle\psi_{\omega,\theta}(T)\vert\psi_{\textrm{target}}\rangle\langle\psi_{\textrm{target}}\vert \int_0^TV(t) \delta u(t)dt~ \vert\psi_0\rangle\right)\nonumber\\
=&\int_0^T2\Im\left(\langle\psi_{\omega,\theta}(T)\vert\psi_{\textrm{target}}\rangle\langle\psi_{\textrm{target}}\vert V(t)\vert\psi_0\rangle\right)\delta u(t)dt, \ \ \ \label{app-eq:dJ}
\end{eqnarray}where $\Re(\cdot)$ and $\Im(\cdot)$ denote, respectively, the real and imaginary parts of a complex number, and $V(t)=U_{\omega,\theta}(T)U_{\omega,\theta}^\dagger(t)f(\theta)H_1U_{\omega,\theta}(t)$.
Recall also that the definition of the gradient implies that
\begin{eqnarray}
&J_{\omega,\theta}(u+\delta u)-J_{\omega,\theta}(u)\nonumber\ \ \ \ \ \ \ \ \ \ \ \ \\
=&\langle \nabla J_{\omega,\theta}(u),\delta u\rangle_{L^2([0,T])}+o(\Vert\delta u\Vert)\nonumber\\
=&\int_0^T \nabla J_{\omega,\theta}(u)\delta u(t)dt+o(\Vert\delta u\Vert).\label{app-eq:dJ2}
\end{eqnarray}Therefore, by identifying \eqref{app-eq:dJ} with \eqref{app-eq:dJ2}, we obtain
\begin{equation}\label{app-eq:gradJ}
\nabla J_{\omega,\theta}(u)=2\Im\left(\langle\psi_{\omega,\theta}(T)\vert\psi_{\textrm{target}}\rangle\langle\psi_{\textrm{target}}\vert V(t)\vert\psi_0\rangle\right).
\end{equation}
The
gradient flow method can be generalized to the case with $M>1$ as
shown in \emph{Algorithm 1}.
\begin{algorithm}
\caption{Gradient flow based iterative learning}
\label{ModifiedGradientFlow}
\begin{algorithmic}[1]
\State Set the index of iterations $k=0$
\State Choose a set of arbitrary controls $u^{k=0}=\{u_{m}^{0}(t),\
m=1,2,\ldots,M\}, t \in [0,T]$
\Repeat {\ (for each iterative process)}
\Repeat {\ (for each training samples $n=1,2,\ldots,N$)}
\State Compute the propagator $U_{n}^{k}(t)$ with the control
strategy $u^{k}(t)$
\Until {\ $n=N$}
\Repeat {\ (for each control $u_{m}(m=1,2,\ldots,M)$ of the control
vector $u$)}
\State
$\delta_m^{k}(t)=2\Im\left(\langle\psi_{\omega_n,\theta_n}(T)\vert\rho_{\textrm{target}} V_{\omega_n,\theta_n}(t)\vert\psi_0\rangle\right)$ where $V_{\omega_n,\theta_n}(t)=U_{\omega_n,\theta_n}(T)U_{\omega_n,\theta_n}^\dagger(t)f(\theta_n)H_mU_{\omega_n,\theta_n}(t)$ and $\rho_{\textrm{target}}=\vert\psi_{\textrm{target}}\rangle\langle\psi_{\textrm{target}}\vert$
\State $u_{m}^{k+1}(t)=u_{m}^{k}(t)+\eta^{k} \delta_{m}^{k}(t)$
\Until {\ $m=M$}
\State $k=k+1$
\Until {\ the learning process ends}
\State The optimal control strategy
$u^{*}=\{u_{m}^*\}=\{u_{m}^{k}\}, \ m=1,2,\ldots,M$
\end{algorithmic}
\end{algorithm}
\begin{remark}
The numerical solution of the control design using \emph{Algorithm
1} is always difficult with a time varying continuous control
strategy $u(t)$. In a practical implementation, we usually
divide the time interval $[0,T]$ equally into a number of time
slices $\triangle t$ and assume that the controls are constant
within each time slice. Instead of $t \in [0,T]$ the time
index will be $t_{q}=qT/Q$, where $Q=T/\triangle t$ and
$q=1,2,\ldots,Q$.
\end{remark}
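In this piecewise-constant setting, one learning iteration of
\emph{Algorithm 1} can be sketched as follows (an illustrative
implementation of ours based on Eq. (\ref{app-eq:gradJ}); the time-slice
width $\triangle t$ coming from the discretized functional gradient is
absorbed into the learning rate $\eta^{k}$, and the sampled factors $g$ and
$f$ are treated as constants for each training sample):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def fidelity_and_grad(u, H0, Hs, psi0, psi_t, g, f, T):
    """J and dJ/du for one sample; u has shape (Q, M), piecewise constant."""
    Q, M = u.shape
    dt = T / Q
    U = [np.eye(len(psi0), dtype=complex)]       # forward propagators U(t_q)
    for q in range(Q):
        H = g * H0 + sum(f * u[q, m] * Hs[m] for m in range(M))
        U.append(expm(-1j * H * dt) @ U[-1])
    psi_T = U[-1] @ psi0
    J = abs(np.vdot(psi_T, psi_t)) ** 2
    grad = np.zeros_like(u)
    for q in range(Q):
        for m in range(M):
            V = U[-1] @ U[q].conj().T @ (f * Hs[m]) @ U[q]
            grad[q, m] = 2 * np.imag(np.vdot(psi_T, psi_t)
                                     * np.vdot(psi_t, V @ psi0))
    return J, grad

def training_step(u, samples, eta, H0, Hs, psi0, psi_t, T):
    """One gradient-ascent update averaged over the N training samples."""
    grads = [fidelity_and_grad(u, H0, Hs, psi0, psi_t, g, f, T)[1]
             for (g, f) in samples]
    return u + eta * np.mean(grads, axis=0)
\end{verbatim}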
\section{SLC for three-level quantum systems with uncertainties}\label{Sec4}
In this section, we demonstrate the application of the
proposed SLC method to a $V$-type three-level quantum systems with Hamiltonian uncertainties.
\subsection{Control of a $V$-type quantum system}
We consider a $V$-type quantum system and
demonstrate the SLC design process. Assume that the initial state is
$|\psi(t)\rangle=c_{1}(t)|1\rangle+c_{2}(t)|2\rangle+c_{3}(t)|3\rangle$.
Let $C(t)=(c_{1}(t),c_{2}(t),c_{3}(t))$ where the $c_i(t)$'s are complex numbers. We have
\begin{equation}
i\dot{C}(t)=(g(\omega(t))H_{0}+f(\theta(t))H_{u}(t))C(t).
\end{equation}
We take $H_{0}=\textrm{diag}(1.5, 1, 0)$ and choose $H_{1}$, $H_{2}$,
$H_{3}$ and $H_{4}$ as follows \cite{Hou et al 2012}:
\begin{equation}\label{h0}
H_{1}=
\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right), \ H_{2}=
\left(
\begin{array}{ccc}
0 & -i & 0 \\
i & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right), \nonumber \end{equation}
\begin{equation}
\ H_{3}=
\left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & 0 \\
1 & 0 & 0 \\
\end{array}
\right), \ \ \ H_{4}=
\left(
\begin{array}{ccc}
0 & 0 & -i \\
0 & 0 & 0 \\
i & 0 & 0 \\
\end{array}
\right).
\end{equation}
After we sample the uncertainties, every sample can be described as follows:
\begin{equation}\label{general3level}
\left(
\begin{array}{c}
\dot{c_{1}}(t) \\
\dot{c_{2}}(t) \\
\dot{c_{3}}(t) \\
\end{array}
\right)=
\left(
\begin{array}{ccc}
-1.5g(\omega) i & F_{1}(\theta) & F_{2}(\theta) \\
F^{*}_{1}(\theta) & -g(\omega) i & 0 \\
F^{*}_{2}(\theta) & 0 & 0 \\
\end{array}
\right) \left(
\begin{array}{c}
c_{1}(t) \\
c_{2}(t) \\
c_{3}(t) \\
\end{array}
\right)
\end{equation}
where $F_{1}(\theta)=f(\theta)[u_{2}(t)-iu_{1}(t)]$, $F_{2}(\theta)=f(\theta)[u_{4}(t)-iu_{3}(t)]$, $\omega\in [-\Omega, \Omega]$ and $\theta \in [-\Theta,
\Theta]$. $\Omega \in [0,1]$ and $\Theta \in [0,1]$ are given
constants.
To construct an augmented system for the training step of the
SLC design, we choose $N$ training samples
(denoted as $n=1, 2, \ldots, N$) through sampling the uncertainties as follows:
\begin{equation}\label{3level-element1}
\left(
\begin{array}{c}
\dot{c}_{1,n}(t) \\
\dot{c}_{2,n}(t) \\
\dot{c}_{3,n}(t) \\
\end{array}
\right)=B_{n}(t)\left(
\begin{array}{c}
c_{1,n}(t) \\
c_{2,n}(t) \\
c_{3,n}(t) \\
\end{array}
\right),\end{equation}
\begin{equation}B_{n}(t)=
\left(
\begin{array}{ccc}
-1.5g(\omega_{n}) i & F_{1}(\theta_{n}) & F_{2}(\theta_{n}) \\
F^{*}_{1}(\theta_{n}) & -g(\omega_{n}) i & 0 \\
F^{*}_{2}(\theta_{n}) & 0 & 0 \\
\end{array}
\right)\nonumber,
\end{equation}
where $F_{1}(\theta_{n})=f(\theta_{n})[u_{2}(t)-iu_{1}(t)]$, $F_{2}(\theta_{n})=f(\theta_{n})[u_{4}(t)-iu_{3}(t)]$. We assume that $\omega_{n} \in [-\Omega, \Omega]$ and $\theta_{n} \in
[-\Theta, \Theta]$ have uniform distributions. Now the
objective is to find a robust control strategy $u(t)=\{u_{m}(t), m=1,2,
3,4\}$ to drive the quantum system from
$|\psi_{0}\rangle=\frac{1}{\sqrt{3}}(|1\rangle+|2\rangle+|3\rangle)$
(i.e.,
$C_{0}=(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$)
to $|\psi_{\text{target}}\rangle=|3\rangle$ (i.e., $C_{\text{target}}=(0,0,1)$).
If we write (\ref{3level-element1}) as
$\dot{C}_{n}(t)=B_{n}(t)C_{n}(t)$ ($n=1,2,\ldots, N$), we can
construct the following augmented equation
\begin{equation}\label{augmented-equation-5samples3level}
\left(
\begin{array}{c}
\dot{C}_{1}(t) \\
\dot{C}_{2}(t) \\
\vdots \\
\dot{C}_{N}(t) \\
\end{array}
\right)=
\left(
\begin{array}{cccc}
B_{1}(t) & 0 & \cdots & 0 \\
0 & B_{2}(t) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & B_{N}(t) \\
\end{array}
\right) \left(
\begin{array}{c}
C_{1}(t) \\
C_{2}(t) \\
\vdots \\
C_{N}(t) \\
\end{array}
\right).
\end{equation}
For this augmented equation, we use the training step to learn
an optimal control strategy $u(t)$ to maximize the following
performance function
\begin{equation}
J(u)=\frac{1}{N}\sum_{n=1}^{N}\vert \langle
C_{n}(T)|C_{\text{target}}\rangle\vert^{2}.
\end{equation}
Now we employ \emph{Algorithm 1} to find the optimal control strategy
$u^{*}(t)=\{u^{*}_{m}(t), m=1,2,3,4\}$ for this augmented
system. Then we apply the optimal control strategy to other
samples to evaluate its performance.
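For completeness, one diagonal block $B_{n}(t)$ of the augmented equation
can be assembled as in the following sketch (ours; the sampled factors
$g_{n}$, $f_{n}$ and the instantaneous control amplitudes are passed in
explicitly):
\begin{verbatim}
import numpy as np

def B_n(g_n, f_n, u):
    """One block of the augmented equation; u = (u1, u2, u3, u4)."""
    u1, u2, u3, u4 = u
    F1 = f_n * (u2 - 1j * u1)
    F2 = f_n * (u4 - 1j * u3)
    return np.array([
        [-1.5j * g_n,  F1,         F2 ],
        [np.conj(F1), -1j * g_n,   0.0],
        [np.conj(F2),  0.0,        0.0],
    ])
\end{verbatim}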
\subsection{Numerical example}
For the numerical experiments on a $V$-type quantum system \cite{You and Nori 2011},
we use the parameter settings listed as follows: the initial state
$C_{0}=(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$,
and the target state $C_{\text{target}}=(0,0,1)$; The end time is $T=5$
and the total time interval $[0,T]$ is equally discretized into
$Q=200$ time slices with each time slice $\Delta
t=(t_{q}-t_{q-1})|_{q=1,2,\ldots,Q}=T/Q=0.025$; The learning rate is
$\eta^{k}=0.2$; The control
strategy is initialized with $u^{k=0}(t)=\{u^{0}_{m}(t)=\sin t,
m=1,2,3,4 \}$.
First, we assume that there exists only the uncertainty $g(\omega(t))$ (i.e., $f(\theta(t))\equiv 1$), $g(\omega(t))=1-\omega \cos t$, $\Omega=0.28$ and $\omega$ has a uniform
distribution in the interval $[-0.28, 0.28]$. To construct an augmented system for the
training step, we have the training samples for
this $V$-type quantum system as follows
\begin{equation}
\left\{ \begin{split}
& g(\omega_{n})=1-0.28+\frac{0.28(2n-1)}{7},\\
& f(\theta_{n})=1, \\
\end{split}\right.
\end{equation}
where $n=1,2,\ldots,7$. The training
performance for this augmented system is shown in Fig. 1. It is
clear that the
learning process converges to a satisfactory level within only
about $300$ iterations. The optimal control strategy is
demonstrated in Fig. 2, which is compared with the initial one.
To test the optimal control strategy obtained from the training
step using the augmented system, we randomly choose $200$
samples through sampling the uncertainty $g(\omega(t))$ and demonstrate the control performance
in Fig. 3. The average fidelity is 0.9989.
\begin{figure}\label{fig1}
\centering
\includegraphics[width=3.6in]{1.eps}
\caption{Training performance to find the optimal control
strategy by maximizing $J(u)$ for the $V$-type quantum system with
only uncertainty $g(\omega(t))$ where $\omega(t) \in [-0.28, 0.28]$.}
\end{figure}
\begin{figure}\label{fig2}
\centering
\includegraphics[width=3.75in]{2.eps}
\caption{The learned optimal control strategy with maximized $J(u)$
for the $V$-type quantum system with
only uncertainty $g(\omega(t))$ where $\omega(t) \in [-0.28, 0.28]$.}
\end{figure}
\begin{figure*}\label{fig3}
\centering
\includegraphics[width=5.08in]{3.eps}
\caption{The testing performance (with respect to fidelity) of the
learned optimal control strategy for the $V$-type quantum system with
only uncertainty $g(\omega(t))$ where $\omega(t) \in [-0.28, 0.28]$. For the 200 testing samples, the mean fidelity is 0.9989.}
\end{figure*}
Now we consider the more general case in which both uncertainties $g(\omega(t))$ and $f(\theta(t))$ are present. Assume
$\Omega=\Theta=0.28$, $g(\omega(t))=1-\omega \cos t$, $f(\theta(t))=1-\theta \cos t$, and that both $\omega$ and $\theta$ are uniformly
distributed in the interval $[-0.28, 0.28]$.
To construct an augmented system for the
training step, we choose the training samples as follows
\begin{equation}
\left\{ \begin{split}
& g(\omega_{n})=1-0.28+\frac{0.28(2\text{fix}(n/7)-1)}{7},\\
& f(\theta_{n})=1-0.28+\frac{0.28(2\text{mod}(n,7)-1)}{7}, \\
\end{split}\right.
\end{equation}
where $n=1,2,\ldots,49$, $\text{fix}(x)=\max \{z\in \mathbb{Z}|z\leq x\}$, $\text{mod}(n,7)=n-7z (z\in \mathbb{Z}\ \text{and}\ \frac{n}{7}-1<z\leq \frac{n}{7} )$ and $\mathbb{Z}$ is the set of integers.
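As a check of this indexing, the following short Python sketch (an illustration only) enumerates the $49$ training pairs $(g(\omega_{n}), f(\theta_{n}))$, using integer division for $\text{fix}(n/7)$ and the standard remainder for $\text{mod}(n,7)$.
\begin{verbatim}
# Enumerate the 7 x 7 grid of training samples defined above (sketch).
import numpy as np

n = np.arange(1, 50)                        # n = 1, ..., 49
fix = n // 7                                # fix(n/7)
mod = n % 7                                 # mod(n, 7)
g_n = 1 - 0.28 + 0.28 * (2 * fix - 1) / 7
f_n = 1 - 0.28 + 0.28 * (2 * mod - 1) / 7
samples = np.stack([g_n, f_n], axis=1)      # 49 pairs (g(omega_n), f(theta_n))
\end{verbatim}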
The training
performance for this augmented system is shown in Fig. 4. The optimal control strategy is
presented in Fig. 5.
To test the optimal control strategy obtained from the training
step using the augmented system, we randomly choose $200$
samples by sampling the uncertainties $g(\omega(t))$ and $f(\theta(t))$; their control performance is presented
in Fig. 6. The average fidelity is 0.9901.
These numerical results show that the proposed SLC method
using an augmented system for training is effective for control
design of quantum systems with Hamiltonian uncertainties.
\begin{figure}\label{fig4}
\centering
\includegraphics[width=3.6in]{4.eps}
\caption{Training performance to find the optimal control
strategy by maximizing $J(u)$ for the $V$-type quantum system with
uncertainties $g(\omega(t))$ and $f(\theta(t))$ where $\omega(t) \in [-0.28, 0.28]$ and $\theta(t) \in [-0.28, 0.28]$.}
\end{figure}
\begin{figure}\label{fig5}
\centering
\includegraphics[width=3.75in]{5.eps}
\caption{The learned optimal control strategy with maximized $J(u)$
for the $V$-type quantum system with
uncertainties $g(\omega(t))$ and $f(\theta(t))$ where $\omega(t) \in [-0.28, 0.28]$ and $\theta(t) \in [-0.28, 0.28]$.}
\end{figure}
\begin{figure*}\label{fig6}
\centering
\includegraphics[width=5.68in]{6.eps}
\caption{The testing performance (with respect to fidelity) of the
learned optimal control strategy for the $V$-type quantum system with
uncertainties $g(\omega(t))$ and $f(\theta(t))$ where $\omega(t) \in [-0.28, 0.28]$ and $\theta(t) \in [-0.28, 0.28]$.
For the 200 testing samples, the mean fidelity is 0.9901.}
\end{figure*}
\section{Conclusion}\label{Sec5}
In this paper, we presented a systematic numerical methodology for control
design of quantum systems with Hamiltonian uncertainties. The proposed sampling-based learning control method
includes two steps of ``training'' and ``testing and evaluation''.
In the training step, the control is learned using a gradient flow based
learning and optimization algorithm for an augmented system
constructed from samples. In the process of testing and evaluation, the control
obtained in the first step is evaluated for additional
samples. The results show the effectiveness of the SLC method for
control design of quantum systems with Hamiltonian uncertainties.
\section*{Acknowledgment}
The authors would like to thank Prof. Herschel Rabitz
for helpful discussions.
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented a novel methodology for recovering active subspaces (AS) and constructing surrogate models in applications with high-dimensional uncertain parameter spaces.
Our approach rests on a reparameterization of the AS projection matrix using the Gram-Schmidt procedure.
Noting the fact that the GS procedure is, in principle, a fully-differentiable operation, one can easily backpropagate through the GS process to obtain gradients necessary in standard optimization routines.
This formulation liberates us from the GPR approach of past gradient-free AS methods, and allows us to couple AS recovery with deep neural networks (DNNs) - a nonlinear function approximator which can be scaled to high-dimensions/larger datasets much more easily.
We demonstrated the proposed approach on benchmark problems in AS recovery, showing that we do indeed recover the correct AS.
This work represents a first-step toward scaling gradient-free recovery of AS - an important objective since many quantities of interest encoding physical laws exhibit AS or AS-like ridge structure \cite{constantine2016many}.
Our long-term interest is in the development of efficient Bayesian surrogates that are capable of quantifying epistemic uncertainty.
The reparameterization of the projection matrix allows us to leverage standard Bayesian inference methods such as stochastic variational inference (SVI) \cite{hoffman2013stochastic} or Hamiltonian Monte Carlo (HMC) \cite{hoffman2014no} to construct Bayesian AS surrogates without resorting to specialized Riemannian manifold versions of these techniques.
\section{NUMERICAL EXAMPLES}
\label{sec:examples}
\subsection{Synthetic example with known active subspace}
\label{sec:synthetic}
Let $f:\mathbb{R}^D \rightarrow \mathbb{R}$ such that $f(\boldsymbol{\xi}) = g(\boldsymbol{\zeta}) = g(\mathbf{W}^T \boldsymbol{\xi})$ where $\mathbf{W} \in \mathcal{V}_{d}\left(\mathbb{R}^D \right)$. Define $g: \mathbb{R}^d \rightarrow \mathbb{R}$ as a quadratic function in $\mathbb{R}^d$:
\begin{equation}
\label{eqn:link_1d_as}
g(\boldsymbol{\zeta}) = g(\mathbf{W}^T \boldsymbol{\xi}) = \alpha + \boldsymbol{\beta}^T \boldsymbol{\zeta} + \boldsymbol{\zeta}^T \boldsymbol{\Gamma} \boldsymbol{\zeta}.
\end{equation}
The gradients of $f$ are given by
\begin{equation}
\label{eqn:f_grad}
\nabla f(\boldsymbol{\xi}) = \left(\boldsymbol{\beta}^{T} + 2\boldsymbol{\xi}^T \mathbf{W} \boldsymbol{\Gamma} \right) \mathbf{W}^T.
\end{equation}
For this pedagogical example, we set $D = 20$ and test our approach on two cases with true AS dimensionality $d = 1$ and $2$. The inputs $\boldsymbol{\xi}$ and the quantities $\alpha$, $\boldsymbol{\beta}$ and $\boldsymbol{\Gamma}$ are generated by sampling standard Gaussians of appropriate shapes. The matrix $\mathbf{W}$ is generated by performing the QR decomposition on a similarly generated matrix in $\mathbb{R}^{D \times d}$. The random seed is fixed for reproducibility. The output data $\mathbf{y}$ used for training is standardized, i.e. it is scaled to have zero mean and unit variance.
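A minimal NumPy sketch of this data-generating process is given below; the seed, sample sizes, and noise level mirror the settings reported in the following subsections, while the exact sampling order is an assumption.
\begin{verbatim}
# Sketch of the synthetic quadratic-ridge data generation (assumed details).
import numpy as np

rng = np.random.default_rng(0)              # fixed seed for reproducibility
D, d, N = 20, 1, 50

W, _ = np.linalg.qr(rng.standard_normal((D, d)))   # orthonormal projection
alpha = rng.standard_normal()
beta = rng.standard_normal(d)
Gamma = rng.standard_normal((d, d))

X = rng.standard_normal((N, D))             # input observations xi
Z = X @ W                                   # active variables zeta = W^T xi
y = alpha + Z @ beta + np.einsum('ni,ij,nj->n', Z, Gamma, Z)
y = y + 1e-2 * rng.standard_normal(N)       # additive observation noise
y = (y - y.mean()) / y.std()                # standardize the outputs
\end{verbatim}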
\subsubsection{Case 1: 1 dimensional active subspace}
\label{sec:as_1d}
We begin testing our proposed gradient-free approach on a synthetic function which admits an AS of dimensionality $d=1$.
To train the AS DNN, we use $N=50$ input-output observations.
Furthermore, the output data is corrupted with zero mean Gaussian noise of standard deviation $1\times 10^{-2}$.
The link function is approximated with a 2-layer DNN of 50 units per hidden layer, and $L_2$ regularization with a weight decay constant of $1 \times 10^{-4}$ is used to prevent overfitting.
The ADAM optimizer is set to perform $3 \times 10^4$ iterations with a base learning rate of $1\times 10^{-3}$ dropped by a factor of $10$ every $10^{4}$ iterations.
In Fig. \ref{fig:ex1_case_1_N_50} we visually compare the true AS for this case and the AS discovered by the DNN.
We note that they are very close, indicating that our approach has found the correct AS.
Fig. \ref{fig:ex1_case_1_N_50} also shows a comparison of the AS DNN predicted response with the true response from a test dataset of $500$ observations.
Qualitatively, the predictions match the observations very closely. Quantitatively, we achieve a root mean square error of $0.039689$ on the test dataset.
Note that we pursue no further optimization of our DNN structure.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ex_1_case_1_N=50_noise_std=001.pdf}
\caption{Synthetic function with $D=20$ input dimensions admitting an $d=1$ dimensional active subspace. Top left - True AS of $f$. Bottom left - AS predicted by DNN. Top right - Spectral decomposition of the empirical covariance of the gradients. Bottom right - Comparison of predicted output and correct output on the test dataset.}
\label{fig:ex1_case_1_N_50}
\end{figure}
\subsubsection{Case 2: 2 dimensional active subspace}
\label{sec:as_2d}
We now test our proposed approach on a synthetic function with AS dimensionality $d=2$.
We use $N=100$ input-output observations for training and corrupt the output data with zero mean Gaussian noise of standard deviation $1\times 10^{-2}$.
We retain all other experimental settings from Sec. \ref{sec:as_1d}.
A comparison of the true AS and the predicted AS shown in Fig. \ref{fig:ex1_case_2_N_100} reveals that we recover the low-dimensional quadratic response up to arbitrary rotations of the coordinate system.
We compare the predicted response of the AS DNN and true outputs from a test dataset of $500$ observations. Again, we obtain excellent qualitative agreement as seen in Fig. \ref{fig:ex1_case_2_N_100} and quantitatively, we obtain a root mean squared error of $0.028748$ between the predicted and true outputs.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ex_1_case_2_N=100_noise_std=001.pdf}
\caption{Synthetic function with $D=20$ input dimensions admitting an $d=2$ dimensional active subspace. Top left - True AS of $f$. Bottom left - AS predicted by DNN. Top right - Spectral decomposition of the empirical covariance of the gradients. Bottom right - Comparison of predicted output and correct output on the test dataset.}
\label{fig:ex1_case_2_N_100}
\end{figure}
\subsection{Benchmark elliptic PDE example}
\label{sec:constantine_spde}
Consider the following stochastic elliptic partial differential equation defined on the unit square in $\mathbb{R}^2$:
\begin{equation}
\nabla \cdot \left(a(\mathbf{s}) \nabla u(\mathbf{s}) \right) = 1, \mathbf{s} \in \Omega = [0, 1]^2,
\end{equation}
with boundary conditions:
\begin{align}
\label{eqn:const_bc}
u(\mathbf{s}) &= 0, \mathbf{s} \in \Gamma_u, \\
\nabla u(\mathbf{s}) \cdot \mathbf{n} &= 0, \mathbf{s} \in \Gamma_n,
\end{align}
where, $\Gamma_u$ is the top, bottom and left boundaries and $\Gamma_n$ denotes the right boundary of $\Omega$.
The diffusion coefficient $a$ (or conductivity field) is a spatially-varying uncertain input, and it's logarithm is modeled as a 0-mean Gaussian random field, i.e., $\log a(\mathbf{s}) \sim GP(a|0, k(\mathbf{s}, \mathbf{s}')))$, where, the covariance function $k$ is defined as follows:
\begin{equation}
\label{eqn:const_elliptic_cov}
k(\mathbf{s}, \mathbf{s}') = \exp \left(- \frac{\sum_{i=1}^{2} |\mathbf{s}_i - \mathbf{s}_{i}^{'}|}{\ell} \right),
\end{equation}
with $\ell$ being the correlation length. This formalization of the uncertainty around $a(\mathbf{s})$ makes it a stochastic process - an infinite dimensional quantity.
We use the truncated KL expansion to perform a preliminary dimensionality reduction by expressing the logarithm of the field as:
\begin{equation}
\label{eqn:ex_2_log_a}
\log a(\mathbf{s}) = \sum_{i=1}^{100} \sqrt{\lambda_i} \varphi_i (\mathbf{s}) x_i,
\end{equation}
where, the $\lambda_i$s and the $\varphi_i$s are the eigenvalues and eigenfunctions of the correlation function, numerically obtained using the \textit{Nystr\"om approximation} \cite{bilionis2016bayesian}, and the $x_i$s are uncorrelated, standard normal random variables.
Denote all the $x_i$s collectively as $\mathbf{x} = (x_1, x_2, \cdots, x_{100}) \sim \mathcal{N}(\mathbf{x}|\mathbf{0}, \mathbf{I}_{100})$.
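For concreteness, the sketch below assembles one realization of $\log a(\mathbf{s})$ on a flattened spatial grid from precomputed KL eigenpairs; the placeholder eigenpairs only fix the array shapes and would in practice come from the Nystr\"om approximation of the covariance eigenproblem.
\begin{verbatim}
# Sketch: one realization of log a(s) from a truncated KL expansion,
# assuming the eigenpairs (lam, phi) were precomputed (placeholders here).
import numpy as np

rng = np.random.default_rng(0)
n_kl = 100                                  # retained KL terms
n_grid = 32 * 32                            # flattened grid on the unit square

lam = np.sort(rng.random(n_kl))[::-1]       # placeholder eigenvalues
phi = rng.standard_normal((n_grid, n_kl))   # placeholder eigenfunctions on the grid

x = rng.standard_normal(n_kl)               # x_i ~ N(0, 1)
log_a = phi @ (np.sqrt(lam) * x)            # sum_i sqrt(lam_i) phi_i(s) x_i
a = np.exp(log_a)                           # conductivity realization a(s)
\end{verbatim}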
We are interested in the following scalar QoI:
\begin{equation}
\label{eqn:ex2_qoi}
q(\mathbf{x}) = \mathcal{F}[u(\mathbf{s}; \mathbf{x})] = \frac{1}{|\Gamma_2|}\int u (\mathbf{s}; \mathbf{x}) \mathrm{d}\mathbf{s}.
\end{equation}
Given a realization of the random variable, $\mathbf{x} = (x_1, x_2, \cdots, x_{100})$, one can generate a realization of the QoI, $q$, whose statistics one wishes to estimate.
We have, at our disposal, a dataset of $300$ realizations of the random variable $\mathbf{x}$ and the corresponding solution $q$.
With this dataset, we construct a surrogate that maps $\mathbf{x}$ to $q$, i.e., $\hat{f}:\mathbb{R}^{100} \rightarrow \mathbb{R}$.
We will consider two cases of the correlation length $\ell$: a short correlation length of $\ell = 0.01$ and a long correlation length of $\ell = 1$, and attempt to recover an AS with $d=1$.
We randomly shuffle and split the data into a training set of $250$ samples, and test on the remaining $50$ samples.
The output data is standardized to have zero mean and unit variance for numerical stability.
The dataset for this example, and the code to generate it, can be found here: \href{https://github.com/paulcon/as-data-sets/tree/master/Elliptic_PDE}{https://github.com/paulcon/as-data-sets/tree/master/Elliptic\_PDE}.
Once again, we set our approximation of the link function to be a DNN with 2 hidden layers and 50 units per layer.
All other experimental settings from Sec. \ref{sec:synthetic} are retained.
Lastly, for this example, samples of the QoI gradients are available and we use this to compare our results with the results obtained from classic AS.
For the case of the classic AS, we use GPR as the link function.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ex_2_long_corr_pred_obs.pdf}
\caption{Stochastic elliptic PDE with $\ell = 1$ - The plots on the top visualize the 1d active subspace recovered by our gradient-free DNN AS approach and the classic AS approach.
The bottom plots compare the output predictions vs observations on the test dataset for the DNN AS and the classic AS approaches.}
\label{fig:ex_2_long_corr}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ex_2_short_corr_pred_obs.pdf}
\caption{Stochastic elliptic PDE with $\ell = 0.01$ - The plots on the top visualize the 1d active subspace recovered by our gradient-free DNN AS approach and the classic AS approach.
The bottom plots compare the output predictions vs observations on the test dataset for the DNN AS and the classic AS approaches.}
\label{fig:ex_2_short_corr}
\end{figure}
Fig. \ref{fig:ex_2_long_corr} shows results comparing the deep AS approach and the classic AS approach for the $\ell = 1$ case.
Fig. \ref{fig:ex_2_short_corr} shows the same for the case of $\ell = 0.01$.
We observe that there is good qualitative agreement between the AS recovered by gradient-free deep AS approach and the gradient-based classic approach.
This serves to verify the fact that our approach does indeed recover the correct AS.
Table \ref{tab:elliptic_pde_rmse} shows a comparison of the RMSE error in the prediction of the outputs from the test dataset, for both $\ell$ cases.
We observe that in spite of the fact that we do not use the information about the gradients of the QoI, our gradient-free deep AS approach is able to achieve RMSE comparable to the classic AS.
Once again, we emphasize that we do not pursue optimization of the modeling choices involved in the DNN approximation of the link function.
\begin{table}[ht]
\centering
\begin{tabular}{||c|c|c||}
\hline \hline
Approach & $\ell = 1$ & $\ell = 0.01$ \\
\hline \hline
Classic AS & 0.04378 & 0.17276 \\
\hline
Deep AS & 0.09241 & 0.23626 \\
\hline \hline
\end{tabular}
\caption{Root mean square error (RMSE) on test dataset predictions from classic AS and deep AS response surfaces.}
\label{tab:elliptic_pde_rmse}
\end{table}
An interesting observation that emerges from the comparison of the classic and deep AS approaches for the short correlation length in Fig. \ref{fig:ex_2_short_corr} is that in spite of recovering a one-dimensional AS, the test data predictions from both approaches show a discrepancy from their true values.
Since the QoI is generated from a deterministic computer code, we cannot explain this deviation as `noise'.
Rather, this suggests that a linear dimensionality reduction is sub-optimal and one might wish to recover a nonlinear generalization of the active subspace, such as the one discussed in \cite{tripathy2018deep}.
An investigation into this shortcoming is beyond the scope of the present work.
Finally, one may note that both the classic and the deep AS approaches perform much better on the $\ell = 1$ case, relative to the $\ell = 0.01$ case.
This is unsurprising, considering that it becomes much more difficult to capture the uncertainty of the diffusion coefficient as its lengthscale decreases.
\section{Introduction}
\label{sec:introduction}
In spite of the advent of the exascale era of computing and the rapid increase in the availability of computational resources \cite{reed2015exascale}, the sophistication of computer codes that simulate physical systems has also risen exponentially, due to the incorporation of more realistic physics and higher-order numerical algorithms.
A further consequence of the increasing sophistication of modern day computational solvers is the introduction of more parameters into the model to accurately describe boundary/initial conditions, material properties, constitutive laws etc.
It is often the case that many (or all) of these parameters are not known exactly.
This brings up several questions for the computational scientist, such as: (i) how must one go about making robust predictions about the quantities of interest in a complex simulation; (ii) how can one assess the impact of input parameter uncertainty on the model outputs; (iii) how can one calibrate the model from experimental data; and so on.
Answering such questions lie at the heart of uncertainty quantification (UQ) \cite{smith2013uncertainty, sullivan2015introduction}.
The most common task in UQ is what is known as the forward UQ or uncertainty propagation (UP) problem \cite{le2004uncertainty, knio2006uncertainty, lee2009comparative}. A complete description of the UP problem can be found in Sec. \ref{sec:up}.
UP is the task of estimating the statistical properties of the model outputs given a formal description of the uncertainty in the model input parameters.
Monte Carlo (MC) methods \cite{robert2013monte} are the most straightforward techniques for solving the UP problem and have, indeed, long been the workhorse of UQ \cite{watt2012study, baraldi2008combined, rochman2014efficient}.
A remarkable property of MC is that the variance of the standard MC estimate converges independent of the dimensionality of the stochastic parameters \cite{robert2013monte}. However, MC methods require a very large number of samples (hundreds of thousands) to show convergence in statistics \cite{robert2013monte}.
This makes MC infeasible for state-of-the-art numerical simulators which have a large computational burden associated to each individual run.
To overcome the fact that one does not have the computational budget for hundreds of thousands of runs of a numerical simulator, one resorts to the surrogate approach to UP \cite{nobari2015uncertainty}.
The idea is simple - perform a limited number of forward model evaluations, collect the resulting data, and construct a cheap-to-evaluate, yet accurate, map between the input uncertain parameters and the model outputs.
This map serves as approximation to the true solver, and is referred to as a \textit{surrogate} or a \textit{response surface}.
Since the response surface can be repeatedly evaluated very quickly, it is now easy to couple the surrogate with a MC approach to estimate model output statistics.
Surrogate models can be either \textit{intrusive} (i.e. requiring modification of the simulator for the analogous deterministic problem) or \textit{non-intrusive} (i.e. where the application of the surrogate model is an `outer-loop' process and the numerical solver can be treated as a black-box).
Naturally, given the sophistication of state-of-the-art numerical models, intrusive methods such as \textit{polynomial chaos} \cite{xiu2002wiener, xiu2003modeling} have fallen out of favor in recent times.
This has coincided with the rise in popularity of non-intrusive methods such as \textit{Gaussian process regression (GPR)} (or \textit{Kriging}) \cite{stein2012interpolation,rasmussen2003gaussian}.
The surrogate approach to UQ has seen tremendous success in a broad range of applications \cite{zhang2012uncertainty, angelikopoulos2012bayesian, knio2006uncertainty}.
However, state-of-the-art surrogate modeling techniques become exponentially difficult to scale as the number of stochastic parameters increases \cite{tripathy2016gaussian, constantine2014active}.
This is due to the phenomenon known, universally, as the \textit{curse of dimensionality} (CoD).
Originally coined by Bellman in the context of dynamic programming \cite{bellman1956dynamic}, the CoD refers to the phenomenon where the volume of the input space that one must explore to gather data sufficient for constructing an accurate response surface, rises exponentially with a linear increase in the input dimensionality.
To address the CoD, one needs to perform \textit{dimensionality reduction} of the stochastic parameter space.
The easiest approach to parameter space reduction is through a ranking of the importance of individual input dimensions.
Methods that adopt this approach include sensitivity analysis \cite{saltelli2004sensitivity} and automatic relevance determination (ARD) \cite{neal1998assessing}. Unfortunately, such variable reduction techniques are most effective when the input variables have some degree of correlation.
In generic UP problems, the stochastic parameters are frequently uncorrelated.
For instance, the common scenario of functional uncertainties (such as random permeability in porous media flow) comprise of high (or infinite) dimensional stochastic parameter spaces with statistical independence between input dimensions.
The most popular approach to dimensionality reduction within the UQ community is the truncated \textit{Karhunen-Lo\'eve (KL) expansion} \cite{ghanem2003stochastic}. The idea behind the truncated KL expansion is that a spectral decomposition of the uncertain parameters can be used to express them as a linear combination of an infinite number of iid (independent and identically distributed) random variables, following which the infinite series can be truncated by picking basis functions corresponding to the highest ranked eigenvalues by magnitude.
This is the functional analogue of the well-known principal component analysis (PCA) \cite{jolliffe2011principal} used extensively in the machine learning (ML) community.
Although extensively used, the truncated KL expansion (or PCA) only considers information contained in the input data resulting in an overestimation of the intrinsic dimensionality of the stochastic parameter space.
Generally speaking, an effective technique for dimensionality reduction needs to exploit intrinsic structure within the underlying map being approximated.
One such technique that has been recently popularized is the method of \textit{active subspaces} (AS), introduced in \cite{constantine2014active}.
An AS is a low-dimensional linear manifold embedded within the true high-dimensional parameter space, which maximally captures the variation of the underlying map.
AS has been successfully applied to numerous applications since its introduction \cite{gilbert2016global, constantine2015exploiting, leon2017identifiability, grey2018active}.
However, a key drawback of the original AS framework is its reliance on gradient information about the model outputs, which is often difficult (if not impossible) to obtain.
To overcome the gradient requirement for AS recovery, \cite{tripathy2016gaussian} proposed a framework which subsumes the AS projection matrix into the covariance kernel in GP regression and attempts to learn it from available data.
In this work, we propose a simple solution for recovering the AS and constructing surrogate models that does not rely on non-Euclidean algorithms for surrogate model optimization.
Specifically, we express the AS projection matrix as the Gram-Schmidt orthogonalization of an unconstrained matrix.
We rely on the fact that the Gram-Schmidt procedure is fully differentiable (and thus perfectly amenable to gradient-based learning through backpropagation).
Furthermore, our method is agnostic to the specific class of function approximation. This is contrasted with the previous gradient-free AS framework proposed in \cite{tripathy2016gaussian}.
Lastly, we couple this idea with deep neural networks (DNNs) and demonstrate true AS recovery.
An added benefit of the proposed approach is that one can use it to improve DNN identifiability (even though it is not our primary concern).
This manuscript is organized as follows.
We begin with a formal description of the UP problem and the surrogate approach to solving it in Sec. \ref{sec:up}.
We then, in Sec. \ref{sec:methodology}, review the classical approach to AS recovery (see Sec. \ref{sec:as_classic}) and the gradient-free GPR based AS approach proposed in \cite{tripathy2016gaussian} (see Sec. \ref{sec:as_grad_free}).
This is followed up with the description of our approach to AS, in Sec. \ref{sec:deepas}.
Finally, we demonstrate the proposed methodology on challenging high-dimensional surrogate modeling problems in Sec. \ref{sec:examples} and demonstrate that we recover the true AS through comparisons with the classical approach.
\section{METHODOLOGY}
\label{sec:methodology}
\subsection{The surrogate approach to uncertainty propagation}
\label{sec:up}
Consider a physical system modeled with a (potentially complex, coupled) system of partial differential equations.
The PDE(s) is solved numerically using a black-box computer code, which we denote as $f$.
$f$ may be thought of as a multivariate function which accepts a vector of inputs $\boldsymbol{\xi} \in \boldsymbol{\Xi} \subset \mathbb{R}^D$ and produces a scalar quantity of interest (QoI) $f(\boldsymbol{\xi}) \in \mathcal{Y} \subset \mathbb{R}$.
Information about $f$ may be obtained through querying the solver at suitable input design locations $\boldsymbol{\xi}$.
We allow for the possibility that our measurement from the computer code may be noisy, i.e., $y = f(\boldsymbol{\xi}) + \epsilon$, where $\epsilon$ is a random variable (the measurement noise might arise as a consequence of quasi-random stochasticity or chaotic behavior).
Given this setup, the uncertainty propagation (UP) task is summarized as follows. Given a formal description of the uncertainty in the input parameters, $\boldsymbol{\xi} \sim p(\boldsymbol{\xi})$, we would like to estimate the statistical properties of the QoI. These include the probability density,
\begin{equation}
\label{eqn:qoi_pdf}
p(f) = \int \delta(f - f(\boldsymbol{\xi})) p(\boldsymbol{\xi}) \mathrm{d}\boldsymbol{\xi},
\end{equation}
and measures of central tendency such as the mean:
\begin{equation}
\label{eqn:qoi_mean}
\mu_f = \int f(\boldsymbol{\xi}) p(\boldsymbol{\xi}) \mathrm{d}\boldsymbol{\xi},
\end{equation}
and variance:
\begin{equation}
\label{eqn:qoi_var}
\sigma_{f}^{2} = \int (f(\boldsymbol{\xi}) - \mu_f )^2 p(\boldsymbol{\xi}) \mathrm{d}\boldsymbol{\xi},
\end{equation}
where, $\delta(\cdot)$ in Eqn. (\ref{eqn:qoi_pdf}) refers to the Dirac $\delta$-function.
As already discussed in the introduction, the standard MC method is infeasible when there is a large computational cost associated with querying $f$ and one must resort to the surrogate approach - replacing the true simulator $f$ with an accurate, cheap-to-evaluate approximation, $\hat{f}$.
To do this, one queries $f$ at a set of $N$ carefully selected design locations $\mathbf{X} = (\boldsymbol{\xi}^{i})_{i=1}^{N}$, resulting in a corresponding set of measurements, $\mathbf{y} = (y^{i})_{i=1}^{N}$. We refer to the observed data, collectively, as $\mathcal{D} = \{\mathbf{X}, \mathbf{y} \}$. Although the task of careful selection of the input design locations are a subject of a great deal of research, an in-depth discussion of this topic is beyond the scope of the present work. Here we simply assume that we are given $\mathcal{D}$.
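Once such a surrogate is available, the statistics above can be estimated by plain Monte Carlo over $\hat{f}$; the short sketch below illustrates this with a stand-in surrogate and a standard normal input distribution (both assumptions for illustration only).
\begin{verbatim}
# Sketch: Monte Carlo estimation of the QoI statistics using a cheap
# surrogate f_hat (stand-in function; replace with a trained model).
import numpy as np

def f_hat(xi):
    return np.sum(xi ** 2, axis=-1)

rng = np.random.default_rng(0)
D = 20
xi = rng.standard_normal((100000, D))       # xi ~ p(xi), here standard normal
q = f_hat(xi)

mu_f = q.mean()                             # estimate of the mean of the QoI
sigma2_f = q.var(ddof=1)                    # estimate of the variance of the QoI
\end{verbatim}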
\subsection{Active subspaces}
\label{sec:as}
The fact that we are working in a high-dimensional regime $D (\gg 1)$ makes the task of constructing an accurate surrogate model with limited data $\left(N \approx \mathcal{O}(D) \right)$ practically infeasible because of the curse of dimensionality. To circumvent this, one seeks to exploit low-rank structure within the true response $f$ and methods for doing so are broadly categorized as `dimensionality reduction' techniques.
In this work, we focus on the case where the response admits the following structure:
\begin{equation}
\label{eqn:asstructure}
f(\boldsymbol{\xi}) = g(\boldsymbol{\zeta}) = g(\mathbf{W}^T \boldsymbol{\xi}),
\end{equation}
where, $\mathbf{W} \in \mathbb{R}^{D \times d}$ is a tall-and-skinny matrix of orthogonal columns which projects the high-dimensional input $\boldsymbol{\xi}$ to $\boldsymbol{\zeta}$ lying in a $d$-dimensional subspace such that $d \ll D$.
In particular, $\mathbf{W}$ is constrained to be an element of the set:
\begin{equation}
\label{eqn:stiefel}
\mathcal{V}_{d}\left(\mathbb{R}^D \right) = \left\{ \mathbf{A} \in \mathbb{R}^{D \times d} : \mathbf{A}^T \mathbf{A} = \mathbf{I}_d \right\}.
\end{equation}
$\mathcal{V}_{d}\left(\mathbb{R}^D \right)$ is known as the \textit{Stiefel manifold} with $\mathbf{I}_d$ being the identity matrix in $\mathbb{R}^{d \times d}$ and $g:\mathbb{R}^d \rightarrow \mathbb{R}$ is known as the \textit{link function}.
The structure posited in Eqn. (\ref{eqn:asstructure}) takes on physical meaning: the columns of $\mathbf{W}$ correspond to the directions of the input space along which the response $f$ is most sensitive to variation.
The dimensionality reduction induced by the introduction of this structure, significantly simplifies the task of learning an accurate surrogate model.
\subsubsection{Classical approach to active subspaces}
\label{sec:as_classic}
The classical approach to recovering the \textit{active subspace}, introduced in \cite{constantine2014active}, proceeds as follows.
Let the gradient of the QoI w.r.t. the input be denoted as $\nabla_{\boldsymbol{\xi}} f = \left(\frac{\partial f}{\partial \boldsymbol{\xi}_1}, \frac{\partial f}{\partial \boldsymbol{\xi}_2}, \cdots, \frac{\partial f}{\partial \boldsymbol{\xi}_D} \right) \in \mathbb{R}^D$.
Given a probability distribution $\rho$ endowed upon the input space, we define the symmetric positive semi-definite matrix,
\begin{equation}
\label{eqn:as_matrix_true}
\mathbf{C} = \int (\nabla_{\boldsymbol{\xi}}f(\boldsymbol{\xi})) (\nabla_{\boldsymbol{\xi}}f(\boldsymbol{\xi}))^T \rho(\boldsymbol{\xi}) \mathrm{d}\boldsymbol{\xi},
\end{equation}
which admits the spectral decomposition $\mathbf{C} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^T$, where $\boldsymbol{\Lambda}$ is a diagonal matrix of eigenvalues ordered by magnitude. Separating the $d$ largest eigenvalues from the rest, we can write the matrix of eigenvectors, $\mathbf{V}$, as:
\begin{equation}
\mathbf{V} = \left[\mathbf{V}_1, \mathbf{V}_2 \right],
\end{equation}
where, $\mathbf{V}_1 \in \mathbb{R}^{D \times d}$ is a matrix consisting of the eigenvectors corresponding to the $d$ largest eigenvalues and $\mathbf{V}_2 \in \mathbb{R}^{D \times (D-d)}$ is composed of the remaining eigenvectors. The active subspace projection matrix, then, is simply, $\mathbf{W} = \mathbf{V}_1$.
Since the integral in Eqn. (\ref{eqn:as_matrix_true}) is intractable analytically (due to the black-box nature of $f$), one only has access to discrete samples of the gradient at input locations $\boldsymbol{\xi}$ sampled from the distribution $\rho$. Given a dataset of $S$ gradient evaluations, $\mathbf{g}^{(i)} = \nabla_{\boldsymbol{\xi}} f(\boldsymbol{\xi}^{(i)}), i = 1, 2, \cdots, S$, where the $\boldsymbol{\xi}^{(i)}$s are sampled iid from $\rho$, an approximation to the matrix $\mathbf{C}$ may be constructed as:
\begin{equation}
\label{eqn:as_matrix_approx}
\mathbf{C}_S = \frac{1}{S} \sum_{i=1}^{S} \mathbf{g}^{(i)} \mathbf{g}^{(i),T}.
\end{equation}
One may think of the approximation $\mathbf{C}_S$ as an empirical covariance matrix of the gradients. After recovering the projection matrix $\mathbf{W}$ through the above procedure, one can obtain the projected inputs $\mathbf{z} = \mathbf{W}^T \mathbf{x}$ and use a suitable technique such as Kriging to learn the link function $g(\cdot)$.
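The classical recipe can be summarized in a few lines of NumPy; the sketch below (with random stand-in gradients purely to fix the shapes) forms $\mathbf{C}_S$ from gradient samples and takes the leading $d$ eigenvectors as $\mathbf{W}$.
\begin{verbatim}
# Sketch of classical AS recovery from gradient samples.
import numpy as np

def active_subspace(grads, d):
    """grads: (S, D) gradient samples; returns (W, eigenvalues)."""
    C_S = grads.T @ grads / grads.shape[0]  # empirical covariance of gradients
    eigval, eigvec = np.linalg.eigh(C_S)    # ascending eigenvalues
    order = np.argsort(eigval)[::-1]        # reorder by decreasing magnitude
    return eigvec[:, order[:d]], eigval[order]

rng = np.random.default_rng(0)
grads = rng.standard_normal((500, 20))      # stand-in gradient samples
W, spectrum = active_subspace(grads, d=2)
\end{verbatim}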
\subsubsection{Gradient-free approach to active subspaces}
\label{sec:as_grad_free}
As discussed in Sec. \ref{sec:as_classic}, the classic approach to AS recovery requires the evaluation of an empirical covariance matrix from samples of the gradient $\nabla_{\boldsymbol{\xi}}f$. Obtaining gradient samples is challenging in practice.
In some cases (such as simple dynamical systems), one might have access to an adjoint solver which can compute the gradients of the QoI wrt the input parameters \cite{jameson2003aerodynamic}.
In other cases, the gradients can be approximated through finite differences (FD). Note that a single first-order FD gradient evaluation requires 2 expensive forward model runs.
Lastly, one might even approximate gradients through approximate global models for the data \cite{jefferson2015active}.
In general, the black-box nature of the response as well as the associated cost of FD gradients means that one simply does not have access to $\nabla_{\boldsymbol{\xi}}f$ and therefore cannot construct $\mathbf{W}$ through the classical approach.
To alleviate this limitation, \cite{tripathy2016gaussian} introduced a methodology for constructing surrogate models without requiring gradient information. The gradient-free approach relies on two key ideas:
\begin{enumerate}
\item In GPR, prior knowledge about the underlying function can be encoded in a principled manner through the mean and the covariance functions of the GP. Thus, a new covariance kernel may be defined where the AS projection matrix $\mathbf{W}$ is simply a hyperparameter and learned through available data, $\mathcal{D}$. Formally, the prior knowledge about the active subspace structure described in Eqn. (\ref{eqn:asstructure}) is expressed through a GP kernel which takes on the form:
\begin{equation}
\label{eqn:as_kernel}
k_{\mathrm{AS}}(\mathbf{x}, \mathbf{x}') = k_{\mathrm{base}}(\mathbf{z}, \mathbf{z}') = k_{\mathrm{base}}(\mathbf{W}^T \mathbf{x}, \mathbf{W}^T \mathbf{x}'),
\end{equation}
where, $k_{\mathrm{base}}(\cdot, \cdot)$ is any standard kernel (such as the \textit{Matern} or \textit{Radial basis function (RBF)} kernels) which expresses prior knowledge about the regularity properties of the link function $g(\cdot)$.
Once the active subspace kernel has been defined, inference in GPR proceeds through the maximization of the log marginal likelihood of the data wrt the kernel hyperparameters i.e.,
\begin{equation}
\label{eqn:as_grad_free_opt}
\mathbf{W}^*, \mathcal{H}^*, \sigma^{*}_{n} = \underset{\mathbf{W}, \mathcal{H}}{\mathrm{argmax}} \log p(\mathbf{y}|\mathbf{X}, \mathbf{W}, \mathcal{H}, \sigma_{n}),
\end{equation}
where, $\mathcal{H}$ is the set of all hyperparameters of the base kernel $k_{\mathrm{base}}$, and $\sigma_{n}$ is the standard deviation of the likelihood noise.
\item While it is easy to enforce positivity constraints on the hyperparameters $(\mathcal{H}, \sigma_{n}) = \boldsymbol{\phi}$, the optimization task in Eqn. (\ref{eqn:as_grad_free_opt}) is made challenging because of the fact that it is non-trivial to enforce the orthogonality constraints on the projection matrix $\mathbf{W}$.
In order to do so, the complete methodology of \cite{tripathy2016gaussian} relies on a coordinate-ascent scheme to iteratively optimize over the variables $\boldsymbol{\phi}$ while keeping $\mathbf{W}$ constant and vice versa.
The optimization steps over $\boldsymbol{\phi}$ proceed via standard second-order techniques for unconstrained optimization, such as the L-BFGS method \cite{byrd1995limited}.
The optimization steps over the projection matrix $\mathbf{W}$ utilize an adapted version of gradient-ascent on the Stiefel manifold described in \cite{wen2013feasible}.
\end{enumerate}
\subsection{Deep active subspaces}
\label{sec:deepas}
The methodology introduced by \cite{tripathy2016gaussian} lifts the gradient requirement of the classical approach to AS recovery by subsuming the AS projection matrix into the covariance kernel of a GP. While the methodology is sound and experimentally shown to recover the true AS, it suffers from two major drawbacks:
\begin{enumerate}
\item It is not agnostic to the choice of the surrogate model.
Note that the gradient-free method described in Sec. \ref{sec:as_grad_free} necessitates a GP surrogate by construction.
In spite of the elegance of GPR, arising out of the principled framework it offers for incorporating prior knowledge, quantifying epistemic uncertainty and performing model selection, its standard formulation scales poorly due to the $\mathcal{O}(N^3)$ inversion of the (potentially dense) covariance matrix required at each optimization step.
While sparse GPR \cite{titsias2009variational, snelson2006sparse} partially alleviates this poor scaling through the introduction of $M (\ll N)$ inducing variables or `pseudo-inputs', the task of selecting or optimizing for the inducing input locations is non-trivial.
\item The proposed solution for optimizing over the projection matrix $\mathbf{W}$, while respecting orthogonality constraints, is itself non-trivial, introduces $Dd$ additional hyperparameters into the covariance kernel, and is prone to getting trapped in local stationary points \cite{tripathy2016gaussian}.
\end{enumerate}
We propose, here, a much simpler approach that:
\begin{enumerate}
\item is agnostic to the choice of the link function approximator, and
\item is trivial to implement.
\end{enumerate}
Specifically, we express $\mathbf{W}$ as:
\begin{equation}
\label{eqn:W_express}
\mathbf{W} = h(\mathbf{Q}),
\end{equation}
where, $\mathbf{Q} \in \mathbb{R}^{D \times d}$ lies in standard Euclidean space, and $h:\mathbb{R}^{D \times d} \rightarrow \mathbb{R}^{D \times d}$ orthogonalizes the columns of $\mathbf{Q}$. Specifically, we choose $h$ to be the celebrated \textit{Gram-Schmidt (GS) orthonormalization} procedure \cite{bjorck1967solving}. The GS process may be summarized as follows. Given an unconstrained matrix $\mathbf{Q} = [\mathbf{q}_1, \mathbf{q}_2, \cdots, \mathbf{q}_d] \in \mathbb{R}^{D \times d}$, where the $\mathbf{q}_i$s are the columns of $\mathbf{Q}$, we apply the transformation,
\begin{equation}
\label{eqn:gs_trans}
\mathbf{w}_i = \mathbf{q}_i - \sum_{j=1}^{i-1} \left( \frac{\mathbf{w}_{j}^{T} \mathbf{q}_{i}}{\mathbf{w}_{j}^{T} \mathbf{w}_{j}} \right) \mathbf{w}_{j},\ i=2, 3, \cdots, d,
\end{equation}
with $\mathbf{w}_1 = \mathbf{q}_1$. The projection matrix $\mathbf{W}$ is then assembled by normalizing the $\mathbf{w}_i$s, i.e., $\mathbf{W} = \left[\frac{\mathbf{w}_1}{\| \mathbf{w}_1 \|_2}, \frac{\mathbf{w}_2}{\| \mathbf{w}_2 \|_2}, \cdots, \frac{\mathbf{w}_d}{\| \mathbf{w}_d \|_2}\right]$.
Now one only needs to care about the Euclidean matrix $\mathbf{Q}$, and optimize it using the available data.
Noting that the transformation specified by Eqn. (\ref{eqn:gs_trans}) is fully differentiable (as it composed entirely of differentiable mathematical operations), one may simply define a routine implementing the GS process using a backpropagation-capable library (such as \texttt{TensorFlow} or \texttt{PyTorch}) to obtain exact gradients of any QoI wrt $\mathbf{Q}$.
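As an illustration, the following PyTorch sketch implements a (modified, column-normalizing) Gram-Schmidt map and verifies that gradients propagate back to the unconstrained matrix $\mathbf{Q}$; it is a minimal example rather than the exact routine used in our experiments.
\begin{verbatim}
# Sketch: differentiable Gram-Schmidt reparameterization W = h(Q) in PyTorch.
import torch

def gram_schmidt(Q):
    """Q: (D, d) unconstrained -> W: (D, d) with orthonormal columns."""
    cols = []
    for i in range(Q.shape[1]):
        w = Q[:, i]
        for u in cols:                      # remove components along previous columns
            w = w - (u @ w) * u
        cols.append(w / w.norm())
    return torch.stack(cols, dim=1)

D, d = 20, 2
Q = torch.randn(D, d, requires_grad=True)
W = gram_schmidt(Q)
print(W.T @ W)                              # approximately the identity I_d

loss = W.sum()                              # any scalar objective built from W ...
loss.backward()                             # ... backpropagates to Q via autograd
print(Q.grad.shape)                         # torch.Size([20, 2])
\end{verbatim}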
Since the projection matrix $\mathbf{W}$ has been reparameterized without any concern for the specific structure of the link function, $g$, we are free to pick any suitable class of function approximator for $g$. In this work, we define $g$ to be a deep neural network (DNN) \cite{goodfellow2016deep}, a class of highly flexible nonlinear function approximators which satisfy universal approximation properties. Formally, an $L$-layered DNN representation for $g$ is defined as:
\begin{equation}
\label{eqn:link_dnn}
g(\mathbf{z}) = f_{L+1} \circ f_{L} \circ \cdots \circ f_1(\mathbf{z}),
\end{equation}
where, $f_i(\mathbf{z}_{i-1}) = h_i(\mathbf{W}_i^T \mathbf{z}_{i-1} + \mathbf{b}_i)$, with $\mathbf{W}_i \in \mathbb{R}^{d_{i-1} \times d_{i}}, \mathbf{b}_i \in \mathbb{R}^{d_{i}}, \mathbf{z}_i \in \mathbb{R}^{d_{i}}, \mathbf{z}_0 = \mathbf{z}, \mathbf{z}_{L+1} = g(\mathbf{z})$, and $h_i(\cdot)$ is a suitable nonlinear function applied elementwise to its argument.
$h_{L+1}$ is set to be the identity function (since we are dealing with unconstrained real-valued outputs) and the other $h_i$s are set as the hyperbolic tangent function, a standard choice in the literature.
The matrices $\mathbf{W}_i$s and the vectors $\mathbf{b}_i$s are called the `weights' and `biases' of the DNN and here we denote all of them collectively as $\boldsymbol{\theta} = \{\mathbf{W}_1, \mathbf{W}_2, \cdots, \mathbf{W}_{L+1}, \mathbf{b}_1, \mathbf{b}_2, \cdots, \mathbf{b}_{L+1} \}$.
The full surrogate is therefore expressed as:
\begin{equation}
\label{eqn:full_surr}
\hat{f}(\boldsymbol{\xi}; \boldsymbol{\theta}) = g(h(\mathbf{Q})^T \boldsymbol{\xi}; \boldsymbol{\theta}),
\end{equation}
where the unknown parameters $(\boldsymbol{\theta}, \mathbf{Q})$ can be optimized through standard gradient-descent techniques. In this work, we use the famous Adaptive Moments (ADAM) optimization method \cite{kingma2014adam}.
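Putting the pieces together, a compact and purely illustrative PyTorch sketch of the full surrogate and its joint optimization with ADAM might look as follows; the layer widths, learning rate, and weight decay echo the settings used in the examples, while the placeholder data only fixes the shapes.
\begin{verbatim}
# Sketch: joint optimization of the projection parameters Q and the DNN link
# function g with ADAM (placeholder data; settings are illustrative).
import torch

def gram_schmidt(Q):                        # as in the earlier sketch
    cols = []
    for i in range(Q.shape[1]):
        w = Q[:, i]
        for u in cols:
            w = w - (u @ w) * u
        cols.append(w / w.norm())
    return torch.stack(cols, dim=1)

D, d, N = 20, 1, 50
X = torch.randn(N, D)                       # training inputs (placeholder)
y = torch.randn(N, 1)                       # training outputs (placeholder)

Q = torch.randn(D, d, requires_grad=True)
g = torch.nn.Sequential(                    # link function g: R^d -> R
    torch.nn.Linear(d, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
opt = torch.optim.Adam(
    [{"params": g.parameters(), "weight_decay": 1e-4}, {"params": [Q]}],
    lr=1e-3,
)

for step in range(3000):
    W = gram_schmidt(Q)                     # differentiable orthonormal projection
    loss = torch.mean((g(X @ W) - y) ** 2)  # mean-squared error
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}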
\bibliographystyle{asmems4.bst}
\input{acknowledgement.tex}
\section{Introduction}
The redshifted 21cm emission from Neutral Hydrogen (\textrm{H\textsc{i}}) gas provides an alternative view into the structure, dynamics, and evolution of galaxies. H\textsc{i}\ gas is the fundamental fuel for molecular gas and star formation, and plays an essential role in galaxy formation and evolution and in models thereof.
Blind H\textsc{i}\ surveys of the local Universe provide constraints on the H\textsc{i}\ abundance via the H\textsc{i}\ mass function \citep{Jones:2020ik, 2003AJ....125.2842Z} and the global H\textsc{i}\ abundance $\Omega_{\textrm{H\textsc{i}}}=(4.3\pm0.3)\times 10^{-4}H_0/70$ \citep{Martin:2010ij}. Spectral stacking techniques have also been used (see e.g. \citealt{Hu:2019xmd} and references therein).
Targeted deep surveys investigate the H\textsc{i}\ scaling relations with galaxy properties such as stellar mass, star formation activity, or star formation efficiency with multi-wavelength data. It has been inferred that the cold gas properties of galaxies are tightly related to their star-forming properties and less so to their morphology, with scatter on the relations being driven by inflow mechanisms and dynamics \citep{2019MNRAS.490.4060C, Chen:2019gz}. H\textsc{i}\ gas mass has been found to strongly anti-correlate with stellar mass, particularly when traced by NUV-r colour \citep{2018MNRAS.476..875C}. Multiple studies on the H\textsc{i}\ deficiency in high density regions such as the Virgo cluster confirm the high impact of environment on atomic gas abundance (see \citealt{Cortese:2011sj, 2014MNRAS.444..667D, Reynolds:2020tm}). \citet{2020arXiv200914585B} studied environmental effects using an infra-red selected sample of H\textsc{i}\ detections, finding a reduced scatter in scaling relations for isolated galaxies. Some investigations have been made into the relation between H\textsc{i}\ and its host halo mass to constrain an H\textsc{i}\ halo occupation distribution, see for example \citet{Guo:2020kt} or \citet{Paul:2017bi}. The most important limitations of both blind and targeted H\textsc{i}\ surveys are that sensitivity restricts them to relatively \textrm{H\textsc{i}}-rich galaxy samples and that their sample sizes are volume-limited. Additionally, there is little information on H\textsc{i}\ abundances and scaling relations beyond our local Universe \citep{Crighton:2015pza, Padmanabhan:2015wja,Hu:2019xmd}.
The technique of H\textsc{i}\ intensity mapping has been proposed to perform fast observations of very large cosmic volumes in a wide redshift range. Intensity mapping does not rely on detecting individual galaxies, but instead measures the integrated redshifted spectral line emission without sensitivity cuts in large voxels on the sky, with the voxel volume determined by the radio telescope beam and frequency channelisation, see e.g. \citep{Battye2004, Chang_2008, Wyithe_2009, Mao_2008, peterson2009, Chang:2010jp, Seo_2010, Ansari_2012}. Using the H\textsc{i}\ signal as a biased tracer for the underlying matter distribution, it is possible to probe the large-scale structure of the Universe, and constrain both global H\textsc{i}\ properties and cosmological parameters. Particularly, the amplitude of the H\textsc{i}\ intensity mapping clustering signal scales with the global H\textsc{i}\ energy density $\Omega_{\rm HI}$ and can constrain it for various redshifts.
The next few years will see data from a number of H\textsc{i}\ intensity mapping experiments, for example the proposed MeerKLASS survey at the Square Kilometre Array (SKA) precursor MeerKAT \citep{Santos:2017qgq,Wang:2020lkn}, an H\textsc{i}\ survey at the 500m dish telescope FAST \citep{Hu:2019okh}, and multiple surveys with the SKA using the single-dish mode of operation \citep{Battye:2012tg, Bull:2014rha, Santos:2017qgq,Bacon:2018dui}. Other international experiments include the CHIME project \citep{Bandura:2014gwa}, HIRAX \citep{Newburgh:2016mwi}, and Tianlai \citep{Li:2020ast, Wu:2020jwm}.
The observed intensity maps suffer from foreground contamination from Galactic and extra-galactic sources. Our own Galaxy emits synchrotron and free-free radiation up to three orders of magnitude brighter than the redshifted 21cm line \citep{Di_Matteo_2002}, which needs to be subtracted from the data (see e.g. \citealt{Wolz:2013wna,Alonso:2014dhk, Shaw:2014khi, Olivari:2015dc,Cunnington:2019lvb, Carucci:2020ca}).
To date, the intensity mapping signal has not been detected in auto-correlation due to calibration errors, radio frequency interference, residual foregrounds and noise systematics \citep{Switzer_2013,Switzer_2015, 2018MNRAS.478.2416H, 2020arXiv200701767L}.
The impact of these contaminants can be reduced by cross-correlating the H\textsc{i}\ signal with optical galaxy surveys. The first successful detection with Green Bank Telescope (GBT) data was achieved at $0.6<z<1.0$ using cross-correlations with the DEEP2 survey \citep{Chang:2010jp}, followed by cross-correlations with the WiggleZ Dark Energy survey \citep{Masui:2012zc}. The GBT-WiggleZ correlations at $z=0.8$ constrained the combination of the H\textsc{i}\ abundance $\Omega_\textrm{H\textsc{i}}$ and linear H\textsc{i}\ bias $b_\textrm{H\textsc{i}}$, finding $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm Wig}} = [4.3 \pm 1.1] \times 10^{-4}$, where $r_{\textrm{H\textsc{i}},{\rm Wig}}$ is the galaxy-H\textsc{i}\ cross-correlation coefficient. The significance of detection was $7.4\sigma$ for the combined 1hr and 15hr field observations \citep{Masui:2012zc}.
More recently, the Parkes radio telescope reported a cross-correlation detection at $z\simeq 0.1$ using galaxies from the 2dF survey \citep{Anderson_2018}. In this study, upon dividing the galaxies into red and blue colours, a drop in amplitude on small scales was detected for the red sample. This result is in agreement with aforementioned studies on H\textsc{i}\ in dense environments as well as with theoretical predictions on the H\textsc{i}\ -galaxy cross-correlation of a correlation coefficient dependent on the H\textsc{i}\ content of the optical galaxy sample \citep{2016MNRAS.458.3399W}. Additionally, it is also predicted that the amplitude of the shot noise on the cross-power spectra scales with the averaged H\textsc{i}\ mass of the galaxy sample \citep{2017MNRAS.470.3220W}.
In this work, we present the analysis of the extended and deepened 1-hr field observations from the previous study in \citet{Masui:2012zc}. We apply the foreground subtraction technique \textsc{FastICA} as outlined in \citet{Wolz:2015lwa} and, for the first time, construct the \textsc{FastICA} transfer function using mock lognormal simulations. We cross-correlate the H\textsc{i}\ intensity mapping data with three distinct galaxy samples,
the Emission Line Galaxy (ELG) and Luminous Red Galaxy (LRG) samples from the eBOSS survey \citep{Raichoor:2020jcl,Ross:2020lqz,Alam:2020sor} as well as the previously considered WiggleZ survey \citep{blake2011}. This leads to a robust confirmation of detection with multiple galaxy samples, as well as a first attempt to quantify the cross-correlation coefficient between H\textsc{i}\ and the galaxy sample properties. We also qualitatively compare our measurements with predictions from the semi-analytic galaxy evolution model DARK SAGE \citep{2016MNRAS.461..859S} to investigate the H\textsc{i}\ contents of the samples. Finally, we use the cross-correlation measurements to constrain the quantity $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$, and also provide estimates for $\Omega_\textrm{H\textsc{i}}$ using external estimates for $b_\textrm{H\textsc{i}}$ and $r_{\textrm{H\textsc{i}},{\rm opt}}$.
The paper is organised as follows: In \secref{sec:data}, we describe the GBT intensity maps, and the WiggleZ and eBOSS galaxy samples. We also give a brief description of our simulations. In \secref{sec:FGremoval}, we outline the application of the \textsc{FastICA} technique to the GBT maps, as well as the construction of the foreground transfer function. In \secref{sec:PSresults} we present and discuss our cross-correlation results. In \secref{sec:HIconstraints} we derive the H\textsc{i}\ constraints. We conclude in \secref{sec:conclusions}. The appendix contains details on our mock galaxy selection in \autoref{app:samples} as well as figures of our covariance analysis in \autoref{appb}.
\section{Description of data products}
\label{sec:data}
\subsection{Green Bank Telescope intensity maps}
The H\textsc{i}\ intensity mapping data from the Green Bank Telescope (GBT) used in this study is located in the 1hr field of the WiggleZ Dark Energy survey at right ascension $5.43\degree < {\rm RA} <18.9 \degree$ and declination $-2.55\degree < {\rm DEC} < 4.8\degree$. This field was observed with the receiver band at $700<\nu<900 \, \rm{MHz}$, which results in a 21cm redshift range of $0.6<z<1.0$. The data is divided into $N_{\nu}=256$ frequency channels with width $\delta\nu=0.78 \, \rm MHz$, after rebinning from the original 2048 correlator channels. The observational spatial resolution of the maps, quantified by the full width half maximum (FWHM) of the GBT telescope beam, evolves from $\rm{FWHM}\approx0.31\deg$ at $\nu=700 \, \rm{MHz}$ to $\rm{FWHM}\approx 0.25\deg$ at $\nu=900 \, \rm{MHz}$. The maps are pixelised with spatial resolution angle of $\delta\theta\approx \delta\phi=0.067\deg$, which results in $N_{\rm RA}=217$ pixels in right ascension and $N_{\rm DEC}=119$ pixels in declination. The pixel size was chosen such that approximately 4 pixels cover the beam at mid-frequency $\nu\approx800 \, \rm MHz$, and the instrumental noise can be approximated as uncorrelated between pixels.
The maps are an extended version of the previously published observations described in \citet{Masui:2012zc} with added scans to increase the area to $100\deg^2$ and survey depth to $100 \, \rm hrs$ total integration time collected from 2010-2015. The details on Radio Frequency Interference (RFI) flagging, calibration, and map making procedures can be found in \citet{Masui:2012zc, Switzer_2013, 2013PhDT.......570M}.
As described in previous studies, the GBT intensity maps suffer a number of instrumental systematic effects. To reduce the impact of the systematic effects, the following measures have been taken:
\begin{itemize}
\item RFI and resonance: The data is contaminated by RFI and two telescope resonance frequencies. \autoref{fig:Tmean_redshift} shows the mean absolute temperature of each channel as a function of redshift. The red line shows the initial data with strong RFI contamination at the lowest redshift as well as towards the highest redshift end. The RFI flagging causes an overall signal loss of $\approx 11\%$; more details on the RFI flagging process can be found in \citet{Switzer_2013}. The two telescope resonances can be seen at $\nu=798 \, \rm MHz$ and $\nu=817 \, \rm MHz$, which correspond to the dips in amplitude seen at $z=0.78$ and $z=0.74$. To minimise these effects, we discard the lowest 30 channels in redshift and the intervals around the resonances before the foreground removal.
\item Sub-seasons: The time-ordered data is divided into 4 seasons $\rm \{A, B, C, D\}$. Thermal noise is uncorrelated between these seasons, which have been chosen to have similar integration depth and coverage \citep{Switzer_2013}. More specifically, the Gaussian sampling noise and time-dependent RFI in each season are independent, however, observational systematics in seasons can correlate. The individual season data is shown as faded purple and yellow lines in \autoref{fig:Tmean_redshift}.
\item Masking: The noise properties are highly anisotropic towards the spatial edges of the map due to the scanning strategy and resulting anisotropic survey depth. We therefore mask out 15 pixels per side, which significantly reduces residual anisotropic noise in the foreground subtracted maps. The mean temperature of the maps decreases by about an order of magnitude between the original and the masked foreground subtracted data, marked by the purple and yellow lines in \autoref{fig:Tmean_redshift}.
The solid purple and yellow lines show the signal averaged over the four seasons, and the faded lines around them show the individual seasons.
\item Beam: The beam of the instrument can be approximated by a symmetric Gaussian function with a frequency-dependent FWHM with maximum ${\rm FWHM}_{\rm max}\approx 0.31\deg$. In order to aid the data analysis as well as to minimise systematics caused by polarisation leakage of the receiver \citep{Switzer_2013}, we convolve the data to a common Gaussian beam with ${\rm FWHM}=1.4\,{\rm FWHM_{max}}$, which results in an angular resolution of ${\rm FWHM}=0.44\deg$. This strategy is adopted as polarization leakage is considered the most significant contaminant in the data. However, we acknowledge that this would not be an optimal strategy to mitigate effects of beam chromaticity, as shown in \cite{FgSKA}.
\end{itemize}
\autoref{fig:Tmean_redshift} shows that even after applying these measures and removing foregrounds modelled by 36 Independent Components, the mean temperature of the H\textsc{i}\ maps is about an order of magnitude higher than the theoretically predicted H\textsc{i}\ brightness temperature. We model this following \citet{Chang:2010jp} and \citet{Masui:2012zc} as:
\begin{equation}
T_{\rm HI}(z)=0.29\frac{\Omega_{\rm HI}}{10^{-3}} \left(\frac{\Omega_m+\Omega_{\Lambda}(1+z)^{-3}}{0.37}\right)^{-0.5}\left( \frac{1+z}{1.8}\right)^{0.5} {\rm mK} \,
\label{eq:thi}
\end{equation}
which is shown as the green dotted line. We are unable to directly detect the H\textsc{i}\ signal with our current pipeline in this systematics-dominated data.
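For reference, the prediction of \autoref{eq:thi} can be evaluated with a few lines of Python; the values of $\Omega_{\rm HI}$ and $\Omega_m$ below are illustrative assumptions only.
\begin{verbatim}
# Sketch: evaluate the analytic HI brightness temperature prediction (in mK),
# assuming Omega_HI = 5e-4 and Omega_m = 0.3 (illustrative values only).
import numpy as np

def T_HI(z, Omega_HI=5e-4, Omega_m=0.3):
    Omega_L = 1.0 - Omega_m
    return (0.29 * (Omega_HI / 1e-3)
            * ((Omega_m + Omega_L * (1.0 + z) ** -3) / 0.37) ** -0.5
            * ((1.0 + z) / 1.8) ** 0.5)

z = np.linspace(0.6, 1.0, 5)
print(T_HI(z))                              # mean brightness temperature in mK
\end{verbatim}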
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figs/Tmean_data_simulation_lognormal_redshift.png}
\caption{Mean of the absolute temperature of the GBT intensity maps as a function of redshift, binned into 256 frequency channels. The solid lines represent the mean over the 4 GBT seasons with original data (red), the \textsc{FastICA} foreground subtracted data with $N_{\rm IC}=36$ (purple), and the masked, \textsc{FastICA} foreground subtracted data with $N_{\rm IC}=36$ (yellow). The faded purple and yellow lines indicate the individual seasons. The green dotted line represents the analytical brightness temperature prediction from \autoref{eq:thi}, the pink dashed line the averaged temperature of the lognormal simulations used for the foreground removal transfer function (see \secref{sec:FGremoval} for details), and the teal dashed line the numerical prediction from the DARK SAGE simulation described in \secref{sec:data}.}
\label{fig:Tmean_redshift}
\end{figure*}
\subsection{Galaxy samples}
In this study, we consider three galaxy samples overlapping with the H\textsc{i}\ intensity maps in the 1hr field. We use the WiggleZ Dark Energy Survey galaxy sample based on \citet{blake2011}, as previously presented in \citet{Masui:2012zc}. For the first time, we also use the SDSS Emission Line Galaxy (ELG) and Luminous Red Galaxy (LRG) samples of the eBOSS survey (DR16) for the \textrm{H\textsc{i}}-galaxy cross-correlation analysis.
In \autoref{fig:Sel_gal}, we show the spatial footprint of each survey in the 1hr field, where dark patches indicate unobserved regions and the red lines mark the edge masking applied as part of the systematics mitigation of the GBT data. The LRG and WiggleZ samples both have a reduced spatial overlap with the GBT data as they contain unobserved regions; however, since we introduce the red mask, this effect is somewhat diminished. The ELG sample has the most complete overlap with the GBT data.
{\bf WiggleZ} -
The WiggleZ galaxies are part of the WiggleZ Dark Energy Survey \citep{Drinkwater}, a large-scale spectroscopic survey of emission-line galaxies selected from UV and optical imaging. These are active, highly star-forming objects, and it has been suggested that they contain a large amount of H\textsc{i}\ gas to fuel their star-formation.
The selection function \citep{2010MNRAS.406..803B} has angular dependence determined primarily by the UV selection, and redshift coverage favouring the $z = 0.6$ end of the radio band. The galaxies are binned into volumes with the same pixelization as the radio maps and divided by the selection function, and we consider the cross-power with respect to optical over-density.
{\bf eBOSS ELG} - The extended Baryon Oscillation Spectroscopic Survey (eBOSS; \citealt{dawson_sdss-iv_2016}), is part of the SDSS-IV experiment \citep{2017AJ....154...28B}, and has spectroscopically observed $173,736$ ELGs in the redshift range $0.6<z<1.1$ \citep{Raichoor:2020jcl}. Targets were colour-selected from the DECaLS photometric survey, with an algorithm designed to select OII emitting galaxies with high star-formation rates. Spectra were then obtained using the BOSS spectrographs \citep{2013AJ....146...32S} mounted on the 2.5-meter Sloan telescope \citep{2006AJ....131.2332G}. Details of the sample, including standard Baryon Acoustic Oscillation (BAO) and Redshift Space Distortion (RSD) measurements can be found in \citet{Raichoor:2020jcl,Tamone:2020qrl,deMattia:2020fkb}.
{\bf eBOSS LRG} - Luminous Red Galaxies were observed by eBOSS from a target sample selected \citep{Prakash:2015eua} from SDSS DR13 photometric data \citep{Albareti:2016xlm}, combined with infrared observations from the WISE satellite \citep{2016AJ....151...36L}. This sample was selected to be composed of large, old, strongly-biased galaxies, typically found in high mass haloes. In total, the sample contains $174,816$ LRGs with measured redshifts between $0.6<z<1.0$. In our analysis we do not combine the eBOSS LRGs with the $z > 0.6$ BOSS CMASS galaxies as in the standard BAO and RSD measurements \citep{Bautista:2020ahg,Gil-Marin:2020bct}. Possible systematics related to the eBOSS LRG sample have been quantified via realistic N-body-based mocks in \citet{2021MNRAS.505..377R}. The cosmological interpretation of the BAO and RSD results from all eBOSS samples was presented in \citet{Alam:2020sor}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/Sel_spatial_Wig.png} \\
\includegraphics[width=\columnwidth]{figs/Sel_spatial_ELG.png}
\includegraphics[width=\columnwidth]{figs/Sel_spatial_LRG.png}
\caption{Spatial footprint of the galaxy samples. \emph{From top to bottom:} WiggleZ, ELG, and LRG samples. The survey window is binned on the same spatial pixelisation as the GBT data with pixel size of $\delta\theta=\delta\phi=0.067\deg$.
}
\label{fig:Sel_gal}
\end{figure}
\autoref{fig:Nz_gal} shows the galaxy density distribution with redshift, $N(z)$, where we binned the data according to the frequency bins of the GBT intensity mapping data. This implies that the bin size is constant in frequency rather than redshift, and the co-moving volume of the bins evolves with redshift. The line-of-sight resolution is very high with an average redshift bin size of $\delta z \approx 0.0016$. The galaxy density normalisation has taken into account the evolving co-moving volume of the bins.
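As an illustration of this constant-frequency binning, the short Python sketch below converts an assumed GBT band of 700--900\,MHz split into 256 channels (the channelisation quoted in \autoref{fig:Tmean_redshift}) into redshift bins; the band edges and channel number are stated assumptions for this example only.
\begin{verbatim}
import numpy as np

NU_HI = 1420.405752   # MHz, rest frequency of the 21cm line

# assumed GBT band and channelisation: 700-900 MHz in 256 channels (illustrative)
nu_edges = np.linspace(700.0, 900.0, 257)            # MHz
nu_mid = 0.5 * (nu_edges[1:] + nu_edges[:-1])
z_mid = NU_HI / nu_mid - 1.0
dz = NU_HI * np.diff(nu_edges) / nu_mid**2           # constant in nu, not in z

print(z_mid.min(), z_mid.max())   # ~0.58 to ~1.03
print(dz.mean())                  # ~0.0017, the same order as the quoted ~0.0016
\end{verbatim}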
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/Ngal_z_Wig_eBOSS_log.png}
\caption{Galaxy density distribution with redshift. The solid lines represent the mean of the random catalogues used to determine the selection function, and the markers show the data points of the samples. }
\label{fig:Nz_gal}
\end{figure}
We can see that both the WiggleZ galaxy and eBOSS LRG samples peak towards the low-redshift end of the data, around $z \sim 0.6$, and that the density of the LRGs drops significantly faster with redshift compared to the other samples. The eBOSS ELG distribution is at higher redshift and peaks around $z \sim 0.8$ with a significant signal density at the highest redshift $z \sim 1.0$. As the low redshift end of the intensity maps is significantly contaminated by RFI, we lose the peak of the LRG and WiggleZ sample in the cross-correlation. The total number of galaxies for the samples is significantly reduced from $N_{\rm Wig,all}=7445$, $N_{\rm LRG,all}=5632$, and $N_{\rm ELG,all}=15553$ to $N_{\rm Wig}=4815$, $N_{\rm LRG}=3281$, and $N_{\rm ELG}=8534$, respectively.
\subsection{Simulations}
In order to examine the underlying astrophysics of \textrm{H\textsc{i}}-galaxy cross-correlations, we use the online service ``Theoretical Astrophysical Observatory'' (TAO\footnote{https://tao.asvo.org.au/}) to create a mock galaxy catalogue. We create the galaxy distribution using the semi-analytic galaxy formation model DARK SAGE \citep{2016MNRAS.461..859S} run on the merger trees of the Millennium simulation \citep{2006Natur.440.1137S} with a box of comoving side length $500\, {\rm Mpc}/h$. DARK SAGE is a modified version of SAGE \citep{2006MNRAS.365...11C}, which includes a pressure-based description of the atomic and molecular gas components of the cold gas based on an advanced computation of disk structure and cooling processes. DARK SAGE is calibrated to reproduce the Stellar, H\textsc{i}\ and ${\rm H}_2$ Mass Functions as well as the fraction of H\textsc{i}\ to stellar mass as a function of stellar mass as observed at $z=0$. For more details, we refer the reader to \citet{2016MNRAS.461..859S}. In our study, we create a lightcone with the same survey geometry covering the redshift range $0.6<z<1.0$, and the same spatial and redshift binning as the GBT data.
We post-process the galaxy catalogue from TAO to create H\textsc{i}\ intensity maps as well as the three optical galaxy samples. We apply the same resolution-motivated mass cut as in \citet{2016MNRAS.461..859S} and only use galaxies with $M_*>10^{8.5}M_{\rm sun}$ for our analysis. This might be a slightly conservative choice compared to, for example, \citet{2020MNRAS.493.5434S}, but the specific purpose of this simulation is to examine the H\textsc{i}\ content of the galaxy samples rather than the universal properties of the H\textsc{i}\ maps. Furthermore, \citet{2020MNRAS.493.5434S} showed that for low redshift observations, resolution effects of Millennium-based simulations are negligible for $k<1h/{\rm Mpc}$.
For the H\textsc{i}\ intensity maps, we sum the H\textsc{i}\ mass $M_{i,{\rm HI}}$ of all galaxies falling into the same pixel $i$ with spatial dimension $\delta \phi=\delta \theta=0.067\deg$ and the same frequency bins as the data, where we also include redshift space distortions via line-of-sight peculiar velocities of the galaxies. We transform the maps in brightness temperature using
\begin{equation}
T_{\rm HI}(x_i) = \frac{ 3A_{12}\hbar c^3 }{ 32\pi m_{\rm H} k_{\rm B}\nu_{\rm HI}^2} \frac{(1+z_i)^2}{H(z_i)}\frac{M_{i, \rm HI}}{V_{\rm pix}} \, ,
\end{equation}
with $\hbar$ the reduced Planck constant, $ k_{\rm B}$ the Boltzmann constant, $m_{\rm H}$ the Hydrogen atom mass, $\nu_{\rm HI}$ the rest frequency of the H\textsc{i}\ emission line, $c$ the speed of light, $A_{12}$ the transition rate of the spin flip, and $V_{\rm pix}$ the co-moving volume of the pixel at mid-redshift.
We also remove the mean temperature $\bar{T}_\textrm{H\textsc{i}}$ of each map to create temperature fluctuation maps, also referred to as over-temperature maps. We then convolve the resulting maps with a Gaussian beam with ${\rm FWHM}=0.44\deg$.
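A minimal Python sketch of this map-making step, assuming a pre-gridded H\textsc{i}\ mass cube, is given below; the values of $H(z)$, the pixel volume, and the toy mass cube are illustrative placeholders, and the smoothing uses the pixel size quoted in \autoref{fig:Sel_gal}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Physical constants (SI units)
A12   = 2.87e-15         # s^-1, 21cm spontaneous emission rate
hbar  = 1.0546e-34       # J s, reduced Planck constant
c     = 2.998e8          # m/s
m_H   = 1.6735e-27       # kg
k_B   = 1.3807e-23       # J/K
nu_HI = 1420.405752e6    # Hz

def mass_to_temperature(M_HI, z, H_z, V_pix):
    """Brightness temperature [K] for HI mass M_HI [kg] in a pixel of
    comoving volume V_pix [m^3], with H(z) in s^-1."""
    pref = 3.0 * A12 * hbar * c**3 / (32.0 * np.pi * m_H * k_B * nu_HI**2)
    return pref * (1.0 + z)**2 / H_z * M_HI / V_pix

# toy HI-mass cube with the GBT map dimensions (RA, DEC, frequency)
cube_M = np.random.lognormal(0.0, 1.0, size=(187, 89, 190)) * 2e39   # ~1e9 Msun in kg
T = mass_to_temperature(cube_M, z=0.78, H_z=3.5e-18, V_pix=1.3e69)   # illustrative H(z), V_pix

# over-temperature maps: remove the mean of each frequency channel
dT = T - T.mean(axis=(0, 1), keepdims=True)

# convolve with a Gaussian beam of FWHM = 0.44 deg on a 0.067 deg pixel grid
sigma_pix = (0.44 / 0.067) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
dT_smooth = gaussian_filter(dT, sigma=(sigma_pix, sigma_pix, 0.0))
\end{verbatim}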
Based on our galaxy lightcone catalogue, we additionally create optical, near-infrared and UV band emissions for each galaxy with the Spectral Energy Distribution (SED) module of TAO, using the Chabrier Initial Mass Function \citep{Conroy_2012}. The SED is based on the star-formation history primarily dependent on stellar mass, age, and metallicity of each galaxy. Galaxy photometry is applied after the construction of the SED. In our case, we use the SDSS filters $\{g, r, i, z\}$, the GALEX ultra-violet filters NUV and FUV, as well as the near-infrared filter IRAC1 as an approximation for the WISE filter W1.
We apply the same observational colour cuts to the simulated lightcone to create mock galaxy samples resembling the eBOSS LRG, eBOSS ELG and WiggleZ selections, following the approach in \citet{2016MNRAS.458.3399W}. Details on the target selection are given in Appendix~\ref{app:samples}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/Mockgal_redshift_distribution_Vnorm.png}
\caption{The galaxy density of the mock galaxy samples from the DARK SAGE simulation as a function of redshift.}
\label{fig:Nz_sim}
\end{figure}
In \autoref{fig:Nz_sim}, we show the redshift distribution of the resulting mock galaxy samples from the semi-analytic simulation. We note that the overall galaxy numbers are off by several factors as there are many observational subtleties that cannot be replicated by our approach. In addition, the eBOSS ELG-like sample peaks at slightly lower redshift around $z \sim 0.7$ compared to the actual data. However, we can see that the overall trends of the galaxy redshift distribution are present in our mock samples, and we believe that they qualitatively sample the respective galaxy types and allow us to investigate the relation between galaxy types and their H\textsc{i}\ abundance. In this work, we use the simulation to qualitatively study the predicted H\textsc{i}\ abundance in the galaxy samples and examine their impact on the cross-correlation power spectrum. In particular, we investigate the non-linear shape of the correlations and the amplitude of the predicted cross-shot noise. We only perform qualitative rather than quantitative comparisons between the power spectra of the semi-analytic simulation and the data.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/Mockgal_NUV-r_MHI.png}
\caption{The H\textsc{i}\ mass $M_{\rm HI}$ of our mock galaxy lightcone as a function of galaxy colour, $({\rm NUV}-r)$. The full light cone of $N=8.7 \cdot 10^6$ galaxies with $M_*>10^{8.5}M_{\rm sun}$ spanning $0.6<z<1.0$ is represented in grey, and the galaxy samples in coloured dots.}
\label{fig:Colour_MHI_sim}
\end{figure}
In \autoref{fig:Colour_MHI_sim}, we present the galaxy colour to H\textsc{i}\ mass diagram, where we use the combination of the GALEX-NUV and SDSS-$r$ filters to project the galaxies onto the red-blue colour scale. The NUV-$r$ colour division has been shown to be a good proxy for the star formation activity of the objects, see e.g. \cite{Cortese:2011sj}. We can see that all three samples occupy different regions of the colour diagram, with WiggleZ galaxies probing the bluest, most highly star-forming objects, which are also rich in H\textsc{i}\ gas. The ELG sample contains slightly less blue systems with lower star formation and spans a wider range of H\textsc{i}\ masses. The LRG selection incorporates objects redder in colour; however, since these objects must be large and luminous enough for detection at such high redshift, they are still relatively H\textsc{i}\ rich.
\section{Foreground Subtraction}
\label{sec:FGremoval}
\subsection{\textsc{FastICA}}
Fast Independent Component Analysis (\textsc{FastICA}) \citep{Hyvrinen1999FastAR} is one of the most popular methods for 21cm foreground cleaning and has been tested on simulated data \citep{Chapman:2012yj,Wolz:2013wna, Cunnington:2019lvb} as well as real data from the GBT \citep{Wolz:2015lwa} and LOFAR \citep{Hothi:2020dgq}. As with most foreground removal methods, \textsc{FastICA} exploits the fact that the foregrounds, dominated by synchrotron and free-free emission, scale smoothly in the line-of-sight (frequency) direction \citep{2003MNRAS.346..871O,Seo_2010,PhysRevD.83.103006}, whereas the H\textsc{i}\ signal from the large-scale structure is well approximated as Gaussian in frequency.
We apply \textsc{FastICA} to the GBT intensity mapping data cube in order
to remove the foregrounds and non-Gaussian systematics and noise. We provide a brief summary of the method here, and refer the interested reader to \cite{Wolz:2013wna,Wolz:2015lwa} for more details.
\textsc{FastICA} is a blind component separation method designed to divide a mixture of signals into its individual source components, commonly referred to as the ``Cocktail Party problem''. It operates on the assumption that the observed signal is composed of statistically independent sources which are mixed in a linear manner. More specifically,
the technique solves the linear problem
\begin{equation}
\boldsymbol x= \mathbf A \boldsymbol s + \epsilon=
\sum_{i=1}^{N_{\rm{IC}}}\boldsymbol{a_i} s_i + \epsilon,
\label{eq:ica}
\end{equation}
where $\boldsymbol x$ is the mixed signal, $\boldsymbol s$ represents the $N_{\rm IC}$ independent components (ICs), and $\mathbf A$ the mixing matrix. $\epsilon$ is the residual of the analysis. The amplitude of each IC $s_i$ is given by the mixing modes $\boldsymbol{a_i}$. \textsc{FastICA}
separates the signal into components by using the Central Limit theorem,
such that the non-Gaussianity of the probability density function of
each IC is maximised. This implies that \textsc{FastICA} by definition only incorporates data into $\mathbf A \boldsymbol s$ that will maximise the non-Gaussianity. The residual $\epsilon$ is obtained by subtracting the $N_{\rm IC}$ components from the original data, and should contain mostly Gaussian-like signal.
In our application of \textsc{FastICA}, the input data is of dimension $N_{\rm pix} \times N_{\nu}$ and the algorithm constructs the mixing matrix $\mathbf A$ with dimension $N_{\rm IC} \times N_{\nu}$ and the ICs $\boldsymbol s$ with dimension $N_{\rm pix} \times N_{\rm IC}$.
\textsc{FastICA} incorporates any features with frequency correlation, such as point sources, diffuse foregrounds and non-Gaussian noise and systematics into the ICs. It also identifies frequency-localised RFI contributions with weak correlations, as they usually exhibit strong non-Gaussian spatial features. The residual of the component separation should, in theory, only contain the H\textsc{i}\ signal and the Gaussian telescope noise.
The number of ICs ($N_{\rm IC}$) used in the component separation is a
free parameter and can not be determined by \textsc{FastICA}.
In the following sub-sections, we carefully examine the sensitivity of the
foreground-subtracted data to different choices of $N_{\rm IC}$, ensuring that our results do not depend on this choice.
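A minimal Python sketch of this cleaning step, using the \textsc{FastICA} implementation of scikit-learn, is shown below; the array shapes follow the description above, while the placeholder data and the pre-processing of our actual pipeline (masking, weighting, season splitting) are not reproduced.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import FastICA

# maps flattened to a (N_pix, N_nu) matrix: pixels as samples, channels as features
N_pix, N_nu = 187 * 89, 190
data = np.random.standard_normal((N_pix, N_nu))   # placeholder for the GBT maps

N_IC = 36
ica = FastICA(n_components=N_IC, max_iter=1000)
s = ica.fit_transform(data)        # independent components s, shape (N_pix, N_IC)
A = ica.mixing_                    # mixing matrix, shape (N_nu, N_IC)

# reconstruct the foreground + systematics model A s and keep the residual
model = ica.inverse_transform(s)   # equivalent to s @ A.T + ica.mean_
residual = data - model            # cleaned maps, to be reshaped back to (RA, DEC, nu)
\end{verbatim}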
\subsection{Transfer Function}
\label{subsec:transfer}
Foreground subtraction with \textsc{FastICA} and its application to simulations has been thoroughly investigated by many studies \citep{Wolz:2013wna, Alonso:2014dhk, 2020MNRAS.495.1788A, Cunnington:2020njn}, but the vast majority of simulations published to date have been highly idealised and do not include any instrumental effects other than Gaussian noise. In this idealised setting, \textsc{FastICA} has been found to remove foregrounds very effectively for low numbers of ICs, starting from $N_{\rm IC}=4$. We note that these numbers also depend on the sophistication of the foreground models; for example, see \citet{Cunnington:2020njn} for $N_{\rm IC}>4$ in the case where polarisation leakage is included in the simulations.
\textsc{FastICA} applied to systematics dominated data can effectively remove non-Gaussian and anisotropic systematics \citep{Wolz:2015lwa}, as well as the astrophysical foregrounds. This means that for increasing $N_{\rm IC}$, the algorithm incorporates more subtle signals as well as more local features into the components. This can significantly reduce the presence of noise and systematics in the data, however, it could also lead to H\textsc{i}\ signal loss.
In the following, we investigate the signal loss for different numbers of $N_{\rm IC}$ in the presence of systematics and use the methodology presented in \cite{Switzer_2015} to construct the transfer function to correct for H\textsc{i}\ signal loss.
In absence of a telescope simulator for the (unknown) systematics, we obtain the transfer function by injecting mock H\textsc{i}\ signal from simulations into the observed maps before foreground removal. We then process the combined maps with \textsc{FastICA}, and determine the H\textsc{i}\ signal loss by cross-correlating the cleaned maps with the injected H\textsc{i}\ simulation. In order to reduce noise, we use the average of 100 H\textsc{i}\ realisations and we also subtract the cleaned GBT data from the combined data before cross-correlating with the injected signal.
\noindent We describe the process in detail below:
\begin{itemize}
\item We create $N_m=100$ mock simulations $m_i$ of lognormal halo distributions using the python package \textsc{powerbox} \citep{Murray2018} with a halo mass limit of $M_{h,{\rm min}}=10^{12.3}M_\odot/h$.
\item We populate each dark matter halo with a H\textsc{i}\ mass following a simple H\textsc{i}\ halo mass relation as in \cite{2019MNRAS.484.1007W}.
\item We grid the H\textsc{i}\ mass of each halo to the same spatial and frequency resolution as the GBT data at median redshift $z \approx 0.8$.
\item We convert the H\textsc{i}\ grid into brightness temperature $T_{\rm HI}$ using \autoref{eq:thi}, re-scale the overall averaged temperature to the same order of magnitude as the theory prediction with $\Omega_{\rm HI}=0.5\times 10^{-3}$, and convolve the data with a constant, symmetric Gaussian beam with ${\rm FWHM}=0.44\deg$.
\item We add each mock H\textsc{i}\ brightness temperature realisation $m_i$ to each GBT season
$j \in \{A, B, C, D\}$ of the GBT data to create combined cubes $(d_j+m_i)$.
\item We run \textsc{FastICA} with $q$ number of independent components on each sub-dataset as ${\rm ICA}_q(d_j+m_i)$, where $q \in \{ 4, 8, 20, 36\}$.
\item We subtract the original, cleaned GBT data cube to obtain the cleaned mock simulations $\tilde m_{qi}^j ={\rm ICA}_q(d_j+m_i) - {\rm ICA}_q(d_j)$ for each realisation $i$, each GBT season $j$ and each choice of foreground removal $N_{\rm IC} = q$.
\end{itemize}
A comparison of the amplitudes and shapes of the auto-power spectrum of the foreground-cleaned injected mock $\tilde m_{qi}^j$ and the auto-power spectrum of the original mock $m_i$ measures the H\textsc{i}\ signal loss of the power spectrum through the foreground removal. However, in this study, we are interested in quantifying the H\textsc{i}\ signal loss through foreground subtraction on the cross-correlation power spectrum with galaxy surveys. In order to approximate this effect, we examine the cross-power spectrum of the foreground-removed mock $\tilde m_{qi}^j$ with the original mock $m_i$, where the original mock acts as a proxy for the galaxy field with a cross-correlation coefficient of unity. We define the signal loss function $\Delta$ per season $j$ for different $q=N_{\rm IC}$, averaged over all realisations, as
\begin{equation}
\Delta^{j}_{q}(k)=\frac{\sum_i^{{N_m}}P(\tilde m_{q,i}^j, m_i)(k) }{\sum_i^{N_m}P(m_i)(k)} \, .
\end{equation}
In an ideal situation without any signal loss, $\Delta^{j}_{q}(k)$ is equal to unity across all scales. Note that here, $\Delta$ is defined as the H\textsc{i}\ signal loss function on the \textrm{H\textsc{i}}-galaxy cross-correlation.
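Given the measured mock power spectra, the signal loss function and the resulting transfer function can be assembled as in the Python sketch below; the power spectrum estimation itself (see \secref{sec:PSresults}) is assumed to be available, and the placeholder arrays are illustrative only.
\begin{verbatim}
import numpy as np

def signal_loss(P_cross_mocks, P_auto_mocks):
    """Delta_q^j(k): summed cross-power of the cleaned injected mocks with the
    originals over the summed auto-power of the originals.
    Both inputs have shape (N_m, N_k)."""
    return P_cross_mocks.sum(axis=0) / P_auto_mocks.sum(axis=0)

# placeholder example with N_m = 100 realisations and N_k = 20 k-bins
P_cross = np.random.uniform(0.6, 0.9, size=(100, 20))   # P(m_tilde_qi^j, m_i)(k)
P_auto  = np.ones((100, 20))                             # P(m_i)(k)

Delta = signal_loss(P_cross, P_auto)
Theta = 1.0 / Delta     # transfer function applied to the data cross-power
\end{verbatim}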
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/Delta_Tranfercross_beam044_mask15_medz0785974.png}
\caption{The signal loss function $\Delta(k)$ for the foreground subtraction with \textsc{FastICA} for different numbers of ICs $N_{\rm IC}$. Note that $\Delta=0.8$ means $20\%$ signal loss. We show the individual seasons $\rm \{A, B, C, D\}$ to highlight the sensitivity of the transfer function to the individual season-dependent systematics.}
\label{fig:signalloss}
\end{figure}
In our analysis, the signal loss is corrected via the transfer function of the cross-correlation, defined as $\Theta^j_q=(\Delta^j_q)^{-1}$. We show the signal loss function in \autoref{fig:signalloss}. For all tested $N_{\rm IC}$, there is a significant degree of signal loss, ranging between $10\%$ and $50\%$, on the largest scales $k<0.1 \, h{\rm Mpc^{-1}}$. This can be explained by the survey geometry, as these scales are mostly probed by line-of-sight modes, which are highly affected by diffuse foreground subtraction. Even for increasing numbers of ICs in the subtraction, the transfer function converges towards unity on smaller scales; however, for the highest number, $N_{\rm IC}=36$, there is some signal loss on all scales of the power spectrum. Note that the divergent behaviour at $k>1 \, h{\rm Mpc}^{-1}$ is due to the effect of the beam on these scales, which are not considered in our final analysis. We can see that, in general, the amplitude of the transfer function of season B is somewhat higher than the others, which suggests that this season might suffer more from systematic effects.
\section{Power spectrum Results}
\label{sec:PSresults}
We use the inverse-noise weighted power spectrum estimator as described in \cite{Wolz:2015lwa}. For the cross-correlation of two tracers $a$ and $b$, that is:
\begin{equation}
\hat P^{ab}(\vec k_l)=\frac{V \mathrm{Re}\{ \tilde \delta^a(\vec k_l)\cdot
\tilde\delta^b(\vec k_l)^*\}
}{\sum_{j=1}^{N_{\rm pix}} w^a(\vec x_j)\cdot w^b(\vec x_j)} \, ,
\label{eq:PScross}
\end{equation}
with $\tilde \delta$ the Fourier transform of the weighted density field $w(\vec x_j)\delta(\vec x_j)$ of the tracer, $N_{\rm pix}$ the total number of pixels, $w(\vec x_j)$ the weighting function, and $V$ the survey volume. For H\textsc{i}\ intensity maps, $w(\vec x_j)$ is given by the inverse noise map of each season. For galaxy surveys, the total weighting factor is $w(\vec x_j)=W(\vec x_j)w_{\rm opt}(\vec x_j)$, where $W(\vec x_j)$ is the selection function and $w_{\rm opt}(\vec x_j)$ is the optimal weighting function $w_{\rm opt}(\vec x_j)=1/(1+W(\vec x_j)\times \bar N P_0)$, with $P_0=10^3 h^{-3}\rm{Mpc}^3$. We derive the selection function for each sample by binning the random catalogues. The redshift evolution of these is shown as dashed lines in \autoref{fig:Nz_gal}, and the spatial footprint in \autoref{fig:Sel_gal}. We note that we do not use any additional weighting functions for the galaxy power spectrum.
\autoref{eq:PScross} holds for \textrm{H\textsc{i}}-auto, galaxy-auto, as well as \textrm{H\textsc{i}}-galaxy correlations. For galaxy power spectra, we additionally remove the shot noise weighted by the selection function as described in \citet{blake2011}. The 1-d power spectra $\hat P(k)$ are determined by averaging all modes with $k = |\vec k|$ within the $k$ bin width.
In the following, we use $\hat P$ to indicate the estimated power spectrum, and $P$ for the theory prediction. All power spectra are estimated using the redshift range $0.62<z<0.95$ with $N_{\nu}=190$, and spatial resolution $N_{\rm RA}=187$ and $N_{\rm DEC}=89$. We use the flat sky approximation at mid-redshift $z=0.78$, resulting in a volume of $V=4.2\cdot 10^7 ({\rm Mpc}/h)^3$. Note, that we do not correct for gridding effects with our power spectrum estimator since the power spectrum is dominated by the beam from $k \sim 1 \, h{\rm Mpc}^{-1}$.
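A minimal flat-sky Python sketch of the estimator in \autoref{eq:PScross} on a Cartesian grid is given below; the FFT normalisation convention, the binning in $|\vec k|$, and the placeholder inputs are our assumptions, and the exact conventions of our pipeline (gridding, mode counting) may differ.
\begin{verbatim}
import numpy as np

def cross_power_1d(delta_a, delta_b, w_a, w_b, box_lengths, k_edges):
    """Weighted cross-power spectrum on a Cartesian grid (flat sky).

    delta_a, delta_b : over-density / over-temperature cubes, same shape
    w_a, w_b         : weight cubes (inverse-noise or selection-based)
    box_lengths      : physical box side lengths [Mpc/h] along each axis
    k_edges          : edges of the |k| bins [h/Mpc]
    """
    V = np.prod(box_lengths)
    N = delta_a.size
    V_cell = V / N
    Fa = np.fft.fftn(w_a * delta_a)
    Fb = np.fft.fftn(w_b * delta_b)
    # one common FFT normalisation; equivalent to Eq. (PScross) up to convention
    P3d = V_cell * (Fa * np.conj(Fb)).real / np.sum(w_a * w_b)

    ks = [2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          for n, L in zip(delta_a.shape, box_lengths)]
    kgrid = np.sqrt(sum(k**2 for k in np.meshgrid(*ks, indexing="ij")))

    idx = np.digitize(kgrid.ravel(), k_edges) - 1
    P3d = P3d.ravel()
    # spherical average into |k| bins; bins containing no modes return NaN
    return np.array([P3d[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(len(k_edges) - 1)])
\end{verbatim}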
\subsection{H\textsc{i}\ Power Spectrum}
In this section, we present the H\textsc{i}\ power spectrum to visualise the impact of the foreground subtraction and the transfer function.
In \autoref{fig:autoPS_HI}, we show the H\textsc{i}\ power spectrum in auto-correlation $\hat P_{\rm HI}^{i}$ for each season $i$, as well as the cross-correlation between the seasons $\hat P_{\rm HI}^{ij}$ for all investigated numbers of ICs $N_{\rm IC}\in \{4, 8, 20,36\}$. We present the H\textsc{i}\ power spectrum with foreground subtraction correction, where we use $\Theta^2_i$ as an approximation to correct the auto-power spectrum $i$, and $\Theta_i\Theta_j$ to correct for the cross-season correlation $ij$.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figs/GBT_autoPS_tranfercross_beam044_mask15_medz0785974.png}
\caption{The absolute value of the H\textsc{i}\ power spectrum of the GBT intensity maps for different numbers of ICs in the foreground subtraction. All power spectra are transfer function corrected. We show the auto-correlation of the seasons marked with crosses, and the season cross-correlations with circles. There are a few negative data points (indicated by stars), which demonstrate the high noise on the measurements. Note that these measurements are about an order of magnitude higher than theory predictions and should be treated as upper limits, in agreement with \citet{Switzer_2015}.}
\label{fig:autoPS_HI}
\end{figure*}
As expected, the auto-power spectrum is dominated by instrument noise, whose amplitude is higher than the H\textsc{i}\ signal. Unlike other subtraction techniques such as PCA, \textsc{FastICA} cannot remove or mitigate the effects of Gaussian telescope noise. Hence, $P_{\rm HI}^{i}$ can be used as an estimate for the noise present in the data, and we use the averaged auto-power spectrum $\hat P_{\rm HI, q}^{\rm auto}(k) = \sum_i^4 \hat P_{\rm HI, q}^{i}(k) /4$ to estimate the error bars on the H\textsc{i}\ power spectrum as:
\begin{equation}
\sigma_{\rm HI,q}(k) = \hat P_{\rm HI, q}^{\rm auto}(k)/\sqrt{2N_{\rm modes}} \, ,
\label{eq:shi}
\end{equation}
with $N_{\rm modes}$ the number of $k$ modes sampled in the survey volume, and $q$ the number of ICs, $N_{\rm IC}$. As we use the season auto-correlations as a proxy for the noise on the H\textsc{i}\ power spectrum, an extra scaling of $\sqrt{2}$ is applied to the error on the cross-season spectra.
Another way to estimate the noise directly from the data is to use the scatter between the cross-season power spectra. We find that the cross-season standard deviation is of the same order of magnitude as the auto-power spectrum; however, given the limited number of independent seasons, the auto-power spectrum is much less sensitive to sampling variance. For a comparison of these two approaches on the GBT data, see Fig. 8 of \citet{Wolz:2015lwa}.
The cross-season power spectra contain a few negative data points, which are indicated by stars in \autoref{fig:autoPS_HI}. This is the result of the high noise properties in the map which can dominate certain scales.
For the cross-season power spectra, we can see that the amplitude converges with increasing $N_{\rm IC}$ on all scales. We are therefore confident that the two highest choices of ICs in the foreground subtraction, $N_{\rm IC}=20$ and $N_{\rm IC}=36$, remove sufficient foregrounds. We use $N_{\rm IC}=20$ as a conservative choice with minimal H\textsc{i}\ signal loss but possibly higher residual systematics and noise, whereas $N_{\rm IC}=36$ is a more aggressive choice in the subtraction, resulting in lower noise at the price of higher levels of H\textsc{i}\ signal loss.
\subsection{Galaxy Power spectrum}
In \autoref{fig:autoPS_gal}, we show the galaxy power spectra $\hat P_{\rm g}(k)$ of our samples in auto- as well as cross-correlation. Note that our power spectrum estimator is not optimised for galaxy surveys and we do not use the galaxy power spectra for a quantitative analysis. Only the auto-galaxy power spectra are shot noise removed, as we do not assume a sample overlap between galaxy surveys.
The error bars on the auto-correlation are estimated as
\begin{equation}
\sigma_{\rm g}(k) = \frac{1}{\sqrt{N_{\rm modes}}}\left(\hat P_{\rm g}(k)+\frac{1}{n_{\rm g}}\right) \, ,
\label{eq:sgal}
\end{equation}
where $N_{\rm modes}$ is again the number of independent $k$ modes in the survey volume, and $n_{\rm g}$ is the galaxy density of the samples, computed as $n_{\rm g} = N_{\rm g}/V$, with $N_{\rm g}$ the number of galaxies and $V$ the survey volume. The cross-galaxy error bars are estimated as
\begin{equation}
\sigma_{\rm g}^{ij}(k) = \frac{1}{\sqrt{2N_{\rm modes}}}\sqrt{\hat P_{\rm g}^{ij}(k)^2+\left(\hat P_{\rm g}^i(k)+\frac{1}{n_{\rm g}^i}\right) \left(\hat P_{\rm g}^j(k)+\frac{1}{n_{\rm g}^j}\right) } \, .
\end{equation}
In the upper panel of \autoref{fig:autoPS_gal}, we can see that the ELG and WiggleZ samples are similarly biased across scales, with tentatively an opposite trend in the scale-dependent behaviour. This result is in agreement with theory, as the WiggleZ and ELG samples trace similar populations of galaxies. The bias of the LRG sample is significantly higher, which is again as expected as this sample traces more quiescent, early-type objects in denser environments.
The lower panel of \autoref{fig:autoPS_gal} shows the cross-correlation between the galaxy samples, similarly to \cite{Anderson_2018}. The idea is that the bluer, star-forming samples (ELG and WiggleZ) trace the dark matter in a similar manner to \textrm{H\textsc{i}}; therefore, the shape of the blue-red correlation power spectrum can also be used as a qualitative estimate of the \textrm{H\textsc{i}}-LRG cross-power spectrum. In our data, most notably, the WiggleZ-LRG power spectrum exhibits a drop in amplitude at smaller scales which is not seen for the other two spectra.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/AutogalPS_mask15_medz0785974.png}
\includegraphics[width=\columnwidth]{figs/CrossgalPS_mask15_medz0785974.png}
\caption{The galaxy power spectrum of the eBOSS LRG, eBOSS ELG and WiggleZ samples. \emph{Top}: The auto-power spectra of the individual samples, with higher amplitude in the LRG sample and similar amplitudes of the ELG and WiggleZ samples, reflecting the different biases of the samples. \emph{Bottom:} The cross-correlation between galaxy samples. We observe a drop in small scale amplitude for the LRG-WiggleZ correlation. }
\label{fig:autoPS_gal}
\end{figure}
\subsection{\textrm{H\textsc{i}}-Galaxy Power Spectrum}
In \autoref{fig:crossPS_HIgal_IC}, we present the \textrm{H\textsc{i}}-galaxy cross-power spectra in absolute power for the three galaxy samples and different numbers of ICs in the foreground subtraction. The error bars on these power spectra are determined by the errors
on the galaxy sample, see \autoref{eq:sgal}, and the H\textsc{i}\ data, see \autoref{eq:shi}, combined as
\begin{equation}
\sigma_{\rm g, HI}^{q}(k) = \frac{1}{\sqrt{2N_{\rm modes}}}\sqrt{\hat P_{\rm g, HI}^{q}(k)^2+ \hat P_{\rm HI}^q(k)\left(\hat P_{\rm g}(k)+\frac{1}{n_{\rm g}}\right) } \, ,
\label{eq:sgHI}
\end{equation}
with $q$ the number of ICs $\{ 4, 8, 20, 36\}$. We note that the H\textsc{i}\ data errors dominate the total cross-power error budget. We discuss errors and covariances in more detail in \autoref{analysistests} and \autoref{appb}.
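For completeness, the error-bar prescriptions of \autoref{eq:shi}, \autoref{eq:sgal} and \autoref{eq:sgHI} can be written compactly as in the Python sketch below; the measured spectra, galaxy densities, and mode counts are assumed to be given.
\begin{verbatim}
import numpy as np

def sigma_hi(P_hi_auto, N_modes):
    """Eq. (shi): HI power spectrum error from the noise-dominated auto-power."""
    return P_hi_auto / np.sqrt(2.0 * N_modes)

def sigma_gal(P_g, n_g, N_modes):
    """Eq. (sgal): galaxy auto-power error including shot noise 1/n_g."""
    return (P_g + 1.0 / n_g) / np.sqrt(N_modes)

def sigma_cross(P_x, P_hi_auto, P_g, n_g, N_modes):
    """Eq. (sgHI): HI-galaxy cross-power error."""
    return np.sqrt(P_x**2 + P_hi_auto * (P_g + 1.0 / n_g)) / np.sqrt(2.0 * N_modes)
\end{verbatim}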
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/PSxWig_1hrdeep_allICs_mask15_beam044_medz0785974.png}
\includegraphics[width=\columnwidth]{figs/PSxELG_1hrdeep_allICs_mask15_beam044_medz0785974.png}
\includegraphics[width=\columnwidth]{figs/PSxLRG_1hrdeep_allICs_mask15_beam044_medz0785974.png}
\caption{The GBT H\textsc{i}\ intensity mapping cross-correlation with the galaxy samples for different numbers of ICs in the foreground subtraction. Note that all power spectra were estimated at the same $k$, and the staggered $k$ values in the plots are for illustration purposes only. \emph{From top to bottom:} \textrm{H\textsc{i}}-WiggleZ, \textrm{H\textsc{i}}-ELG, and \textrm{H\textsc{i}}-LRG cross-correlation power spectrum. }
\label{fig:crossPS_HIgal_IC}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/PSxgals_1hrdeep_IC36_comp_mask15_beam044_medz0785974.png}
\caption{The GBT H\textsc{i}\ intensity mapping cross-correlation with the galaxy samples in comparison. Note that all power spectra were estimated at the same $k$, and the staggered $k$ values in the plots are for illustration purposes only.}
\label{fig:crossPS_HIgal}
\end{figure}
We can see in all three panels of \autoref{fig:crossPS_HIgal_IC}, that the amplitude of the cross-power signal is not very sensitive to the foreground removal parameters within the error bars. We do not observe a drop in amplitude with increasing numbers of ICs, and we are confident that we correctly account for H\textsc{i}\ signal loss with our transfer function, particularly, within the large errors of the GBT data. Generally, as the amplitude of the noise of the GBT data is decreased with increasing $N_{\rm IC}$, the detection of the signal becomes more statistically significant and the error bars decrease with increasing components removed. In \autoref{fig:crossPS_HIgal}, we show the cross-correlation of the three galaxy samples for fixed $N_{\rm IC}=36$ in comparison.
The GBT-WiggleZ cross-correlation in the upper panel of \autoref{fig:crossPS_HIgal_IC} is detected for both $N_{\rm IC}=20$ and $36$ on scales $0.1 < k < 0.8 \, h{\rm Mpc}^{-1}$. Qualitatively, the middle panel showing the amplitude of the GBT-ELG correlation looks very similar, but the detection seems more noise-dominated on the larger scales, around $k\approx 0.1 \, h{\rm Mpc}^{-1}$.
The GBT-LRG correlation shown in the lowest panel demonstrates a detection of the signal for $N_{\rm IC}=36$. At the smallest scales, around $k \sim 1 \, h{\rm Mpc}^{-1}$, the amplitude of the correlation signal drops off and the power spectrum is highly noise-dominated. \cite{Anderson_2018} reported a drop in amplitude in the cross-correlation of the Parkes H\textsc{i}\ intensity maps with the red sub-sample of 2dF galaxies. However, the signal-to-noise ratio of the GBT-LRG measurements is not large enough to confirm this trend.
The cross-correlation of the WiggleZ and LRG galaxies shown in \autoref{fig:autoPS_gal} supports that such a drop is an expected result for our data. The negligible power of the correlation of the H\textsc{i}\ intensity maps with the LRG galaxy sample on small scales implies that the LRG galaxies that contribute to these scales are H\textsc{i}\ deficient. The power spectrum signal on these scales originates from galaxy pairs that are most likely part of the same halo in a dense cluster environment. The H\textsc{i}\ deficiency of these types of quiescent galaxies has been predicted in theory and observed in the local Universe \citep{Reynolds:2020tm}. Our work is an indication of this trend at earlier cosmic times.
We will make more quantitative estimates for the significance of the detections when we present our derived H\textsc{i}\ constraints in \secref{sec:HIconstraints}.
\subsection{Comparison to Simulations}
We use our simulations for qualitative interpretation of our results. We use the same redshift range with $\bar z\approx 0.78$ to estimate the power spectra of our mock data, however, we do not mask the edges of the data which results in a bigger volume of $V=4.8\cdot 10^7 ({\rm Mpc}/h)^3 $. We do not include any noise and instrumental effects in this simulation suite as we focus on understanding the implication from galaxy evolution on the cross-correlation signal.
In \autoref{fig:mockPS} from top to bottom, we show the power spectra for the galaxy samples, the cross-galaxy and the \textrm{H\textsc{i}}-galaxy correlations. The shapes and amplitudes of the galaxy power spectrum are comparable to the data power spectrum. We presume that the fluctuations of the mock LRG sample are due to the low galaxy density. The cross-galaxy power spectra are comparable to the data measurements, with a drop in amplitude at smaller scales $k>0.8 \, h{\rm Mpc}^{-1}$.
In the bottom panel of \autoref{fig:mockPS} we show the resulting mock \textrm{H\textsc{i}}-galaxy cross-correlation. We note that the overall amplitude is lower than in the data because the simulation has a lower $\Omega_{\rm HI}$ than the data measurements suggest. The simulations predict the amplitudes of all three cross-power spectra to be of the same order of magnitude. We show the beam-convolved mock as well as an unconvolved power spectrum to demonstrate the effect of the H\textsc{i}\ shot noise, as predicted in \cite{2017MNRAS.470.3220W}. The amplitude of the cross-shot noise is proportional to the ensemble-averaged H\textsc{i}\ mass of the respective galaxy sample. Our simulation predicts the highest shot noise amplitude for the \textrm{H\textsc{i}}-WiggleZ correlation, and very similar levels for both eBOSS samples. However, on the scales unaffected by the GBT telescope beam, the shot noise does not have a measurable effect, in particular when considering the signal-to-noise ratio of our data. Notably, we do not find a drop in amplitude of the \textrm{H\textsc{i}}-LRG correlation. This could suggest that the drop is caused by an unknown observational effect, which we were unable to identify with our tests given the large uncertainties of the data, or, alternatively, that our selection of mock LRG galaxies or the model itself misses some features, such that our mock sample cannot fully represent the data. We hope to investigate this interesting feature in future work with less noise-dominated H\textsc{i}\ intensity maps.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/MockLC_autogalaxy_0.785974.png}
\includegraphics[width=\columnwidth]{figs/MockLC_crossgalaxy_0.785974.png}
\includegraphics[width=\columnwidth]{figs/MockLC_cross_IMxgalaxy.png}
\caption{The power spectra of our simulation suite. \emph{Top:} The auto-galaxy power spectra of the three galaxy samples. The mock-ELG and WiggleZ power spectra are of similar amplitude, whereas the mock-LRG exhibits a higher bias, consistent with the data. \emph{Middle:} The cross-galaxy power spectra of the mock samples. Similarly to the data, we see a possible drop in amplitude on smaller scales for the LRG-WiggleZ correlation. \emph{Bottom:} The \textrm{H\textsc{i}}-galaxy cross-correlation, beam-convolved and with no beam to demonstrate the effect of the cross-shot noise. The dashed-dotted lines indicate the shot noise amplitude. }
\label{fig:mockPS}
\end{figure}
\subsection{Analysis tests}
\label{analysistests}
We perform several tests of our analysis pipeline listed in this section. For these tests, we examine the covariance matrix of the mock data computed as
\begin{equation}
\mathbf C_q= C_{q}(k_i, k_j)=\sum_m^{N_m} \frac{(P_m^q(k_i)-\bar{P}^q(k_i)) (P_m^q(k_j)-\bar{P}^q(k_j))}{N_m} \, ,
\label{eqcovariance}
\end{equation}
where $q \in \{4,8,20,36\}$ is the number of independent components, $\bar P$ is the power spectrum averaged over all realisations, and $N_m$ is the number of realisations. We can derive an estimate for the error bars from the diagonal as
$\sigma_{i}^q=\sqrt{\mathbf C^q_{ii}}$. Figures of the resulting covariance matrices and tests can be found in \autoref{appb}.
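The estimate of \autoref{eqcovariance} and its diagonal errors amount to the short Python sketch below, assuming the mock power spectra are stored as an array of shape $(N_m, N_k)$.
\begin{verbatim}
import numpy as np

def mock_covariance(P_mocks):
    """Covariance of Eq. (eqcovariance) from mock spectra of shape (N_m, N_k);
    the error bars are the square roots of the diagonal."""
    N_m = P_mocks.shape[0]
    dP = P_mocks - P_mocks.mean(axis=0)
    C = dP.T @ dP / N_m
    return C, np.sqrt(np.diag(C))
\end{verbatim}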
\begin{itemize}
\item Mode correlation from \textsc{FastICA}: We derive the covariance of the data to determine the statistical independence between $k$ bins. We use the power spectra $P(\tilde m_{q,i}^j, m_i)(k)$ of the foreground-subtracted lognormal simulations $\tilde m_{q,i}^j$ with the original simulation $m_i$, and compute the covariance matrix. We find no significant off-diagonal correlations between the modes $0.05< k <0.8 \, h{\rm Mpc}^{-1}$ considered in our analysis. We also compute the errors from the diagonal of the inverted covariance matrix to determine the additional error introduced from the foreground removal. We find that this contribution is more than 2 orders of magnitude lower than the analytical errors based on noise and cosmic variance as determined by \autoref{eq:shi}. We therefore can safely neglect this contribution in the present analysis.
\item Randoms null test: We correlate the GBT sub-season data with the $N_m=100$ random WiggleZ catalogues used to derive the selection function. As expected, we find a signal consistent with zero within the error bars. We also derive the covariance matrix from the mocks and find that the error bars $\sigma_{\rm cov}$ are in agreement with the empirically derived $\sigma_{\rm g,HI}$ in \autoref{eq:sgHI}.
\item Shuffled null test: We correlate the GBT sub-season data with the three galaxy samples which are each re-shuffled in redshift to remove the correlation. As expected, we find all signals consistent with zero within the error bars.
\end{itemize}
\section{H\textsc{i}\ constraints}
\label{sec:HIconstraints}
Here, we present our derived H\textsc{i}\ constraints from the cross-correlation power spectrum analysis (summarised in \autoref{tab:constraints}). Before doing so, we briefly review the findings of \citet{Masui:2012zc}, who measured the cross-correlation of the GBT maps with the WiggleZ 15hr and 1hr fields. Fitting in the range of scales
$0.05 \, h{\rm Mpc}^{-1} < k < 0.8 \, h{\rm Mpc}^{-1}$, they found $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r = 0.40 \pm 0.05$ for the combined, $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r = 0.46 \pm 0.08$ for the 15hr field and $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r = 0.34 \pm 0.07$ for the 1hr field (which is the one we are considering in this paper).
For a more restrictive range of scales, their combined measurement was $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r = 0.44 \pm 0.07$. Note that \citet{Masui:2012zc} used Singular Value Decomposition (SVD) for their foreground removal, but we use \textsc{FastICA} here following \citet{Wolz:2015lwa}. Our transfer function construction methods are identical.
We note that the errors quoted are statistical, and \citet{Masui:2012zc} also estimated a $\pm 0.04$ systematic error representing their $9\%$ absolute calibration uncertainty. We will adopt the same systematic error in our analysis.
In this paper we explore different ranges of scales by performing fits for three cases: \textbf{Case I}, with $0.05 \, h{\rm Mpc}^{-1} < k < 0.8 \, h{\rm Mpc}^{-1}$; \textbf{Case II}, with $0.05 \, h{\rm Mpc}^{-1} < k < 0.45 \, h{\rm Mpc}^{-1}$; and \textbf{Case III}, with $0.05 \, h{\rm Mpc}^{-1} < k < 0.35 \, h{\rm Mpc}^{-1}$.
Considering different ranges of scales is motivated by the fact that, while small scales (high $k$) contain most of the statistical power of the measurement, the beam and model of non-linearities become less robust as $k$ increases.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/GBTxWig_PS.png}
\caption{\emph{Top}: The measured GBT-WiggleZ cross-correlation power spectrum. We show two cases with $20$ and $36$ Independent Components used in \textsc{FastICA} for the H\textsc{i}\ maps foreground cleaning, corrected with the corresponding transfer functions. We also show the best-fit models from \autoref{tab:constraints} (Cases I, II, and III) for $N_{\rm IC}=36$. \emph{Bottom}: A null diagnostic test plotting the ratio of data and error.}
\label{fig:GBTxWig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/GBTxELG_PS.png}
\caption{\emph{Top}: The measured GBT-ELG cross-correlation power spectrum for $N_{\rm IC}=20, 36$. We also show the best-fit models from \autoref{tab:constraints} (Cases I, II, and III) for $N_{\rm IC}=36$. \emph{Bottom}: A null diagnostic test plotting the ratio of data and error.}
\label{fig:GBTxELG}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/GBTxLRG_PS.png}
\caption{\emph{Top}: The measured GBT-LRG cross-correlation power spectrum for $N_{\rm IC}=20, 36$. We also show the best-fit models from \autoref{tab:constraints} (Cases I, II, and III) for $N_{\rm IC}=36$. \emph{Bottom}: A null diagnostic test plotting the ratio of data and error.}
\label{fig:GBTxLRG}
\end{figure}
In \autoref{fig:GBTxWig} we show the measured GBT-WiggleZ power spectrum, concentrating on the results with $N_{\rm IC}=20,36$.
In the bottom panel, we perform a simple \emph{null diagnostic test} by plotting the ratio of data and error. This shows that most of the measurements in the range of scales with high signal-to-noise ratio lie more than $1\sigma$ above zero.
For our fiducial $N_{\rm IC}=36$ results for Case I, corresponding to the same range of scales considered in \citet{Masui:2012zc}, the detection significance is estimated to be $4.4\sigma$ (we note that in \citet{Masui:2012zc} this was found to be $7.4\sigma$, but for the combined 1hr and 15hr field observations).
We show similar plots for the GBT-ELG and GBT-LRG cross-correlations in \autoref{fig:GBTxELG} and \autoref{fig:GBTxLRG}, respectively. We note that our null tests suggest that the GBT-LRG detection is the most tentative of the three. Indeed, estimating the detection significance for GBT-ELG and GBT-LRG, we find $4.5 \sigma$ and $2.9 \sigma$, respectively, for Case I. In \autoref{tab:constraints} we show the detection significance for $N_{\rm IC}=36$ for all Cases.
We see that the detection significance for the GBT-LRG cross-correlation considerably improves when considering the restricted ranges of scales, Cases II and III.
To relate the measured power spectra with a theory model and derive the H\textsc{i}\ constraints, we use \autoref{eq:thi} to express the mean 21cm emission brightness temperature $T_{\textrm{H\textsc{i}}}$ as a function of $\Omega_{\textrm{H\textsc{i}}}$.
We observe the brightness contrast, $\delta T = T_{\textrm{H\textsc{i}}}\delta_\textrm{H\textsc{i}}$. We also assume that the neutral hydrogen and the optical galaxies are biased tracers of dark matter, but we also include a galaxy-H\textsc{i}\ stochastic correlation coefficient $r_{\textrm{H\textsc{i}},{\rm opt}}$. To compare the theoretical prediction with the measurements, we follow a procedure similar to the one described in \citet{Masui:2012zc}:
\begin{itemize}
\item We assume a fixed Planck cosmology \citep{Ade:2015xua}.
\item We assume a known galaxy bias $b_{\rm opt}$ at the mean redshift $z\simeq 0.8$, with ${\rm opt}$ corresponding to WiggleZ \citep{blake2011}, eBOSS ELGs, and eBOSS LRGs \citep{Alam:2020sor} depending on the galaxy sample we cross-correlate the H\textsc{i}\ maps with. That is, $b_{\rm Wig} = 1.22$, $b_{\rm ELG}=1.4$, $b_{\rm LRG}=2.3$.
\item We include non-linear effects to the matter power spectrum $P_{\rm m}(k)$ using \texttt{CAMB} \citep{Lewis:1999bs} with \texttt{HALOFIT} \citep{Smith:2002dz,Takahashi:2012em} and also include (linear) redshift space distortions as $(1+f\mu^2)^2$ \citep{Kaiser:1987qv}, where $f$ the growth rate of structure and $\mu$ the cosine of the angle to the line-of-sight. When spherically averaged to compute the matter power spectrum monopole, $P_{\delta \delta}(k)$, this RSD factor gives an amplitude boost of $1.7$ for our fiducial cosmology.
\item We then construct an empirical cross-power spectrum model $P_{\textrm{H\textsc{i}},{\rm g}}$ given by \citep{Masui:2012zc}:
\begin{equation}
P_{\textrm{H\textsc{i}},{\rm g}}(k) = T_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} b_{\rm g}r_{\textrm{H\textsc{i}},{\rm opt}} P_{\delta \delta}(k) \, .
\label{eq:model}
\end{equation}
The model is run through the same pipeline as the data to include weighting, beam\footnote{The telescope beam is modelled as a Gaussian with transverse smoothing scale $R$. This is related to the beam angular resolution, $\theta_{\rm FWHM}$, by
$R=\chi(z)\theta_{\rm FWHM}/(2\sqrt{2\mathrm{ln}2})$,
with $\chi(z)$ being the radial comoving distance to redshift $z$. In cross-correlation, the beam induces a smoothing in the transverse direction as ${\rm e}^{-k^2R^2(1-\mu^2)/2}$.}, and window function effects, as described in \citet{Wolz:2015lwa}. We will comment further on our modelling choices at the end of this section.
\item We fit the unknown prefactor $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$ to the data (a schematic of this single-amplitude fit is sketched after this list). We perform fits for all three ranges of scales (Cases I, II, and III in \autoref{tab:constraints}). We find a good reduced chi-squared $\chi_{\rm red}^2 \sim 1$ for our choice of model in all cases and samples.
We also note that excluding the measurements at $k<0.08 \, h{\rm Mpc}^{-1}$ (where there are too few modes in the volume) does not make a discernible difference to our results.
\item We report our $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$ at three different effective scales $k_{\rm eff}$, which are estimated by weighting each $k$-point in the cross-power by its $(\mathrm{S_{\rm best-fit}/N})^2$, for Cases I, II, and III. As we already mentioned, we do this because most of our measurements lie at the nonlinear regime. Assigning an effective scale also allows for a better interpretation of the implications for the values of $r_{\textrm{H\textsc{i}},{\rm opt}}$.
\end{itemize}
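As referenced above, the amplitude fit reduces to a weighted linear least-squares problem once the processed model shape is available. The Python sketch below assumes a template equal to \autoref{eq:model}, after weighting, beam and window convolution, evaluated at a reference value of $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$, together with diagonal errors and a simple amplitude-over-error detection significance; the exact estimator behind the quoted significances may differ in detail.
\begin{verbatim}
import numpy as np

def fit_amplitude(k, P_data, sigma, P_template, k_max):
    """Weighted least-squares fit of a single amplitude A to
    P_data ~ A * P_template, using scales k < k_max and diagonal errors sigma."""
    sel = k < k_max
    d, t, s = P_data[sel], P_template[sel], sigma[sel]
    A = np.sum(d * t / s**2) / np.sum(t**2 / s**2)
    sigma_A = 1.0 / np.sqrt(np.sum(t**2 / s**2))
    chi2_red = np.sum(((d - A * t) / s)**2) / (d.size - 1)
    significance = A / sigma_A          # rough detection significance
    return A, sigma_A, chi2_red, significance
\end{verbatim}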
\begin{table*}
\caption{Best-fit and $1\sigma$ statistical errors on $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$ at a mean redshift $z\simeq 0.8$ for $N_{\rm IC}=20,36$, together with the effective scale $k_{\rm eff}$, detection significance, and reduced chi-squared $\chi^2_{\rm red} = \chi^2 / {\rm dof}$ for $N_{\rm IC}=36$ (Cases I, II, and III; see main text for details).}
\label{tab:constraints}
\centering
\begin{tabular}{lcccc}
& \bf{GBT$\times$WiggleZ} & \bf{GBT$\times$ELGs} & \bf{GBT$\times$LRGs} & $k_{\rm eff} [h/{\rm Mpc}]$\\
\hline
{\bf{Case I} [$k < 0.8 \, h/{\rm Mpc}$]} & & & &\\
{\bf{NIC=20}:} & $0.35 \pm 0.09$ & $0.20 \pm 0.06$ & $0.12 \pm 0.06$ & - \\
{\bf{NIC=36}:} & $0.38 \pm 0.08$ ($4.4\sigma$, $\chi^2_{\rm red} \simeq 16/18$) & $0.26 \pm 0.06$ ($4.5\sigma$, $22.6/18$) & $0.16 \pm 0.06$ ($2.9\sigma$, $22.9/18$) & 0.48 \\
\hline
{\bf{Case II} [$k < 0.45 \, h/{\rm Mpc}$]} & & & &\\
{\bf{NIC=20}:} & $0.53 \pm 0.12$ & $0.36 \pm 0.09$ & $0.28 \pm 0.09$ & - \\
{\bf{NIC=36}:} & $0.58 \pm 0.09$ ($4.8\sigma$, $\chi^2_{\rm red} \simeq 8.3/14$) & $0.40 \pm 0.09$ ($4.9\sigma$, $16/14$) & $0.35 \pm 0.08$ ($4.4\sigma$, $12.3/14$) & 0.31 \\
\hline
{\bf{Case III} [$k < 0.35 \, h/{\rm Mpc}$]} & & & & \\
{\bf{NIC=20}:} & $0.58 \pm 0.17$ & $0.48 \pm 0.12$ & $0.38 \pm 0.12$ & - \\
{\bf{NIC=36}:} & $0.70 \pm 0.12$ ($4.4\sigma$, $\chi^2_{\rm red} \simeq 6.7/12$) & $0.55 \pm 0.11$ ($5\sigma$, $11.6/12$) & $0.45 \pm 0.10$ ($4.2\sigma$, $10/12$) & 0.24 \\
\end{tabular}
\end{table*}
Our derived constraints are shown in \autoref{tab:constraints}, for $N_{\rm IC}=20$ and $N_{\rm IC}=36$ (for the smaller $N_{\rm IC}$ cases the errors are too large due to residual foreground variance). In the GBT-WiggleZ Case I, we find excellent agreement with the \citet{Masui:2012zc} results for the 1hr field, $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm Wig}} = 0.34 \pm 0.07$. Using this case as our benchmark, the lower result in the GBT-ELGs case implies a smaller correlation coefficient between these galaxies and \textrm{H\textsc{i}}, and even smaller in the GBT-LRGs case. The results imply that red galaxies are much more weakly correlated with H\textsc{i}\ on the scales we are considering, suggesting that H\textsc{i}\ is more associated with blue star-forming galaxies and tends to avoid red galaxies. The same trend is followed in the restricted ranges of scales Cases II and III, albeit with different derived best-fit amplitudes.
This is in qualitative agreement with what was found in \cite{Anderson_2018} when separating the 2dF survey sample into red and blue galaxies, albeit at a much lower redshift $z=0.08$. The effective scales of the three Cases are different: Case I has
$k_{\rm eff} = 0.48 \, \, h/{\rm Mpc}$, Case II has $k_{\rm eff} = 0.31 \, \, h/{\rm Mpc}$, and Case III has $k_{\rm eff} = 0.24 \, \, h/{\rm Mpc}$. The different derived best-fit amplitudes are within expectation as $r_{\textrm{H\textsc{i}},{\rm opt}}$ and $b_\textrm{H\textsc{i}}$ are predicted to be scale-dependent. Therefore, we also expect that if another survey targets larger (linear) scales, e.g. $k < 0.1 \, h/{\rm Mpc}$, it will derive different $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$.
To illustrate the variation between cases, we also present the $N_{\rm IC}=36$ results in \autoref{fig:omHI_bHI_ropt_constraints}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/omHIbHIropt.png}
\caption{Best-fit and $1\sigma$ statistical errors on $10^3\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$ at a mean redshift $z\simeq 0.8$ for $N_{\rm IC}=36$, together with the effective scale $k_{\rm eff}$ (staggered for illustration purposes).}
\label{fig:omHI_bHI_ropt_constraints}
\end{figure}
We can proceed with the interpretation of our results by making some further assumptions. First of all, since the correlation coefficient $r<1$, our results put a lower limit on $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}}$. It would also be interesting to attempt to determine $\Omega_\textrm{H\textsc{i}}$ from our measurements taking some external estimates for $b_\textrm{H\textsc{i}}$ and $r_{\textrm{H\textsc{i}},{\rm opt}}$. The linear bias of H\textsc{i}\ is expected to be $\sim 0.65$ to $\sim 1$ at these redshifts \citep{Marin:2009aw}, and we will assume $r_{\rm \textrm{H\textsc{i}}, Wig} = 0.9$ \citep{Khandai:2010hs}. Using our simulations (taking their ratios at $k_{\rm eff}$ for Case III, which is the case where non-linearities are expected to be milder), we can estimate $r_{\rm \textrm{H\textsc{i}}, ELG} \sim 0.7$ and $r_{\rm \textrm{H\textsc{i}}, LRG} \sim 0.6$. Combining these values with the results in \autoref{tab:constraints} and our assumption of perfect knowledge of the galaxy sample biases, we get the $\Omega_\textrm{H\textsc{i}}$ estimates shown in \autoref{fig:omHI_constraints}. These are shown together with other available constraints from the literature \citep{Braun_2012,Zwaan:2005cz,Rao:2005ab,Lah:2007nk,Martin:2010ij,Rhee:2013fma,hoppmann2015blind, Rao2017, Jones2018, Bera:2019gtq, Hu:2019xmd, Chowdhury:2020uqa}. For recent compilations of $\Omega_\textrm{H\textsc{i}}$ measurements in the redshift range $0<z<5$, see \citet{Crighton:2015pza, Neeleman2016, Hu:2019xmd}.
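The conversion itself is a simple division, illustrated below with placeholder numbers; the assumed $b_\textrm{H\textsc{i}}$ and $r$ values are the ones quoted above, while the actual points in \autoref{fig:omHI_constraints} are derived consistently per sample and may differ from this illustration.
\begin{verbatim}
# illustrative conversion from the fitted combination to Omega_HI
fit_wig  = 0.70e-3     # Omega_HI * b_HI * r for GBT-WiggleZ, Case III (Table)
b_hi     = 0.8         # assumed HI bias, within the quoted ~0.65-1 range
r_hi_wig = 0.9         # assumed HI-WiggleZ correlation coefficient
omega_hi = fit_wig / (b_hi * r_hi_wig)
print(f"Omega_HI ~ {omega_hi:.1e}")    # ~1e-3 for these assumptions
\end{verbatim}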
As a final note, we caution the reader that these estimates are crude given the number of assumptions we have made. In principle, the degeneracy between $\Omega_\textrm{H\textsc{i}}$ and $b_\textrm{H\textsc{i}}$ can be broken with the use of redshift space distortions \citep{Wyithe:2008th,Masui:2012zc}, but we need higher quality H\textsc{i}\ intensity mapping data with a much better signal-to-noise ratio to achieve this \citep{2010PhRvD..81j3527M,Pourtsidou:2016dzn}. We also stress that while our empirical model (\autoref{eq:model}) has provided an acceptable statistical fit to our data sets, it is not appropriate for high-precision future data. Following what is done in optical galaxy surveys (see e.g. \citet{blake2011,Beutler_2014}), with better data we would need to use more sophisticated models and perform a comprehensive H\textsc{i}\ power spectrum multipole expansion analysis \citep{Cunnington:2020mnn}. For example, for the cross-correlation case a more appropriate model to use would be:
\begin{equation}
P_{\textrm{H\textsc{i}},g}(k,\mu) = T_{\textrm{H\textsc{i}}}b_gb_{\textrm{H\textsc{i}}}
\frac{[r_{\textrm{H\textsc{i}},{\rm opt}}+(\beta_{\textrm{H\textsc{i}}}+\beta_g)\mu^2+\beta_{\textrm{H\textsc{i}}}\beta_g\mu^4]}{1+(k\mu\sigma_v/H_0)^2}P_{\rm m}(k) \, ,
\end{equation}
with $\beta_i = f/b_i$ and $\sigma_v$ the velocity dispersion parameter. Further, to appropriately model the power spectrum at scales above $k\sim 0.15 \, h{\rm Mpc}^{-1}$ at $z\sim 1$ we would also need to account for scale-dependent bias and $r_{\textrm{H\textsc{i}},{\rm opt}}$, and construct perturbation theory based models \citep{Villaescusa-Navarro:2018vsg,Castorina:2019zho} including observational effects \citep{Blake:2019ddd,Soares:2020zaq}. To summarise, with our currently available measurements we are very constrained in the number of parameters we can simultaneously fit, and we cannot break any degeneracies unless we use several assumptions and external estimates, hence our empirical choice of model. Furthermore, for precision cosmology studies with future data we will need to take into account the cosmology dependence of the transfer function \citep{Soares:2020zaq}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/omHI_constraints.png}
\caption{Estimates for $\Omega_\textrm{H\textsc{i}}$ from this work compared to other measurements in the literature. All our estimates are at the central redshift $z=0.78$ but they have been staggered for illustration purposes. We used the results from \autoref{tab:constraints} Case III ($k_{\rm eff}=0.24 \, h/{\rm Mpc}$) for deriving these estimates. \citet{Masui:2012zc} estimated $10^3\Omega_{\textrm{H\textsc{i}}}$ between $0.45$ and $0.75$. Our assumptions and methodology are detailed in the main text.}
\label{fig:omHI_constraints}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this work, we performed the first ever comparison of the H\textsc{i}\ intensity mapping detections in cross-correlation with multiple galaxy surveys. We use an extended version of the previously published GBT H\textsc{i}\ intensity mapping data located in the 1hr field in combination with the WiggleZ Dark Energy Galaxy survey, and the SDSS eBOSS ELG and LRG samples.
For the GBT data, we subtract the foregrounds and mitigate some systematics via \textsc{FastICA} for $N_{\rm IC} \in \{4,8,20,36\}$. In addition, for the first time for \textsc{FastICA}, we construct a transfer function for the H\textsc{i}\ signal loss via mock simulations. We find that there can be a high signal loss up to $50\%$ for $k<0.2 \, h{\rm Mpc}^{-1}$, as foreground removal affects the line-of-sight modes on these scales for all $N_{\rm IC}$. The transfer function converges towards unity for smaller scales, however, for $N_{\rm IC}=36$, we find there is a minimum of $20\%$ signal loss on all scales. The amplitude of the transfer function varies between seasons, indicating that the systematics strongly affect the H\textsc{i}\ signal loss.
For the H\textsc{i}\ intensity mapping auto-power spectrum, we find that the amplitude of the cross-season power spectrum converges for increasing number of ICs. The amplitude is in agreement with previous work in \cite{Masui:2012zc, Switzer_2013, Wolz:2015lwa}, and should be interpreted as an upper limit for detection.
We investigate the shapes of the galaxy cross-power spectra, in particular the correlation between the WiggleZ and the LRG data. We observe a drop in amplitude at small scales ($k\approx0.8 \, h{\rm Mpc}^{-1}$) for the LRG-WiggleZ correlation, which can be taken as a proxy for the \textrm{H\textsc{i}}-LRG correlation, since WiggleZ galaxies are assumed to be \textrm{H\textsc{i}}-rich and hence a similar tracer to the H\textsc{i}\ intensity maps.
We find that the amplitudes of the \textrm{H\textsc{i}}-galaxy cross-correlations do not strongly depend on the $N_{\rm IC}$ of our foreground subtraction. We find a significant drop in amplitude in the \textrm{H\textsc{i}}-LRG correlation at large scales, in agreement with previous findings in \cite{Anderson_2018}.
We construct a mock data set including H\textsc{i}\ information and optical galaxy magnitudes based on the outputs of the semi-analytic model DARKSAGE and qualitatively compare the results to our data. Our mock catalogues predict the WiggleZ sample to contain the \textrm{H\textsc{i}}-richest galaxies. Due to the selection of bright objects, the LRG sample also has relatively \textrm{H\textsc{i}}-rich objects, and the average mass is in a similar range to that of the ELG sample. The simulations confirm a drop in amplitude in the LRG-WiggleZ correlation, but not in the \textrm{H\textsc{i}}-LRG correlation. This could be due to a failure of our simulation (not matching the selection of our galaxies), or to a decrease in amplitude caused by observational effects. The present signal-to-noise ratio is not high enough to investigate this further.
Finally, we use the cross-correlation measurements to constrain the quantity $\Omega_\textrm{H\textsc{i}} b_\textrm{H\textsc{i}} r_{\textrm{H\textsc{i}},{\rm opt}}$, where $\Omega_\textrm{H\textsc{i}}$ is the H\textsc{i}\ density fraction, $b_\textrm{H\textsc{i}}$ is the H\textsc{i}\ bias, and $r_{\textrm{H\textsc{i}},{\rm opt}}$ the galaxy-hydrogen correlation coefficient.
We consider three different ranges of scales, which correspond to three different effective scales $k_{\rm eff}$ for our derived constraints.
At $k_{\rm eff}=0.31 \, h/{\rm Mpc}$
we find $\Omega_{\textrm{H\textsc{i}}} b_{\textrm{H\textsc{i}}} r_{\textrm{H\textsc{i}},{\rm Wig}} = [0.58 \pm 0.09 \, {\rm (stat) \pm 0.05 \, {\rm (sys)}}] \times 10^{-3}$ for GBT-WiggleZ, $\Omega_{\textrm{H\textsc{i}}} b_{\textrm{H\textsc{i}}} r_{\textrm{H\textsc{i}},{\rm ELG}} = [0.40 \pm 0.09 \, {\rm (stat) \pm 0.04 \, {\rm (sys)}}] \times 10^{-3}$ for GBT-ELG, and $\Omega_{\textrm{H\textsc{i}}} b_{\textrm{H\textsc{i}}} r_{\textrm{H\textsc{i}},{\rm LRG}} = [0.35 \pm 0.08 \, {\rm (stat) \pm 0.03 \, {\rm (sys)}}] \times 10^{-3}$ for GBT-LRG, at $z\simeq 0.8$. We also report results at $k_{\rm eff}=0.24 \, h/{\rm Mpc}$ and $k_{\rm eff}=0.48 \, h/{\rm Mpc}$.
The best-fit amplitudes and $1\sigma$ statistical errors for all these cases are shown in \autoref{tab:constraints}.
Our results are amongst the most precise constraints on neutral hydrogen density fluctuations in a relatively unexplored redshift range, using three different galaxy samples.
Our findings as well as our developed simulations and data analysis pipelines will be useful for the analysis of forthcoming H\textsc{i}\ intensity mapping data, and for the preparation of future surveys.
\section*{Acknowledgements}
We are grateful to Chris Blake for very useful discussions and feedback. We thank the anonymous referee for their insightful questions and helpful suggestions. A.P. is a UK Research and Innovation Future Leaders Fellow [grant number MR/S016066/1], and also acknowledges support by STFC grant ST/S000437/1.
T.C.C. acknowledges support by the JPL Research and Technology Development Fund. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
S.A. is supported
by the MICUES project, funded by the EU H2020 Marie Skłodowska-Curie
Actions grant agreement no. 713366 (InterTalentum UAM).
U.-L.P. receives support from Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-067, 523638-201], Canadian Institute for Advanced Research (CIFAR), Canadian Foundation for Innovation (CFI), Simons Foundation, and Alexander von Humboldt Foundation. S.C. acknowledges support by STFC grant ST/S000437/1. G.R. acknowledges support from the National Research Foundation of Korea (NRF) through Grant No. 2020R1A2C1005655 funded by the Korean Ministry of Education, Science and Technology (MoEST).
Funding for the Sloan Digital Sky
Survey IV has been provided by the
Alfred P. Sloan Foundation, the U.S.
Department of Energy Office of
Science, and the Participating
Institutions.
SDSS-IV acknowledges support and
resources from the Center for High
Performance Computing at the
University of Utah. The SDSS
website is www.sdss.org.
SDSS-IV is managed by the
Astrophysical Research Consortium
for the Participating Institutions
of the SDSS Collaboration including
the Brazilian Participation Group,
the Carnegie Institution for Science,
Carnegie Mellon University, Center for
Astrophysics | Harvard \&
Smithsonian, the Chilean Participation
Group, the French Participation Group,
Instituto de Astrof\'isica de
Canarias, The Johns Hopkins
University, Kavli Institute for the
Physics and Mathematics of the
Universe (IPMU) / University of
Tokyo, the Korean Participation Group,
Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik
Potsdam (AIP), Max-Planck-Institut
f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur
Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur
Extraterrestrische Physik (MPE),
National Astronomical Observatories of
China, New Mexico State University,
New York University, University of
Notre Dame, Observat\'ario
Nacional / MCTI, The Ohio State
University, Pennsylvania State
University, Shanghai
Astronomical Observatory, United
Kingdom Participation Group,
Universidad Nacional Aut\'onoma
de M\'exico, University of Arizona,
University of Colorado Boulder,
University of Oxford, University of
Portsmouth, University of Utah,
University of Virginia, University
of Washington, University of
Wisconsin, Vanderbilt University,
and Yale University.
Simulation data used in this work was generated using Swinburne University's Theoretical Astrophysical Observatory (TAO) and is freely accessible at https://tao.asvo.org.au/. The DARK SAGE semi-analytic galaxy formation model is a public codebase available for download at https://github.com/arhstevens/DarkSage. The Millennium Simulation was carried out by the Virgo Supercomputing Consortium at the Computing Centre of the Max Plank Society in Garching, accessible at http://www.mpa-garching.mpg.de/Millennium/.
We acknowledge the use of open source software \citep{scipy:2001,Hunter:2007, mckinney-proc-scipy-2010, numpy:2011}.
\emph{Author contributions}: L.W. and A.P. conceived the idea, designed the methodology, led the data analysis, and drafted the paper. All authors contributed to the development and writing of the paper, or made a significant contribution to the data products.
\section*{Data Availability}
The raw GBT intensity mapping data (the observed time stream data) is publicly available according to the NRAO data policy, which can be found at \url{https://science.nrao.edu/observing/proposal-types/datapolicies}. The data products, such as maps and foreground removed maps, will be shared on reasonable request to the corresponding author. We foresee a public release of the GBT data products once the analysis of the maps is finalised and the results are published in scientific journals. The SDSS-IV DR16 data is available at \url{https://www.sdss.org/dr16/}.
The DR16 LSS catalogues are publicly available: \url{https://data.sdss.org/sas/dr16/eboss/lss/catalogs/DR16/}.
\bibliographystyle{mnras}
\section{Introduction}
Top quark production at a future $e^+e^-$ linear collider
provides an excellent possibility to study
polarization phenomena of quarks without hadronization ambiguities
in a `clean' environment.
Due to its short lifetime, the top quark decays as a quasi-free quark,
before hadronization effects can take place. The large
width of the top quark thus serves effectively as a cutoff for
non-perturbative effects.
Information on the polarization and spin correlations of top quarks
is therefore not diluted by hadronization effects but
transferred to the decay products.
This means that the underlying dynamics of both the production and decay
process of the heaviest elementary particle known to date
can be studied in greater detail, leading to either a confirmation
of the Standard Model (SM) predictions or to hints for `new' physics.
For example, the chirality structure of the $tWb$ vertex
can be tested with a highly polarized top quark sample \cite{JeKu94}.
Further, anomalous CP-violating dipole form factors
contributing to the $Z t\bar{t}$ and $\gamma t\bar{t}$ vertex
would show up as nonzero expectation values of CP-odd spin
observables (see, e.g., \cite{BeNaOvSc92}).
Needless to say, the predictions of the SM must be known to high
precision in order to establish possible deviations.\par
The polarization and spin correlations of top quarks
can be traced in the angular-energy distributions and
momentum correlations of the decay products.
Consider for example the decay distribution of charged
leptons in semileptonic decays
$t\to \ell^+ \nu_{\ell} b$. At leading order within
the SM, this distribution reads
in the top quark rest frame \cite{JeKu89}
\begin{eqnarray}\label{decay}
\frac{d^2\Gamma}{dE_{\ell}d\cos\theta}=\frac{1}{2}\left(1+|{\bf P}_t|
\cos\theta\right)\frac{d\Gamma}{dE_{\ell}},
\end{eqnarray}
where $E_{\ell}$ is the energy of the charged lepton and
$\theta$ is the angle between the direction of $\ell^+$
and the polarization ${\bf P}_t$ of the top quark sample.
A remarkable feature of (\ref{decay}) is the factorization into
an energy-dependent and angular-dependent part, which
is also respected to a high degree of accuracy by QCD corrections
\cite{CzJeKu91}. The direction of flight of the charged lepton
in the top quark rest frame is thus a perfect analyser of the top
quark polarization.
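For instance, integrating (\ref{decay}) over the lepton energy, the
forward--backward asymmetry of $\ell^+$ with respect to the polarization
axis is simply
\begin{equation}
A_{FB}^{\ell}=\frac{N(\cos\theta>0)-N(\cos\theta<0)}
{N(\cos\theta>0)+N(\cos\theta<0)}=\frac{|{\bf P}_t|}{2},
\end{equation}
so the lepton angular distribution gives direct access to $|{\bf P}_t|$.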
Analogously, angular correlations between ${\ell^+}$ and ${\ell^-}$
efficiently probe spin correlations between $t$ and $\bar{t}$.
Of course, momenta of other
final state particles in semileptonic
decays as well as in hadronic top decays
can also be used to probe top quark spin effects.
\par
In the remainder of this
paper we discuss the polarization and spin correlations
in $e^+e^-\to t\bar{t}X$ to order $\alpha_s$
within the SM.
Neglecting so-called non-factorizable contributions it is
straightforward to combine our results with the known
decay distributions of polarized top quarks.
\section{Review of leading order results}\label{sec2}
In this section we write down in a compact form
leading order results for the top quark
polarization and the spin correlations between $t$ and $\bar{t}$ in the
reaction
\begin{equation}
\label{process} e^+(p_+,\lambda_+)+e^-(p_-,\lambda_-)\to (\gamma^\ast,Z^\ast)\to t(k_t)+
\bar t(k_{\bar t}) + X,
\end{equation}
where $\lambda_-$ ($\lambda_+$) denotes the longitudinal
polarization of the electron (positron) beam\footnote{
For a right-handed electron (positron), $\lambda_{\mp}=+1$.}.
Spin effects of top quarks in reaction (\ref{process}) were first analysed
in ref. \cite{KuReZe86}. A more recent analysis of spin correlations
at leading order can be found in ref. \cite{PaSh96}, where a so-called
`optimal' spin basis is constructed.
\par
The top quark polarization is defined as two times the expectation value
of the top quark spin operator ${\bf S}_t$.
The operator ${\bf S}_t$ acts on the tensor product of
the $t$ and $\bar{t}$ spin spaces and is given by
${\bf S}_t= \frac{{\mbox{\boldmath $\sigma$}}}{2}\otimes 1\!\mbox{l} $, where
the first (second) factor in the tensor product
refers to the $t$ ($\bar{t}$) spin space. (The spin operator of the
top antiquark is defined by ${\bf S}_{\bar t}= 1\!\mbox{l} \otimes \frac{{\mbox{\boldmath $\sigma$}}}{2}$.)
The expectation value is taken with respect to the spin degrees
of freedom of the $t\bar{t}$ sample described
by a spin density matrix $\rho$,
i.e.
\begin{eqnarray} \label{pol}
{\bf P}_t = 2\,\langle {\bf S}_t\rangle =
2\frac{{\rm Tr}\, \left[\rho\, {\bf S}_t\right]}{{\rm Tr}\, \rho}.
\end{eqnarray}
For details on the definition and computation of $\rho$, see \cite{BrFlUw99}.
The polarization of the top antiquark ${\bf P}_{\bar t}$
is defined by replacing ${\bf S}_t$ by ${\bf S}_{\bar t}$ in (\ref{pol}).
For top quark pairs produced by CP invariant interactions,
${\bf P}_{\bar t}={\bf P}_{t}$.
The spin correlations between
$t$ and $\bar{t}$ are encoded in the matrix
\begin{eqnarray} \label{corr}
C_{ij} = 4\,\langle S_{t,i} S_{\bar{t},j} \rangle =
4\frac{{\rm Tr}\,\left[ \rho \,S_{t,i}S_{\bar{t},j}\right]}{{\rm Tr}\, \rho}.
\end{eqnarray}
The definitions (\ref{pol}) and (\ref{corr})
imply that ${\bf P}_t$ and $C_{ij}$ are independent
of the choice of the spin basis.
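As a purely illustrative aside, the definitions (\ref{pol}) and (\ref{corr})
are straightforward to evaluate numerically for any given density matrix.
The short Python sketch below does so for an arbitrary, made-up pure state
(all inputs are placeholders, chosen only to illustrate the tensor-product
structure of ${\bf S}_t$ and ${\bf S}_{\bar t}$):
\begin{verbatim}
# Illustrative only: P_t and C_ij from a 4x4 t-tbar spin density matrix.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

St    = [np.kron(s, I2) / 2 for s in (sx, sy, sz)]  # S_t    = sigma/2 (x) 1
Stbar = [np.kron(I2, s) / 2 for s in (sx, sy, sz)]  # S_tbar = 1 (x) sigma/2

# made-up example: both spins up along the z axis (pure product state)
psi = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)
rho = np.outer(psi, psi.conj())

norm = np.trace(rho).real
P_t  = [2 * np.trace(rho @ S).real / norm for S in St]
C    = [[4 * np.trace(rho @ St[i] @ Stbar[j]).real / norm
         for j in range(3)] for i in range(3)]
print(P_t)  # -> [0, 0, 1]
print(C)    # -> only C_zz = 1 for this product state
\end{verbatim}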
It is convenient to write the results in terms of the electron and top quark
directions
$\hat{\bf p}$ and $\hat{\bf k}$ defined in the c.m. system, the
cosine of the scattering angle $z=\hat{\bf p}\cdot\hat{\bf k}$, the
scaled top quark mass $r=2m_t/\sqrt{s}$ and the top quark velocity
$\beta=\sqrt{1-r^2}$.
The electroweak couplings that enter
the results are given by
\begin{eqnarray}
\label{wcouplings}
g_{PC (PV)}^{VV} &=&Q_t^2\, f_{PC(PV)}^{\gamma\gamma}
+ 2 \,g_v^t\,Q_t\, \chi\,f_{PC(PV)}^{\gamma Z}
+ g_v^{t\,2}\, \chi^2\,f_{PC(PV)}^{ZZ},\nonumber\\
g_{PC(PV)}^{AA} &=& g_a^{t\,2} \chi^2 f_{PC(PV)}^{ZZ},
\nonumber\\
g_{PC(PV)}^{VA}&=& -g_a^t\,Q_t\,\chi\,f_{PC(PV)}^
{\gamma Z} -g_v^t\,g_a^t\, \chi^2 f_{PC(PV)}^{ZZ},
\end{eqnarray}
where
\begin{eqnarray}
f_{PC}^{\gamma\gamma}&=&1-\lambda_-\lambda_+,\nonumber \\
f_{PV}^{\gamma\gamma}&=& \lambda_--\lambda_+,\nonumber \\
f_{PC}^{ZZ}&=&(1-\lambda_-\lambda_+)(g_v^{e2}+g_a^{e2})-
2(\lambda_--\lambda_+) g_v^{e} g_a^{e},\nonumber \\
f_{PV}^{ZZ}&=&(\lambda_--\lambda_+) (g_v^{e\,2}+g_a^{e\,2}) -
2\,(1-\lambda_-\lambda_+)g_v^e\,g_a^e,\nonumber \\
f_{PC}^{\gamma Z}&=&-(1-\lambda_-\lambda_+)g_v^e +
(\lambda_--\lambda_+)g_a^e,\nonumber \\
f_{PV}^{\gamma Z}&=& (1-\lambda_-\lambda_+)g_a^e -(\lambda_--\lambda_+)
g_v^e.
\end{eqnarray}
In (\ref{wcouplings}),
$Q_t$ denotes
the electric charge of the top quark in units of $e=\sqrt{4\pi\alpha}$, and
$g_v^f$, $g_a^f$ are the vector- and the axial-vector couplings of a
fermion of type $f$, i.e.
$g_v^e = -\frac{1}{2} + 2 \sin^2\vartheta_W$,
$g_a^e =-\frac{1}{2}$ for an electron, and
$g_v^t = \frac{1}{2} - \frac{4}{3} \sin^2\vartheta_W$,
$g_a^t = \frac{1}{2}$ for a top quark, with $\vartheta_W$ denoting the
weak mixing angle. The function
$\chi$ is given by
\begin{equation}
\label{chi}
\chi = \frac{1}{4\sin^2\vartheta_W\cos^2\vartheta_W}\,
\frac{s}{s-m_Z^2},
\end{equation}
where $m_Z$ stands for the mass of the Z boson.
\par
We further introduce a vector perpendicular to ${\bf k}$ in the
production plane,
${\bf k}^{\perp}=\hat{\bf p}-z\hat{\bf k}$. A simple calculation yields:
\begin{eqnarray}
{\bf P}_t
&\!=&\!
2\frac{ r\left(\beta z g_{PC}^{VA}+g_{PV}^{VV}\right){\bf k}^{\perp}+\left[\beta (1+z^2 )g_{PC}^{VA}
+z g_{PV}^{VV}+\beta^2 z g_{PV}^{AA}\right] \hat{\bf k} }
{ \left[2-\beta^2 (1-z^2)\right]g_{PC}^{VV}+\beta^2
(1+z^2 )g_{PC}^{AA}+4\beta z g_{PV}^{VA} },
\end{eqnarray}
\newpage
\begin{eqnarray}
C_{ij}
&\!=&\! \frac{1}{3}\delta_{ij}
+\frac{2}{\left[2-\beta^2 (1-z^2)\right]
g_{PC}^{VV} +\beta^2 (1+z^2) g_{PC}^{AA}
+4\beta z g_{PV}^{VA}} \nonumber \\
&\!\times&\!
\bigg[ \left(\left[z^2 +\beta^2(1-z^2)\right]g_{PC}^{VV}+
\beta^2 z^2 g_{PC}^{AA}+2\beta z g_{PV}^{VA} \right)
\left(\hat{k}_i\hat{k}_j-\frac{1}{3}\delta_{ij}\right)
\nonumber \\ & & +\,
(g_{PC}^{VV}-\beta^2 g_{PC}^{AA})
\left(k^{\perp}_i k^{\perp}_j-\frac{1}{3}\delta_{ij}
(1-z^2)\right)
\nonumber \\
& & +\,
r (z g_{PC}^{VV}+ \beta g_{PV}^{VA})
(k^{\perp}_i\hat{k}_j+k^{\perp}_j\hat{k}_i)\bigg].
\end{eqnarray}
In the limit $\beta\to 0$
(threshold) we obtain for the top quark polarization:
\begin{eqnarray} \label{plimit}
{\bf P}_t & \!{\buildrel
\beta\to 0\over \longrightarrow}&\! \frac{g_{PV}^{VV}}{g_{PC}^{VV}}\hat{\bf p}
+ \beta\left[\left(
\frac{g_{PC}^{VA}}{g_{PC}^{VV}}-2\,\frac{g_{PV}^{VV}g_{PV}^{VA}}{(g_{PC}^{VV})^2}\right)z\,\hat{\bf p} + \frac{g_{PC}^{VA}}{g_{PC}^{VV}}\hat{\bf k} \right] +
{\cal O}(\beta^2).
\end{eqnarray}
In the leading order
parton model calculation, the
top quark polarization becomes parallel to the electron
beam for $\beta=0$.
For a fully polarized electron beam
(and unpolarized positrons), we
have $g_{PV}^{VV(VA)}=\pm g_{PC}^{VV(VA)}$ for $\lambda_-=\pm 1$.
In that case the top quark polarization along the beam
is equal to the electron polarization,
${\bf P_t}\cdot \hat{\bf p}=\lambda_-=\pm 1$,
up to corrections of order $\beta^2$.
\par
The spin correlations also have a simple limit:
\begin{eqnarray} \label{corrlimit}
C_{ij}
&\!{\buildrel
\beta\to 0\over \longrightarrow}&\! \hat{p}_i\hat{p}_j+
\beta \frac{g_{PV}^{VA}}{g_{PC}^{VV}}(\hat{p}_i \hat{k}_j +
\hat{p}_j \hat{k}_i- 2 z \hat{p}_i \hat{p}_j)
+{\cal O}(\beta^2).
\end{eqnarray}
Note that in the threshold region
QCD binding effects modify the above parton model
results significantly. More precisely,
the simple factor $\beta$ in (\ref{plimit}) and (\ref{corrlimit})
gets replaced by a function incorporating the complex dynamics of the
$t\bar{t}$ system close to threshold, which is
governed by the QCD potential \cite{Ha95}.
\par
In the high-energy limit $r=\sqrt{1-\beta^2}\to 0$,
\begin{eqnarray}
{\bf P}_t
{\buildrel
r\to 0\over \longrightarrow} \,2\,\frac{
(1+z^2 )g_{PC}^{VA}+ z (g_{PV}^{VV}+g_{PV}^{AA})}{(1+z^2 )
(g_{PC}^{VV}+g_{PC}^{AA})+4 z g_{PV}^{VA} } \hat{\bf k} + {\cal O}(r),
\end{eqnarray}
i.e. the top quark polarization becomes parallel to its direction of flight.
\newpage
Finally,
\begin{eqnarray}
C_{ij}
&\!{\buildrel
r\to 0\over \longrightarrow}&\!
\frac{1}{3}\delta_{ij} + \frac{2}
{(1+z^2)(g_{PC}^{VV}+g_{PC}^{AA})+4 g_{PV}^{VA} z }\nonumber \\
&\!\times&\!
\bigg[(g_{PC}^{VV}+z^2 g_{PC}^{AA}+2 z g_{PV}^{VA} )
\left(\hat{k}_i\hat{k}_j-\frac{1}{3}\delta_{ij}\right) \nonumber \\
& & +\, (g_{PC}^{VV}-g_{PC}^{AA} )\left(k^{\perp}_i
k^{\perp}_j-\frac{1}{3}\delta_{ij}
(1-z^2)\right)\bigg] + {\cal O}(r).
\end{eqnarray}
\section{QCD corrections at order $\alpha_s$}
The QCD corrections at order $\alpha_s$
to the above results are given by the contributions from
one-loop virtual corrections to $e^+e^-\to t\bar{t}$
and from the real gluon emission process $e^+e^-\to t\bar{t}g$.
The so-called phase space slicing method is used to isolate
the soft gluon singularities.
The contribution of hard gluons to ${\bf P}_t$ and $C_{ij}$
is computed by numerically integrating
all phase space variables of the $t\bar{t}g$ final state
except for the top quark scattering angle.
Further details of the computation are given
in ref. \cite{BrFlUw99}.
Results to order $\alpha_s$ for the
polarization projected onto $\hat{\bf k}$ and
${\bf k}^{\perp}/|{\bf k}^{\perp}|$
can also be found in ref. \cite{KoPiTu94} and
ref. \cite{GrKo96}, respectively.
Absorptive parts of the one-loop amplitude induce as new structures
a polarization normal to the event plane \cite{KuReZe86,KaPuRe78,BeMaSc92}
as well as new types
of spin correlations. We denote these additional structures
by an upper index `abs'.
Defining ${\bf n}= \hat{\bf p}\times \hat{\bf k}$, they read:
\begin{eqnarray}
{\bf P}_t^{\rm abs.}&\! = &\!\frac{\alpha_s C_F r
\left[(\beta^2-2) g_{PV}^{VA}-\beta z g_{PC}^{VV}\right]}
{2\left(g_{PC}^{VV}\left[2-\beta^2(1-z^2)\right]+g_{PC}^{AA}\beta^2
(1+z^2)+4 g_{PV}^{VA} \beta z\right)}\, {\bf n} \nonumber \\
&\!{\buildrel
\beta\to 0\over \longrightarrow}&\!
-\frac{\alpha_s C_F}{2} \frac{g_{PV}^{VA}}{g_{PC}^{VV}}
\,{\bf n}+{\cal O}(\beta),
\end{eqnarray}
\begin{eqnarray}
C_{ij}^{\rm abs.}
&\!=&\! \frac{-\alpha_s C_F r }{2\left(g_{PC}^{VV}\left[2-\beta^2(1-z^2)\right]
+g_{PC}^{AA}\beta^2
(1+z^2)+4 g_{PV}^{VA} \beta z\right)}\nonumber \\
&\!\times&\! \big[(\beta g_{PV}^{VV}-(\beta^2 - 2) z g_{PC}^{VA})
({n}_i\hat{k}_j+\hat{k}_i{n}_j)\nonumber \\
& & +\,
2 r g_{PC}^{VA}
({n}_i k^{\perp}_j+k^{\perp}_i{n}_j)\big]\nonumber \\
&\!{\buildrel
\beta\to 0\over \longrightarrow}&\! -\frac{\alpha_s C_F}{2}
\frac{g_{PC}^{VA}}{g_{PC}^{VV}}
\left({n}_i\hat{p}_j+\hat{p}_i{n}_j\right)
+{\cal O}(\beta),
\end{eqnarray}
where $C_F=4/3$.
The threshold behaviour of the other order $\alpha_s$ corrections
to the Born results for ${\bf P}_t$ and
$C_{ij}$ is very simple:
First, all these corrections vanish at $\beta=0$.
Second, the QCD corrections of order $\beta$ can be implemented in the Born
formulas (\ref{plimit}) and (\ref{corrlimit}) by
multiplying the respective order $\beta$ term with
the factor $(1+\alpha_s C_F/\pi)$.
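Explicitly, with this prescription the threshold expansion (\ref{plimit}) of the
polarization including the order $\alpha_s$ corrections (apart from the normal
component discussed below) reads
\begin{equation}
{\bf P}_t \simeq \frac{g_{PV}^{VV}}{g_{PC}^{VV}}\hat{\bf p}
+ \beta\left(1+\frac{\alpha_s C_F}{\pi}\right)\left[\left(
\frac{g_{PC}^{VA}}{g_{PC}^{VV}}-2\,\frac{g_{PV}^{VV}g_{PV}^{VA}}{(g_{PC}^{VV})^2}\right)z\,\hat{\bf p}
+ \frac{g_{PC}^{VA}}{g_{PC}^{VV}}\hat{\bf k} \right],
\end{equation}
and the analogous replacement applies to the order $\beta$ term in (\ref{corrlimit}).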
\par
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig1a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig1b.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig1c.ps}}
\caption{Top quark polarization
projected onto the direction of the electron, i.e. ${\bf P}_t\cdot
\hat{\bf p}$, as a function of $z$.
The left and middle figure are the results including the order $\alpha_s$
corrections, the
right figure shows the value of the QCD correction itself
at $\sqrt{s}=1$ TeV.
Input values: $m_t=175$ GeV,
$\alpha_s=0.1$ (fixed), $\sin^2\vartheta_W=0.2236$, and $\lambda_+=0$.
The solid line is for $\lambda_-=0$, the dashed line for
$\lambda_-=-1$, and the dotted line for $\lambda_-=+1$.}
\label{F1}
\end{figure}
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig2a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig2b.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig2c.ps}}
\caption{Same as Fig. 1, but for ${\bf P}_t\cdot
\hat{\bf k}$.}
\label{F2}
\end{figure}
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig3a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig3b.ps}}
\caption{Top quark polarization
projected onto $\hat{\bf n}={\bf n}/|{\bf n}|$, i.e.
${\bf P}_t\cdot\hat{\bf n}$, which is zero at Born level.
The labelling of the curves is as in Fig. 1.}
\label{F3}
\end{figure}
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig4a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig4b.ps}}
\caption{Order $\alpha_s$ correction to the correlation
$C_{ij}\delta_{ij}=4\langle {\bf S}_t\cdot {\bf S}_{\bar{t}}\rangle$,
which is equal to 1 at the Born level. The labelling of
the curves is as in Fig. 1.
}
\label{F4}
\end{figure}
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig5a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig5b.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig5c.ps}}
\caption{Same as Fig. 1, but for the correlation
$\hat{p}_iC_{ij}\hat{p}_j$.
}
\label{F5}
\end{figure}
\begin{figure}
\centering
\mbox{\epsfysize=41mm\epsffile{toppol_fig6a.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig6b.ps}}
\mbox{\epsfysize=41mm\epsffile{toppol_fig6c.ps}}
\caption{Same as Fig. 1, but for the correlation
$\hat{k}_iC_{ij}\hat{k}_j$.}
\label{F6}
\end{figure}
We now turn to the discussion of numerical results obtained from
the exact calculation including the QCD corrections.
We consider unpolarized positron beams and
the three cases $\lambda_-=0,\pm 1$.
For c.m. energies not too far from the $t\bar{t}$ threshold,
the QCD corrections are quite small. (As mentioned before, the parton model
results presented here can not be used in the threshold region itself,
where the expansion in $\alpha_s$
does not make sense.) For example, at $\sqrt{s}=0.4$ TeV
the corrections are smaller than $0.5\%$ ($1\%$) for the top quark polarization
projected onto the electron beam (top quark direction of flight) for all
scattering angles. Far above threshold, the QCD correction to the polarization
can reach values above $5\%$ for special values of
$z$ (see right plots of Figs. 1 and 2). The normal
polarization shown in Fig. 3 reaches values of a few percent for not too
high c.m. energies.
\par
Since the production of the top quarks proceeds through a single spin-one
gauge boson, the correlation $C_{ij}\delta_{ij}=4\langle {\bf S}_t\cdot
{\bf S}_{\bar{t}} \rangle$
is exactly equal to 1 at the Born level, independent of the scattering
angle. Only hard gluon emission leads to a deviation from this result.
The QCD correction to this correlation is therefore extremely small
at $\sqrt{s}=0.4$ TeV due to the phase space suppression (see Fig. 4, left).
However, at $\sqrt{s}=1$ TeV, the hard gluon emission leads to a substantial
decrease of this correlation, which exceeds $10\%$ for top quarks emitted
in the backward direction in the case of right-handed electron beams (Fig. 4, right).
Fig. 5 shows the `beamline' spin correlation $\hat{p}_iC_{ij}\hat{p}_j$. The QCD corrections to this quantity
are smaller than $1\%$ at $\sqrt{s}=0.4$ TeV and of the order of $5\%$ at
$\sqrt{s}=1$ TeV (right plot of Fig. 5). Finally,
Fig. 6 depicts our results for the
correlation
$\hat{k}_iC_{ij}\hat{k}_j$. Note that this correlation
is at Born level equal to $(-1)$ times the
`helicity' correlation
$P_{\ell\ell}=\hat{k}_{t,i}C_{ij}\hat{k}_{\bar{t},j}$. This
special spin correlation, averaged
over the scattering angle, was computed analytically to order $\alpha_s$
in ref. \cite{TuBePe98,GrKoLe98}. Further results for other
c.m. energies and for additional spin observables can be found in
ref. \cite{BrFlUw99}.
\section{Conclusions}
At a future linear collider, it will be possible to precisely
study the rich phenomenology of top quark spin effects, both
at threshold and in the continuum. Theoretical predictions
for the top quark polarization and the $t\bar{t}$ spin correlations
above threshold are available to order $\alpha_s$. The QCD corrections
are in general small not too far away from threshold, but can reach, for
energies around 1 TeV, values of the order of 5\% or larger in certain
kinematic regions. Their inclusion is mandatory in searches
for nonstandard interactions of the top quark.
\bigskip
{\small We would like to thank W. Bernreuther, M. Je\.{z}abek, J.H. K\"uhn, and
T. Teubner for discussions. A.B. would like to thank the organizers of the
Spin 99 conference for their kind hospitality.}
\bigskip
\section{Introduction}
Slug tests are a common tool in hydrogeology for hydraulic characterization of aquifers because they are quick, obviate the need for waste water disposal, require less equipment, and are not as labor intensive as pumping tests. Fundamentally, they involve instantaneous (step) perturbation of fluid pressure in an interval followed by continuous monitoring of the pressure change as it dissipates by fluid flow through the aquifer. This is typically achieved by either dropping a slug mass into a well \citep{cooper1967} or pneumatically pressurizing the water column in a well \citep{butler1998, malama2011}, a configuration referred to as a single well test. Several mathematical models are available in the hydrogeology literature for analyzing confined \citep{cooper1967, bredehoeft1980, zurbuchen2002, butler2004} and unconfined \citep{bouwer1976, springer1991, hyder1994, spane1996a, zlotnik1998, malama2011} aquifer slug test data under the Darcian flow regime. Consideration of slug tests under non-Darcian flow regimes may be found in \citet{quinn2013} and \citet{wang2015}.
Slug tests have the advantage of only involving limited contact with and minimal disposal of effluent formation water. As such, they have found wide application for characterizing heterogeneous formations at contaminated sites \citep{shapiro1998} and for investigating flow in fractured rock \citep{quinn2013, ji2015, ostendorf2015}. However, the small volumes of water involved impose a physical limit on the volume of the formation interrogated during tests \citep{shapiro1998, beckie2002} because the resulting pressure perturbations often do not propagate far enough to be measurable in observation wells. As a result, hydraulic parameters estimated from single well slug-test data can only be associated with the formation volume within the immediate vicinity of the source well \citep{beckie2002, butler2005}.
Cross-hole (or multi-well) slug tests are less common but have been applied to interrogate relatively large formation volumes in what has come to be known as hydraulic tomography \citep{yeh2000, illman2009}. For example, \citet{vesselinov2001b} and \citet{illman2001} used pneumatic cross-hole injection tests to hydraulically characterize a fractured unsaturated rock formation with dimensions of $30\times30\times30$~$\mathrm{m}^3$. \citet{barker1983} presented evidence of measurable pressure responses in observation wells several meters from the source well. \citet{audouin2008} reported cross-hole slug tests conducted in fractured rock, where they collected data in observation wells at radial distances of 30 to about 120~m from the source well, and observed maximum peak amplitudes ranging from 5 to 20~cm. This demonstrated empirically that slug test pressure perturbations can propagate over relatively large distances beyond the immediate vicinity of the source well, albeit for fractured rocks, which have high hydraulic diffusivities. \citet{brauchler2010} attempted to intensively apply cross-hole slug tests to obtain a detailed image of confined aquifer heterogeneity. They used the model of \citet{butler2004} to estimate aquifer hydraulic conductivity, specific storage and anisotropy. Cross-hole slug tests in unconfined aquifers, neglecting wellbore inertial effects, have been reported by \citet{spane1996a}, \citet{spane1996b}, and \citet{belitz1999} for source-to-observation well distances not exceeding 15 m.
Recently \citet{paradis2013} and \citet{paradis2014, paradis2015} analysed synthetic cross-hole slug test data using a model for over-damped observation well responses. The need, therefore, still exists to analyse field data and characterize high permeability heterogeneous unconfined aquifers using cross-hole slug tests where source and observation well inertial effects may not be neglected. \citet{malama2011} developed a slug test model for unconfined aquifers using the linearised kinematic condition of \citet{neuman1972} at the water-table, and accounting for inertial effects of the source well. They analysed data from single-well tests performed in a shallow unconfined aquifer. This work extends the application of the model of \citet{malama2011} to multi-well tests and to response data collected in observation wells. The data analysed were collected at multiple vertical intervals in an observation well about 4 m from the source well, which itself was perturbed at multiple intervals. The model and data are used to estimate hydraulic conductivity, specific storage, and specific yield. The sensitivity of predicted model behaviour to these parameters is also analysed. In the following, the mathematical model is presented, the multi-level multi-well tests are described, and data analysed. The work concludes with an analysis of the sensitivity coefficients for the hydraulic and well characteristic parameters.
\begin{figure}[h]
\includegraphics[width=0.75\textwidth]{slugschematic.eps}
\caption{\label{fig:slugschematic} Schematic of a typical cross-hole slug test set-up for an unconfined aquifer. For the tests reported herein, the source and observation intervals were isolated with a multi chamber well not a multi-packer system.}
\end{figure}
\section{Slug Test Model}
\citet{malama2011} developed a model for formation and source well response to slug tests performed in unconfined aquifers using the linearised kinematic condition at the water-table. The model allows for estimation of specific yield in addition to hydraulic conductivity and specific storage. The model also accounts for source-well wellbore storage and inertial effects. Wellbore storage in the source well is treated in the manner of \citet{cooper1967}. A schematic of the conceptual model used to derive the semi-analytical solution is shown in Figure \ref{fig:slugschematic}. Whereas the solution of \citep{malama2011} was obtained for and applied to source wells, here a more complete solution is presented that applies to observation wells. The complete aquifer response for both source and observation wells is given by (see Appendix~A and \citet{malama2011} for details)
\begin{equation}
\label{eqn:}
\hat{\overline{s}}_D = \hat{\overline{u}}_D
\begin{cases}
\left[1 - \hat{\overline{v}}_D\left(d_D\right)\right] \hat{\overline{f}}_1(z_D) & \forall z_D\in[0,d_D],\\
1 - \hat{\overline{v}}_D & \forall z_D\in[d_D,l_D],\\
\left[1 - \hat{\overline{v}}_D\left(l_D\right)\right]\hat{\overline{f}}_2(z_D) & \forall z_D\in[l_D,1],
\end{cases}
\end{equation}
where $\hat{\overline{s}}_D$ is the double Laplace-Hankel transform of the dimensionless formation head response $s_D =s/H_0$, $d_D=d/B$ and $l_D=l/B$ are dimensionless depths to the top and bottom of the test interval, $z_D=z/B$ ($z\in [0,B]$) is dimensionless depth below the water-table, $B$ is initial saturated thickness,
\begin{equation}
\label{eqn:uD}
\hat{\overline{u}}_D = \frac{C_D(1-p\overline{H}_D)}{\kappa\eta^2\xi_w\mathrm{K}_1(\xi_w)},
\end{equation}
\begin{equation}
\label{eqn:vD}
\hat{\overline{v}}_D = \frac{\Delta_0(d_D)}{\Delta_0(1)} \cosh(\eta z_D^\ast) + \sinh(\eta l_D^\ast)\frac{\Delta_0' (z_D)}{\eta\Delta_0(1)},
\end{equation}
\begin{align}
\hat{\overline{f}}_1(z_D) & = \frac{\Delta_0'(z_D)}{\Delta_0'(d_D)} ,\\
\hat{\overline{f}}_2(z_D) & = \frac{\cosh(\eta z_D^\ast)}{\cosh(\eta l_D^\ast)} ,
\end{align}
\begin{equation}
\Delta_0 (z_D) = \sinh(\eta z_D) + \varepsilon \cosh(\eta z_D),
\end{equation}
and
\begin{equation}
\Delta'_0 (z_D) = \eta \left[\cosh\left(\eta z_D\right) + \varepsilon \sinh\left(\eta z_D\right) \right].
\end{equation}
Additionally, $z_D^\ast=1-z_D$, $l_D^\ast = 1 - l_D$, $\eta = \sqrt{(p+a_i^2)/\kappa}$, $p$ and $a_i$ are the dimensionless Laplace and finite Hankel transform parameters, $C_D=r_{D,c}^2/(b_s S_s)$ is the dimensionless wellbore storage coefficient of the source well, $S_s$ is formation specific (elastic) storage, $b_s=l-d$ the length of the source well completion interval, $\kappa = K_z/K_r$ is the formation anisotropy ratio, $K_z$ and $K_r$ are vertical and radial hydraulic conductivities, $\xi_w=r_{D,w}\sqrt{p}$, $\varepsilon=p/(\eta\alpha_D)$, and $\mathrm{K}_1()$ is the first-order second-kind modified Bessel function \citep[\S 10.25]{Olver:2010:NHMF}. The relevant dimensionless parameters are listed in Table \ref{tab:dimlessparameters}.
\begin{table}[ht]
\caption{\label{tab:dimlessparameters}Dimensionless variables and parameters}
\centering
\begin{tabular}{lll}
\hline
$s_{D,i}$ &=& $s_i/H_0$\\
$H_D$ &=& $H(t)/H_0$\\
$r_D$ &=& $r/B$\\
$r_{D,w}$ &=& $r_w/B$\\
$r_{D,c}$ &=& $r_c/B$\\
$r_{D,s}$ &=& $r_s/B$\\
$R_D$ &=& $R/B$\\
$z_D$ &=& $z/B$\\
$d_D$ &=& $d/B$\\
$t_D$ &=& $\alpha_{r,1} t/B^2$\\
$C_{D}$ &=& $r_{D,c}^2/(bS_s)$\\
$\alpha_{D}$ &=& $\kappa\sigma$\\
$\beta_1$ &=& $8\nu L/(r_c^2 g T_c)$\\
$\beta_2$ &=& $L_e/(g T_c^2)$\\
$\beta_D$ &=& $\beta_1/\sqrt{\beta_2}$\\
$\kappa_i$ &=& $K_{z,i}/K_{r,i}$\\
$\sigma$ &=& $BS_s/S_y$\\
$\gamma$ &=& $K_{r,2}/K_{r,1}$\\
$\vartheta$ &=& $2b S_{s,2} (r_w/r_c)^2$\\
$\xi_\mathrm{sk}$ &=& $r_\mathrm{sk}/r_w$\\
$\xi_w$ &=& $r_{D,w}\sqrt{p}$\\
$\eta^2$ &=& $(p+a_i^2)/\kappa$\\
\hline
\end{tabular}
\end{table}
The function $\overline{H}_D(p)$ in \eqref{eqn:uD} is the Laplace transform of $H_D(t_D) = H(t)/H_0$, the normalized response in the source well, and is given by
\begin{align}
\label{eqn:solution2}
\overline{H}_D(p) & = \frac{\overline{\psi}_1(p)}{\omega_{D,s}^2 + p\overline{\psi}_1(p)},
\end{align}
where $\omega_{D,s} = \omega_s T_c$, $\omega_s = \sqrt{g/L_e}$ is the source well frequency, $L_e$ is a characteristic length associated with the source well oscillatory term, $T_c = B^2/\alpha_r$ is a characteristic time, $g$ is the acceleration due to gravity, and
\begin{align}
\overline{\psi}_1(p) & = p + \gamma_{D,s} + \frac{\omega_{D,s}^2}{2}\overline{\Omega}\left(r_{D,w},p\right).
\end{align}
The function $\overline{\Omega}$ is defined by
\begin{equation}
\label{eqn:Lap_Omega}
\overline{\Omega}(r_{D,w},p) = \left.\mathsf{H}_0^{-1}\left\{\hat{\overline{\Omega}} \left(a_i,p \right)\right\}\right|_{r_{D,w}},
\end{equation}
where $\mathsf{H}_0^{-1}\{\}$ denotes the inverse zeroth-order finite Hankel transform operator, $r_{D,w} = r_w/B$ is the dimensionless wellbore radius, $\gamma_{D,s} = \gamma_s T_c$, $\gamma_s$ is the source well damping coefficient, and
\begin{equation}
\label{eqn:HankLap_Omega}
\hat{\overline{\Omega}} (a_i,p) = \frac{C_D \left[1 - \left\langle \hat{\overline{w}}_D \left(a_i,p \right) \right \rangle \right]}{\kappa \eta^2 \xi_w \mathrm{K}_1(\xi_w)}.
\end{equation}
\citet{malama2011} showed that
\begin{equation}
\label{eqn:wD}
\langle \hat{\overline{w}}_D \rangle = \frac{1}{\eta b_{D,s}} \frac{\Delta_0(d_D)}{\Delta_0(1)} \left\{\sinh(\eta d_D^\ast) - \left[2 - \frac{\Delta_0(l_D)}{\Delta_0(d_D)}\right] \sinh(\eta l_D^\ast)\right\},
\end{equation}
where $d_D^\ast = 1-d_D$, and $b_{D,s}=b_s/B$. According to \citet{butler2004}, the source well damping coefficient is $\gamma_s = 8\nu L /(L_e r_c^2)$, where $\nu$ is the kinematic viscosity of water and $L$ is a characteristic length associated with the perturbed column of water in the source well.
Whereas \cite{malama2011} used the infinite Hankel transform, here a finite Hankel transform \citep{sneddon95,miles71} is used for inversion, with the transform pair defined as
\begin{align}
\label{eq:finite-Hankel-pair}
\hat{f}(a_i) &= \mathsf{H}_0 \left\{ f(r_D)\right\} = \int_0^{R_D} r_D f(r_D) \mathrm{J}_0 (r_D a_i) \; \mathrm{d}r_D, \nonumber\\
f(r_D) &= \mathsf{H}^{-1}_0 \left\{ \hat{f}(a_i)\right\} = \frac{2}{R_D^2}\sum_{i=0}^{\infty} \hat{f}(a_i) \frac{ \mathrm{J}_0(r_D a_i)}{\left[ \mathrm{J}_1 (R_D a_i)\right]^2},
\end{align}
where $a_i$ are the roots of $\mathrm{J}_0(R_D a_i)=0$, $R_D = R/B$, $R$ is the radius of influence of the source well, and $\mathrm{J}_n()$ is the $n$th-order first-kind Bessel function \citep[\S 10.2]{Olver:2010:NHMF}. For the specified roots and Hankel transform pair given in \eqref{eq:finite-Hankel-pair}, a homogeneous Dirichlet boundary condition is enforced at $r_D=R_D$. Due to the short duration of the signal, a radius of influence such that $R \ge 2 r_\mathrm{obs}$ is sufficient. The finite Hankel transform is chosen for computational expedience; it is simpler to invert numerically than the infinite Hankel transform \citep{malama13}. Laplace transform inversion is performed using the algorithm of \cite{dehoog1982}. The software used to implement the analytical solution described here is released under an open-source MIT license and is available from a public Bitbucket repository (\texttt{https://bitbucket.org/klkuhlm/slug-osc}).
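To make the numerical inversion concrete, a minimal Python sketch of the
truncated inverse finite Hankel transform in (\ref{eq:finite-Hankel-pair}) is
given below. It is an illustration only (the truncation level and the test
function are arbitrary) and is not an excerpt of the released code:
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1, jn_zeros

def finite_hankel_inverse(f_hat, r_D, R_D, nterms=200):
    """Truncated series inversion; a_i are the roots of J0(R_D * a) = 0."""
    a = jn_zeros(0, nterms) / R_D
    terms = f_hat(a) * j0(r_D * a) / j1(R_D * a)**2
    return 2.0 / R_D**2 * np.sum(terms)

# self-check with a known pair: the forward transform of f(r_D) = 1
# on [0, R_D] is R_D * J1(a * R_D) / a
R_D = 2.0
f_hat = lambda a: R_D * j1(a * R_D) / a
print(finite_hankel_inverse(f_hat, 0.5, R_D))  # approaches 1 as nterms grows
\end{verbatim}
In the actual solution, $\hat{f}(a_i)$ is the Laplace-space expression
$\hat{\overline{s}}_D(a_i,p,z_D)$, and the Laplace inversion is carried out
separately with the algorithm of \cite{dehoog1982}.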
\subsection{Approximation of observation well skin}
It is assumed here that the slug test response at the observation well is due to fluid flow through the sub-domains associated with the source and observation wells and the formation shown in Figure~\ref{fig:slugschematic}. The well skin and formation hydraulic conductivities, $K_i$, $i=1,2,3$, are arranged in series for radial flow, and in parallel for vertical flow. Hence, the effective radial and vertical hydraulic conductivity, $\langle K_r \rangle$ and $\langle K_z \rangle$, of the formation between the source and observation wells are approximated as $$\langle K_r \rangle = \delta_T/\sum_{i = 1}^3 \frac{\delta_i^\ast}{K_i},$$ and $$\langle K_z \rangle = \frac{1}{\delta_T} \sum_{i=1}^3 \delta_i K_i,$$
where $\delta_1^\ast= (\hat{r}/r_1)\delta_1$ and $\delta_2^\ast= (\hat{r}/r_2)\delta_2$, $\delta_T=\sum_{n=1}^3 \delta_i$, $\delta_i$ is the radial thickness of zone $i$, $r_1 = (r_w+r_\mathrm{skin})/2$, $r_2 = (r_\mathrm{skin} + r_\mathrm{obs})/2$, and $\hat{r} = (r_w + r_\mathrm{obs})/2$. This approximate approach follows the work of \citet{shapiro1998} for using the equivalent hydraulic conductivity approach to account for simple heterogeneity. It is based on the simplifying assumption of a piecewise linear head distribution in the skin and formation. It follows directly from an application of mass conservation and Darcy's law in a radial (cylindrical) flow system. The result may also be obtained using a centered finite difference approximation of the hydraulic gradient at $r_1$ and $r_2$ for a head distribution given by Theim equation.
\subsection{Observation well storage \& inertial effects}
The column of water in the observation well oscillates in response to a source well perturbation. It is reasonable to assume that the effective weight of the water column in the observation well controls its head response and damping of the oscillations. Mass balance in the manner of \citet{blackkipp1977} and momentum balance \citep{kipp1985, butler2004} in the observation well account for wellbore storage and inertial effects. In non-dimensional form, the momentum balance equation is given by
\begin{equation}
\frac{\mathrm{d}^2 s_{D,\mathrm{obs}}}{\mathrm{d} t_D^2} + \gamma_{D,\mathrm{o}} \frac{\mathrm{d} s_{D,\mathrm{obs}}}{\mathrm{d} t_D} + \omega_{D,\mathrm{o}}^2 \; s_{D,\mathrm{obs}} = \omega_{D,\mathrm{o}}^2 \langle s_D \rangle
\label{eqn:dim-obs-well-storage}
\end{equation}
where $\gamma_{D,\mathrm{o}}$ is the dimensionless observation well damping coefficient, $\omega_{D,\mathrm{o}}$ is the dimensionless observation well characteristic frequency, $s_{D,\mathrm{obs}}$ is the dimensionless observation well response, and $\langle s_D \rangle$ is the depth-averaged dimensionless formation response across the observation interval.
\begin{equation}
\overline{s}_{D,\mathrm{obs}} = \overline{\psi}_2(p) \left\langle \overline{s}_D \left(r_D,p \right) \right\rangle,
\end{equation}
where $\overline{\psi}_2(p) = \omega_{D,\mathrm{o}}^2/(p^2 + p \gamma_{D,\mathrm{o}} + \omega_{D,\mathrm{o}}^2)$,
\begin{equation}
\label{eqn:s_obs}
\langle \hat{\overline{s}}_D \rangle = \frac{1}{b_{D,\mathrm{o}}} \int_{d_{D,\mathrm{o}}}^{l_{D,\mathrm{o}}} \hat{\overline{s}}_D (a_i,p,z_D) \;\mathrm{d}z_D,
\end{equation}
and $l_{D,\mathrm{o}}=l_\mathrm{o}/B$ and $d_{D,\mathrm{o}}=d_\mathrm{o}/B$ are the dimensionless depths to the top and bottom of the observation well interval from the water-table. Upon inverting the Laplace transform, one obtains
\begin{equation}\label{eqn:obsresponse}
s_{D,\mathrm{obs}} = \int_0^{t_D} \psi_2(t_D-\tau) \left\langle s_D \left(r_D,\tau \right) \right\rangle \;\mathrm{d}\tau
\end{equation}
with $\psi_2(t) = \mathcal{L}^{-1} \left\{\overline{\psi}_2(p) \right\}$. Equation \ref{eqn:obsresponse} is the solution accounting for observation well inertial effects. It is used in the subsequent analysis to estimate hydraulic parameters.
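A minimal numerical sketch of the convolution in (\ref{eqn:obsresponse}) is given
below; the kernel uses the closed-form inverse transform of $\overline{\psi}_2(p)$
for the underdamped case ($\gamma_{D,\mathrm{o}}^2<4\omega_{D,\mathrm{o}}^2$), and
the values of $\gamma_{D,\mathrm{o}}$, $\omega_{D,\mathrm{o}}$, and the
depth-averaged formation response are arbitrary placeholders:
\begin{verbatim}
import numpy as np

def psi2(t, gamma_o, omega_o):
    """psi_2(t) = L^{-1}{omega^2/(p^2 + gamma*p + omega^2)}, underdamped case."""
    om_d = np.sqrt(omega_o**2 - gamma_o**2 / 4.0)
    return omega_o**2 / om_d * np.exp(-gamma_o * t / 2.0) * np.sin(om_d * t)

t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]
gamma_o, omega_o = 1.0, 3.0                        # placeholder dimensionless values
s_formation = np.exp(-0.3 * t) * np.cos(2.0 * t)   # placeholder <s_D>(t_D)

# discrete approximation of the convolution integral
s_obs = np.convolve(psi2(t, gamma_o, omega_o), s_formation)[:t.size] * dt
\end{verbatim}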
\section{Model Application to Cross-hole Slug Test Data}
The model described above is applied to observations collected in a series of multi-level cross-hole pneumatic slug tests performed in June 2013 at the Widen site in north-east Switzerland. The site is on the floodplain of the Thur River, a tributary of the Rhine river \citep{diem2010}. The multi-well layout of the test site is depicted schematically in Figure \ref{fig:setup}(a). The wells used in the experiments are completed in an unconfined sand and gravel aquifer with a saturated thickness of 5.8~m. The aquifer is quaternary post-glacial sediment underlain by an aquitard of low permeability lacustrine sediment comprising fine silt and clay \citep{diem2010, coscia2011}. It is overlain with alluvial loam that constitutes the top soil. The aquifer itself can be further subdivided into a silty sand top layer underlain with silty gravel and a sand layer to a thickness of about 7 m \citep{diem2010}. The source well is screened across the entire saturated thickness (see Figure \ref{fig:setup}(a)). Straddle packers were used to sequentially isolate discrete intervals in the source well. The pressure responses were recorded in three observation wells, which were equipped with a Continuous Multichannel Tubing (CMT) system \citep{einarson2002} in which pressure transducers were installed. This system was originally designed for multi-level sampling. It consists of a PVC pipe with seven continuous separate channels or chambers (inner diameter 0.014~m), which are arranged in a honeycomb structure. Each individual chamber has a 0.08~m long slot covered with a sand filter and allows for hydraulic contact with the formation.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{widen_site2}\\(a)\\ \includegraphics[width=0.5\textwidth]{widen-setup}\\(b)\\
\caption{\label{fig:setup} (a) Multi-well layout and (b) example experimental system setup for cross-hole slug tests at the Widen site, Switzerland. For the example shown, data from interval $i$ is denoted P13-MC1-$i$}
\end{figure}
\subsection{Experimental procedure}
The cross-hole pneumatic slug tests were initiated by applying gas pressure to the water column in a chosen interval, then releasing the gas pressure through an outflow valve to provide the instantaneous initial slug perturbation. A double-packer system straddling the test interval ($b_s = 0.35$~m) was used with the pneumatic slug applied through a smaller tubing ($r_c = 1.55\times 10^{-2}$~m). The source well used in these tests was well P13, with a wellbore radius of $r_w = 3.15\times 10^{-2}$~m. The dissipation of the slug was monitored with a pressure transducer in the source well positioned at the top of the water column above the test interval.
The data considered here were obtained in three observation wells labelled MC1, MC2, and MC4 (in Figure \ref{fig:setup}(a)) and located at radial distances of 3.9, 2.9, and 2.8~m, respectively, from the source well. The responses at multiple vertical positions in each observation well were monitored with pressure transducers in a seven-channel CMT system with screen intervals of $b_\mathrm{o} = 8\times 10^{-2}$~m. Each channel in the CMT system has an equivalent radius of $r_{c,\mathrm{o}}=6.5\times 10^{-3}$~m; installation of a pressure transducer in these channels reduces their effective radii (and effective wellbore storage) significantly. The CMT system allows for simultaneous monitoring of the response at seven vertical positions for each slug test. Pressure responses were recorded at a frequency of 50~Hz (every 0.02~s) for a period of about 20 seconds from slug initiation using miniature submersible level transmitters MTM/N 10 manufactured by STS Sensor Technik in Switzerland. The housing diameter of 0.39 inches allowed for pressure measurements in small diameter (1/2 inch) monitoring wells, stand pipes and bore holes. The stainless steel construction and integral polyurethane cable are ideal for long-term installation. The transducer cable is reinforced with Kevlar to avoid elongation in deep boreholes. The experiments reported herein were performed in shallow wells and over a relatively short duration, making cable elongation negligible.
Only data from the observation intervals at approximately the same vertical position as the source-well test interval are analysed here because of their favourable signal-to-noise ratio (SNR). Data from ports not directly in line with the tested interval showed significant decay for the magnitudes of the perturbation used in the field tests. Transducers with greater precision and accuracy or larger source well perturbation are needed to obtain analysable responses in such ports. A schematic of the experimental setup for tests between wells P13 and MC1 is shown in Figure \ref{fig:setup}(b).
\subsection{Observation well data}
The typical slug test responses observed during tests at the Widen site are shown in Figure \ref{fig:typical-obs}. The plots in Figure \ref{fig:typical-obs}(a) are the source well responses, and those in (b)--(d) are the corresponding responses in observation wells about 3~m radially from the source well. The results clearly show that damped oscillations generated in the source well are measurable in an observation well a few meters away. Comparing the source and observation well results also shows that the maximum amplitude of the signal decays by about two orders of magnitude from the source to the observation well, which decreases the SNR.
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{source.eps} \includegraphics[width=0.48\textwidth]{profile-p13mc1.eps}\\
\includegraphics[width=0.48\textwidth]{profile-p13mc2.eps} \includegraphics[width=0.48\textwidth]{profile-p13mc4.eps}
\caption{\label{fig:typical-obs} Typical (a) source and (b-d) observation well responses measured during cross-hole slug tests. Observation well data show increasing damping when approaching the watertable for all three profiles.}
\end{figure}
The observation well response pairs generally are increasingly damped moving towards the water-table, even when the initial displacements from the equilibrium position are comparable. This is evident in the data from all three profiles shown in Figure \ref{fig:typical-obs}, where observation well data collected closer to the water-table appear to be more damped than those at greater depths. Measurable observation well displacements are still obtainable near the water-table (i.e., interval 9 in Figure~\ref{fig:setup}(b)). The configuration of the equipment made it physically impossible to record the response at the water-table. Placing a pressure transducer at the water-table would be useful to confirm the appropriate type of boundary condition to represent the water-table. While \cite{malama2011} and the modified model presented here use the linearised kinematic water-table representation, \cite{hyder1994} use a constant-head boundary condition to represent the water-table.
\subsection{Parameter estimation}
The modified model was used to estimate model parameters from data collected in observation wells during the tests at the Widen site. For the present study, to reduce the number of estimated parameters, it is sufficient to assume the aquifer is isotropic ($K_r = K_z = K$), and the skin conductivities of the source and observation wells are equal ($K_1 = K_3 = K_\mathrm{skin}$). Using the non-linear optimization software PEST \citep{doherty2010, doherty15}, we estimated skin hydraulic conductivity ($K_\mathrm{skin}$), formation hydraulic conductivity ($K$), specific storage ($S_s$), specific yield ($S_y$), and the length parameters $L$ and $L_e$ that characterize the source well damping coefficient and frequency. It is typical to compute $L$ and $L_e$ using the formulas \citep{butler2002, kipp1985, zurbuchen2002}
\begin{equation}
\label{eqn:L}
L = d + \frac{b}{2} \left(\frac{r_c}{r_w}\right)^4,
\end{equation}
and
\begin{equation}
\label{eqn:Le}
L_e = L + \frac{b}{2} \left(\frac{r_c}{r_w}\right)^2.
\end{equation}
The values of $L$ and $L_e$ computed with these formulas were used as initial guesses during the parameter estimation procedure. The parameters $L_{e,\mathrm{obs}}$ and $L_\mathrm{obs}$, which determine the frequency and damping coefficient of the observation well, were also estimated with initial guesses determined similarly. PEST was also used to evaluate the model parameter sensitivity at the optimal solution.
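For reference, the initial guesses follow directly from (\ref{eqn:L})--(\ref{eqn:Le}); the short sketch below uses the source well geometry quoted above, with the depth $d$ to the top of the test interval set to an assumed, purely illustrative value:
\begin{verbatim}
r_c, r_w, b = 1.55e-2, 3.15e-2, 0.35   # m, source well values from the text
d = 2.0                                # m, assumed for illustration only
L   = d + 0.5 * b * (r_c / r_w)**4     # initial guess for L
L_e = L + 0.5 * b * (r_c / r_w)**2     # initial guess for L_e
print(L, L_e)
\end{verbatim}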
The fit of the model to observed cross-hole responses was very sensitive to the time of the initial observation (i.e., the syncing of the clocks at the source and observation wells). Initially it was difficult to get model/data agreement to both early and late-time data without assigning non-physical parameter values. Estimating a modest time shift (off-set) for each test greatly improved model fits to the data. Estimated observation data time delays were between 4 and 6 tenths of a second, which is a permissible time off-set between two synced transducer clocks.
PEST-estimated parameters are summarized in Table \ref{tab:params}. A subset of the complete dataset (25\% of the 50 Hz data stream) was used in the PEST optimization; this subset and the corresponding model fits to the observation well data are shown in Figure~\ref{fig:modelfits}. The relatively large average skin conductivity estimated from the tests ($K_\mathrm{skin} = 8.5\times 10^{-2}$~m/s) is consistent with a disturbed zone resulting from well installation by direct-push. The technology uses a hydraulic hammer supplemented with the weight of the direct-push unit to push down drive rods to the desired depth of the projected well. The well casing is then lowered into the drive rods (inner diameter: 0.067~m, outer diameter 0.083~m). By retracting the drive rods, the formation is allowed to collapse back against the casing. The negative skin estimates ($K_\mathrm{skin}$ greater than formation $K$) are indicative of formation collapse due to material bridging resulting in a disturbed zone around the well casing. Skin conductivity estimation variances range from $10^{-2}\;\mathrm{m^2/s^2}$ for low noise data to $10^3\;\mathrm{m^2/s^2}$ for noisy data, and are indicative of the dependence of estimation uncertainties on measurement errors.
\begin{table}[ht]
\caption{\label{tab:params}PEST-estimated model parameters.}
\centering
\begin{tabular}{lcccccccccc}
\hline
& $K$ & $K_\mathrm{skin}$ & $S_s$ & $S_y$ & $L$ & $L_e$ & $L_\mathrm{obs}$ & $L_{e,\mathrm{obs}}$\\
Test & [$\mathrm{m \cdot s^{-1}}$] & [$\mathrm{m \cdot s^{-1}}$] & [$\mathrm{m^{-1}}$] & [-] & [m] & [m] & [m] & [m] \\
\hline
P13-MC1-1 & $7.81\times 10^{-4}$ & $2.27\times 10^{-1}$ & $3.39\times 10^{-5}$ & 0.037 & 1.90 & 5.71 & 4.07 & $1.87\times 10^{-2}$\\
P13-MC1-3 & $8.85\times 10^{-4}$ & $1.07\times 10^{-1}$ & $1.25\times 10^{-5}$ & 0.40 & 1.18 & 4.31 & 2.39 & $1.80\times 10^{-2}$\\
P13-MC1-5 & $7.70\times 10^{-4}$ & $2.21\times 10^{-1}$ & $1.70\times 10^{-5}$ & 0.36 & 2.53 & 3.21 & 3.10 & $1.76\times 10^{-2}$\\
P13-MC1-7 & $1.28\times 10^{-3}$ & $1.02\times 10^{-2}$ & $3.85\times 10^{-5}$ & 0.018 & 0.23 & 2.26 & 11.4 & $9.74\times 10^{-2}$\\
P13-MC1-9 & $1.48\times 10^{-3}$ & $6.42\times 10^{2}$ & $2.79\times 10^{-8}$ & 0.001 & 2.95 & 0.83 & 8.83 & $6.04\times 10^{-2}$\\
\hline
P13-MC2-2 & $7.67\times 10^{-4}$ & $1.73\times 10^{-1}$ & $2.76\times 10^{-5}$ & 0.40 & 1.07 & 5.07 & 3.92 & $5.73\times10^{-1}$ \\
P13-MC2-4 & $1.36\times 10^{-3}$ & $2.15\times 10^{-1}$ & $5.11\times 10^{-5}$ & 0.04 & 8.01 & 3.66 & 4.63 & $4.84\times10^{-2}$\\
P13-MC2-6 & $1.08\times 10^{-3}$ & $6.42\times 10^{-2}$ & $1.95\times 10^{-5}$ & 0.40 & 1.38 & 2.91 & 2.55 & $1.79\times 10^{-2}$\\
P13-MC2-8 & $2.22\times 10^{-3}$ & $4.44\times 10^{-1}$ & $3.06\times 10^{-5}$ & 0.001 & 2.26 & 1.80 & 5.34 & $1.64\times 10^{-3}$\\
\hline
P13-MC4-2 & $1.60\times 10^{-3}$ & $3.17\times 10^{-1}$ & $7.41\times 10^{-5}$ & 0.005 & 2.39 & 4.81 & 6.42 & $1.35\times 10^{-2}$\\
P13-MC4-4 & $5.27\times 10^{-4}$ & $3.37\times 10^{-1}$ & $9.16\times 10^{-5}$ & 0.40 & 4.56 & 3.49 & 9.69 & $1.38\times 10^{-2}$\\
P13-MC4-6 & $3.79\times 10^{-3}$ & $2.04\times 10^{-2}$ & $7.22\times 10^{-5}$ & 0.001 & 3.24 & 2.82 & 3.09 & $4.85\times 10^{-1}$\\
P13-MC4-8 & $1.46\times 10^{-3}$ & $6.15\times 10^{-2}$ & $1.80\times 10^{-4}$ & 0.40 & 0.78 & 1.76 & 9.24 & $3.48\times 10^{-2}$\\
\hline
\end{tabular}
\end{table}
\FloatBarrier
\begin{figure}[h]
\begin{tabular}{ccc}
\includegraphics[width=0.32\textwidth]{finite-P13-MC1-7-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC2-08-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC4-8-model_pest_fit}\\
\includegraphics[width=0.32\textwidth]{finite-P13-MC1-5-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC2-06-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC4-6-model_pest_fit}\\
\includegraphics[width=0.32\textwidth]{finite-P13-MC1-3-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC2-04-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC4-4-model_pest_fit}\\
\includegraphics[width=0.32\textwidth]{finite-P13-MC1-1-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC2-02-model_pest_fit} & \includegraphics[width=0.32\textwidth]{finite-P13-MC4-2-model_pest_fit}
\end{tabular}
\caption{\label{fig:modelfits} Model fits to cross-hole slug test data collected along vertical profiles in three observation wells at the Widen Site, Switzerland. The columns correspond to profiles in observation wells 1, 2, and 4.}
\end{figure}
\FloatBarrier
Drilling logs and previous hydrogeophysical investigations at the site \citep{lochbuhler2013, coscia2011} indicate a sand and gravel aquifer. The formation hydraulic conductivities estimated here are on the order of $10^{-4}$ to $10^{-3}$~m/s, and in general agreement with the findings from earlier studies at the site. \citet{coscia2011} report estimates of the order of $10^{-3}$ to $10^{-2}$~m/s from multiple pumping and single-well slug tests conducted at the site by \citet{diem2010}. The average values estimated here range from a low of $7.1 \times 10^{-4}$~m/s to a high value of $3.8 \times 10^{-3}$~m/s. These and the estimates from earlier studies at the site are reasonable for unconsolidated well-sorted sand and gravel aquifers \citep{bear1972, fetter2001}. The vertical variability in the estimates is reflective of site heterogeneity. The objective of multi-level slug tests is to characterize such heterogeneity using a physically based flow model. It should be understood that the model used in this analysis was developed for a homogeneous but anisotropic aquifer. Its application to characterizing heterogeneity is thus limited and only approximate, with data collected at discrete depth intervals assumed to yield hydraulic parameter values associated with each interval. Estimation variances for formation hydraulic conductivity range in magnitude from $6\times 10^{-2}$ to $1.2\times 10^1\;\mathrm{m^2/s^2}$.
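As an illustration only, the per-profile spread of the formation conductivity estimates in Table \ref{tab:params} can be summarized with simple arithmetic and geometric means; the following Python snippet (not part of the estimation workflow) computes such summary statistics directly from the tabulated values.
\begin{verbatim}
import numpy as np

# Formation K estimates [m/s] from Table 1, grouped by observation-well profile.
K = {
    "P13-MC1": [7.81e-4, 8.85e-4, 7.70e-4, 1.28e-3, 1.48e-3],
    "P13-MC2": [7.67e-4, 1.36e-3, 1.08e-3, 2.22e-3],
    "P13-MC4": [1.60e-3, 5.27e-4, 3.79e-3, 1.46e-3],
}

for profile, vals in K.items():
    v = np.asarray(vals)
    print(profile,
          "arithmetic mean = %.2e m/s" % v.mean(),
          "geometric mean = %.2e m/s" % np.exp(np.log(v).mean()))
\end{verbatim}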
Estimates of specific storage, $S_s$, are generally of the order of $10^{-5}\;\mathrm{m^{-1}}$, although the individual values in Table \ref{tab:params} range from about $10^{-8}\;\mathrm{m^{-1}}$ to $10^{-4}\;\mathrm{m^{-1}}$. The estimated values are indicative of poorly consolidated shallow alluvium, and the variability may reflect uncertainty or non-uniqueness in the solution for this configuration and dataset. Estimates of $S_y$ were quite variable, with estimation variances of the order of $10^{-7}$ to $10^2$. Estimated values of 0.4 correspond to the upper bound imposed during optimization. P13-MC1-1, 7, and 9 resulted in estimated $S_y$ values of a few percent, which are physically realistic for these types of sediments and for the linearized kinematic condition at the watertable. In this parameter estimation analysis no significant physical constraints were introduced on the objective function; the observations were allowed to freely constrain the estimates of model parameters. Estimates of the parameter $L_e$ from the data are comparable to those predicted by equation (\ref{eqn:Le}). However, estimates of $L$ from the data are consistently larger than the values predicted by (\ref{eqn:L}).
\subsection{Sensitivity analysis}
The model sensitivity or Jacobian matrix, $\mathbf{J}$, of dimensions $N\times M$, where $N$ is the number of observations and $M$ is the number of estimated parameters, is of central importance to parameter estimation. The sensitivity coefficients are simply the elements of the Jacobian matrix; they are the partial derivatives of the model-predicted aquifer head response, $s$, with respect to the estimated parameter $\theta_m$. Sensitivity coefficients are represented here as functions of time using the nomenclature
\begin{equation}
J_{\theta_m}(t) = \frac{\partial s_{D,\mathrm{obs}}}{\partial \theta_m},
\end{equation}
where $m=1,2,...,M$. They describe the sensitivity of predicted model behavior (head response) to the model parameters. They provide a measure of the ease of estimation (identifiability) of the parameters from system state observations \citep{jacquez85}. The Jacobian matrix $\mathbf{J}$ has to satisfy the identifiability condition, $|\mathbf{J}^T\mathbf{J}| \neq 0$, for parameters to be estimable. This condition is typically satisfied for linearly independent sensitivity coefficients with appreciably large magnitudes. For this work, the number of parameters estimated is $M=8$, and the vector of estimated parameters is
\begin{equation}
\left(\theta_1,...,\theta_8 \right) = \left(K, K_\mathrm{skin}, S_s, S_y, L, L_e, L_\mathrm{obs}, L_{e,\mathrm{obs}} \right).
\end{equation}
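In practice, PEST builds the Jacobian from finite-difference perturbations of the parameters, re-running the forward model once per perturbed parameter. A minimal sketch of such a calculation is given below; the function \texttt{model(theta, t)} is a stand-in for the semi-analytical head solution and is not defined here.
\begin{verbatim}
import numpy as np

def jacobian_fd(model, theta, t, rel_step=1e-3):
    """Forward-difference approximation of J[n, m] = d s(t_n) / d theta_m.

    `model(theta, t)` is a placeholder for the forward (head response)
    model evaluated at the observation times t."""
    s0 = np.asarray(model(theta, t))
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((len(t), len(theta)))
    for m in range(len(theta)):
        dth = rel_step * max(abs(theta[m]), 1e-12)
        theta_p = theta.copy()
        theta_p[m] += dth
        J[:, m] = (np.asarray(model(theta_p, t)) - s0) / dth
    return J
\end{verbatim}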
Sensitivity coefficients for tests P13-MC1-1 (deepest) and P13-MC1-5 (intermediate depth) are shown as functions of time in Figures~\ref{fig:jac1-1} and \ref{fig:jac1-9}. Semi-log plots of the same information are included to more clearly show the non-zero sensitivity values at late-time.
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{p13mc1-1sen1}
\includegraphics[width=0.48\textwidth]{p13mc1-1sen1-log}
\includegraphics[width=0.48\textwidth]{p13mc1-1sen2}
\includegraphics[width=0.48\textwidth]{p13mc1-1sen2-log}
\caption{\label{fig:jac1-1} Temporal variation of the sensitivity coefficients (linear scale (a \& c) and log scale (b \& d)) for the indicated parameters at the source-observation pair P13-MC1-1 ($5.1$~m below watertable). Subplot (a) shows observed response.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{p13mc1-5sen1}
\includegraphics[width=0.48\textwidth]{p13mc1-5sen1-log}
\includegraphics[width=0.48\textwidth]{p13mc1-5sen2}
\includegraphics[width=0.48\textwidth]{p13mc1-5sen2-log}
\caption{\label{fig:jac1-9} Temporal variation of the sensitivity coefficients (linear scale (a \& c) and log scale (b \& d)) for the indicated parameters at the source-observation pair P13-MC1-5 ($3.1$~m below watertable). Subplot (a) shows observed response (same scale as response in Figure~\ref{fig:jac1-1}).}
\end{figure}
Generally, the sensitivities are oscillatory functions of time with decaying amplitudes that vary over several orders of magnitude among the parameters. Figure \ref{fig:jac1-1}(a) shows the sensitivity to the parameters $K$, $K_\mathrm{skin}$, $S_s$, and $S_y$. It is clear that well skin conductivity, $K_\mathrm{skin}$, has the highest peak sensitivity at early-time, and is therefore the most easily identifiable parameter from early-time data. Specific yield, $S_y$, has the smallest sensitivities (about an order of magnitude smaller than $K_\mathrm{skin}$) and was the least identifiable (most difficult to estimate) of all the parameters.
Figures~\ref{fig:jac1-1}(a) and (b) also show that the sensitivity functions are generally out of phase with each other as well as with the observed response. For example, the sensitivity function $J_{K}(t)$ is almost completely out of phase (phase-shift of $\sim\pi$) with $J_{K_\mathrm{skin}}$. The same is true for $J_{S_s}(t)$ and $J_{S_y}(t)$. This indicates linear independence of the sensitivity coefficients of all four parameters. This is desirable as it implies that the identifiability condition is satisfied, permitting concomitant estimation of all four parameters.
Figure~\ref{fig:jac1-1}(a) shows that $J_{S_y}$ is oscillatory with small amplitude, does not change sign, and decays more slowly than the other sensitivity responses. The predicted model response showed only modest sensitivity to specific yield, $S_y$, but this sensitivity becomes relatively dominant at late-time (Figure~\ref{fig:jac1-1}(b)). \citet{malama2011} showed that slug tests are more sensitive to $S_y$ at late-time and for relatively large initial perturbations. At late-time, slug test head data are typically of low signal-to-noise ratio (SNR), i.e.\ diminished data quality, making it difficult to discern effects of specific yield. However, with measurements such as those reported in \citet{malama2011} for a site in Montana, it is possible to obtain single-well slug test data with clear effects due to $S_y$. The cross-hole slug test data analysed herein showed only modest watertable effects, and the late-time data were not of sufficient quality. This underscores the importance of late-time data to maximize $S_y$ identifiability and estimability, as also noted in \citet{malama2011}.
Figures~\ref{fig:jac1-1}(c) and (d) show scaled slug response sensitivities to the parameters $L$, $L_e$, $L_\mathrm{obs}$, and $L_{e,\mathrm{obs}}$. They show orders of magnitude of variability, with the sensitivity to $L_e$ being three orders of magnitude larger than that to $L_{e,\mathrm{obs}}$. Whereas the sensitivities to the parameters $L$, $L_e$, and $L_{e,\mathrm{obs}}$ are linearly independent (not of the same phase), the pair $L_e$ and $L_\mathrm{obs}$ is only linearly independent at very early time; the two oscillate with the same phase after about 4 seconds. This illustrates that a longer temporal record of observations would not improve the joint estimation of these two parameters.
Figure~\ref{fig:jac1-9} shows the same information as depicted in Figure~\ref{fig:jac1-1} for a more damped observation location closer to the watertable. Model sensitivity to $K$ is equal to or larger than that to $K_\mathrm{skin}$ for this interval. Sensitivity to $S_s$ is also higher at early-time. Among the parameters $K$, $K_\mathrm{skin}$, $S_s$, and $S_y$, sensitivity to $S_y$ is the smallest at early time (Figure~\ref{fig:jac1-9}(b)). The sensitivity to $S_y$ stays approximately constant with time after the first 10 seconds of the test, while sensitivities to $K$, $K_\mathrm{skin}$, and $S_s$ continue to decrease. It should be noted, however, that the unfavorable SNR (low data quality) makes it very difficult to estimate $S_y$ from late-time data. Collecting data at 3.11~m below the watertable did not yield an appreciable improvement in specific yield identifiability over the interval at 5.11~m depth in Figure~\ref{fig:jac1-1}. The behavior depicted in Figure \ref{fig:jac1-9} also suggests only data collected in the first 12 seconds of the test are needed to estimate model parameters at this depth. The sensitivity coefficients for all but $K$ essentially vanish after about 12 seconds and the identifiability condition is no longer satisfied. Additionally, even where the sensitivity coefficients appear to be in phase (linearly dependent) at early-time (compare $J_{K}(t)$ and $J_{K_\mathrm{skin}}(t)$ for $t\leq 2$ s for test P13-MC1-5), they become linearly independent within the first 12 s. This again indicates that a temporal record of the response of only about a dozen seconds is sufficient for joint estimation of these two parameters.
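The degree of phase overlap discussed above can be quantified, for example, by the normalized inner product of two sensitivity time series: values near $\pm 1$ indicate near linear dependence (poor joint identifiability), while values near zero indicate linear independence. A minimal sketch, not part of the analysis workflow, is:
\begin{verbatim}
import numpy as np

def phase_overlap(J_a, J_b):
    """Cosine similarity between two sensitivity time series sampled on a
    common time grid; |value| close to 1 flags near linear dependence."""
    J_a, J_b = np.asarray(J_a), np.asarray(J_b)
    return np.dot(J_a, J_b) / (np.linalg.norm(J_a) * np.linalg.norm(J_b))

# Example: compare two columns of a Jacobian J over an early-time window.
# overlap = phase_overlap(J[:240, 0], J[:240, 1])   # first 240 samples
\end{verbatim}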
\section{Conclusions}
Cross-hole slug test data were analysed with an extended version of the model of \citet{malama2011}. The semi-analytical model was modified for:
\begin{enumerate}
\item predicting heads at observation wells,
\item inclusion of borehole skin effects,
\item use of the finite Hankel transform for computational expediency, and
\item inclusion of observation well storage and inertial effects.
\end{enumerate}
Estimates were obtained of formation and source/observation well skin hydraulic conductivity, specific storage, specific yield, and well characteristics that control oscillation frequency and degree of damping. The aim of the study was to evaluate the use of cross-hole slug test data to characterize vertical unconfined aquifer heterogeneity and understand identifiability and estimability of these parameters, especially specific yield. Estimated values of hydraulic conductivity and specific storage from PEST are indicative of a heterogeneous sand and gravel aquifer. Parameter estimation and sensitivity analysis show the model has effectively linearly independent sensitivity coefficients with respect to seven of the eight parameters estimated. These parameters are clearly jointly estimable from the data over the duration of the tests. It should be understood that the model used in this analysis was developed for a homogeneous but anisotropic aquifer and is thus of only limited and approximate applicability to analysis of a heterogeneous system.
Of the parameters estimated, model predictions were least sensitive to specific yield even near the watertable, which implies it was the least identifiable parameter. This is due to a combination of factors, including
\begin{enumerate}
\item the short duration of the data record due to rapid signal decay with time ($<20$ seconds);
\item the increasing damping observed in monitoring locations near the watertable (resulting in even shorter temporal records); and
\item the decreasing signal strength near the watertable, resulting in a lower signal-to-noise ratio.
\end{enumerate}
The sensitivity function with respect to specific yield shows a relatively modest increase in magnitude with time (model sensitivity to the other model parameters tends to decrease, while that of $S_y$ asymptotically tends to a non-zero constant value), suggesting the importance of late-time data to improve its estimation. The analysis of \cite{malama2011} also indicated that the largest effect of specific yield on slug test response is at late-time, at which time the amplitude of the signal has decayed significantly in magnitude and quality. The absence of good-quality late-time observations and the relatively low sensitivities of specific yield explain the wide variability of the estimates of $S_y$.
An important shortcoming of using cross-hole slug tests to characterize heterogeneity, as has been suggested in several field \citep{brauchler2010, brauchler2011} and synthetic \citep{paradis2015} hydraulic tomography studies, is the significant decay of the signal with distance from the source well and close to the water-table. This leads to low-quality observations with low signal-to-noise ratios (SNR) and would require test redesign to improve parameter identifiability and estimability. One approach to change the test design is to conduct tests with a sufficiently large initial displacement in the source well to achieve favorable SNR at late-time in the observation wells. This may, however, introduce non-linearities and potentially increase the importance of unsaturated flow above the watertable \citep{mishra2011saturated}. Another approach is to use more sensitive and low-noise pressure sensors, which would increase costs significantly, especially in the cross-hole multilevel testing set-up where a large network of sensors is used for data acquisition. This would be particularly useful close to the watertable and further from the source well, where the decay in signal strength is significant. This decline in signal strength limits the usefulness of cross-hole slug tests for large-scale aquifer characterization using hydraulic tomography. Lastly, conducting multiple test repetitions and stacking the response data, akin to seismic data stacking \citep{jones1987}, can be used to amplify the signal and increase the SNR.
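As a simple illustration of the last point, stacking amounts to averaging repeated, time-aligned responses; for uncorrelated noise the SNR then improves roughly as the square root of the number of repetitions. A minimal sketch (assuming the repetitions are already synchronized) is:
\begin{verbatim}
import numpy as np

def stack_repetitions(responses):
    """Average time-aligned repetitions (one per row) of the same test.

    For uncorrelated, zero-mean noise the standard error of the stacked
    trace decreases as 1/sqrt(n_repetitions)."""
    return np.asarray(responses).mean(axis=0)
\end{verbatim}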
\section*{Acknowledgements}
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
\renewcommand{\theequation}{A-\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix A: Solution with Linearized Watertable Kinematic Condition}
\label{sec:appendixa}
The solution can be written in dimensionless form for the intervals above, below, and across from the source well completion interval as
\begin{equation}
\label{eqn:sD_piecewise}
s_D = \left\{
\begin{split}
s_D^{(1)} & \quad \mbox{$\forall z_D\in[0,d_D]$}\\
s_D^{(2)} & \quad \mbox{$\forall z_D\in[d_D,l_D]$}\\
s_D^{(3)} & \quad \mbox{$\forall z_D\in[l_D,1]$},
\end{split} \right.
\end{equation}
where $s_D^{(n)}$ solves
\begin{equation}
\label{eqn:unconfinedPDE}
\frac{\partial s_D^{(n)}}{\partial t_D} = \frac{1}{r_D}\frac{\partial}{\partial r_D} \left(r_D \frac{\partial s_D^{(n)}}{\partial r_D}\right) + \kappa \frac{\partial^2 s_D^{(n)}}{\partial z_D^2}.
\end{equation}
The initial and boundary conditions are
\begin{equation}
\label{eqn:initialfarfieldBC2}
s_D^{(n)} (t_D=0) = s_D^{(n)} (r_D=R_D) = 0
\end{equation}
\begin{equation}
\label{eqn:centerBC}
\lim_{r_D\rightarrow 0} r_D\frac{\partial s_D^{(1)}}{\partial r_D} = \lim_{r_D\rightarrow 0} r_D \frac{\partial s_D^{(3)}}{\partial r_D} = 0
\end{equation}
\begin{equation}
\label{eqn:kinematicBCD}
\left.\frac{\partial s_D^{(1)}}{\partial z_D} \right|_{z_D=0} = \frac{1}{\alpha_D}\left.\frac{\partial s_D^{(1)}}{\partial t_D} \right|_{z_D=0}
\end{equation}
\begin{equation}
\label{eqn:noflow}
\left.\frac{\partial s_D^{(3)}}{\partial z_D} \right|_{z_D=1} = 0
\end{equation}
\begin{equation}
\label{eqn:massD}
\left. r_D\frac{\partial s_D^{(2)}}{\partial r_D}\right|_{r_D = r_{D,w}} = C_D \frac{\mathrm{d} \Phi_\mathrm{uc}}{\mathrm{d}t_D},
\end{equation}
\begin{equation}
\Phi_\mathrm{uc}(t_D = 0) = 1,
\end{equation}
and
\begin{equation}
\label{eqn:momentumDAp}
\beta_2 \frac{\mathrm{d}^2\Phi_\mathrm{uc}}{\mathrm{d}t_D^2} + \beta_1 \frac{\mathrm{d}\Phi_\mathrm{uc}}{\mathrm{d}t_D} +\Phi_\mathrm{uc}=\frac{1}{b_D}\int_{d_D}^{l_D} s_D^{(2)}(r_{D,w},z_D,t_D) \;\mathrm{d}z_D.
\end{equation}
Additionally, continuity of head and flux is imposed at $z_D=d_D$ and $z_D=l_D$ via
\begin{equation}
\label{eqn:s_continuity1}
s_D^{(1)} (t_D,r_D,z_D=d_D) = s_D^{(2)} (z_D=d_D),
\end{equation}
\begin{equation}
\label{eqn:ds_continuity1}
\left. \frac{\partial s_D^{(1)}}{\partial z_D} \right|_{z_D=d_D} = \left. \frac{\partial s_D^{(2)}}{\partial z_D} \right|_{z_D=d_D},
\end{equation}
\begin{equation}
s_D^{(3)} (t_D,r_D,z_D=l_D) = s_D^{(2)} (z_D=l_D),
\end{equation}
and
\begin{equation}
\left. \frac{\partial s_D^{(3)}}{\partial z_D} \right|_{z_D=l_D} = \left. \frac{\partial s_D^{(2)}}{\partial z_D} \right|_{z_D=l_D}.
\end{equation}
This flow problem is solved using Laplace and Hankel transforms. Taking the Laplace and Hankel transforms of Equation (\ref{eqn:unconfinedPDE}) for $n=1,3$, and taking into account the initial and boundary conditions in (\ref{eqn:initialfarfieldBC2}) and (\ref{eqn:centerBC}), gives the ordinary differential equation
\begin{equation}
\label{eqn:n1n3ODE}
\frac{\mathrm{d}^2 \hat{\overline{s}}_D^{(n)}}{\mathrm{d} z_D^2} - \eta^2 \hat{\overline{s}}_D^{(n)} = 0
\end{equation}
where $\hat{\overline{s}}_D^{(n)} = \mathsf{H}\{\mathcal{L}\{s_D^{(n)}\}\}$ is the double Laplace-Hankel transform of the function $s_D^{(n)}$, $\eta^2 = (p+a_i^2)/\kappa$, and $p$ and $a_i$ are the Laplace and finite Hankel transform parameters, respectively. Equation (\ref{eqn:n1n3ODE}) has the general solution
\begin{equation}
\hat{\overline{s}}_D^{(n)} = A_n e^{\eta z_D} + B_n e^{-\eta z_D}.
\end{equation}
The boundary condition at the watertable, Equation (\ref{eqn:kinematicBCD}), in Laplace--Hankel transform space, becomes
\begin{equation}
\label{eqn:kinematicBCD_LH}
\left.\frac{\partial \hat{\overline{s}}_D^{(1)}}{\partial z_D} \right|_{z_D=0} = \frac{p}{\alpha_D} \hat{\overline{s}}_D^{(1)}(z_D=0).
\end{equation}
Applying this boundary condition leads to
\begin{equation}
\label{eqn:AB1}
(1-\varepsilon)A_1 - (1+\varepsilon)B_1 = 0,
\end{equation}
where $\varepsilon=p/(\eta\alpha_D)$. Applying the continuity conditions at $z_D=d_D$ (Equations (\ref{eqn:s_continuity1}) and (\ref{eqn:ds_continuity1})), lead to
\begin{equation}
\label{eqn:AB2}
A_1 e^{\eta d_D} + B_1 e^{-\eta d_D} = \hat{\overline{s}}_D^{(2)} (z_D=d_D),
\end{equation}
and
\begin{equation}
\label{eqn:AB3}
\eta\left(A_1 e^{\eta d_D} - B_1 e^{-\eta d_D}\right) = \left.\frac{\mathrm{d} \hat{\overline{s}}_D^{(2)}}{\mathrm{d} z_D} \right|_{z_D=d_D}.
\end{equation}
Similarly, applying the no flow boundary condition at $z_D=1$ (Equation \ref{eqn:noflow}), leads to
\begin{equation}
\hat{\overline{s}}_D^{(3)} = 2B_3 e^{-\eta} \cosh\left(\eta z_D^\ast\right),
\end{equation}
where $z_D^\ast = 1-z_D$. Continuity conditions at $z_D=l_D$ lead to
\begin{equation}
\label{eqn:AB4}
2B_3 e^{-\eta} \cosh\left (\eta l_D^\ast \right) = \hat{\overline{s}}_D^{(2)} (z_D=l_D),
\end{equation}
\begin{equation}
\label{eqn:AB5}
-2\eta B_3 e^{-\eta} \sinh\left(\eta l_D^\ast\right) = \left.\frac{\mathrm{d} \hat{\overline{s}}_D^{(2)}}{\mathrm{d} z_D} \right|_{z_D=l_D},
\end{equation}
where $l_D^\ast = 1 - l_D$ and $d_D^\ast = 1-d_D$.
For $n=2$, solving Equation (\ref{eqn:unconfinedPDE}) in Laplace-Hankel transform space yields
\begin{equation}
\label{eqn:sD2solution}
\hat{\overline{s}}_D^{(2)} = \hat{\overline{u}}_D + \hat{\overline{v}}_D,
\end{equation}
where
\begin{equation}
\hat{\overline{u}}_D = \frac{C_D(1-p\overline{\Phi}_\mathrm{uc})}{\kappa\eta^2\xi_w\mathrm{K}_1(\xi_w)},
\end{equation}
and
\begin{equation}
\label{eqn:vDsolution}
\hat{\overline{v}}_D = A_2 e^{\eta z_D} + B_2 e^{-\eta z_D}.
\end{equation}
The five equations (\ref{eqn:AB1})--(\ref{eqn:AB3}), (\ref{eqn:AB4}) and (\ref{eqn:AB5}), together with Equation (\ref{eqn:sD2solution}) can be used to determine the five unknown coefficients $A_1$, $A_2$, and $B_1$--$B_3$. It can then be shown that
\begin{equation}
\label{eqn:vDfunction}
\hat{\overline{v}}_D = -\frac{\hat{\overline{u}}_D}{\Delta_0} \left\{\Delta_1 \cosh\left(\eta z_D^\ast\right) + \sinh\left(\eta l_D^\ast\right) \left[\cosh\left(\eta z_D\right) + \varepsilon \sinh\left(\eta z_D\right)\right] \right\}.
\end{equation}
The integral in Equation (\ref{eqn:momentumDAp}) is
\begin{equation}
\label{eqn:average_sD2}
\begin{split}
\frac{1}{b_D}\int_{d_D}^{l_D} \hat{\overline{s}}_D^{(2)} \;\mathrm{d}z_D & = \hat{\overline{u}}_D + \frac{1}{b_D} \int_{d_D}^{l_D} \hat{\overline{v}}_D \;\mathrm{d}z_D\\
& = \hat{\overline{u}}_D + \left\langle \hat{\overline{v}}_D \right\rangle.
\end{split}
\end{equation}
Substituting Equation (\ref{eqn:vDfunction}) into (\ref{eqn:average_sD2}) leads to
\begin{equation}
\label{eqn:average_sD2b}
\frac{1}{b_D}\int_{d_D}^{l_D} \hat{\overline{s}}_D^{(2)} \mathrm{d}z_D =\hat{\overline{u}}_D\left(1-\left\langle \hat{\overline{w}}_D \right\rangle\right),
\end{equation}
where
\begin{equation}
\label{eqn:wD}
\begin{split}
\langle \hat{\overline{w}}_D \rangle &= \frac{1}{b_D\eta \Delta_0} \left[\Delta_1 \sinh\left(\eta d_D^\ast\right) + \left(\Delta_2 - 2\Delta_1\right) \sinh\left(\eta l_D^\ast\right)\right],\\
\Delta_0 &= \sinh(\eta) + \varepsilon \cosh(\eta),\\
\Delta_1 &= \sinh(\eta d_D) + \varepsilon \cosh(\eta d_D),\\
\Delta_2 &= \sinh(\eta l_D) + \varepsilon \cosh(\eta l_D).
\end{split}
\end{equation}
Taking the Laplace transform of (\ref{eqn:momentumDAp}) and replacing the integral on the right-hand side with (\ref{eqn:average_sD2b}) gives
\begin{equation}
\label{eq:phiuc2}
(p^2 + \beta_1p +\beta_2)\overline{\Phi}_\mathrm{uc} - p - \beta_1 = \frac{1}{2}\left(1-p\overline{\Phi}_\mathrm{uc} \right) \overline{\Omega}
\end{equation}
where $\hat{\overline{\Omega}}$ is defined in (\ref{eqn:HankLap_Omega}). Solving \eqref{eq:phiuc2} for $\overline{\Phi}_\mathrm{uc}$ yields the required source well response in Laplace space.
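Explicitly, rearranging the terms of \eqref{eq:phiuc2} gives
\begin{equation}
\overline{\Phi}_\mathrm{uc} = \frac{p + \beta_1 + \overline{\Omega}/2}{p^2 + \beta_1 p + \beta_2 + p\,\overline{\Omega}/2}\,.
\end{equation}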
\section*{Notation}
\begin{tabular}{llc}
$a_i$ & finite Hankel transform parameter & $\mathrm{-}$\\
$B$ & Aquifer initial thickness & $\mathrm{L}$ \\
$b_s$ & Length of source well test interval & $\mathrm{L}$ \\
$C_w$ & Coefficient of wellbore storage & $\mathrm{L^2}$ \\
$d/d_\mathrm{o}$ & Depth of top of source/observation well test interval below watertable & $\mathrm{L}$ \\
$g$ & Acceleration due to gravity & $\mathrm{L \cdot T^{-2}}$ \\
$H$ & Hydraulic head change from equilibrium position in source well & $\mathrm{L}$ \\
$H_0$ & Initial slug input & $\mathrm{L}$ \\
$K$ & Formation hydraulic conductivity & $\mathrm{L \cdot T^{-1}}$ \\
$K_r$ & Radial formation hydraulic conductivity & $\mathrm{L \cdot T^{-1}}$ \\
$K_{z}$ & Vertical formation hydraulic conductivity & $\mathrm{L \cdot T^{-1}}$ \\
$K_\mathrm{skin}$ & Skin hydraulic conductivity & $\mathrm{L \cdot T^{-1}}$ \\
$l/l_\mathrm{o}$ & Depth of bottom of source/observation well test interval below watertable & $\mathrm{L}$ \\
$L/L_\mathrm{obs}$ & Characteristic length for source/observation well damping term & $\mathrm{L}$ \\
$L_e/L_{e,\mathrm{obs}}$ & Characteristic length for source/observation well oscillatory term & $\mathrm{L}$ \\
$p$ & Laplace transform parameter & $\mathrm{-}$\\
$r$ & Radial coordinate, out from center of source well & $\mathrm{L}$ \\
$R$ & Domain radius, out from center of source well & $\mathrm{L}$ \\
$r_c$ & Radius of source well tubing at water-table & $\mathrm{L}$ \\
$r_w$ & Radius of source well at test interval & $\mathrm{L}$ \\
$s$ & Hydraulic head change from initial conditions & $\mathrm{L}$ \\
$S_{s}$ & Specific storage & $\mathrm{L^{-1}}$\\
$S_y$ & Specific yield & $-$ \\
$t$ & Time since slug initiation & $\mathrm{T}$ \\
$T_c$ & Characteristic time ($T_c=B^2/\alpha_{r,1}$) & $\mathrm{T}$ \\
$z$ & Vertical coordinate, down from water-table & $\mathrm{L}$ \\
$\alpha_{r,i}$ & Hydraulic diffusivity of $i^\mathrm{th}$ zone & $\mathrm{L^2 \cdot T^{-1}}$ \\
$\gamma_s$ & Source well damping coefficient & $\mathrm{T^{-1}}$ \\
$\nu$ & Kinematic viscosity of water & $\mathrm{L^2 \cdot T^{-1}}$ \\
\end{tabular}
\section*{References}
\label{sec:1}
The Standard Model (SM) of particle physics is a very successful theory describing a
wealth of experimental data up to collision energies of 13 TeV reached at CERN's Large
Hadron Collider (LHC). This includes the recent observation of a Higgs-like particle with
a mass of 125 GeV that seems to corroborate the simplest description of electroweak symmetry
breaking \cite{Aad:2012tfa,Chatrchyan:2012ufa,Aad:2015zhl}. However, the SM is based on
the unintuitive semi-simple gauge group SU(3)$_C\times$SU(2)$_L\times$U(1)$_Y$, which
together with the running behavior of the associated gauge couplings intriguingly
points towards a larger unification at some higher mass scale. The simple gauge group
SU(5) can accommodate the complete SM gauge group and its 15 fermions, but not a right-handed
neutrino, and it is in addition strongly disfavored by searches for proton decay. It also does
not allow parity symmetry to be restored and does not provide a natural solution to the
neutrino mass hierarchy. Both of these important and perhaps related problems are solved
in simple gauge groups of higher rank like $E_6$ or SO(10), that can be broken consecutively
as in $E_6\to$SO(10)$\times$U(1)$_\psi$ and SO(10)$\to$SU(5)$\times$U(1)$_\chi$, respectively.
Parity restoration is achieved in left-right symmetric models, SU(3)$_C\times$SU(2)$_L\times$%
SU(2)$_R\times$U(1)$_{B-L}$, which together with other models of similar group structure, but
different quantum number assignments form a class of general lower-scale models, commonly called
$G(221)$ models. They have recently been classified \cite{Hsieh:2010zr}, and their
phenomenology has been studied not only at the LHC \cite{Jezo:2012rm,Jezo:2014wra,Jezo:2015rha},
but also
in ultrahigh-energy cosmic rays \cite{Jezo:2014kla}. Common to all these possible extensions
of the SM is their prediction of a new heavy neutral gauge boson ($Z'$), that is associated with
the additional SU(2) or U(1) subgroup after symmetry breaking \cite{Langacker:2008yv,%
Agashe:2014kda}. In many cases, the $Z'$ boson can decay leptonically, making it a prime object of
experimental searches at the LHC. For simplification, these searches are mostly based on
the (theoretically unmotivated) Sequential SM (SSM), where the $Z'$ boson couples to other SM
particles like the SM $Z$ boson. In this model and the leptonic (i.e.\ Drell-Yan) channel, the
ATLAS and CMS collaborations have already excluded $Z'$ bosons with masses below
2.90 TeV \cite{Aad:2014cka} and 2.96 TeV \cite{CMS:2013qca}, respectively. For a recent overview
of experimental mass limits see Ref.\ \cite{Jezo:2014wra}, where it is also shown that
for certain $G(221)$ models the mass limits are enhanced to 3.2-4.0 TeV, when higher-order
QCD corrections are included.
In this paper, we focus not only on the SSM, but also on a
situation where the $Z'$ boson does not couple to leptons,
but preferentially to top quarks, so that the above mass limits are invalidated.
Models of the $G(221)$ class, where processes of the Drell-Yan type are inaccessible
at the LHC, include leptophobic (LP), hadrophobic (HP) and fermiophobic (FP) models,
whereas left-right (LR), un-unified (UU) and non-universal (NU) models remain accessible.
The LP model with a $W'$-boson mass of about 2 TeV has been put forward as a possible
explanation for the excesses of $WZ$ and $Wh$ production observed recently by ATLAS and
CMS at the LHC \cite{Gao:2015irw}.
As the heaviest particle in the SM with a mass of 173 GeV \cite{ATLAS:2014wva}, the top quark
may very well play a special role in electroweak symmetry breaking. This motivates, e.g.,
the NU model, where the first and second SU(2) gauge groups couple exclusively to the
first/second and third generation fermions, respectively.
It also motivates models with new strong dynamics such as the topcolor
model \cite{Hill:1991at,Hill:1994hp}, which can generate a large top-quark mass through the
formation of a top-quark condensate. This is achieved by introducing
a second strong SU(3) gauge group which couples preferentially to the
third generation, while the
original SU(3) gauge group couples only to the first and second generations. To block the
formation of a bottom-quark condensate, a new U(1) gauge group and associated $Z'$ boson are
introduced. Different couplings of the $Z'$ boson to the three fermion generations then
define different variants of the model \cite{Harris:1999ya}. A popular choice with the LHC
collaborations is the leptophobic topcolor model (also called Model IV in the reference cited
above) \cite{Harris:2011ez}, where the $Z'$ couples only to the first and third generations of
quarks and has no significant couplings to leptons, but an experimentally accessible cross
section.
The strongest limits on $Z'$ bosons arise of course from their Drell-Yan like decays
into electrons and muons at the LHC. This is due to the easily identifiable experimental
signatures
\cite{Jezo:2014wra}. The top-pair signature is more difficult, as top quarks decay to $W$ bosons
and bottom quarks, where the latter must be tagged and the two $W$ bosons may decay hadronically,
i.e.\ to jets, or leptonically, i.e.\ into electrons or muons and missing energy carried away by
a neutrino. In addition and in contrast to the Drell-Yan process, the electroweak top-pair
production cross section obtains QCD corrections not only in the initial, but also in the
final state. For conclusive analyses,
precision calculations are therefore extremely important
to reduce theoretical uncertainties, arising from variations of the renormalization and
factorization scales $\mu_r$ and $\mu_f$ and of the parton density functions (PDFs)
$f_{a/p}(x_a,\mu_f)$, and for an accurate description of the possible experimental signal and the
SM backgrounds.
At the LHC, the hadronic top-pair production cross section
\begin{eqnarray}
\sigma&=& \sum_{ab}\int f_{a/p}(x_a,\mu_f)f_{b/p}(x_b,\mu_f)\,{d\sigma_{ab}\over d t}(\mu_r)\,dt\, dx_a dx_b
\end{eqnarray}
obtains up to next-to-leading order (NLO) the contributions
\begin{eqnarray}
\sigma_{ab}(\mu_r) &=& \sigma_{2;0}(\alpha_{S}^2)
+ {\color{red} \sigma_{0;2}(\alpha^2)}
+ \sigma_{3;0}(\alpha_S^3)
+ \sigma_{2;1}(\alpha_S^2\alpha)
+ {\color{red}\sigma_{1;2}(\alpha_{S}\alpha^2)}
+ \sigma_{0;3}(\alpha^3)\,,
\label{eq:1.2}
\end{eqnarray}
where the numerical indices represent the powers of the strong coupling $\alpha_S(\mu_r)$ and of
the electromagnetic coupling $\alpha$, respectively.
The first and third terms representing the SM QCD background
processes $q\bar{q},gg\to t\bar{t}$ and their NLO QCD corrections, including the
$qg$ channel, have been computed in the late 1980s
\cite{Nason:1987xz,Nason:1989zy,Beenakker:1988bq,Beenakker:1990maa}.
Furthermore, NLO predictions for heavy quark correlations have
been presented in \cite{Mangano:1991jk}, and the spin correlations
between the top quark and antiquark have been studied in the early
2000s \cite{Bernreuther:2001rq,Bernreuther:2004jv}.
The fourth term represents the electroweak
corrections to the QCD backgrounds, for which a gauge-invariant subset was first investigated
neglecting the interferences between QCD and electroweak interactions arising from box-diagram
topologies and pure photonic contributions \cite{Beenakker:1993yr} and later including also
additional Higgs boson contributions arising in 2-Higgs doublet models (2HDMs) \cite{Kao:1999kj}.
The rest of the electroweak corrections was calculated in a subsequent series of papers and
included also $Z$-gluon interference effects and QED corrections with real and virtual photons
\cite{Kuhn:2005it,Moretti:2006nf,Bernreuther:2005is,Bernreuther:2006vg,Hollik:2007sw}. In this
paper, we focus on
the second and fifth terms in Eq.\ (\ref{eq:1.2}) (highlighted in red), i.e.\ the contribution
$\sigma_{0;2}$ for the $Z'$ signal and its interferences with the photon and SM $Z$ boson and the
corresponding QCD corrections $\sigma_{1;2}$. Due to the resonance of the $Z'$ boson, we expect
these terms to be the most relevant for new physics searches. A particular advantage of this
choice is that the calculation of $\sigma_{1;2}$ can then be carried out in a model-independent
way as long as the $Z'$ couplings are kept general, whereas the fourth term $\sigma_{2;1}$ is
highly model-dependent due to the rich structure of the scalar sector in many models. The sixth
term in Eq.\ (\ref{eq:1.2}) is suppressed by a relative factor $\alpha/\alpha_s$ with respect
to the fifth and thus small.
The production of $Z'$ bosons (and Kaluza-Klein gravitons) decaying to top pairs has been
computed previously in NLO
QCD by Gao et al.\ in a factorized approach, i.e.\ neglecting all SM interferences and quark-gluon
initiated diagrams with the $Z'$ boson in the $t$-channel, and for purely vector- and/or
axial-vector-like couplings as those of the SSM \cite{Gao:2010bb}. We have verified that we
can reproduce their $K$-factors (i.e.\ the ratio of NLO over LO predictions) of 1.2 to 1.4
(depending on the $Z'$ mass) up to 2\%, if we reduce our calculation to their theoretical
set-up and employ their input parameters. Their result has triggered the Tevatron and LHC
collaborations to routinely use a $K$-factor of 1.3 in their experimental analyses (see below).
The factorized calculation by Gao et al.\ has been confirmed previously in an independent NLO
QCD calculation by Caola et al.\ \cite{Caola:2012rs}. Like us, these last authors include also the
additional quark-gluon initiated processes and show that after kinematic cuts
they reduce the $K$-factor by about 5\%. However, they still do not include the additional SM interferences, which they claim to be
small for large $Z'$-boson masses. As we will show, this is not always true due to logarithmically
enhanced QED contributions from initial photons. In contrast to us, they also include
top-quark decays in the narrow-width approximation with spin correlations and
box-diagram
corrections to interferences of the electroweak and QCD Born processes ($\sigma_{2;1}$ in Eq.\
(\ref{eq:1.2})), which are, however, only relevant for very broad resonances.
If the (factorizable) QCD corrections to the top-quark decay are included,
the $K$-factor is reduced by an additional 15\%. The globally smaller $K$-factor of Caola
et al.\ is thus explained by calculational aspects and not by different choices of input
parameters.
The SM backgrounds are today routinely calculated not just in NLO QCD, but at NLO combined with
parton showers (PS), e.g.\ within the framework of MC@NLO or POWHEG \cite{Frixione:2002ik,Frixione:2007vw}.
A particularly useful tool is the POWHEG BOX, in which new processes can be implemented once
the spin- and color-correlated Born amplitudes along with their virtual and real NLO QCD
corrections are known and where the regions of singular radiation are then automatically
determined \cite{Alioli:2010xd}. Calculations of this type have already been performed by us
in the past for the Drell-Yan like production of $Z'$ bosons \cite{Fuks:2007gk}, heavy-quark
production in the ALICE experiment \cite{Klasen:2014dba}, and the
associated production of top quarks and charged Higgs bosons \cite{Weydert:2009vr,Klasen:2012wq}.
In this work, we provide a calculation of the $Z'$ signal with a final top-quark pair at the
same level of accuracy, including all interferences with SM $Z$ bosons and photons as well as
the logarithmically enhanced QED contributions from initial-state photons, which we will
discuss in some detail. We also present details about the spin- and color-correlated Born
amplitudes, the treatment of $\gamma_5$ and renormalization procedure in our calculation of
the virtual corrections, as well as the validation of our NLO+PS calculation, which we have
performed with the calculation for $Z'$ bosons of Gao et al.\ at NLO \cite{Gao:2010bb}
and for tree-level and one-loop SM matrix elements with MadGraph5\_aMC@NLO \cite{Alwall:2014hca}
and GoSam \cite{Cullen:2011ac}.
Experimental searches for resonant top-antitop production have been performed at the Tevatron
and at the LHC mostly for the leptophobic topcolor model with a $Z'$-boson coupling only
to first and third generation quarks \cite{Harris:1999ya,Harris:2011ez}. In this model,
the LO cross section is controlled by three parameters: the ratio of the two U(1) coupling
constants, $\cot\theta_H$, which should be large to enhance the condensation of top quarks, but
not bottom quarks, and which also controls both the $Z'$ production cross section and decay
width, as well as the relative
strengths $f_1$ and $f_2$ of the couplings of right-handed up- and down-type quarks with respect
to those of the left-handed quarks. The LO cross sections for this model are usually computed for
a fixed small $Z'$ width, $\Gamma_{Z'}=1.2\%\times m_{Z'}$, effectively setting the parameter
$\cot\theta_H$, and the choices $f_1=1$, $f_2=0$, which maximize the fraction of $Z'$ bosons that
decay into top-quark pairs.
We have verified that we can reproduce the LO numerical results in the paper by Harris and Jain
\cite{Harris:2011ez} for $Z'$ masses above 1 TeV and relative widths of 1\% and 1.2\%,
but not 10\%, if we neglect all SM interferences. As stated above, the LO cross sections are
routinely multiplied by the experimental collaborations by a $K$-factor of 1.3 \cite{Gao:2015irw}.
At the Tevatron with center-of-mass energy $\sqrt{S}=1.96$ TeV and in the lepton+jets
top-quark decay channel, CDF and D0 exclude $Z'$ bosons with masses up to 0.915 TeV
\cite{Aaltonen:2012af} and 0.835 TeV \cite{Abazov:2011gv}, respectively. The weaker
D0 limit can be
explained by the fact that CDF use the full integrated luminosity of 9.45 fb$^{-1}$, while
D0 analyze only 5.3 fb$^{-1}$ and furthermore do not use a $K$-factor for the signal cross
section.
At the LHC, the ATLAS and CMS collaborations have analyzed 20.3 fb$^{-1}$ and 19.7 fb$^{-1}$
of integrated luminosity of the $\sqrt{S}=8$ TeV LHC run employing the $K$-factor of 1.3.
The result is that narrow leptophobic
topcolor $Z'$ bosons are excluded below masses of 1.8 TeV and 2.4 TeV, respectively
\cite{Aad:2015fna,Khachatryan:2015sma}. At the LHC, the CMS limit is currently
considerably stronger than the one by ATLAS despite the slightly smaller exploited luminosity.
The reason is that CMS performed a combined analysis of all top-quark decay
channels (dilepton, lepton+jets and all hadronic), while ATLAS analyzed only the
lepton+jets channel. For $\Gamma_{Z'}=10\%\times m_{Z'}$, the CMS mass limit is even stronger and
is found to be 2.9 TeV.
We emphasize that the narrow width assumption employed in most experimental analyses
need not be realized in nature and that in this case a proper treatment of SM interference
terms as provided in our full calculation is required.
The LHC has just resumed running with an increased center-of-mass energy of 13 TeV,
which is planned to be increased to 14 TeV in the near future. We therefore provide
numerical predictions in this paper for both of these energies and for two benchmark
models, i.e.\ the SSM and the leptophobic topcolor model. The predictions for the SSM
are readily obtained by taking over the $Z'$-boson couplings from the SM, with the consequence
of again a relatively small width $\Gamma_{Z'}\simeq 3\% \times m_{Z'}$ for $Z'$ masses
between 3 and 6 TeV. We focus on the invariant-mass distribution of the top-quark pair,
which is the main observable exploited
for resonance (and in particular $Z'$-boson) searches, but also show results for the
distributions that are most sensitive to soft parton radiation beyond NLO, i.e.\
the transverse momentum $p_{t\bar{t}}$ of the top-antitop pair and their relative azimuthal angle
$\phi_{t\bar{t}}$. The forward-backward asymmetry $A_{FB}$ of top-antitop events with positive
vs.\ negative rapidity difference between the two has also been suggested as
a very useful observable to distinguish among different models \cite{Kamenik:2011wt}.
At the Tevatron (a
$p\bar{p}$ collider, where top quarks are produced predominantly in the direction of the
proton beam), long-standing discrepancies of CDF and D0 measurements with the SM prediction
at NLO \cite{Aaltonen:2011kc,Abazov:2011rq} have triggered numerous suggestions of new
physics contributions \cite{Kamenik:2011wt}, e.g.\ of light $Z'$ bosons coupling in a flavor
non-diagonal way to up and top quarks \cite{Buckley:2011vc}. Only recently the SM
prediction at next-to-next-to-leading order (NNLO) \cite{Czakon:2014xsa} has been
brought in agreement with the newest inclusive measurement by CDF \cite{Aaltonen:2012it} and
differential measurement by D0 \cite{Abazov:2014cca}. At the LHC (a $pp$ collider), a
charge asymmetry $A_C$ can be defined with respect to the difference in absolute value of
the top and antitop rapidities \cite{AguilarSaavedra:2012rx}. We therefore also provide
numerical predictions for this observable in our two benchmark models and at current and
future LHC center-of-mass energies.
Our paper is organized as follows: In Sec.\ \ref{sec:2} we present analytical results
of our calculations at LO and the NLO virtual and real corrections, including details
about SM interference terms, our treatment of $\gamma_5$, our renormalization procedure
and the subtraction method employed for the soft and collinear divergences in the
real corrections. In Sec.\ \ref{sec:3} we discuss the implementation of our calculation
in POWHEG and present in particular the color- and spin-correlated Born
amplitudes, the definition of the finite remainder of the virtual corrections,
the implementation of the real corrections with a focus on the rather involved
treatment of QED divergences, and the
validation of our tree-level matrix elements in the SM against those of the automated
tool MadGraph5\_aMC@NLO \cite{Alwall:2014hca}
and of the virtual corrections against those of GoSam \cite{Cullen:2011ac}
as well
as of our numerical pure $Z'$-boson results against those obtained
by Gao et al.\ and Caola et al. Our new numerical predictions for the LHC are shown and
discussed in Sec.\ \ref{sec:4}, and Sec.\ \ref{sec:5} contains our conclusions.
Several technical details of our calculation can be found in the Appendix.
\section{NLO QCD corrections to electroweak top-pair production}
\label{sec:2}
In this section, we present in detail our calculation of the NLO QCD corrections
to electroweak top-pair production through photons, SM $Z$ bosons and additional
$Z'$ bosons with generic vector and axial-vector couplings to the SM fermions.
We generate all Feynman diagrams automatically with QGRAF \cite{Nogueira:1991ex}
and translate them into amplitudes using DIANA \cite{Tentyukov:1999is}. The traces
of the summed and squared amplitudes with all interferences are then calculated
in the Feynman gauge and $D=4-2\varepsilon$ dimensions in order to regularize the
ultraviolet (UV) and infrared (IR) divergences using FORM \cite{Vermaseren:2000nd}.
Traces involving the Dirac matrix $\gamma_5$ are treated in the Larin
prescription \cite{Larin:1993tq} by replacing $\gamma_{\mu}\gamma_5 = i\frac{1}{3!}
\varepsilon_{\mu\nu\rho\sigma}\gamma^{\nu}\gamma^\rho\gamma^\sigma$.
To restore the Ward identities and thus preserve gauge invariance at one loop, we
perform an additional finite renormalization for vertices involving $\gamma_{5}$.
\subsection{Leading-order contributions}
\label{sec:2.1}
The leading-order (LO) Feynman diagrams contributing to the electroweak production
of top-quark pairs at ${\cal O}(\alpha)$ through photons, SM $Z$ bosons and
new $Z'$ bosons are summarized in Fig.\ \ref{fig:01}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.25\textwidth]{fig01}
\caption{Tree-level Feynman diagrams of order $\gr{O}(\alpha)$ contributing to electroweak
top-pair production through vector bosons $V$, i.e.\ photons ($\gamma$), SM $Z$ bosons and
new $Z'$ bosons.}
\label{fig:01}
\end{figure}
The cross section $d\sigma/dt$, differential in the Mandelstam variable $t$ denoting the
squared momentum transfer, is then obtained by summing all three corresponding amplitudes,
squaring them, summing/averaging them over final-/initial-state spins and colors and multiplying
them with the flux factor $1/(2s)$ of the incoming and the differential phase space $1/(8\pi s)$
of the outgoing particles. The Mandelstam variable $s$ denotes the squared partonic
center-of-mass energy. The result, given here for brevity only in four and not $D$ dimensions,
is
\begin{eqnarray}
{d\sigma_{q\bar{q}}\over dt}&=& \frac{1}{2s} \frac{1}{8\pi s} {B}_{q\bar{q}} \\
&=& \frac{1}{2s} \frac{1}{8\pi s} \sum_{V,V'}
\frac{2e^4 D_V D_{V^\prime}}{s_W^4} \left\{s(t-u)
\left( A_{V}^{{q}} B_{V^\prime}^{{q}} + A_{V^\prime}^{{q}} B_{V}^{{q}} \right)
\left( A_{V}^{{t}} B_{V^\prime}^{{t}} + A_{V^\prime}^{{t}} B_{V}^{{t}} \right)
\right.\nonumber\\
&+& \left. \left( A_{V}^{{q}} A_{V^\prime}^{{q}}\! +\! B_{V}^{{q}} B_{V^\prime}^{{q}} \right)
\left[ \left( t^2 + u^2 + 4sm_{{t}}^2 - 2m_{{t}}^4 \right)
A_{V}^{{t}} A_{V^\prime}^{{t}} + \left( t^2 + u^2 - 2m_{{t}}^4 \right)
B_{V}^{{t}} B_{V^\prime}^{{t}} \right] \right\}\nonumber\\
&\times& \left\{ \left[ (s\! -\! m_{V}^2) (s\! -\! m_{V^\prime}^2) + m_{V} m_{V^\prime}
\Gamma_{V} \Gamma_{V^\prime} \right] + i \left[ (s\! -\! m_{V}^2) m_{V^\prime}
\Gamma_{V^\prime} - (s\! -\! m_{V^\prime}^2) m_{V} \Gamma_{V} \right] \right\},\nonumber
\label{eq:BornRes1}
\end{eqnarray}
where ${B}_{q\bar{q}}$ is the modulus squared of the Born amplitude averaged/summed over
initial/final spins and colors, $V,V^\prime \in\{\ensuremath{\gamma}\xspace,\ensuremath{Z}\xspace,\ensuremath{Z'}\xspace\}$, the superscript
$q$ denotes the flavor of the incoming massless quarks, $s,\ t,\ u$ are the partonic
Mandelstam variables, and $m_t$ is the top-quark mass. Note that we
use the Pauli metric, in which the dot-product has an overall minus sign with respect to
the Bjorken-Drell metric \cite{Veltman:1994wz}. The terms $D_{V},D_{V^\prime}$ stem from the
propagator denominators and take the usual form
\begin{equation}
D_{\ensuremath{\gamma}\xspace} = \frac{1}{s^2},
\ D_{\ensuremath{Z}\xspace} = \frac{1}{(s-m_{\ensuremath{Z}\xspace}^2)^2+m_{\ensuremath{Z}\xspace}^2\Gamma_{\ensuremath{Z}\xspace}^2},
\ D_{\ensuremath{Z'}\xspace} = \frac{1}{(s-m_{\ensuremath{Z'}\xspace}^2)^2+m_{\ensuremath{Z'}\xspace}^2\Gamma_{\ensuremath{Z'}\xspace}^2}\,.
\label{eq:BornRes2}
\end{equation}
To take into account the finite widths of the $Z$ and $Z^\prime$ bosons, we have introduced
complex masses $m_{Z(Z^{\prime})}\rightarrow m_{Z(Z^{\prime})} - i \Gamma_{Z(Z^\prime)}/2$
with the consequence that $m_{Z(Z^{\prime})}^{2} \to m_{Z(Z^\prime)}^2 - \Gamma^{2}_{Z(Z^\prime)}/4$.
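As an illustration (not part of our calculation proper), the propagator factors in Eq.~(\ref{eq:BornRes2}) and the real part of the $V$--$V'$ propagator product that multiplies the coupling structures above can be evaluated numerically; the masses and widths in the following Python snippet are placeholder values and not our benchmark inputs.
\begin{verbatim}
import numpy as np

def D(s, m, Gamma):
    # Squared propagator denominator factor D_V, cf. Eq. (BornRes2).
    return 1.0 / ((s - m**2)**2 + m**2 * Gamma**2)

def interference_weight(s, mV, GV, mVp, GVp):
    # D_V * D_V' * [(s - mV^2)(s - mV'^2) + mV mV' GV GV'],
    # i.e. the real part of the product of the V and V' propagators.
    num = (s - mV**2) * (s - mVp**2) + mV * GV * mVp * GVp
    return D(s, mV, GV) * D(s, mVp, GVp) * num

# Placeholder values: SM Z boson and a 3 TeV Z' with a 3% relative width.
s_hat = np.linspace(1.0e6, 2.0e7, 5)      # partonic s in GeV^2
print(interference_weight(s_hat, 91.1876, 2.4952, 3000.0, 90.0))
\end{verbatim}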
The coefficients $A^{{q}}_{V(V^\prime)},B^{{q}}_{V(V^\prime)},A^{{t}}_{V(V^\prime)},B^{{t}}_{V(V^\prime)}$
are proportional to the vector ($A$) and axial-vector ($B$) couplings of the various gauge bosons
to the massless quarks (${q}=u,d,s,c,b$) and the top quark (${t}$),
\begin{align}
A_{\ensuremath{\gamma}\xspace}^q & = s_W Q_q, & A_{\ensuremath{\gamma}\xspace}^{{t}} & = s_W Q_t, & B_{\ensuremath{\gamma}\xspace}^{q} & = 0, & B_{\ensuremath{\gamma}\xspace}^{{t}} & = 0, \nonumber \\
A_{\ensuremath{Z}\xspace}^q & = \frac{a_{\ensuremath{Z}\xspace}^q}{4c_W }, & A_{\ensuremath{Z}\xspace}^{{t}} & = \frac{a_{\ensuremath{Z}\xspace}^t}{4c_W}, & B_{\ensuremath{Z}\xspace}^{q} & = \frac{b_{\ensuremath{Z}\xspace}^{q}}{4c_W}, & B_{\ensuremath{Z}\xspace}^{{t}} & = \frac{b_{\ensuremath{Z}\xspace}^{t}}{4c_W}, \nonumber \\
A_{\ensuremath{Z'}\xspace}^q & = \frac{a_{\ensuremath{Z'}\xspace}^{q}}{4c_W}, & A_{\ensuremath{Z'}\xspace}^{{t}} & = \frac{a_{\ensuremath{Z'}\xspace}^{{t}}}{4c_W}, & B_{\ensuremath{Z'}\xspace}^{q} & = \frac{b_{\ensuremath{Z'}\xspace}^{q}}{4c_W}, & B_{\ensuremath{Z'}\xspace}^{{t}} & = \frac{b_{\ensuremath{Z'}\xspace}^{{t}}}{4c_W}\,,
\end{align}
where $s_W\,(c_W)$ are the sine (cosine) of the weak mixing angle $\theta_W$, $Q_q$ is
the fractional
charge of quark flavor $q$, and $a_V^q$ and $b_V^q$ are the model-dependent vector and axial-vector
couplings of the $Z$ and $Z'$ bosons, e.g. $a_Z^u=1-8/3s_W^2$, $a_Z^d=4/3s_W^2-1$, $b_Z^u=1$,
$b_Z^d=-1$ for all up- and down-type quarks in the SM. Although individual interference terms
may contain imaginary parts, they cancel as expected after summation.
\subsection{One-loop virtual corrections}
\label{sec:2.2}
The one-loop virtual corrections contributing to electroweak top-pair production
at $\gr{O}(\alpha_s\alpha^2)$ originate from the interferences of the one-loop
diagrams shown in Fig.~\ref{fig:02} with the tree-level diagrams in Fig.~\ref{fig:01}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{fig02}
\caption{One-loop Feynman diagrams of order $\gr{O}(\alpha_{S}\alpha)$ contributing to
electroweak top-pair production.}
\label{fig:02}
\end{figure}
Note that one-loop electroweak corrections to the QCD process $q\bar q \to g^*
\to t \bar{t}$ have zero interference with the electroweak diagrams in Fig.~\ref{fig:01}, since
such contributions are proportional to the vanishing color trace $\mathrm{Tr}(T^a)$.
In particular, the interference term of the box diagram in Fig.~\ref{fig:03} with the amplitudes
\begin{figure}[!h]
\centering
\includegraphics[width=0.25\textwidth]{fig03}
\caption{Example of a box diagram of $\gr{O}(\alpha_S\alpha)$ leading to a vanishing
contribution. This diagram would, however, contribute to electroweak corrections to
the QCD Born processes.}
\label{fig:03}
\end{figure}
in Fig.~\ref{fig:01} vanishes, whereas it would of course contribute at $\gr{O}(\alpha_s^2
\alpha)$.
As already mentioned, the virtual amplitudes are regularized dimensionally. The appearing 30
distinct loop integrals are then reduced to a basis of three master integrals
using integration-by-parts identities \cite{Tkachov:1981wb,Chetyrkin:1981qh} in the form of the
Laporta algorithm \cite{Laporta:2001dd} as implemented in the public tool REDUZE
\cite{2010CoPhC.181.1293S,vonManteuffel:2012np}.
One is thus left with the evaluation of three master integrals: the
massive tadpole, the equal-mass two-point function, and the massless
two-point function. The solutions of these integrals are well known \cite{Hooft:1978xw}.
For completeness, we provide their analytic expressions in App.\ \ref{sec:a}.
In dimensional regularization, the UV and IR singularities in the virtual corrections appear
as poles in $1/\varepsilon$ and $1/\varepsilon^2$. Since neither the couplings nor the top-quark
mass have to be renormalized at NLO, the UV singularities can be removed by simply adding the Born
cross section multiplied with the quark wave-function renormalization constants
\begin{equation}
\sum_{\psi\in\{{q},\bar {q},{t},\bar {t}\}}\frac{1}{2}\delta Z_\psi \,.
\end{equation}
We use the on-shell renormalization scheme, in which $\delta Z_q=0$ for the
initial-state massless quarks and
\begin{equation}
\delta Z_t=(4\pi)^\varepsilon\Gamma(1+\varepsilon)
\left({\mu_r^2\over m_t^2}\right)^\varepsilon {C_F\alpha_s\over\pi}\left(-{3\over4\varepsilon}
-{1\over1-2\varepsilon}\right)
\end{equation}
for the final-state top quarks.
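For orientation, expanding this expression in $\varepsilon$ reproduces the familiar on-shell form
\begin{equation}
\delta Z_t = -\frac{C_F\alpha_s}{4\pi}\left[\frac{3}{\varepsilon}
+3\ln\left(\frac{4\pi\mu_r^2}{m_t^2}\right)-3\gamma_E+4\right]+\gr{O}(\varepsilon)\,,
\end{equation}
with $\gamma_E$ the Euler--Mascheroni constant.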
Since we are using the Larin prescription for $\gamma_5$ (see above), we must perform
an additional finite renormalization to restore the Ward identities. The corresponding
constant has been calculated up to three loops in the $\ensuremath{\overline{\mathrm{MS}}}\xspace$ scheme \cite{Larin:1993tq}.
At one loop, it reads
\begin{equation}
\delta Z_5=-{C_F\alpha_s\over\pi}
\end{equation}
and multiplies all appearing factors of $\gamma_5$.
Once the UV divergences are renormalized, we are left with infrared collinear and
soft divergences that match the correct structure given for instance in Refs.\
\cite{Catani:2002hc,Frixione:1995ms}. For completeness, we provide the analytic
expressions of the IR poles in App.\ \ref{sec:b}.
\subsection{Real emission corrections}
\label{sec:2.3}
At $\gr{O}(\alpha_S\alpha^2)$, the following $2\rightarrow 3$ tree-level processes
contribute:
\begin{inparaenum}[(i)]
\item $q +\bar q \to t + \bar t + g$ and
\item $g + q(\bar q) \to t + \bar t + q(\bar q)$.
\end{inparaenum}
The corresponding Feynman diagrams are depicted in Figs.~\ref{fig:04} and \ref{fig:05}.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{fig04}
\caption{Diagrams contributing to the $q + \bar q \to t + \bar t + g$ subprocess at
$\gr{O}(\alpha_{S}\alpha^2)$ with $V\in \{\gamma,Z,Z^{\prime}\}$.}
\label{fig:04}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{fig05}
\caption{Diagrams contributing to the $g+q \to t + \bar t + q$ subprocess at $\gr{O}(\alpha_{S}
\alpha^2)$ with $V\in \{\gamma,Z,Z^{\prime}\}$. Similar diagrams contribute to the $g \bar q$
channel.}
\label{fig:05}
\end{figure}
In the $q \bar q$ channel, the diagrams in Figs.~\ref{fig:04} (a) and (b) only have a singularity
when the gluon emitted from the heavy top-quark line becomes soft, whereas those in Figs.\
\ref{fig:04} (c) and (d) diverge when the radiated gluon becomes soft and/or collinear to the
emitting light quark or antiquark. The $g q$ and $g\bar q$ channels exhibit at most collinear
singularities. While the diagram in Fig.\ \ref{fig:05} (a) is completely finite, the outgoing
quarks in Figs.~\ref{fig:05} (b) or (c) and (d) can become collinear to the initial gluon or
quark.
As a consequence of the KLN theorem, the soft and soft-collinear divergences
cancel in the sum of the real and virtual cross sections, while the collinear
singularities are absorbed into the parton distribution functions (PDFs) by
means of the mass factorization procedure. The singularities in the real
corrections are removed in the numerical phase space integration by subtracting
the corresponding unintegrated counter terms \cite{Catani:2002hc,Frixione:1995ms}.
The fact that the collinear divergences appearing in Figs.~\ref{fig:05} (c) and (d) involve a
photon propagator has two consequences:
\begin{inparaenum}[(i)]\item we have to introduce a PDF for the photon inside the proton and
\item the corresponding underlying Born process shown in Fig.~\ref{fig:06}, $g+\gamma \to
t + \bar t$, must be included in the calculation.
\end{inparaenum}
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{fig06}
\caption{Photon-induced top-pair production of $\gr{O}(\alpha_S\alpha)$. These
diagrams must be added for a consistent subtraction of the collinear singularities.}
\label{fig:06}
\end{figure}
The squared modulus of the corresponding Born amplitude, averaged/summed over initial/final
state spins and colors, is
\begin{eqnarray}
{B}_{g\gamma}&=&
16\pi^2\alpha_s\alpha Q_t^2
\left[{t_t\over u_t}+{u_t\over t_t}+{4m_t^2s\over t_tu_t}
\left(1-{m_t^2 s\over t_tu_t}\right)\right],
\end{eqnarray}
with $Q_t$ the fractional electric charge of the top quark (2/3),
$N_C=3$, $C_F=4/3$, $t_t=t-m_t^2$ and $u_t=u-m_t^2$. Although this process is formally of
$\gr{O}(\alpha_S\alpha)$ and thus contributes to $\sigma_{1;1}$, it is multiplied by a
photon distribution inside the proton of $\gr{O}(\alpha)$, so that the hadronic subprocess
$p+p\to g+\gamma \to t + \bar t$ is effectively of $\gr{O}(\alpha_S\alpha^2)$.
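This power counting can be made explicit by writing the photon-induced contribution schematically as
\begin{equation}
\sigma_{g\gamma}\;\sim\;\underbrace{f_{g/p}}_{\gr{O}(1)}\otimes\underbrace{f_{\gamma/p}}_{\gr{O}(\alpha)}\otimes\underbrace{{B}_{g\gamma}}_{\gr{O}(\alpha_s\alpha)}\;=\;\gr{O}(\alpha_s\alpha^2)\,,
\end{equation}
where $f_{g/p}$ and $f_{\gamma/p}$ denote the gluon and photon densities in the proton and
$\otimes$ the convolution over the parton momentum fractions.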
As we will see in Sec.~\ref{sec:4}, this channel is indeed
numerically important.
\section{POWHEG implementation}
\label{sec:3}
We now turn to the implementation of our NLO corrections to electroweak top-pair
production, described in the previous section, in the NLO+PS program POWHEG
\cite{Alioli:2010xd}. We thus combine the NLO precision of our analytical
calculation with the flexibility of parton shower Monte Carlo programs like PYTHIA
\cite{Sjostrand:2007gs} or HERWIG \cite{Corcella:2000bw} that are indispensable tools
to describe complex multi-parton final states, their hadronization, and particle
decays at the LHC. Since the leading emission is generated both at NLO and with the
PS, the overlap must be subtracted, which is achieved using the POWHEG method
\cite{Frixione:2007vw} implemented in the POWHEG BOX \cite{Alioli:2010xd}.
In the following, we describe the required color- and
spin-correlated Born amplitudes, the definition and implementation of the finite
remainder of the virtual corrections, and the real corrections with a focus on
the subtleties associated with the encountered QED divergences. All other aspects
such as lists of the flavor structure of the Born and real-emission processes, the
Born phase space, and the four-dimensional real-emission squared matrix elements
have either already been discussed above or are trivial to obtain following the POWHEG
instructions \cite{Alioli:2010xd}. We end this section with a description of the
numerical validation of our implementation.
\subsection{Color-correlated Born amplitudes}
The automated calculation of the subtraction terms in POWHEG requires the knowledge
of the color correlations between all pairs of external legs $i,j$. The color-correlated
squared Born amplitude $\gr{B}_{ij}$ is formally defined by
\begin{eqnarray}
\gr{B}_{ij} &=&-N
\sum_{\begin{array}{c}\text{\scriptsize spins}\\[-0.18cm]\text{\scriptsize colors}\end{array}}
\gr{M}_{\{c_k\}}\left( \gr{M}^\dagger_{\{c_k\}} \right)_{
\begin{array}{c} c_i\rightarrow c^\prime_i\\c_j\rightarrow c^\prime_j\end{array}}
T^a _{c_i,c^\prime_i}T^a_{c_j,c^\prime_j}\,,
\label{eq:3.1}
\end{eqnarray}
where $N$ is the normalization factor for initial-state spin/color averages and final-state
symmetrization, $\gr{M}_{\{c_k\}}$ is the Born amplitude and $\{c_k\}$ are the color indices of all
external colored particles. The suffix of $(\gr{M}^{\dagger}_{\{c_k\}})$ indicates that the color
indices of partons $i,j$ must be replaced with primed indices. For incoming quarks and
outgoing antiquarks $T^{a}_{c_i,c_i'}=t^a_{c_ic_i'}$, where $t$ are the color
matrices in the fundamental representation of SU(3), for incoming antiquarks and
outgoing quarks $T^{a}_{c_ic_i'}=-t^a_{c_i'c_i}$, and for gluons $T^{a}_{c_ic_i'}=if_{c_i a c_i'}$,
where $f_{abc}$ are the structure constants of SU(3).
For the $q\bar{q}$-initiated electroweak top-pair production, one obtains in a
straightforward way
\begin{equation}
\gr{B}_{ij} = C_{F} {B}_{q\bar{q}}
\end{equation}
for two incoming ($i,j=q,\bar{q}$) or outgoing ($i,j=t,\bar{t}$) particles and zero otherwise.
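One can easily verify that this satisfies the color-conservation sum rule
$\sum_{j\neq i}\gr{B}_{ij}=C_F\,{B}_{q\bar{q}}$ for each (anti)quark leg. Labeling the
incoming (anti)quarks 1 and 2 and the top (anti)quarks 3 and 4, one has e.g.\ for the incoming quark
\begin{equation}
\gr{B}_{12}+\gr{B}_{13}+\gr{B}_{14}\;=\;C_F{B}_{q\bar{q}}+0+0\;=\;C_F{B}_{q\bar{q}}\,.
\end{equation}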
As we have seen in Sec.\ \ref{sec:2.3}, we also have to include the gluon-photon induced pair
production process in order to treat the QED divergence occurring in the $gq$ real-emission
correction. We thus also have to calculate the color-correlated squared Born matrix element
for this process. The color structure of the corresponding Feynman diagrams, see Fig.\
\ref{fig:06}, factorizes in the amplitude, and we can thus directly calculate the
color-correlated squared Born amplitudes in terms of the averaged/summed modulus squared of the Born matrix element
with color factor $\gr{C}=N_CC_F=(N_C^2-1)/2$. Applying Eq.\ (\ref{eq:3.1}) to all pairs of colored
external legs, we obtain
\begin{eqnarray}
\gr{B}_{13} &=& - \frac{1}{\gr{C}}t^{a}_{\alpha\beta}t^{a^\prime}_{\beta \alpha^{\prime}} T^{e}_{a,a^\prime}T^{e}_{\alpha\alpha^{\prime}}B_{g\gamma} =-t^{a}_{\alpha\beta} t^{a^\prime}_{\beta\alpha^\prime}if_{aea^{\prime}}( -t^{e}_{\alpha^\prime \alpha})\frac{B_{g\gamma}}{\gr{C}}\\
&=&-if_{a^\prime e a}\mathrm{Tr}(t^{a^\prime}t^e t^a)\frac{B_{g\gamma}}{\gr{C}} =\frac{1}{2}N_C \mathrm{Tr}(t^a t^a) \frac{B_{g\gamma}}{\gr{C}}\nonumber\\
&=& \frac{1}{2}N_CB_{g\gamma}\,,\nonumber\\
\gr{B}_{14} &=& \gr{B}_{13}= \frac{1}{2}N_CB_{g\gamma}\,,\\
\gr{B}_{34} &=& \gr{B}_{43} ~=~
-\frac{1}{\gr{C}}B_{g\gamma}t^{a}_{\alpha\beta}t^{b}_{\beta^{\prime}\alpha^{\prime}}T^{e}_{\beta\beta^\prime}T^e_{\alpha\alpha^\prime}\delta^{ab}=\mathrm{Tr}(t^at^et^at^e) \frac{1}{\gr{C}}B_{g\gamma} = \frac{-1}{2N_C}B_{g\gamma}\,.
\end{eqnarray}
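In the last step we have used the SU(3) identity
\begin{equation}
t^{a}t^{e}t^{a}=\left(C_F-\frac{N_C}{2}\right)t^{e}=-\frac{1}{2N_C}\,t^{e}\,,
\end{equation}
so that $\mathrm{Tr}(t^{a}t^{e}t^{a}t^{e})=-\frac{1}{2N_C}\mathrm{Tr}(t^{e}t^{e})=-\frac{\gr{C}}{2N_C}$.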
As is easily verified, a completeness relation coming from color conservation holds:
\begin{eqnarray}
\gr{B}_{13} + \gr{B}_{14} &=& \left( \frac{1}{2}N_C + \frac{1}{2}N_C \right)B_{g\gamma} = N_C B_{g\gamma}\,,\nonumber\\
\gr{B}_{34} + \gr{B}_{31} &=& \left( \frac{-1}{2N_C} + \frac{1}{2}N_C \right)B_{g\gamma} = \frac{N_C^2-1}{2N_C} B_{g\gamma} = C_F B_{g\gamma}\,,
\end{eqnarray}
and similarly for $\gr{B}_{41}+\gr{B}_{43}$. These cross checks are also performed automatically
in POWHEG.
\subsection{Spin-correlated Born amplitudes}
The spin-correlated squared Born amplitude $B^{\mu\nu}_{j}$ only differs from zero, if leg $j$
is a gluon. It is obtained by leaving uncontracted the polarization indices of this leg, i.e.\
\begin{equation}
\gr{B}^{\mu\nu}_{j} =N\sum_{\{i\},s_j,s^\prime_j}\gr{M}(\{i\},s_j)\gr{M}^{\dagger}(\{i\},s^{\prime}_j)(\varepsilon^{\mu}_{s_j})^{\ast}\varepsilon^{\nu}_{s^\prime_j}\,,
\end{equation}
where $\gr{M}(\{i\},s_j)$ is the Born amplitude, $\{i\}$ represents collectively all remaining
spins and colors of the incoming and outgoing particles, and $s_j$ is the spin of particle $j$.
The polarization vectors $\varepsilon^{\mu}_{s_j}$ are normalized according to
\begin{equation}
\sum_{\mu,\nu} g_{\mu\nu}(\varepsilon^{\mu}_{s_j})^{\ast}\varepsilon^{\nu}_{s^\prime_j}=-\delta_{s_js^\prime_j}\,.
\end{equation}
Similarly to the color-correlated Born amplitudes, we have a closure relation, namely
\begin{equation}
\sum_{\mu,\nu}g_{\mu\nu} \gr{B}^{\mu\nu}_j =-{B}\,,
\label{eq:3.10}
\end{equation}
where $B$ is the squared Born amplitude after summing over all polarizations.
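This relation follows directly from the definition of $\gr{B}^{\mu\nu}_{j}$ and the normalization
of the polarization vectors:
\begin{equation}
\sum_{\mu,\nu}g_{\mu\nu}\gr{B}^{\mu\nu}_{j}
=N\sum_{\{i\},s_j,s^{\prime}_j}\gr{M}(\{i\},s_j)\gr{M}^{\dagger}(\{i\},s^{\prime}_j)\left(-\delta_{s_js^{\prime}_j}\right)
=-N\sum_{\{i\},s_j}\left|\gr{M}(\{i\},s_j)\right|^{2}=-{B}\,.
\end{equation}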
Since processes without external gluons lead to vanishing contributions, we must only
consider the gluon-photon induced top-pair production and then modify POWHEG in such a way
that the subtraction terms for the QED divergence in the $gq$ channel can also be
constructed.
We therefore compute here explicitly the expression for $\gr{B}_2^{\mu\nu}$, where the subscript $2$
designates the photon leg (see Fig.\ \ref{fig:06}). Applying the above procedure then leads to
\begin{eqnarray}
\gr{B}^{\mu\nu}_2 &=& {8\pi^2 \alpha_s\alpha Q_t^{2} \over m_t^2 z_1^2 y_1^2}\left(\begin{pmatrix}p_1^{\mu}&p_2^{\mu}&p_3^\mu\end{pmatrix}\gr{A}_1\begin{pmatrix}p_1^{\nu}\\p_2^{\nu}\\p_3^{\nu}\end{pmatrix} - \gr{A}_2 g^{\mu\nu}\right)\,,
\end{eqnarray}
where
\begin{eqnarray}
\gr{A}_1 &=&
\begin{pmatrix}8 z_1^2& 2 \gr{P}_2 z_1& - 8 \gr{P}_1 z_1\\
2 \gr{P}_2 z_1& 4 (\gr{P}_1 - z_1)^2 z_1& 6 \gr{P}_1 z_1^2 - 4 z_1^3 - 2\gr{P}_1^2 (2 + z_1)\\
-8 \gr{P}_1 z_1& 6 \gr{P}_1 z_1^2 - 4 z_1^3 - 2\gr{P}_1^2 (2 + z_1)& 8 \gr{P}_1^2
\end{pmatrix}\,,\\
\gr{A}_2 &=& m_t^2 \gr{P}_3 (\gr{P}_1 - z_1) z_1\,,\\
\gr{P}_1 &=& y_1+z_1\,,\\
\gr{P}_2 &=& 2(y_1+z_1) + y_1^2\,,\\
\gr{P}_3 &=& y_1^2 + z_1^2\,,\\
y_1 &=& \left( 1-\frac{t}{m_t^2} \right)\quad {\rm and}\\
z_1 &=& \left( 1-\frac{u}{m_t^2} \right)\,.
\end{eqnarray}
As for the color-correlated squared Born matrix element, the closure relation of
Eq.\ (\ref{eq:3.10}) is implemented in POWHEG as a consistency check.
\subsection{Implementation of the virtual corrections}
For the implementation in POWHEG, the virtual corrections must be put into the form
\begin{eqnarray}
\gr{V} &=& \gr{N}\frac{\alpha_S}{2\pi}\left[ \frac{1}{\varepsilon^{2}}a\gr{B}
+ \frac{1}{\varepsilon}\sum_{i,j}c_{ij}\gr{B}_{ij} +\gr{V}_{\mathrm{fin.}} \right]
\label{eq:3.19}
\end{eqnarray}
with the normalization constant
\begin{eqnarray}
\gr{N} &=& \frac{(4\pi)^{\varepsilon}}{\Gamma(1-\varepsilon)}\left( \frac{\mu_r^2}{Q^2}
\right)^{\varepsilon}\,.
\label{eq:3.20}
\end{eqnarray}
General expressions for the coefficients $a$ and $c_{ij}$ can be found, e.g., in App.\ B of
Ref.\ \cite{Frederix:2009yq} and in Refs.\ \cite{Jezo:2013,Lyonnet:2014wfa}.
$\mu_r$ is the renormalization scale, and $Q$ is an arbitrary
scale first introduced by Ellis and Sexton \cite{Ellis:1985er} and identified in POWHEG with
$\mu_r$. The finite part $\gr{V}_{\rm fin.}$ is then obtained from our calculation of the virtual
corrections in Sec.\ \ref{sec:2.2}.
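In practice, $\gr{V}_{\mathrm{fin.}}$ is thus obtained by inverting Eq.\ (\ref{eq:3.19}), i.e.\
by subtracting the known pole terms,
\begin{equation}
\gr{V}_{\mathrm{fin.}}\;=\;\lim_{\varepsilon\to 0}\left[\frac{2\pi}{\gr{N}\alpha_S}\,\gr{V}-\frac{a}{\varepsilon^{2}}\gr{B}-\frac{1}{\varepsilon}\sum_{i,j}c_{ij}\gr{B}_{ij}\right]\,,
\end{equation}
which at the same time provides a numerical check of the infrared pole structure discussed in
Sec.\ \ref{sec:2.2}.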
\subsection{Real corrections and QED divergences}
Like the Born contributions, the real-emission squared amplitudes have been
implemented in POWHEG for each individual flavor structure contributing to the
real cross section. As already stated above, the diagram in Fig.\ \ref{fig:05}
(a) is finite and does not involve any singular regions. The diagrams in Fig.\
\ref{fig:04} and Fig.\ \ref{fig:05} (b) have the same underlying Born structure
as the LO process $q\bar{q}\to t\bar{t}$, followed or preceded by singular QCD
splittings of quarks into quarks (and gluons) or of gluons into quarks (and
antiquarks), so that their singular regions are automatically identified by POWHEG.
The diagrams in Fig.\ \ref{fig:05} (c) and (d) involve, however, the photon-induced
underlying Born diagrams in Fig.\ \ref{fig:06}, preceded by a singular QED
splitting of a quark into a photon (and a quark). The corresponding QED singularities
were so far not treated properly in POWHEG. Only the singular emission of final-state
photons had previously been implemented in Version 2 of the POWHEG BOX in the context
of the production of single $W$ bosons \cite{Barze:2012tt} and the neutral-current
Drell-Yan process \cite{Barze':2013yca}.
We therefore also implemented the photon-induced Born structures in Fig.\ \ref{fig:06},
replaced the POWHEG subtraction for the QCD splitting of initial quarks into gluons (and
quarks), which does not occur in our calculation, by a similar procedure for the QED
splitting of initial quarks into photons (and quarks), and enabled in addition the POWHEG
flag for real photon emission, which then allows for the automatic factorization of the
initial-state QED singularity and the use of photonic parton densities in the proton.
Note that this also restricts the possible choices of PDF parametrizations, as photon PDFs
are provided in very few global fits.
\subsection{Validation}
Our implementation of the electroweak top-pair production with new gauge-boson contributions
has been added to the list of POWHEG processes under the name PBZp. It allows for maximal
flexibility with respect to the choices of included interferences between SM photons and
$Z$ bosons as well as $Z'$ bosons, the vector and axial-vector couplings of the latter, and
the choices of renormalization and factorization scales (fixed or running with
$\sqrt{p_T^2+m_t^2}$ or $s$) in addition to the standard POWHEG options.
The SM Born, real and $1/\varepsilon$-expansion of the virtual matrix elements have been
checked against those provided by MadGraph5\_aMC@NLO \cite{Alwall:2014hca} and GoSam
\cite{Cullen:2011ac}, respectively. After including the $Z'$-boson contributions, we checked our
full implementation with respect to the cancellation of UV and IR divergences. We validated,
in addition to the renormalization procedure described in Sec.\
\ref{sec:2.2}, the completeness relations for the color- and spin-correlated Born amplitudes
and performed the automated POWHEG checks of
the kinematic limits of the real-emission amplitudes. In particular, we have checked explicitly
that the variable describing the collinear QED singularity shows a regular behavior after
the implementation of our new QED subtraction procedure. Restricting ourselves again to the
SM, our total hadronic cross section with the $q\bar{q}$ initial state only could be shown to
fully agree with the results in MadGraph5\_aMC@NLO, which does not allow for a proper treatment
of the QED divergence in the $gq$ initial state.
As already discussed in the introduction, the production of $Z'$ bosons
decaying to top pairs has been computed previously in NLO QCD by Gao et al.\ in a factorized approach for purely vector- and/or axial-vector-like couplings such as those of the SSM \cite{Gao:2010bb}.
They neglected, however, all SM interferences and quark-gluon initiated diagrams with the $Z'$
boson in the $t$-channel. We can reproduce their $K$-factors of 1.2 to 1.4
(depending on the $Z'$ mass) to within 2\%, if we reduce our calculation to their theoretical
set-up and employ their input parameters. In the independent NLO QCD calculation by Caola et al.\
\cite{Caola:2012rs}, the authors include also the
additional quark-gluon initiated processes and show that they reduce the $K$-factor by about
5\%. However, they still do not include the additional SM interferences, which they claim to be
small for large $Z'$-boson masses. As we have discussed in detail, this is not always true due to
the logarithmically enhanced QED contributions from initial photons. If we exclude SM interferences
and the (factorizable) QCD corrections to the top-quark decay, we can also reproduce their $K$-factors.
\section{Numerical results}
\label{sec:4}
In this section, we present numerical results for electroweak top-quark pair production including
$Z'$-boson contributions at LO and NLO from our new POWHEG code \cite{Alioli:2010xd}, which
we coupled to the parton shower and hadronization procedure in PYTHIA 8 \cite{Sjostrand:2007gs}.
Our results pertain to $pp$ collisions at the LHC with its current center-of-mass energy of
$\sqrt{S} = 13$ TeV. Only for total cross sections, we also study how much the reach in $Z'$
mass is extended in a future run at $\sqrt{S} = 14$ TeV. The top quark is assigned a
mass of $m_t = 172.5$ GeV as in the most recent ATLAS searches for $Z'$ bosons in this channel
\cite{Aad:2015fna} and is assumed to be properly reconstructed from its decay products.
At the top-pair production threshold, $\alpha(2m_t) = 1/126.89$. The
values of $\sin^2 \theta_W=0.23116$, $m_Z=91.1876$ GeV and $\Gamma_{Z} = 2.4952$ GeV were
taken from the Particle Data Group \cite{Agashe:2014kda}. The width of the $Z'$ boson depends
on its mass and its sequential Standard Model (SSM) or leptophobic topcolor
(TC) couplings. We vary the mass for total cross sections between 2 and 6 TeV and fix it
to 3 TeV for differential distributions.
As stated in Sec.\ \ref{sec:1}, in the case of TC the $Z'$ width is set to 1.2\% of its mass,
and the couplings are $f_1=1$ and $f_2=0$.
We use the NNPDF23\_nlo\_as\_0118\_qed set of parton densities fitted with
$\alpha_{s}(m_Z)=0.118$, which includes the required photon PDF and allows
to estimate the PDF uncertainty \cite{Ball:2012cx,Ball:2013hta}.
The renormalization and factorization scales are varied by individual factors of two,
but excluding relative factors of four, around the central value $\mu_r=\mu_f=\sqrt{s}$.
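Explicitly, this corresponds to the usual seven-point variation
\begin{equation}
\left({\mu_r\over\mu_0},{\mu_f\over\mu_0}\right)\in\left\{(1,1),\,(2,2),\,\left({1\over2},{1\over2}\right),\,(2,1),\,(1,2),\,\left({1\over2},1\right),\,\left(1,{1\over2}\right)\right\}
\quad {\rm with}\quad \mu_0=\sqrt{s}\,,
\end{equation}
i.e.\ the two combinations with a relative factor of four, $(2,{1\over2})$ and $({1\over2},2)$,
are discarded.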
In contrast to the two existing NLO calculations \cite{Gao:2010bb,Caola:2012rs}, which take only
the $Z'$-boson exchange and no SM interferences into account and where $m_{Z'}$ was chosen as the
central scale, our choice of $\sqrt{s}$ also applies to the SM channels and interpolates between
the different physical scales appearing in the process.
\subsection{Total cross sections}
To illustrate the total number of events to be expected from resonant-only $Z'$-boson
production at the LHC, we show in Fig.\ \ref{fig:07} the total NLO cross sections at a
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{fig07}
\caption{Total cross sections for $pp\to Z'\to t\bar{t}$ at the LHC with $\sqrt{S}=13$ TeV
(dashed lines) and 14 TeV (full lines) as a function of the $Z'$ mass in NLO QCD for the
sequential SM (SSM, red) and leptophobic topcolor model (TC, black). For $\sqrt{S}=13$ TeV,
we also show the associated scale (blue) and PDF uncertainties (green) (color online).}
\label{fig:07}
\end{figure}
center-of-mass energy of $\sqrt{S}=13$ TeV
in the SSM (dashed red curve) and TC (dashed black curve), together with the associated
renormalization and factorization scale uncertainties (blue bands) and PDF uncertainties
(green bands).
As one can see, in the case of the SSM (lower curves) the PDF uncertainty is larger than the
scale uncertainty in the entire range of $m_{Z'}$ masses from 2 to 6 TeV considered here.
Conversely, for the TC model (upper curves), it is the scale uncertainty which dominates for
$m_{Z'} \lesssim 5$ TeV, while the PDF uncertainty takes over only at larger values of
$m_{Z'}$, since the PDFs at large momentum fractions $x_{a,b}$ are less precisely known.
The uncertainties at NLO (note that the PS do not
affect the total cross sections) are about $\pm15$\% at low masses and increase to $\pm$35\%
in the SSM and $\pm20$\% in TC at higher masses. For an integrated luminosity of 100 fb$^{-1}$,
the number of expected
events falls from 10$^4$ for $m_{Z'}=2$ TeV to 10 for $m_{Z'}=6$ TeV
in the SSM and is about an order of magnitude larger in TC.
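For orientation, these event numbers follow from $N=\sigma L$ with the quoted integrated
luminosity $L=100$~fb$^{-1}=10^{5}$~pb$^{-1}$, i.e.
\begin{equation}
N=10^{4}\;\Leftrightarrow\;\sigma\simeq10^{-1}~{\rm pb}\qquad{\rm and}\qquad N=10\;\Leftrightarrow\;\sigma\simeq10^{-4}~{\rm pb}\,.
\end{equation}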
When the LHC energy is increased to 14 TeV, the corresponding total cross sections (full
curves) at high $Z'$-boson mass are larger by about 50\%, and the mass reach is extended by about
500 GeV, less of course than the increase in the hadronic energy $\sqrt{S}$, of which
only a fraction is transferred to the initial partons and the hard scattering.
Even for resonant-only $Z'$-boson production, the $K$-factor is not completely mass-independent,
as can be seen in Fig.\ \ref{fig:08}. In TC (lower plot), it increases only modestly from 1.3 to
1.45 in the
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{fig08}
\caption{$K$-factors (i.e.\ ratios of NLO/LO cross sections) at the LHC with $\sqrt{S}=13$ TeV
(open circles) and 14 TeV (full circles) as functions of the $Z'$ mass for the SSM (top) and TC
(bottom). For $\sqrt{S}=13$ TeV, we also show the associated scale (blue) and PDF uncertainties
(green) (color online).}
\label{fig:08}
\end{figure}
mass range considered here, while in the SSM (upper plot) it increases much more from about 1.45
to 1.85. In contrast, it depends very little on the LHC center-of-mass energy of 13 TeV (open
circles) or 14 TeV (full circles). In this figure, the scale and PDF uncertainties can also
be read off more precisely than in the previous figure.
In Tab.\ \ref{tab:01} we list the total cross sections in LO for top-pair production at
\begin{table}
\caption{\label{tab:01}Total cross sections in LO for top-pair production at
${\cal O}(\alpha_s^2)$, ${\cal O}(\alpha_s\alpha)$ and ${\cal O}(\alpha^2)$
in the SM, SSM and TC, together with the corresponding NLO corrections.
The $Z'$-boson mass is set to 3 TeV.}\vspace*{3mm}
\begin{tabular}{lllrr}
Order&Processes& Model & $\sigma$ [pb] & $\sigma$ [pb] $(m_{t\bar{t}}>{3\over4}m_{Z'})$\\
\hline
\hline
LO& $q\bar{q}/gg\to t\bar{t}$ && 473.93(7) & 0.15202(2)\\
NLO&$q\bar{q}/gg+qg\to t\bar{t}+q$ &&1261.0(2) & 0.45255(7)\\
\hline
LO&$\gamma g+g\gamma \to t\bar{t}$ && 4.8701(8) & 0.0049727(6)\\
LO &$\gamma g+g\gamma \to t\bar{t}$ \hfill (NLO $\alpha_s$ and PDFs) && 5.1891(8) & 0.004661(6)\\
\hline
LO&$q\bar{q}\to \gamma/Z\to t\bar{t}$ &SM& 0.36620(7) & 0.00017135(3)\\
NLO&$q\bar{q}\to \gamma/Z\to t\bar{t}$ &SM& 0.5794(1) & 0.00017174(5)\\
NLO&$q\bar{q}+qg\to \gamma/Z+q\to t\bar{t}+q$ &SM& 4.176(2) & 0.001250(6)\\
\hline
LO&$q\bar{q}\to Z' \to t\bar{t}$ &SSM & 0.0050385(8) & 0.0044848(7)\\
LO&$q\bar{q}\to \gamma/Z/Z'\to t\bar{t}$ &SSM & 0.35892(7) & 0.0043464(7)\\
NLO&$q\bar{q}\to \gamma/Z/Z'\to t\bar{t}$ & SSM & 0.5676(1) & 0.005155(3)\\
NLO&$q\bar{q}+qg\to \gamma/Z/Z'+q\to t\bar{t}+q$ &SSM & 4.172(2) & 0.007456(9)\\
\hline
LO&$q\bar{q}\to Z' \to t\bar{t}$ &TC & 0.012175(2) & 0.011647(2)\\
LO&$q\bar{q}\to \gamma/Z/Z'\to t\bar{t}$ &TC & 0.38647(7) & 0.011984(2)\\
NLO&$q\bar{q}\to \gamma/Z/Z'\to t\bar{t}$ &TC & 0.6081(2) & 0.01468(1)\\
NLO&$q\bar{q}+qg\to \gamma/Z/Z'+q\to t\bar{t}+q$ &TC & 4.202(2) & 0.01002(1)\\
\end{tabular}
\end{table}
${\cal O}(\alpha_s^2)$, ${\cal O}(\alpha_s\alpha)$ and ${\cal O}(\alpha^2)$
in the SM, SSM and TC, i.e.\ including the SM backgrounds, together with the
corresponding NLO corrections. The $Z'$-boson mass is set here to 3 TeV,
and for our LO predictions we use the NNPDF23\_lo\_as\_0119\_qed PDF set,
since a set with $\alpha_s(m_Z)=0.118$ is not available at this order.
Comparing first the LO results only, we observe that the pure QCD processes of
${\cal O}(\alpha_s^2)$ have a total cross section of about 474 pb, i.e.\
two orders of magnitude larger than the photon-gluon induced processes of
${\cal O}(\alpha_s\alpha)$ with 4.87 pb as naively expected from the ratio of
strong and electromagnetic coupling constants in the hard scattering and in
the PDFs. The suppression of the pure electroweak with respect to QCD processes is
more than three orders of magnitude, as expected from the ratio of
coupling constants in the hard scattering and when taking into account
that the QCD processes have both quark- and gluon-initiated contributions.
The $Z'$-mediated processes in the SSM and TC have cross sections of only
5 and 12 fb, respectively, compared to 366 fb from the SM channels alone,
which therefore clearly dominate the total electroweak cross sections.
The interference effects are destructive in the SSM ($-4$\%), but constructive
in TC ($+2$\%).
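As an illustration, the TC number can be reproduced from the LO entries of Tab.\ \ref{tab:01}
by normalizing the difference between the coherent sum and the incoherent sum of the
contributions to the full result,
\begin{equation}
\frac{0.38647-(0.36620+0.012175)}{0.38647}\;\approx\;+2\%\,.
\end{equation}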
When a cut on the invariant mass of the top-quark pair of 3/4 of the $Z'$
mass (i.e.\ at 2.25 TeV) is introduced, the SM backgrounds are reduced by more than
three orders of magnitude, while the signal cross sections drop only by
about 10\%. The interference effects then become more important in
the SSM ($-7$\%), but not in TC ($+2$\%) with its very narrow $Z'$ width of 1.2\%
of its mass. While an invariant-mass cut strongly enhances the signal-to-background
ratio, the LHC experiments still have to cope with signals that reach only
3 to 8\% of the QCD background, which makes additional cuts on kinematic variables
necessary.
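The quoted range can be read off the LO entries of Tab.\ \ref{tab:01} with the invariant-mass
cut applied, e.g.
\begin{equation}
{\sigma^{\rm SSM}_{Z'}\over\sigma_{\rm QCD}}={0.0045\over0.152}\approx3\%
\qquad{\rm and}\qquad
{\sigma^{\rm TC}_{Z'}\over\sigma_{\rm QCD}}={0.0116\over0.152}\approx8\%\,.
\end{equation}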
The NLO corrections for the QCD processes are well-known and can be computed
with the published version of POWHEG (HVQ) \cite{Frixione:2007nw}.
At the LHC with its high gluon luminosity,
the $qg$ channels opening up at NLO are known to introduce large $K$-factors,
here of about a factor of three. The NLO corrections for the purely electroweak processes
are new even in the SM, where we have introduced a proper subtraction procedure
for the photon-induced processes. The $K$-factors for the $q\bar{q}$ channel
are moderate in the SM (+56\%), SSM (+58\%) and TC (+56\%), where the last
two numbers are dominated by SM contributions and therefore very similar.
Only after the invariant-mass cut do the differences between the models become more
apparent in the $K$-factors for the SM ($\pm0$\%), SSM ($+19$\%) and TC ($+23$\%).
However, similarly to the QCD case, the $qg$ channel
and also the $\gamma g$ channel, which opens up for the first time at this order,
introduce contributions much larger than the underlying Drell-Yan type Born
process. Note that the LO $\gamma g$ cross section computed with NLO
$\alpha_s$ and PDFs must still be added to the full NLO $q\bar{q}+gg$ cross
sections. An invariant-mass cut is then very instrumental to bring down
the $K$-factors and enhance perturbative stability, as one can see from the
LO $\gamma g$ and in particular the NLO results in the SSM and TC.
\subsection{Differential distributions}
We now turn to differential cross sections for the electroweak production of top-quark
pairs that includes the contribution of a SSM or TC $Z'$ boson with a fixed mass of 3 TeV.
The invariant-mass distributions of top-quark pairs in Fig.\ \ref{fig:09} exhibit
steeply falling spectra from the SM background from 10$^{-2}$ to 10$^{-7}$ pb/GeV together
with clearly visible resonance peaks of SSM (top) and TC (bottom) $Z'$ bosons at 3 TeV,
whose heights and widths differ of course due to the different couplings to SM particles
in these two models. In particular, the TC resonance cross section is about an order of
magnitude larger than the one in the SSM in accordance with the total cross section results in
the previous subsection (see Fig.\ \ref{fig:07}).
What becomes also clear from the lower panels in Fig.\ \ref{fig:09} (top and bottom) is that
the $K$-factors are highly dependent on the invariant-mass region and can reach large
factors around the resonance region. This is particularly true for TC (bottom), but also
for the SSM, and related to the fact that the position of the
resonance peak is shifted towards lower invariant masses from LO to NLO due to additional
radiation at this order. As one can see, this effect is already present if parton showers
are added to the LO calculation, so that the NLO+PS to LO+PS comparison mostly results in
an increased $K$-factor at and above the resonance.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig09a}
\includegraphics[scale=0.65]{fig09b}
\caption{Invariant-mass distributions of top-quark pairs produced through $\gamma$, $Z$ and
$Z'$ bosons and their interferences at the LHC with $\sqrt{S}=13$ TeV at LO (light blue),
LO+PS (dark blue), NLO (green) and NLO+PS (red) accuracy together with the corresponding
$K$-factors in the SSM (top) and TC (bottom). The dashed red curves have been obtained
with HERWIG 6 \cite{Corcella:2000bw} instead of PYTHIA 8 \cite{Sjostrand:2007gs} (color online).}
\label{fig:09}
\end{figure}
The effect of interferences between SM and new physics contributions is shown in
Fig.\ \ref{fig:10}, where the sum of the squared individual contributions (blue)
is compared with the square of the sum of all contributions (green) in the SSM
(top) and TC (bottom). As one can see, the interference effects shift the resonance
peaks to smaller masses, and their sizes are reduced. When the ratios
of the two predictions are taken (lower panels), it becomes clear that predictions
without interferences overestimate the true signal by a factor of two or more.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig10a}
\includegraphics[scale=0.65]{fig10b}
\caption{Invariant-mass distributions of top-quark pairs produced through $\gamma$, $Z$ and
$Z'$ bosons with (green) and without interferences (blue) at the LHC with
$\sqrt{S}=13$ TeV at NLO+PS accuracy together with the corresponding ratios in the SSM (top)
and TC (bottom) (color online).}
\label{fig:10}
\end{figure}
The two variables that are particularly sensitive to soft-parton radiation and the
associated resummation in NLO+PS Monte Carlo programs are the net transverse momentum
of the observed particle (here top-quark) pair ($p_{t\bar{t}}$) and the azimuthal opening
angle between them ($\phi_{t\bar{t}}$), which are 0 and $\pi$, respectively, at LO.
At NLO they are
balanced by just one additional parton and thus diverge; they exhibit a physical
behavior and turnover only at NLO+PS, i.e.\ after resummation of the left-over kinematical
singularities. These well-known facts can also be observed in Figs.\ \ref{fig:11} and
\ref{fig:12}, where for obvious reasons the LO $\delta$-distributions at 0 and $\pi$
are not shown. As expected, the NLO (green) predictions diverge close to these end points,
while the NLO+PS (red) predictions approach finite asymptotic values. Again, a similar
behavior is already observed at LO+PS accuracy, although with different normalization and
shape. Interestingly, the resummation works much better for purely $Z'$-mediated processes
(lower panels) than if SM and interference contributions are included (upper panels).
This effect can be traced back to the fact that in the SM-dominated full cross section the
top-pair production threshold at $2m_t=345$ GeV is almost one order of magnitude smaller than
the mass $m_{Z'}=3$ TeV governing the exclusive $Z'$-boson channel.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig11a}
\includegraphics[scale=0.65]{fig11b}
\caption{Transverse-momentum distributions of top-quark pairs produced through $\gamma$, $Z$ and
$Z'$ bosons and their interferences (top) and through $Z'$ bosons alone (bottom) at the LHC with
$\sqrt{S}=13$ TeV at LO+PS (dark blue), NLO (green) and NLO+PS (red) accuracy in the SSM.
The TC distributions look very similar. The dashed red curves have been obtained
with HERWIG 6 \cite{Corcella:2000bw} instead of PYTHIA 8 \cite{Sjostrand:2007gs} (color online).}
\label{fig:11}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig12a}
\includegraphics[scale=0.65]{fig12b}
\caption{Distributions in the azimuthal opening angle of top-quark pairs produced through
$\gamma$, $Z$ and $Z'$ bosons and their interferences (top) and through $Z'$ bosons alone
(bottom) at the LHC with $\sqrt{S}=13$ TeV at LO+PS (dark blue), NLO (green) and NLO+PS
(red) accuracy in the SSM. The TC distributions look very similar (color online).}
\label{fig:12}
\end{figure}
In our discussion of total cross sections in Sec.\ 4.1, we had
included analyses of scale and PDF uncertainties at NLO, but
not of the uncertainty coming from different PS implementations,
as the PS does not influence total cross sections, but only
differential distributions. To estimate this uncertainty, we
therefore show in Figs.\ \ref{fig:09} and \ref{fig:11} also results
obtained with the HERWIG 6 PS (dashed red) \cite{Corcella:2000bw} in
addition to those obtained with our standard PYTHIA 8 PS (full red)
\cite{Sjostrand:2007gs}. The dashed red curves in the lower
panels of Fig.\ \ref{fig:09} represent the ratios of the HERWIG 6
over the PYTHIA 8 PS results. As one can see there, the invariant-mass
distributions in the SSM and TC are enhanced by the HERWIG 6 PS at the
resonance at 3 TeV by about
10\%, while the region just below it is depleted by a smaller
amount, but over a larger mass region. The PS differences are therefore
smaller (by factors of three to six, except for the PDF error in TC)
than those of the scale and PDF uncertainties in Fig.\ \ref{fig:08}.
The SSM transverse-momentum distribution in Fig.\ \ref{fig:11} falls
off a bit faster with the HERWIG 6 PS than with the PYTHIA 8 PS at large
transverse momenta, while in TC it is slightly enhanced at low values,
but no significant differences appear between the angularly ordered
HERWIG 6 PS and the dipole PS in PYTHIA 8.
The importance of next-to-leading-logarithmic (NLL) contributions
that go beyond the leading-logarithmic (LL) PS accuracy can be
estimated by a comparison with analytic NLL resummation calculations.
These have not been performed for top-quark final states, but only for lepton final
states \cite{Fuks:2007gk}. In Fig.\ 5 of this paper, it has been
found that the invariant-mass distribution shows no significant
difference, while the LL transverse-momentum distribution computed with
the HERWIG 6 PS is somewhat smaller than the one obtained with NLL resummation,
but that it stays within the residual scale uncertainty of the latter.
Rapidity distributions of the top-quark pair are shown in Figs.\ \ref{fig:13} and \ref{fig:14}.
If SM contributions are taken into account (top), they are much flatter than if only the heavy
resonance contributes (bottom), i.e.\ the top-quark pairs are then produced much more centrally.
The effect is similar, but somewhat less pronounced in TC (Fig.\ \ref{fig:14}) than in the SSM
(Fig.\ \ref{fig:13}) due to the broader resonance in this model. Even for rapidity distributions
NLO effects are not simply parametrizable by a global $K$-factor: it varies from 1.6 to
2.1 when SM contributions are taken into
account (blue curves in the upper $K$-factor panels) and drops from 1.6 to 1.4 or even below
when they are not taken into account (blue curves in the lower $K$-factor panels). As expected, the
parton showers (green curves in the $K$-factor panels) have little effect on the central parts of
the rapidity distributions, and they only slightly influence the forward/backward regions through
additional parton radiation from the initial state.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig13a}
\includegraphics[scale=0.65]{fig13b}
\caption{Rapidity distributions of top-quark pairs produced through
$\gamma$, $Z$ and $Z'$ bosons and their interferences (top) and through $Z'$ bosons alone
(bottom) at the LHC with $\sqrt{S}=13$ TeV at LO+PS (dark blue), NLO (green) and NLO+PS
(red) accuracy together with the corresponding $K$-factors in the SSM. The NLO and NLO+PS
curves nearly coincide here (color online).}
\label{fig:13}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig14a}
\includegraphics[scale=0.65]{fig14b}
\caption{Same as Fig.\ \ref{fig:13}, but for TC (color online).}
\label{fig:14}
\end{figure}
A particularly sensitive observable for the distinction of new physics models is the
forward-backward asymmetry
\begin{eqnarray}
A_{FB}&=&{N(\Delta y>0)-N(\Delta y<0)
\over
N(\Delta y>0)+N(\Delta y<0)}
\end{eqnarray}
defined at $p\bar{p}$ colliders, where $\Delta y=y_t-y_{\bar{t}}$ is the rapidity difference of top
and antitop quarks, and the somewhat more complex charge asymmetry
\begin{eqnarray}
A_C&=&{N(\Delta |y|>0)-N(\Delta |y|<0)
\over
N(\Delta |y|>0)+N(\Delta |y|<0)}
\end{eqnarray}
defined at $pp$ colliders, where $\Delta |y|=|y_t|-|y_{\bar{t}}|$ is the corresponding difference
in absolute rapidity \cite{AguilarSaavedra:2012rx}. In Fig.\ \ref{fig:15}, the sensitivity of
$A_C$ to distinguish between the SSM (top) and TC (bottom) is confirmed, as this observable
exhibits very different magnitudes at the resonance ($11\pm1$\% vs.\ $\pm0.1$\%) and far below it
($2.5\pm0.5$\% in both plots), where the SM contributions dominate. Since $A_C$ is
defined as a ratio of cross sections, NLO and PS corrections cancel to a large extent and
are barely visible above the statistical noise. Only for TC, where the rapidity distribution
in Fig.\ \ref{fig:14} (lowest panel) showed distinct features in the ratio of NLO+PS/LO+PS,
does the transition from the low-mass to the resonance region happen more abruptly in fixed order
(NLO) than with PS.
If we assume an integrated luminosity of 100 fb$^{-1}$ and integrate over an invariant-mass
window of 100 GeV around the resonance peak at 3 TeV, one would expect $10^{-5}$ pb/GeV$\times100$
fb$^{-1}\times100$ GeV
= 100 events. A 10\% asymmetry in the SSM then implies a difference of 10 events with an
error of 3, so that $A_C=(10\pm 3)\%$. This would be sufficient to distinguish the SSM
from the SM and TC.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.65]{fig15a}
\includegraphics[scale=0.65]{fig15b}
\caption{Invariant-mass distributions of the charge asymmetry $A_{C}$ of
top-quark pairs produced through $\gamma$, $Z$ and
$Z'$ bosons and their interferences at the LHC with $\sqrt{S}=13$ TeV at
LO+PS (dark blue), NLO (green) and NLO+PS (red) accuracy together with the corresponding
$K$-factors in the SSM (top) and TC (bottom) (color online).}
\label{fig:15}
\end{figure}
\section{Conclusions}
\label{sec:5}
In this paper we presented the calculation of the ${\mathcal O}(\alpha_S\alpha^2)$
corrections to the electroweak production of top-antitop pairs through SM photons,
$Z$ and $Z'$ bosons, as predicted in the Sequential SM or in technicolor models.
Our corrections are implemented in the NLO parton shower Monte Carlo program
POWHEG. $Z'$ resonances are actively searched for by the ATLAS and CMS experiments
at the LHC with its now increased
center-of-mass energy of 13 TeV. We have consistently included interferences
between SM and new physics contributions and have introduced a proper subtraction
formalism for QED singularities. With a great variety of numerical predictions, we have
demonstrated the mass dependence of the $K$-factor, the changing relative sizes
of scale and PDF uncertainties, the large impact of new
partonic channels opening up at NLO (in particular of those induced by photon
PDFs in the proton), and the non-negligibility of interference effects.
Distributions in invariant mass were shown to be particularly sensitive to the
latter. The all-order resummation of perturbative corrections implicit in
the parton shower has been shown to make the transverse-momentum and azimuthal
angle distributions of the top-antitop pair finite and physical. Heavy new
gauge-boson contributions were seen to lead to much more centrally produced
top pairs, and the charge asymmetry has been shown to be a promising observable
to distinguish between different new physics models. Our implementation of this
new process in POWHEG, called PBZp, is very flexible, as it allows for the
simulation of any $Z'$-boson model, and should thus prove to be a useful tool
for $Z'$-boson searches in the top-antitop channel at the LHC, in particular
for leptophobic models.
\subsection*{Acknowledgments}
We thank J.\ Gao for making possible detailed numerical comparisons of our NLO
$Z'$ calculations as well as R.M.\ Harris and J.\ Ferrando for help with
matching their LO calculations in the topcolor model.
T.J.\ thanks P.\ Nason and C.\ Oleari for useful discussions.
This work was partially supported by the BMBF Verbundprojekt 05H2015 through grant 05H15PMCCA,
the DFG Graduiertenkolleg 2149, the CNRS/IN2P3 Theory-LHC-France initiative, and the EU program
FP7/2007-2013 through grant 302997.